text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
This prints A8_i, what does this mean? Is this the reason I have to convert this myArray into 'int' using static_cast? Because 'A8_i' is not a valid datatype... or A8_i doesn't make any sense in type... ? A8_i is your compiler's way of telling you that this is an array of 8 integer. The return value of typeid().name() is compiler dependent, so other compilers may produce something different. > Is this the reason I have to convert this myArray into 'int' using static_cast? I don't know what this means. > A8_i is your compiler's way of telling you that this is an array of 8 integer. - So... myArray is ultimately an integer datatype? Ok, I thought so the conversion question won't be understandable, my bad. In for-each loops chapter, you showed this code at the beginning - Well. - What is the purpose of this? numStudents will hold 5 anyway. And it's already an integer type. Then why are you converting numStudents into 'int' again using static_cast anyway? Note: This C++ is blowing my mind up so I am messing with everything. Don't mind for such stupid questions :] This cast is not needed, because `std::size()` is a `constexpr` function, I've removed it from the example. `std::size()` returns a `std::size_t`, which is some unsigned integer type. Converting from `unsigned` to `signed` in list-initialization is not allowed and would cause a compiler error. "Would", because this conversion is allowed for `constexpr` values (Because the compiler can figure out if the conversion succeeds). Okay. But what about this program? Can't make any sense what's going on here. If I use 'int' instead of 'size_t', the compiler complains that - error: conversion to 'std::array<const char*, 5>::size_type' {aka 'long long unsigned int'} from 'int' may change the sign of the result. [Using 'unsigned int' also works fine, like 'size_t'] Why this happens? size_t is ultimately an unsigned int(or alias type something I read earlier), isn't it? Well, I am not using any negative calculation using 'int' here but it's still complaining. One said me that, holding both positive and negative number(in signed type) requires more resource than unsigned, is it true? So, what data type is studentList? Why it's only working with std::size_t/unsigned int? Not with int? [using vector type instead of array in studentList throws the same complain] I am using Code Blocks, the latest one I guess. Warnings are treating as error. `std::size_t` is some unsigned integer type, usually 64 bits wide (eg. `unsigned long`). > I am not using any negative calculation The compiler doesn't know the values of your variables unless they're `constexpr` (or `const` under certain circumstances). If you're using an `int`, the compiler has to assume that you're using negative values. > holding both positive and negative number(in signed type) requires more resource than unsigned, is it true? An `int` has the same memory footprint as an `unsigned int`. There's no difference. > what data type is studentList? `std::array<const char*, 5>` > Why it's only working with std::size_t/unsigned int? `std::array` uses `std::size_t` for indexes, see You can't implicitly convert/compare signed and unsigned integers. > using vector type instead of array in studentList throws the same complain `std::vector` uses some unsigned integer type for indexes as well (Not necessarily `std::size_t` though). See As Alex said, the return of typeid is compiler-specific. This is just the way your compiler tells you the type. A8_i means an "A"rray of size "8" with values of type "i"nt, so putting them together, you get A8_i. It's a bit cryptic, huh? Cryptic indeed. This code is showing only the letter 'y'. What am I supposed to understand from here?? -__- A bit frustrating for me :/ Why is can static_cast be used without the std:: prefix ? `static_cast` is a keyword, like `if` or `constexpr`. Keywords are built into the language. Thank you! I'm sorry for my misspelling, I believe there's no way to edit comments. learncpp - Shows how to use C-style cast. Also learncpp - Avoid using C-style cast. -__- Ok, I know it's good to know. But sometimes it bothers xD Are static variables and static casts the same thing? They're unrelated Hello, thank you for the tutorial. I'd like to point out a small grammar error near the beginning of the lesson. "In the case where you are using literal values (such as 10, or 4), replacing one or both of the integer literal value" "value" should be plural "In the case where you are using literal values (such as 10, or 4), replacing one or both of the integer literal values" You know how in C you could cast structs as function parameters without an identifier like so: What's the "C++ way" of doing this using static_cast? This doesn't work. I don't know C, but if `print_coords` has a `Vector2` parameter you can simply construct the `Vector2` in the call Yep, it's a struct `Vector2` parameter. That makes sense, thanks! // Hello guys, this is simple code // i want to used type conversion (casting) to change type //from integer to double, but I failed. I dont know what is the problem // can you help me, please.. #include <iostream> int divi(int,int); // function prototype using namespace std; int main() { int x=10; int y =3; cout<<x<<" / "<<y<<" is "<< static_cast<double>(divi(x,y)); return 0; } int divi(int a, int b){ return a/b; } The integer division happens in `divi` already, the caller can't do anything about it. You need to cast one of the operands to a `double` and change the return type. So i have an array Using function find that search a number within myarray, the function will return std::end(myarray) if the number is not found, otherwise find will return pointer to other place. If the number is found, the program will print the index within myarray Now this got me thinking "what if i want to find out the number of bytes between std::begin(myarray) and found?". After trial and error, i can do it with this But this is too tedious, so after looking around, it seems that i can do this instead But after looking at this page, it says that reinterpret_cast can be harmful, or in this case it is alright to do this? `reinterpret_cast` to `char` is allowed to observe bytes. Your specific use of `reinterpret_cast` is allowed, but still discouraged. You don't need any pointer casts at all, because you know the number of elements and the size of each element What is decltype? I omitted it and there is no difference in result. `decltype` "returns" the type of its argument, for example `sizeof` also works with variables, so `decltype` can be omitted. 1) I think in the following snippet, you should replace regular initialization to list initialization as that example will compile and doesn't show any compile error with the regular initialization(=). 2) I found that, when we use list {} with assignment operator, the compiler will produce an error regarding narrowing conversion; otherwise it wouldn't with regular assignment. I didn't know we could use {} with assignment as well as initialization! When I ran the following, I got error on narrowing conversion from int to float. But you said you got 2. is it because of the compiler? Thanks for pointing out the error! I reverted the example to direct initialization and added a comment about why we can't use list initialization. You are very welcome. Thanks for your help and reply. In the following snippet, do STILL both literal floating point values convert to 'double' as the highest priority? No, the division is performed using the types of the most precise operand. In your example, the most precise operand is a `float`, so the division is performed with `float`s. Hi, wonderful people! Just noticed a spelling error (paragraph before Type Casting): "A cast represents an request by the programmer to do an explicit type conversion." 'an' should be 'a'. Thanks for all the hard work, guys! Fixed, thanks! How would a typecast like this work: I have seen this done many times but it still evades me as to how you can cast a non-pointer to a pointer. How would this work. A pointer is just an address. You can cast them to integers and back. If the address you're casting doesn't point to an object of the pointer type, you're causing undefined behavior. C-style casts are unsafe, they allow conversions that cause UB. You can do the same with a `reinterpret_cast`, but it's more obvious that this isn't a safe cast. uintptr_t is not a pointer in and of itself, but capable of storing a pointer. I think it was misleading what I wrote. so uintptr_t is a regular data type, unsigned long i believe. But this doesn't make sense to me, because it is never taught how a non-pointer to a pointer cast works. It's not taught, because it's unsafe and rarely useful. Unless you're doing low-level memory operation (Potentially intentionally invoking UB), there's no need to cast pointers to integers and back. Vice-versa, i mean. Integers to pointers. This is what I need to know. I understand now cstyle casts should generally be avoided.. But I need to know how this particular operation works to see if there is a better cast for it. And yes this is dealing with low-level memory operations. Enter undefined behavior land. Virtually every compiler does what you want, but it doesn't have to. A pointer is nothing but a number (The memory address). When you use a pointer, the memory at that address is read from, or written to. Unlike a `static_cast`, a `reinterpret_cast` changes only the type, not the data. First of all, Thanks for awesome tutorial!!! Can anyone help me? So the result of the follwing is 2.5 But if i change it a bit and put i1 and i2 into brackets (i1/i2), the result is 2. Why?????? `i1 / i2` is performed first. Since both are `int`, the result is an `int` (2). You're then casting the 2 to `float`, but the precision has already been lost. `i1` is cast to `float` first. Then you're dividing `float` by `int`, which produces a `float`. Thank you very much for clarification!!! I have a doubt about this sentence: "Because C-style casts are not checked by the compiler at compile time, C-style casts can be inherently misused". In my experience the C-style cast is checked by the compiler; for example this little program cannot compile: the output console of Visual Studio 2017 says me: error C2440: 'type cast': cannot convert from 'float' to 'char **' I've updated the article a bit with better reasons not to use C-style casts, and added a link with more detail on how C-style casts work for the curious. Thanks! "Static_cast takes as input a values" this bit confuses me.. can u explain what u mean Yes he does. @Alex ? Uhm, sorry, my reply doesn't make a lot of sense when I read the message wrong. He means "static_cast takes as input a value". Single value, not values. ahhh.. makes sense now thank you Typo fixed. Thanks! Can you give more examples of why c++ style cast should be preferred? C style casts have always worked fine for c and java for that matter. I don't understand why they took something simple from C and replaced it with this over complicated ugly, and cumbersome mess. What ever happened to if it ain't broke don't fix it? C++ style casts should be preferred because: 1) They allow you to better express your intent. 2) C-style casts might do any number of things (and sometimes different things depending on context), and it's not always clear which one is being invoked from reading the code. 3) They're easier to find in your code precisely because they're "ugly" and differentiated. 4) At least in the case of dynamic_cast<>, they can do things that C-style casts can't. There's nothing that says you have to give up C-style casts if you want to, but C++ styles casts are safer. has a little bit of additional information as well as an example of where C-style casts can go wrong. ``` In order to announce to the compiler that you are explicitly doing something you recognize is potentially unsafe (but want to do anyway), you can use a cast ``` I believe that's not actually what happened there. "static_cast" is an unary operator with high precedence level(2) which takes in one expression of a type and return that expression with the type programmer desired. So, compiler giving a warning for doing unsafe conversion on type-converting operator which purpose is to convert type makes no sense, so it won't. And when we do "i = static_cast<int>(i / 2.5);", "static_cast<int>(i/2.5);" will be evaluated first since "static_cast" operator has higher precedence lever than "operator=". When compiler evaluate "operator=", it see an int being assigned to an int so no warning. I've confirmed it by using -Wconversion I found in a comment by @nascardriver. Try something like "float x= 3; int y = static_cast<double>(x);" and see your compiler still give warning even when you're using "static_cast". I take your point -- the original wording left something to be desired. I've updated the lesson text to be more explanatory around what's happening without losing the "programmer is taking responsibility" angle. Why should casting be avoided? Wouldn't it be better to use (static) casting always instead of letting it happen implicitly? If you convert types, use a cast. What Alex is trying to say it that you should try to stay out of situations where you'd need a cast by using the same type. Understood, thanks! Also, is using smaller variables for values that are going to be a part of computation with longer variables redundant, since they are going to get promoted anyway? Implicit conversion can be the cause of trouble. If you can, use the same type throughout your computation. Hi there, You said that "In the following program, the compiler will typically complain that converting a double to an int may result in loss of data:" however I didn't find any issue by doing this on my side, I'm using to compile my code. what could be the reason for this? Thanks in advance. Add @Alex While you're add it, you can also use a newer standard got it. thanks a lot! what is type cast operator? #include <iostream> using namespace std; class A {}; class B { public: // conversion from A (constructor): B (const A& x) {} // conversion from A (assignment): B& operator= (const A& x) {return *this;} // conversion to A (type-cast operator) operator A() {return A();} }; int main () { A foo; B bar = foo; // calls constructor bar = foo; // calls assignment foo = bar; // calls type-cast operator return 0; } Can we use in C++? Also, what do you mean by getting rid of a const? Name (required) Website Save my name, email, and website in this browser for the next time I comment.
https://www.learncpp.com/cpp-tutorial/explicit-type-conversion-casting-and-static-cast/
CC-MAIN-2021-17
refinedweb
2,602
65.32
Hi. Thanks for all the feedback on my last Blog entry. The C# team did a lot of work trying to understand our users, and it’s great to see where you all do and don’t agree with our conclusions. I thought it might be interesting for you all to get a peek into a different part of our product development process, when we usability test new features. In this entry I’ll briefly cover the C# specific usability study that we did for Generics, including a little on how we tested it, and what we found. It was interesting for me to study a language feature as apposed to a set of UI, and I think we learned a lot.. The Set Up Anson wrote a simple application that used a regular old hashtable of objects from the Visual C# 2003 (Everett). We asked the documentation writer to get us the best version of the Generics documentation that they could before the study. We put a copy of the docs in the waiting room so the participants could read as much or as little of the documentation as they wanted before they started coding. We also taped a copy of the documents down to the table next to their computer where we could keep a remote camera on the docs and get an idea about what the users were reading. There were 56 pages of documentation altogether that covered the concepts of Generics, but mostly focused on the syntax of all the different parts of consuming and creating Generic classes. We wrote out some things that we wanted the users to do with the Anson’s program. Here are two of the tasks we gave them: (Note: This task was used to get a sense of how well users understood the Generics concept. The code used a hashtable, and we wanted to see if they could recognize that the hashtable was a good candidate for Generics. Then we wanted to see their first experience using Generics was good and productive.) (Note: This task was used to get a sense for how easy or hard it would be for users to create a class that was generic. We started them with a non-generic version so they could focus on just the generic code in the study.) We also created an interview, and Anson interviewed each participant about their impressions of Generics after they tried it. The Results Here are some of the key results from the study. In general, users will be able to consume generic collection classes. However, they will find the syntax of specifying the type parameters in the instantiation clause to be unintuitive. Basically, what we saw was that when the developers should have written something like this: List<string> myListOfStrings = new List<string>(); They would look at the docs and write: List<string> myListOfStrings; But they struggled to get: myListOfStrings = new List<string>(); There were a few of factors that made this hard it hard to write the full line of code (note that we were using very early bits for this, so all the namespaces and class names have since changed): Included type parameters with declaration clause, but not with instantiation clause: “error CS0691: ‘ArrayList’ type not found. ‘ArrayList’ has the wrong number of type parameters” Included type parameters with instantiation clause, but not with declaration clause: “error CS0029: Cannot implicitly convert type ‘Experimental.Collections.Generic.ArrayList<int>’ to ‘System.Collections.ArrayList’” I think you can see why the error messages didn’t help much for developers trying to learn the syntax. I think the current error messages are slightly better. Users want to use code snippets to learn the syntax Not much to say here. Basically, all the users except 1 skipped everything in the docs, and just skimmed for code until they saw a code fragment that did what they wanted. We see this effect all over the place, not just learning syntax, but it was especially noticeable here. I guess it figures that people would want to learn syntax by reading examples, but one of the developers did use the “grammar” section that describes the usage of the syntax in a general way, not with examples. Users want specific code snippets for the class they are trying to consume This observation followed the last one, but we saw that the developers skimmed the documentation for examples using the particular class the wanted to use, not just any example. For instance, if they were trying to instantiate a HashTable<K,V>, but the first example was for a List<T> they kept skimming. After they found out that the only example was for a List<T> they went back and used that. During the interview they said they would have been happier with examples for every generic class they could use. After consuming a Generic class, users will be able to create their own Generic classes We saw that by following the examples, the developers were able to convert Anson’s LinkedList to be strongly typed. However, a few of the developers felt that they wouldn’t really need to do so, because they don’t need much more then the .NET Framework was providing. Good news, it sounds like we have the right set of classes in the Framework. C# coding will be better with Generics After trying them, all 5 developers said that Generics was definitely a great addition to the language, and would make their code cleaner to read and run better because they would never get runtime exceptions related to casting. This was congruous from the feedback that we got from our other customer channels as well. The difference was that this feedback was based on actual experience of Generics in code, so we felt that our predictions about the customer reaction to Generics were probably true. Templates don’t kill developers, developers kill developers This harkens back a little to the “Battle Scars from C++” comment in my last blog entry. During the interview, the developers recounted stories of working on C++ projects where templates were hideously misused, and caused them endless pain trying to use other developers template code. They thought that this was a risk with Generics too. I think we need to provide strong guidelines on when to use, and especially when not to use, generic classes. Feedback? I've been thinking a bit about Generics lately, with the growing conclusion that I will mostly use the ones that ship in the 2.0 version of the .NET Framework, but wrapping my head around a few situations where I could see myself defining Generic classes
http://blogs.msdn.com/ricksp/archive/2004/02/10/70885.aspx
crawl-002
refinedweb
1,112
64.95
Lets say we want to read the contents of some file and want to operate on these contents using C program. In this case, we first need to read the complete file into buffer and work on to this character buffer to do what we want to do. Following c program, copied the contents of JSON file test_file.json into buffer and then dumps that buffer to stdout. [ You can do whatever you want which this bufffer ] $ vim read_file_to_buffer.c #include <stdio.h> #include <stdlib.h> #include <stdbool.h> /* * 'slurp' reads the file identified by 'path' into a character buffer * pointed at by 'buf', optionally adding a terminating NUL if * 'add_nul' is true. On success, the size of the file is returned; on * failure, -1 is returned and ERRNO is set by the underlying system * or library call that failed. * * WARNING: 'slurp' malloc()s memory to '*buf' which must be freed by * the caller. */ long slurp(char const* path, char **buf, bool add_nul) { FILE *fp; size_t fsz; long off_end; int rc; /* Open the file */ fp = fopen(path, "r"); if( NULL == fp ) { return -1L; } /* Seek to the end of the file */ rc = fseek(fp, 0L, SEEK_END); if( 0 != rc ) { return -1L; } /* Byte offset to the end of the file (size) */ if( 0 > (off_end = ftell(fp)) ) { return -1L; } fsz = (size_t)off_end; /* Allocate a buffer to hold the whole file */ *buf = malloc( fsz+(int)add_nul ); if( NULL == *buf ) { return -1L; } /* Rewind file pointer to start of file */ rewind(fp); /* Slurp file into buffer */ if( fsz != fread(*buf, 1, fsz, fp) ) { free(*buf); return -1L; } /* Close the file */ if( EOF == fclose(fp) ) { free(*buf); return -1L; } if( add_nul ) { /* Make sure the buffer is NUL-terminated, just in case */ *buf[fsz] = '\0'; } /* Return the file size */ return (long)fsz; } /* * Demonstrates a call to 'slurp'. */ int main(void) { long file_size; char *buf; file_size = slurp("./test_file.json", &buf, false); if( file_size < 0L ) { perror("File read failed"); return 1; } (void)fwrite(buf, 1, file_size, stdout); /* Remember to free() memory allocated by slurp() */ free( buf ); return 0; } $ gcc -o read_file_to_buffer read_file_to_buffer.c $ ./read_file_to_buffer Reference :
https://www.lynxbee.com/c-program-to-read-the-contents-of-a-file-into-character-buffer/
CC-MAIN-2020-24
refinedweb
345
68.3
Just in case you are not clear about the problem being solved by using the IN clause, here's an example scenario: You want to allow your users to select any number of items from a list, and to use their choice to filter your next databse query on. For example, you might decide to present a list of categories to a user, and then to find all books in all the categories they choose. Your starting point might look like this: @{ var db = Database.Open("Books"); var categories = db.Query("Select CategoryId, Category FROM Categories"); } <!DOCTYPE html> <html> <head> <title></title> </head> <body> @if(!IsPost){ <h3>Choose your categories</h3> <form method="post"> @foreach(var item in categories){ <input type="checkbox" name="categoryid" value="@item.CategoryId" /> @item.Category<br /> } <input type="submit" value="Choose" /> </form> } @if(IsPost){ <h3>You chose:</h3> foreach(var item in categories){ @item.Category<br /> } } </body> </html> When run, the resulting page presents a series of checkboxes allowing the user to select from multiple categories: Assuming the user chooses the first, third and fifth, the SQL to to fetch related books can come in two flavours. The first is as follows: SELECT * FROM Books WHERE BookId = 1 OR BookId = 3 OR BookId = 5 Taking this approach means dynamically generating the SQL by concatenating multiple OR statements. The code to produce this kind of statement can get messy, but is commonly found, and most often leads to the developer concatenating user input directly into the SQL which is not safe. The second approach is to use an IN clause: SELECT * FROM Books WHERE BookId IN (1,3,5) An IN clause takes a comma-separated string of values, and if you look a the first code example, you see that each checkbox is given the same name attribute: "categoryId". When a form is posted with mutiple identically named elements selected, the result is passed as a comma-separated string, so on post back, Request["categoryId"] give us "1,3,5". However, simply plugging Request["categoryId"] in as a parameter value will not work. this will give you errors: var books = db.Query("SELECT * FROM Books WHERE BookId IN (@0)", Request["categoryId"]); Each value within the IN clause needs to be parameterised on its own. What you really need to end up with is something more like this: var books = db.Query("SELECT * FROM Books WHERE BookId IN (@0, @1, @2)", value1, value2, value3); Web Pages is clever enough to see that the arguments value1, value2 and value3 are separate items which need to be passed in to the parameter placeholders at runtime. This is because the second parameter of the Database.Query() method accepts an array of Objects. So the task is to generate the right number of parameter placeholders, and to pass in an array as the second argument. This is how you can do that, given a comma-separated string: @{ var db = Database.Open("Books"); var categories = db.Query("Select CategoryId, Category FROM Categories"); if(IsPost){ var temp = Request["categoryId"].Split(new[]{','}, StringSplitOptions.RemoveEmptyEntries); var parms = temp.Select((s, i) => "@" + i.ToString()).ToArray(); var inclause = string.Join(",", parms); var sql = "SELECT Category FROM Categories WHERE CategoryId IN ({0})"; categories = db.Query(String.Format(sql, inclause), temp); } The code takes the comma-separated string and generates an array from it, which is stored in the variable "temp". A second array is created containing strings starting a "@0", and then this array is converted to a string representing the parameter placeholders in the SQL. This is then melded with the core SQL using string.Format, and the "temp" array passed in. And it works: However, it's a little untidy, so a Helper method would be of use here. Create a folder called App_Code and within that, add a new C# class file. I called mine DatabaseExtensions. The full code for that file is as follows: using System; using System.Collections.Generic; using WebMatrix.Data; using System.Linq; public static class DatabaseExtensions { public static IEnumerable<dynamic> Query.Query(string.Format(commandText, inclause), temp); } public static int Execute.Execute(string.Format(commandText, inclause), temp); } } These methods extend the Database class to provide support for IN clauses. They effectively add two new methods - Database.QueryIn() and Database.ExecuteIn(). The first parameter in both methods is prefixed with the word "this", which denotes what object you want to extend. The rest of the methods takes all that code concerned with creating arrays out of your Razor section in the actual .cshtml file so that it can be replaced like this: @{ var db = Database.Open("Books"); var categories = db.Query("Select CategoryId, Category FROM Categories"); if(IsPost){ var sql = "SELECT Category FROM Categories WHERE CategoryId IN ({0})"; categories = db.QueryIn(sql, Request["categoryId"]); } } 12 Comments - reav - camus - Mike They are called extension methods. A bit more of an explanation of how extension methods works can be found here: Displaying The First n Characters Of Text - BenC I have a form that has both checkboxes (for which this article has been helpful) and a set of radio buttons. How would I go about passing another parameter (i.e. from the radio buttons) into this query/code. Have made numerous attempts but no success. Thanks in advance. - Mike You can concatenate it on to the inclause variable. - sunny Request.Form["Name of the identifier"] whenever we use form method in new webmatrix 2. - Mike No you don't. You can still use the shorter Request["Name of identifier"]. Nothing has changed. That's a standard part of the ASP.NET framework. - Michael - Gautam I am very new to programming: In the above example if I want to use a delete button along with the choose button.... is the below code ok and good programming practise or is there a better way? if(IsPost) categoryId = Request["categoryid"]; if(Request["button"].Equals("choose")) //QueryIn code goes here if(Request["button"].Equals("delete")) //ExecuteIn code goes here - Mike That seems fine to me. - Robby See also: - Mike I know nothing about Azure. It's not a service I use. Sorry.
http://www.mikesdotnetting.com/article/156/webmatrix-database-helpers-for-in-clauses
CC-MAIN-2016-30
refinedweb
1,015
57.67
Java 2D Graphics, The Point2D Class Java Programming, Lecture Notes # 302, Revised 02/06/00. - Introduction - What is a Point? - Nested Top-Level Classes - What About the Point Class? - Methods of the Point2D Class - Methods of the Nested Subclasses - Methods of the Point Class - Sample Program - Complete Program Listing Introduction What is a Point?. Nested Top-Level Classes. What About the Point Class?. Methods of the Point2D Class The Point2D class provides several methods that are inherited by its subclasses, and can be used to operate on objects instantiated from those subclasses. Most of the methods have several overloaded versions. Generally the methods provide the following capabilities: - Create a new object of the same class and with the same contents as an existing point object. - Determine the distance between two points. - Determine the square of the distance between two points. - Determine whether or not two points are equal. - Get the x and y coordinate values of a point. - Set the x and y coordinate values of a point. - Get the hashcode value for a point. The sample program that I will present later will make use of some of these capabilities. Methods of the Nested Subclasses. Methods of the Point Class The Point class provides methods to accomplish generally the same behavior as described above for the new classes in the 2D API, although in some cases the syntax is different. In addition, the Point class provides methods to - Move a point to a specified location in the (x, y) coordinate plane. - Translates a point, at location (x, y), by dx along the x axis and dy along the y-axis so that it then represents the point (x + dx, y + dy).: - Point objects specify a location of a point in whole (int) units only. - Point2D.Double and Point2D.Float objects specify the location of a point using fractional values of either the double or float variety. The sample program presented later will illustrate the use of the nested subclasses named Point2D.Double and Point2D.Float. Sample Program) import java.awt.geom.Point2D; class Point01{ Point2D doublePointVar; Point2D floatPointVar; Figure 1shows. The main() method Figure 2 publicstaticvoid main(String[] args){ Point01 thisObj = new Point01(); Figure 2shows. An object of the nested subclass Point2D.Double Figure 3 thisObj.doublePointVar = new Point2D.Double(1.0/3,2.0/3); Figure 3instantiates33333... 0.6666666666666666666666666666666666666666... At least these would be the values if we had infinite precision. In reality, these values are stored in the object with the precision afforded by the double type An object of the nested subclass Point2D.Float Similarly, Figure 4 thisObj.floatPointVar = new Point2D.Float( (float)1.0/3,(float)2.0/3); Figure 4instantiates. Getting and displaying coordinate values Figure 5 5applies. Complete Program Listing A listing of the complete program is provided in Figure 6 /*Point01.java 12/07/99 Illustrates use of the static inner classes of the java.awt.geom.Point2D class. This program instantiates an object of each of the following nested classes and populates the X and Y values of those objects with never-ending fractional values consisting of never-ending strings of 33333 and 66666. Point2D.Double Point2D.Float The reference to each of the objects is stored in a reference variable of the type Point2D, which is the superclass of each of the nested classes. Then the getX() and getY() methods of the two classes are used to get and display the values stored in each of the objects. The output from the program is: Data from the object of type Point2D.Double 0.3333333333333333 0.6666666666666666 Data from the object of type Point2D.Float 0.3333333432674408 0.6666666865348816 This output illustrates the manner in which the two different nested classes can be used to instantiate a Point2D object that maintains its coordinate data either as type double or as type float. Note that even though the Point2D.Float class stores its coordinate data as float, the getX() and getY() methods of that class return the coordinate values as type double. The values returned are incorrect beyond about the seventh significant digit. Tested using JDK 1.2.2 under WinNT Workstation 4.0 *********************************************/ import java.awt.geom.Point2D; class Point01{ //Declare two different instance variables, each of // type Point2D. Point2D doublePointVar; Point2D floatPointVar; publicstaticvoid main(String[] args){ //Instantiate a new object of this type containing two // instance variables, both of type Point2D. Point01 thisObj = new Point01(); //Instantiate an object of the type Point2D.Double // and store a reference to the object in one of the // instance variables of the object of this class. // Populate the X and Y values of the object with // a never-ending fraction of the primitive type // double. thisObj.doublePointVar = new Point2D.Double(1.0/3,2.0/3); //Instantiate an object of the type Point2D.Float // and store a reference to the object in one of the // instance variables of the object of this class. // Populate the X and Y values of the object with // a never-ending fraction of the primitive type // float. Note that a cast is required to cause the // division operation to produce a float result // instead of a double result. thisObj.floatPointVar = new Point2D.Float((float)1.0/3,(float)2.0/3); //Get and display the X and Y values stored in each of // the objects. Note that the getX() and getY() // methods of the class Point2D.Float return the X // and Y coordinate values as type double, but the // value is incorrect beyond about the seventh // significant digit. 6<<
https://www.developer.com/net/vb/article.php/626081/Java-2D-Graphics-The-Point2D-Class.htm
CC-MAIN-2018-22
refinedweb
920
58.58
Software Engineer Job Searching in a Pandemic (3 Part Series) TL;DR So an idea from a friend inspired me to do a "little" side project; I've bulit a template rails api that I (and others) can use to kick start any project. It has a few features that most every web app has nowadays like user authentication, password reset, and soon a basic user profile. It took a lot of work to get it just right, super clean, well documented and what feels to be built with as little code as possible. I have now made it a template repo in my GitHub and it is public; I've given credit where credit is due because very little of this project was only me. Below is the full readme within the repo. I really enjoyed working on this little project; I'm going to continue to build upon it and I hope others will find it helpful as well when starting new personal projects. README Template_API This readme can be used not just a reference for the template but also goes into details for how the user authentication works and why it is configured the way it is. It is very important to note that this all comes from a great youtube series from Edutechnical HTTPS session cookies. The password reset and emailer functionality is adapted from two blog posts by Pascales Kurniawan, links to the posts are at the bottom of the readme. Configuration Ruby version 2.6.1 System defaults - localhost port 3001 set within - _api/config/initializers/cors.rb-ln7 - _api/config/puma.rb-ln13 Session Cookies for User Auth - CORS setup with broad range of methods, can be customized per application's needs. File can be found here (_api/config/initializers/cors.rb) Database initialization rails db:create is all that is needed to create the initial databases. By default this will create a deveolpment and test database. User Model Configuration - Used for user authentication User Model Creation rails g model User username email password_digest This generates a User model with a username, email and password; digest works with bcrypt and built in rails functionality to encrypt the string provided as the password. user.rb has_secure_password validates_presence_of :username, :email validates_uniqueness_of :username, :email(16) end Default configuration is to enforce a unique username and email for each user that registers. Password tokens are generated as a random hex number 16 characters long; and a timestamp is created at the time of generation. That timestamp is used to compare against when the password is reset to ensure a timeframe inwhich the token is valid; in this template it is set to a 4 hour window but that can be changed. current_user_concern.rb This module creates a before action to be used universally that sets the current user to the user stored in sessions; allowing front-end pages to check the current user when needed. extend ActiveSupport::Concern included do before_action :set_current_user end def set_current_user if session[:user_id] @current_user = User.find(session[:user_id]) end end sessions_controller.rb Controlls user session creation, logging in and logging out. include CurrentUserConcern def create user = User .find_by(username: params["user"]["username"]) .try(:authenticate, params["user"]["password"]) if user session[:user_id] = user.id render json: { status: :created, logged_in: true, user: user, } else render json: { status: 401 } end end def logged_in if @current_user render json: { logged_in: true, user: @current_user, } else render json: { logged_in: false, } end end def logout reset_session render json: { status: 200, logged_out: true } end registrations_controller.rb Handles the creation of a user account and wraps the json object in a user key. The json string for a user will look like the following. {"user":{"username":"", "email":"", "password":"", "password_confirmation":""}} def create user = User.create!( username: params["user"]["username"], email: params["user"]["email"], password: params["user"]["password"], password_confirmation: params["user"]["password_confirmation"], ) if user session[:user_id] = user.id render json: { status: :created, user: user, } else render json: { status: 500 } end end API Endpoints / Routes routes # Routes for forgotten password post "password/forgot", to: "passwords#forgot" post "password/reset", to: "passwords#reset" # Create a User session # Create a new User via registration resources :sessions, only: [:create] resources :registrations, only: [:create] # Log a user out # Check to make sure a user is logged in # this is used when moving in the app # to check the user remains logged in delete :logout, to: "sessions#logout" get :logged_in, to: "sessions#logged_in" # Static page at root of api to make sure # server is running root to: "static#home" static_controller.rb def home render json: { status: "It's working" } end Render json status at server root for basic checking of function - i.e. is the server running. application_controller.rb ... skip_before_action :verify_authenticity_token ... Makes sure authentication token is verified before anything else. Forgotten password Two posts from Pascales Kurniawan are my sources that helped me build a password reset process including send out emails for a welcome, password reset and a reset confirmation. It starts with creating a passwords controller, there's a lot here but we will go through it all. passwords_controller.rb def forgot # check first if email exists, neede d because # though email is required to register, non-templated app # may allow for email to be removed by user later if params[:username].blank? return render json: { error: "Username not present" } else # find user by email user = User.find_by(username: params[:username]) end # if present then generate a token if user.present? user.generate_password_token! # send out email UserMailer.forgot_password(user).deliver_now render json: { status: "ok" }, status: :ok else render json: { error: "Email address not found. Please check and try again." }, status: :not_found end end def reset token = params[:token].to_s if params[:email].blank? return render json: { error: "Token not present" } else user = User.find_by(reset_password_token: token) end if user.present? && user.password_token_valid? if user.reset_password!(params[:password]) render json: { status: "ok" }, status: :ok # send email UserMailer.password_reset(user).deliver_now else render json: { error: user.errors.full_messages }, status: :unprocessable_entity end else render json: { error: "Link not valid or expired. Try generating a new link." }, status: :not_found end end The forgot method starts by checking if the username params is blank or not. There are no views setup in this api, but this covers a use case of leaving the username field blank but still clicking on a submit button. If the username is found then a password toekn is generated - that method is in the user model - and send an email. Email will be covered later. Finally if the user does not have an email on record then an error is passed forward. This is also an anticapated failsafe, but not one that is needed as is for the template. This is because email is required in the template api, a new user won't be created without one. The reset method functions very similar to the forgot method. This first takes the token that is sent in the email (more on this later) and takes it back as a param. If the email is blank (again, from frontend params, no views are created but the checks are in place) then it passes an error. A check for both a present user and a valid token allows the password to be reset from a given param and then send an email confirming the email has been changed. To send emails from a rails backend you start by creating a mailer like a controller. rails g mailer UserMailer In the _api/app/mailers/application_mailer.rb file there is a change to be made. This is one of the places that follow that will not work right out of the box in this api. I tested everything with an email setup I created for myself and removed that information. It's a simple process of creating a new gmail account that can be used. application_mailer.rb default from: "create-a-new-email@gmail.com" layout "mailer" Lets start with what needs to be configured in _api/config/environments/development.rb. Add the below to the file; this configures the development environment for gmail and sending emails. development.rb # Don't care if the mailer can't send. # Comment out the below as it is used again # if email is not needed in your app uncomment this one # config.action_mailer.raise_delivery_errors = false config.action_mailer.perform_caching = false # configuration for sending out welcome email from Gmail config.action_mailer.delivery_method = :sendmail # config.action_mailer.delivery_method = :test config.action_mailer.default_url_options = { :host => "" } config.action_mailer.perform_deliveries = true config.action_mailer.raise_delivery_errors = true config.action_mailer.default_options = { from: "create-a-new-email@gmail.com" } config.action_mailer.delivery_method = :smtp config.action_mailer.smtp_settings = { address: "smtp.gmail.com", port: 587, domain: "gmail.com", user_name: "create-a-new-email@gmail.com", password: "password", authentication: "plain", enable_starttls_auto: true, } These are simple html.erb files and text.erb files. They are all in the user_mailer folder within the views folder of the rails app. There is a view that corresponds with each method in the _api/app/mailers/user_mailer.rb file. Those methods create the email header; the views files create the email body. <h1>Forgot your password</h1> <p> Don't worry about it, this happens to us all from time to time.<br /> Your username on record is: <%= @user.username %><br /> Your email on record is: <%= @user.email%> </p> <p> Follow this link to create a new password <%= link_to "Reset Your Password", (''+@user.reset_password_token) %> </p> <p> This link will only work once and is only valid for the next 4 hours; so get to it. <br /> Cheers! </p> This specific view is important because it shows how to embed the reset token into the link back to server so the user never has to see or interact with the token. Deployment The only steps needed to deploy this template api are as follows: - fork the repo in GitHub - clone it to your local system - allow the template to be the root of your backend, you can rename it as needed - run bundlefrom api root That is it. The api is set and ready to go. You can start up the server and test the functionality after you update smtp settings to an email you want to test with. Testing Sessions Testing In Terminal, - Start rails server (per template config it will run on port 3001) - New Terminal Tab type the following: curl --header "Content-Type: application/json" \ --request POST \ --data '{"user":{"username":"ronkilav", "email":"ronkilav@email.com","password":"asdfasdf"}}' \ It is just easier to type this in Terminal than to try to test api with an application like Postman or Insomnia. The backslash is just to allow multiple lines in Terminal. Server response if configuration is correct: {"status":"created", "logged_in":true, "user":{"id":1,"username":"ronkilav","email":"ronkilav@email.com", "password_digest":"$2a$12$Fj6ZBybAM15mNdQYeuSzceiKXwH5Knl0VTNmfuU9BxQzyY9yBnncK", "created_at":"2020-05-19T20:45:58.288Z", "updated_at":"2020-05-19T20:45:58.288Z"}} [16:06:22] Output is actually on a single line, I placed it on multiple lines for readablity. Forgotten Password Testing To test the process for reseting a password (and the emailing functionality) use a program like insomnia or postman; or modify the above curl command that checks if there is a user stored in sessions. For password reset test I used insomnia and will include screenshots of the three POST requests needed for full testing. To test the full process start by creating a new user. This can be done in a rails console but doing so will bypass the registrations_controller.rb and therefore the email process. So in insomnia create a POST request with all the needed new user information. As long as you setup the email services first before these tests then you will get an email after registering a new user, requesting a new password and after setting a new password. You will need rails console to get the reset token to add to the reset post request, but with a frontend or a set of views built out it will pass that token along as needed. Links and resources italic bullet points to be made after forking this template Software Engineer Job Searching in a Pandemic (3 Part Series) Posted on Mar 19 by: James Shipman Software Engineer, all around delightful nerd and gamer. Discussion
https://practicaldev-herokuapp-com.global.ssl.fastly.net/jbshipman/post-graduation-week-3-4-1boa
CC-MAIN-2020-29
refinedweb
2,032
54.93
This page explains how to create a continuous integration and delivery (CI/CD) pipeline on Google Cloud Platform it to use a Pull Request for this purpose. While we recommend Spinnaker to the teams who want to implement advanced deployment patterns (blue/green, canary analysis, multi-cloud, etc.), its feature set may not be needed for a successful CI/CD strategy for smaller organizations and projects. In this tutorial, you learn how to create a CI/CD pipeline fit for applications hosted on GKE with simple tooling. For simplicity, this tutorial uses a single environment —production— in the env repository, but you can extend it to deploy to multiple environments if needed. Costs Before you begin Select or create a GCP project. GO TO THE MANAGE RESOURCES PAGE Enable billing for your project. Open Cloud Shell to execute the commands listed in this tutorial. If the gcloud config get-value projectcommand does not return the ID of the project you just]" When you finish this tutorial, you can avoid continued billing by deleting the resources you created. See Cleaning up for more detail. Create just cloned contains a simple "Hello World" application. from flask import Flask app = Flask('hello-cloudbuild') @app.route('/') def hello(): return "Hello World!\n" if __name__ == '__main__': app.run(host = '0.0.0.0', port = 8080) Create a container image with Cloud Build The code you cloned already indeed available in Container Registry. Create the continuous integration pipeline In this section, you configure Cloud Build to automatically run a small unit test, build the container image, and then push it to Container Registry. Pushing a new commit to Cloud Source Repositories triggers automatically this pipeline. The cloudbuild.yaml file already Triggers page of Cloud Build. Click Create trigger. Select "Cloud Source Repositories" as source and click Continue. Select the hello-cloudbuild-app repository and click Continue. In the "Triggers settings" screen, enter the following parameters: - Name: hello-cloudbuild - Branch (regex): master - Build configuration: cloudbuild.yaml Click Create. You should see a build running or having recently finished. You can click on the build to follow its execution and examine its logs. Create. Grant Cloud Build access to GKE To deploy the application in your Kubernetes cluster, Cloud Build needs the Container Developer IAM Role. In Cloud Shell, execute: PROJECT_NUMBER="$(gcloud projects describe ${PROJECT_ID} --format='get(projectNumber)')" gcloud projects add-iam-policy-binding ${PROJECT_NUMBER} \ --member=serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \ --role=roles/container.developer Initialize the hello-cloudbuild-env repository You need to initialize the hello-cloudbuild-env repository with two branches (production and candidate) and a Cloud Build configuration file describing the deployment process. In Cloud Shell, clone the hello-cloudbuild-env repository and create the production branch. It is still empty. Create the trigger for the continuous delivery pipeline In this section, you configure Cloud Build to be triggered by a push to the candidate branch of the hello-cloudbuild-env repository. Open the Triggers page of Cloud Build. Click Add trigger. Select "Cloud Source Repositories" as source and click Continue. Select the hello-cloudbuild-env repository and click Continue. In the "Triggers settings" screen, enter the following parameters: - Name: hello-cloudbuild-deploy - Branch (regex): candidate - Build configuration: cloudbuild.yaml Click Create trigger. Modify the continuous integration pipeline to trigger the continuous delivery pipeline In this section, you add some steps to the continuous integration pipeline that will generate a new version of the Kubernetes manifest and push it to the hello-cloudbuild-env repository to trigger the continuous delivery pipeline. Copy the extended version of the cloudbuild.yamlfile for the app repository. cd ~/hello-cloudbuild-app cp cloudbuild-trigger-cd.yaml cloudbuild.yaml The cloudbuild-trigger-cd.yamlis an extended version of the cloudbuild.yamlfile. It adds the steps below:. You should see a build running or having recently finished for the hello-cloudbuild-app repository. You can click on the build to follow its execution and examine its logs. The last step of this pipeline pushes the new manifest to the hello-cloudbuild-env repository, which triggers the continuous delivery pipeline. Examine the continuous delivery build. You should see a build running or having recently finished for the hello-cloudbuild-env repository. You can click on the build to follow its execution and examine its logs. Test the complete pipeline The complete CI/CD pipeline is now configured. In this section, you test it from end to end. Go to the GKE Services page. GO TO GOOGLE KUBERNETES ENGINE SERVICES There should be a single service called hello-cloudbuild in the list. It has been created by the continuous delivery build that just ran. Click on the endpoint for the hello-cloudbuild service. You should see "Hello World!". If there is no endpoint, or if you see a load balancer error, you may. You should now see "Hello Cloud Build!". Test. You should now see "Hello World!" again. the three vertical dots to the right of the trigger and
https://cloud.google.com/kubernetes-engine/docs/tutorials/gitops-cloud-build
CC-MAIN-2019-18
refinedweb
826
50.43
I recently came across something like the following Python code in the wild: def check_flag(flag): if flag == "BEGIN": ... elif flag == "MIDDLE": ... elif flag == "END": ... else: raise ValueError('Invalid flag') There are a few immediate ways to improve this. First, we might consider using constants BEGIN, MIDDLE, and END instead of hardcoded strings. While that doesn’t save us much in this part of the code, it does help us avoid some kinds of bugs, like typos in these strings when we’re passing them around elsewhere. We’re still allowing any valid string to be a flag though. We could solve this with a custom class or even something like namedtuple. Let’s go even simpler, and use an Enum. While it seems like just another way to organize our three constants, this actually gives us a pretty nice benefit: a new type. Using mypy, we can now check that the flag we’re being passed is a valid StoryPart, and do away with the string handling. We’re still condemned to having to remember all possible values of StoryPart if we want to do something different on each of them, but it’s a good start. Of course, some languages handle these kinds of tagged unions (or “sum types”) natively. Here’s some sample Haskell, where we pattern match over a sum type: data StoryPart = Begin | Middle | End checkFlag :: StoryPart -> ... checkFlag Begin = ... checkFlag Middle = ... checkFlag End = ... If we turn on the -fwarn-incomplete-patterns warning, we’ll even get messages that we missed one of the possible cases. Elm goes one step further and doesn’t let your code compile until you’ve handled all cases explicitly. While this can be a bit annoying, I’d usually much rather be somewhat annoyed at compile time than be even more annoyed chasing down bugs at runtime.
https://vitez.me/sum-types-are-better
CC-MAIN-2020-16
refinedweb
307
71.04
#include <deal.II/fe/mapping_q1.h> Implementation of a \(d\)-linear mapping from the reference cell to a general quadrilateral/hexahedron. The mapping implemented by this class maps the reference (unit) cell to a general grid cell with straight lines in \(d\) dimensions. (Note, however, that in 3D the faces of a general, trilinearly mapped cell may be curved, even if the edges are not). This is the standard mapping used for polyhedral domains. It is also the mapping used throughout deal.II for many functions that come in two variants, one that allows to pass a mapping argument explicitly and one that simply falls back to the MappingQ1 class declared here. (Or, in fact, to an object of kind MappingQGeneric(1), which implements exactly the functionality of this class.) The shape functions for this mapping are the same as for the finite element FE_Q of polynomial degree 1. Therefore, coupling these two yields an isoparametric element. Definition at line 56 of file mapping_q1.h. Default constructor. Definition at line 44 of file mapping_q1.cc. Return a pointer to a copy of the present object. The caller of this copy then assumes ownership of it. The function is declared abstract virtual in this base class, and derived classes will have to implement it. This function is mainly used by the hp::MappingCollection class. Reimplemented from MappingQGeneric< dim, spacedim >. Definition at line 53 of file mapping_q1.cc.
http://www.dealii.org/developer/doxygen/deal.II/classMappingQ1.html
CC-MAIN-2016-30
refinedweb
236
58.08
IETF Drops RFC For Cosmetic Carbon Copy 63 paulproteus writes "Say you have an email where you want to send an extra copy to someone without telling everyone. There's always been a field for that: BCC, or Blind Carbon Copy. But how often have you wanted to do the opposite: make everyone else think you sent a copy to somebody without actually having done so? Enter the new IETF-NG RFC: Cosmetic Carbon Copy, or CCC. Now you can conveniently email all of your friends (with a convenient exception or two...) with ease!" This would actually be useful. (Score:2, Insightful) Although it is an April Fool's...this would actually be useful. I can see a couple times where CCCing a Boss or someone else would get things done quicker. I hate being a "tatle" but this would work to scare some people into action. Re: (Score:2) I'm curious as to how an actual implementation would be supposed to handle a Reply-All. This is discussed in the "Security considerations" (Score:2) If you're willing to break the CCC standard, you could mangle the "." in an email address. There are plenty of Unicode characters that look like a dot that aren't the real dot. That way, the reply-all to the CCC'd recipient would bounce. Otherwise, well, um, see the Security considerations section. Re: (Score:1, Redundant) actually, the 'CCC standard' allows obfuscation of Ccc addresses when they get merged with CC. Option is left up to the implementation as obviously both have drawbacks. No obfuscation means reply all would show the CCCed person they were left out, obfuscation would allow the CCed person to know who was CCCed if they looked closely or hit reply all. Re: (Score:2) An easier way to fix this problem is to simply use the existing system, but with misspellings: to: you@dweeb.com cc: tom@dweeb.com, dick@dweeb.com, harry@dweeb.com, thebigbosss@dweeb.com, cutesecretary@dweeb.com Observant people will notice the mis-spelling, but most people won't and they'll think the Boss is watching, so they'd better get off their twinkie-fat ass and do some work. Re: (Score:1) Re: (Score:1, Funny) Since when does a boss read e-mails anyway? In this case, CC and CCC would function the same. Re: (Score:3, Insightful) If you just want to trick people, I've found adding a: CCd To: (addresses here) in the message body accomplishes it well enough, it only fails if they actually think about it, which generally only happens if they already suspect you of being full of shit :) Most people however (at least outside the geek world) are too oblivious to realize its just text in the message. Re: (Score:3, Insightful) I've been amazed lately by the number of regular e-mail users who take no notice of any headers at all. Anything in the Subject: line might as well not be there, and I keep getting replies from people to whom I've Cc:ed something saying, "Who did you send this to originally?" There are are quite a few people out there to whom nothing but the message body exists. Re: (Score:2, Funny) Actually, it's the "CC" field. (Score:3, Funny) Re:This would actually be useful. (Score:4, Insightful) I have actually wanted this feature at times and wondered why there was no way in the MUA UI to do it. Not for keeping people out of the loop, but for resending emails that get bounced (say, misspelled email or delivery failure) or to recipients I forgot the first time. Lets say that you are sending out a move invite to a number of friends. Just after you send it you notice that you forgot Alice. Now you need to send the invite just to her, but you prefer the email to look as the original so that she can see who else is invited. This is a common occurrence! And it would be very convenient if you could just bring up the email again, move everyone from To/Cc into Ccc, and then put her as the only CC. Please explain to me again why this is presented as an Aprils fool, rather than a genuine feature? Re: (Score:2) Please explain to me again why this is presented as an Aprils fool, rather than a genuine feature? Because functionally supporting this header has the exact same result as not functionally supporting the header. An RFC just puts it in the official header namespace, otherwise you could have always used X-Ccc: Personally, I'd like support for multiple Dcc: headers: Disjoint Carbon Copies. I want to send the same message to multiple groups of addresses where I want those in one set to know they were all copied but want to hide that it was sent to the other group, and vice versa. Those in the To: header would) Sounds more like you'd want to recall the original, and re-send the revision. With a CCC, the original group still doesn't know Alice is invited. With a recalled E-Mail, nobody is the wiser- except those who are quick to read the E-Mail, or don't have a "recall this E-Mail" aware client. Re: (Score:2) I am absolutely opposed to a feature that would allow people to alter email I have already received. I often use my email as an historic record of events, and would hate if I could not be sure that it is immutable. Also, I'm pretty sure there is no standard for doing this across mail systems. To roll it out would require a major revamp of email as we know it today, since it requires some careful form of cross-realm authentication. Re: (Score:2) You could always use "Forward", which includes the original message along with the list of original recipients. Re: (Score:2) Which usually requires you to add an awkward "Hi, I forgot to send this to you" to not make the inline headers too confusing. Sure, this is what I do today, but it is less convenient than Ccc: would be, and exposes my mistake, which I'd rather avoid if I could. Re: (Score:2) when I was allowed to use thunderbird for email, it allowed me to open my sent messages, choose forward, then paste all the original recipients into the "Reply To:" box. Thus Alice can see the original email, and contacts, and dates, and no one got a second set of emails. If she chooses to "reply" then she will email the whole list back by default. more honest... Although I don't know the difference in "Reply to" and Follow up" to. Also doesn't matter I am stuck with Lotus notes now at work anyway. (Or is " Re: (Score:2) Select forward, when asked for type of forward choose 'Redirect'. This works Re: (Score:2) Thanks for the tip. I didn't know about this (needs a plugin for thunderbird) But I couldn't find a standard for redirected email, is there a rfc for this? Essentially it is like Ccc: + setting a custom from address. To me this really proves how this Aprils fools joke really isn't a joke. Rather, it would be nice with a standard for headers that covers this functionality.) It'll be interesting to see if management abandons the policy of plunking Flash cookies [wikipedia.org] on our computers when a new day comes. Every click on a discussion sets another one. Let's hope today was not just a convenient way to sneak them in from now on. Please, Slashdot, drop the Flash cookies when the joke is over. Useful (Score:2) Enough April Fool's Already. (Score:5, Interesting) Re: (Score:2) With most sites cought in self-imposed circlejerk of 1 IV stories, I can imagine that...no, there's actually not that much to report :/ (though that wouldn't excuse Slashdot, it being usually late with anything...except for the fraking April Fool's, apparently...sigh)) Do you seriously mean to tell me that there are no important tech stories taking place today? Do you seriously mean to tell me that there could be any actually important stories taking place today? Perhaps you aren't familiar with the concept of April Fool's and more importantly, how people AVOID making announcements on this day to avoid confusion. Re: (Score:2) iirc, gmail was initially announced around this time of year and people weren't exactly sure if it was a joke or not... but that only helped with the initial publicity. but personally, i can't wait until the big joke on april 1st is to do no joke at all. Re: (Score:2) I laughed. But I am not entirely full of hate. There's fortunately a religious holiday of some sort these days, so outside gang-rapes and train derails, there aren't any serious news worth reporting on. Look back a little, and you'll agree this is better than the year of OMGPONIES ;) Re:Enough April Fool's Already. (Score:4, Funny) Just wait ... You're going to see a post from an unexpected new slashdot mod ... his/her post will be something like: I couldn't take it anymore. I've killed them all. Slashdot will now be closed because I had to save the world from them. I'm going to turn myself in now, you're all welcome. To which I suggest we all respond with donations to their legal fund. Re: (Score:1) Jokes are funny. This is just painfully stupid. One day out of the year for funny stories would be awesome. One day out of the year for boring lies... eh. I could do without. Re: (Score:2) Do you seriously mean to tell me that there are no important tech stories taking place today? No, I think they mean to tell you that none of the tech articles were ever all that important, and thus could wait a day. Seriously, if you're life hangs on Slashdot, well, that's just sad. 10CC (Score:2) Re: (Score:2) Re: (Score:2) Do you understand what it does? It DOESN'T CC the person but makes it look like you did. This could actually be more useful than a BCC. Instead of BCCing my boss on an email I'm sending to a Co-worker telling him to get his work done, possibly upsetting my boss with bothering him with trivial stuff to have him review the other person, I could CCC my boss, and so my boss is left ouf of it entirely but the co-worker believes I've told him to get to work and the boss knows. Anti-CC ) Greetings and Salutations.... I fully support this concept, and, would go on to require that it be the DEFAULT for all mail messages that are addressed to more than one or two people at a time. Since a vast majority of the multiple-receiver emails I get are mindless twaddle, this would go a long way towards cutting the excess loads on the InterTubes. Regards Dave Mundt Thank God for Chrome (Score:1) Re: (Score:1) Furthermore you can keep touching yourself without anyone else looking at you. Definitely not bad. not required already possible with RFC822 (Score:1, Interesting) Re: (Score:1) You can already do this.. (Score:3, Interesting) Envelope headers are different than actual recipients. Mail clients don't implement it, but there's nothing in the SMTP protocol preventing you from putting a Cc: header in your message with a list of names/email addresses, but not actually delivering the messages. It's just a matter of a mail client offering this functionality. For now, you'll have to telnet into port 25 ;)
http://tech.slashdot.org/story/10/04/01/156206/IETF-Drops-RFC-For-Cosmetic-Carbon-Copy
CC-MAIN-2015-14
refinedweb
1,997
71.34
- 2.22 Handling class example - 2.23 Handling typed th e m_str deal with using skip the context until it meet ;or } using namespace std; deal with namespace skip context between < and > if the next token is < deal with class handle class if the option handleclass MyClass MyClass ::fun() {} 6) ; Handling class example The main routing of handling class was in ParserThread::HandleClass function If the parserThread meet these statement class AAA{ int m_a; int m_b; } ; It will surely add a Token of "AAA", which has a type Sometimes, you can see these code: typedef class AAA{ int m_a; int m_b; } BBB,CCC; If the parser meet TokensTree. Wait a minute, how can we deal with "BBB" and "CCC", this is not the same case as previous code, we can't regard "BBB","CCC" as variables. In this case, another function "ReadClsName()" will be called. For simplicity of TokensTree, we just regard "BBB" and "CCC" as derived class of "AAA". How does the parser.
http://wiki.codeblocks.org/index.php?title=Talk:Code_Completion_Design&oldid=6018
CC-MAIN-2020-16
refinedweb
164
61.7
The: This appendix covers: Methods of built-in types such as lists, dictionaries, and files Built-in functions The sys module The os and os.path modules The string module The. Any object can be tested for truth value, to use in an if or while condition or as operand of the boolean operations below. The following values are considered false: None Zero of any numeric type, e.g., 0, 0L, 0.0 Any empty sequence, e.g., '', (), [] Any empty mapping, e.g., {} Instances of user-defined classes, if the class defines a __nonzero__() or __len__()- method, when that method returns zero. All other values are considered true, so objects of many types are always true. Operations and built-in functions that have a boolean result always return 0 for false and 1 for true, unless otherwise stated. Important exceptions are the boolean operations or and and, which always return one of their operands. The following table depicts the boolean operations, ordered by ascending priority. 1. These evaluate their second argument only if needed for their outcome. 2. ''Not" has a lower priority than non-boolean operators, e.g., not a == b is interpreted as not(a == b), and a == not b is a syntax error. Comparison operations are supported by all objects. They have the same priority (which is higher than that of the boolean operations). Comparisons can be chained arbitrarily. e.g., x < y <=z is equivalent to x < y and y <=z, except that y is evaluated only once (but in both cases z is not evaluated at all when x < y is found to be false). The following table summarizes the comparison operations. 1. <> and != are alternate spellings for the same operator. (We couldn't choose between ABC and C!) Objects of different types, except different numeric types,. Implementation note: objects of different types except numbers are ordered by their type names; objects of the same types that don't support proper comparison are ordered by their address. Two more operations with the same syntactic priority, in and not in , are supported only by sequence types, see the later section "Sequence Types". There. Integer literals with an L or l suffix yield long integers (L is preferred because 11 looks too much like eleven!). Numeric literals containing a decimal point or an exponent sign yield floating-point numbers. Appending j or J to a numeric literal yields a complex number. number of mixed type use the same rule.* The functions int(), long(), float(), and complex() can force numbers to a specific type. All numeric types support the operations in the following table, sorted by ascending priority (operations in the same box have the same priority; all numeric operations have a higher priority than comparison operations). 1. For (plain or long) integer division, the result is an integer. The result is always rounded towards minus infinity: 1/2 is 0, (-1)/2 is -1, 1/(-2) is -1, and (-1)/(-2) is 0. 2. Conversion from floating-point to (long or plain) integer may round or truncate as in C; see functions floor() and ceil() in the math module for well-defined conversions. 3. See the section "Built-in Functions" for an exact definition. Plain and long integer types support additional operations that make sense only for bit strings. Negative numbers are treated as their 2's complement value (for long integers, this assumes a sufficiently large number of bits so that no overflow occurs during the operation). The priorities of the binary bitwise operations are all lower than the numeric operations and higher than the comparisons; the unary operation ~ has the same priority as the other unary numeric operations (+ and -). The following table lists the bit-string operations sorted in ascending priority (operations in the same box have the same priority). 1. Negative shift counts are illegal and cause a ValueError to be raised. 2. A left shift by n bits is equivalent to multiplication by pow(2, n) without over-flow check. 3. A right shift by n bits is equivalent to division by pow(2, n) without overflow check. There are three sequence types: strings, lists, and tuples. String literals are written in single or double quotes: 'xyzzy', "frobozz". See Chapter 2 of the Python reference manual for more about string liter,) Sequence types support the following operations. The in and not in operations have the same priorities as the comparison operations. The + and * operations have the same priority as the corresponding numeric operations.* The following table lists the sequence operations sorted in ascending priority (operations in the same box have the same priority). s and t are sequences of the same type; n, i, and j are integers. 1. If i or j is negative, the index is relative to the end of the string; i.e., len(s) + i or len(s) + j is substituted. But note that -0 is still 0. 2.. 3. Values of n less than 0 are treated as 0 (which yields an empty sequence of the same type as s). String nontuple aren't supported. Since Python strings have an explicit length, %s conversions don't assume that \0 is the end of the string. For safety reasons, floating-point precisions are clipped to 50; %f conversions for numbers whose absolute value is over le25 are replaced by %g conversions.' All other errors raise exceptions. If the right argument is a dictionary (or any kind of mapping), the formats in the string must have a parenthesized key into that dictionary inserted immediately after the % character, and each format then formats the corresponding entry from the mapping. For example: >>> count = 2 >>>>> print'%(language)s has %(count)03d quote types.' % vars() Python has 002 quote types. >>> In this case no * specifiers may occur in a format (since they require a sequential parameter list). Additional string operations are defined in standard module string and in built-in module re. ** These numbers are fairly arbitrary. They are intended to avoid printing endless strings of meaningless digits without hampering correct use and without having to know the exact precision of floating-point values on a particular machine. List objects support additional operations that allow in-place modification of the object. These operations would be supported by other mutable sequence types (when added to the language) as well. Strings and tuples are immutable sequence types, and such objects can't be modified once created. The operations in the following table are defined on mutable sequence types (where x is an arbitrary object). 1. This raises an exception when x is not found in s. 2. The sort() method takes an optional argument specifying a comparison function of two arguments (list items) that should return -1, 0, or 1 depending on whether the first argument is considered smaller than, equal to, or larger than the second argument. Note that this slows the sorting process considerably; e.g., to sort a list in reverse order, it's much faster to use calls to the methods sort() and reverse() than to use the built-in function sort() with a comparison function that reverses the ordering of the elements. 3. The sort() and reverse() methods modify the list in place for economy of space when sorting or reversing a large list. They don't return the sorted or reversed list to remind you of this side effect. 4. The pop() method is experimental and not supported by other mutable sequence types than lists. The optional argument i defaults to -1, so that by default, the last item is removed and returned. 5. This raises an exception when x is not a list object. The extend() method is experimental and not supported by mutable types other than lists. A operations in the following table are defined on mappings (where a is a mapping, k is a key and x is an arbitrary object). 1. This raises an exception if k is not in the map. 2. Keys and values are listed in random order. 3. b must be the same type as a 4. This never raises an exception if k is not in the map, instead it returns f. f is optional; when not provided and k is not in the map, None is returned. The interpreter supports several other kinds of objects. Most of these support only one or two operations. The only special operation on a module is attribute access: m.name, where m is a module and name accesses a name defined in m's symbol table. The import statement is not, strictly speaking, an operation on a module object; import foo doesn't require a module object named foo to exist, rather it requires an (external) definition for a module named foo somewhere. A special member of every module is __dict__. This is the dictionary containing the module's symbol table. Modifying this dictionary changes the module's symbol table, but direct assignment to the __dict__ attribute isn the section "Code objects"), and f.func_globals is the dictionary used as the function's global namespace type defines names for all standard built-in types. Types are written like this: <type int >. This object is returned by functions that don't explicitly return a value. It supports no special operations. There is exactly one null object, named None (a built-in name); it's written as None. This object is used by extended slice notation (see the Python reference manual). It supports no special operations. There is one ellipsis object, named Ellipsis (a built-in name); it's written as Ellipsis. File objects are implemented using C's stdio package and can be created with the built-in function open() (described in the section isn't defined for some reason, such as seek() on a tty device or writing a file opened for reading. Files have the following methods: Closes the file. A closed file can't be read or written. flush() Flushes the internal buffer; like stdio's fflush(). isatty() Returns 1 if the file is connected to a tty (-like) device, else 0. fileno() Returns the integer ''file descriptor'' that's used by the underlying implementation to request I/O operations from the operating system. This can be useful for other, lower-level interfaces that use file descriptors, e.g., module fcntl or os.read() and friends. read([size]) Reads at most size bytes from the file (less if the read hits EOF or no more data is immediately available on a pipe, tty, or similar device)..) readline([size]) Reads one entire line from the file. A trailing newline character is kept in the string* (but may be absent when a file ends with an incomplete line). If the size argument is present and nonnegative, it's a maximum byte count (including the trailing newline), and an incomplete line may be returned. An empty string is returned when EOF is hit immediately. Unlike stdio's fgets(), the returned string contains null characters (\0) if they occurred in the input. readlines([sizehint]) Reads until EOF using readline() and return a list containing the lines thus read. If the optional sizehint argument is present, instead of reading up to EOF, whole lines totaling approximately sizehint bytes (possibly after rounding up to an internal buffer size) are read. seek(offset[, whence]) Sets the file's current position; like stdio's fseek(). The whence argument is optional and defaults to 0 (absolute file positioning); other values are 1 (seek relative to the current position) and 2 (seek relative to the file's end). There's no return value. tell() Returns the file's current position; like stdio's ftell(). truncate([size]) Truncates the file's size. If the optional size argument is present, the file is truncated to (at most) that size. The size defaults to the current position. Availability of this function depends on the operating-system version (e.g., not all Unix versions support this operation). write(str) Writes a string to the file. There is no return value. Due to buffering, the string may not actually show up in the file until the flush() or close() method is called. writelines(list) Writes a list of strings to the file. There is no return value. (The name is intended to match readlines(); writelines() doesn't add line separators.) File objects also offer the following attributes: closed Boolean indicating the current state of the file object. This is a read-only attribute; the close() method changes the value. mode The I/O mode for the file. If the file is created using the open() built-in function, this is the value of the mode parameter. This is a read-only attribute. name If the file object was created using open(), this is the name of the file. Otherwise, it's some string that indicates the source of the file object, of the form < >. This is a read-only attribute. softspace Boolean that indicates whether a space character needs to be printed before another value when using the print statement. Classes that are trying to simulate a file object should also have a writable softspace attribute, which should be initialized to zero. This is automatic for classes implemented in Python; types implemented in C have to provide a writable softspace attribute. See the Python reference manual for this information. It describes code objects, stack frame objects, traceback objects, and slice objects. The implementation adds a few special read-only attributes to several object types, where they are relevant: __dict__ A dictionary of some sort that stores an object's (writable) attributes __methods__ List of the methods of many built-in object types, e.g., [].__methods__ yields ['append', 'count', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort'] Similar to __methods__, but lists data attributes __class__ The class to which a class instance belongs __bases__ The tuple of base classes of a class object Exceptions can be class or string objects. While most exceptions have traditionally been string objects, in Python 1.5 all standard exceptions have been converted to class objects, and users are encouraged to do the same. The source code for those exceptions is present in the standard library-module exceptions; this module never needs to be imported explicitly. For backward compatibility, when Python is invoked with the -X option, most standard exceptions are strings.* This option can run code that breaks because of the different semantics of class-based exceptions. The -X option will become obsolete in future Python versions, so the recommended solution is to fix the code. Two distinct string objects with the same value are considered different exceptions. This forces programmers to use exception names rather than their string value when specifying exception handlers. The string value of all built-in exceptions is their name, but this isn aren't related via subclassing are never equivalent, even if they have the same name. The built-in exceptions in the following list code can test an exception handler or report an error condition "just like" the situation in which the interpreter raises the same exception; but beware that there is nothing to prevent user code from raising an inappropriate error. The following exceptions are used only as base classes for other exceptions. When string-based standard exceptions are used, they are tuples containing the directly derived classes: Exception The root class for exceptions. All built-in exceptions are derived from this class. All user-defined exceptions should also be derived from this class, but this isn. StandardError The base class for all built-in exception except SystemExit. StandardError itself is derived from the root class Exception. ArithmeticError The base class for those built-in exceptions that are raised for various arithmetic error: OverflowError, ZeroDivisionError, FloatingPointError. LookupError The base class for the exceptions that are raised when a key or index used on a mapping or sequence is invalid: IndexError, KeyError. EnvironmentError The base class for exceptions that can occur outside the Python system: IOError, OSError. When exceptions of this type are created with a two-tuple, the first item is available on the instance's errno attribute (it's assumed to be an error number), and the second item is available on the strerror attribute (it's usually the associated error message). The tuple itself is also available on the args attribute. New in Version 1.5.2. When an EnvironmentError exception is instantiated with a three-tuple, the first two items are available as above, while the third item is available on the filename attribute. However, for backward-compatibility, the args attribute contains only a two-tuple of the first two constructor arguments. The filename attribute is None when this exception is created with other than three arguments. The errno and strerror attributes are also None if the instance was created with other than two or three arguments. In this last case, args contains the verbatim constructor arguments as a tuple. The following exceptions are those actually raised. They are class objects, except when the -X option is used to revert back to string-based standard exceptions: AssertionError Raised when an assert statement fails. AttributeError Raised when an attribute reference or assignment fails. (When an object doesn't support attribute references or attribute assignments at all, TypeError is raised.) EOFError Raised when one of the built-in functions (input() or raw_input()) hits an end-of-file condition (EOF) without reading any data. (Note that the read() and readline() methods of file objects return an empty string when they hit EOF.) FloatingPointError Raised when a floating-point operation fails. This exception is always defined, but can be raised only when Python is configured with the -with-fpectl option or the WANT_SIGFPE_HANDLER symbol is defined in the config.h file. IOError Raised when an I/O operation (such as a print statement, the built-in open() function, or a method of a file object) fails for an I/O-related reason, e.g., file not found or disk full. This class is derived from EnvironmentError. See its previous discussion for more information on exception-instance attributes. ImportError Raised when an import statement fails to find the module definition or when a from import fails to find a name that's to be imported. IndexError Raised when a sequence subscript is out of range. (Slice indexes are silently truncated to fall in the allowed range; if an index isn't a plain integer, TypeError is raised.) KeyError Raised when a mapping (dictionary) key is not found in the set of existing keys. KeyboardInterrupt Raised when the user hits the interrupt key (normally Ctrl-C or Del). During execution, a check for interrupts is made regularly. Interrupts typed when a built-in function input() or raw_input()) is waiting for input also raise this exception. MemoryError completely recover from this situation; it nevertheless raises an exception so that a stack traceback can be printed, in case a runaway program was the cause. NameError Raised when a local or global name is not found. Applies only to unqualified names. The associated value is the name that can't be found. NotImplementedError Derived from RuntimeError. In user-defined base classes, abstract methods should raise this exception when they require derived classes to override the method. New in Version 1.5.2. OSError Derived from EnvironmentError and is used primarily as the os module's os.error exception. See EnvironmentError in the first exception list for a description of the possible associated values. New in Version 1.5.2. OverflowError Raised when the result of an arithmetic operation is too large to be represented. This can't occur for long integers (which would rather raise MemoryError than give up). Because of the lack of standardization of floating-point exception handling in C, most floating-point operations also aren't checked. For plain integers, all operations that can overflow are checked except left shift, where typical applications prefer to drop bits than raise an exception. RuntimeError Raised when an error is detected that doesn't fall in any of the other categories. The associated value is a string indicating what precisely went wrong. (This exception is mostly a relic from a previous version of the interpreter; it isn't used much any more.) SyntaxError Raised when the parser encounters a syntax error. This may occur in an import statement, in an exec statement, in a call to the built-in function eval() or input(), or when reading the initial script or standard input (also interactively). When class exceptions are used, instances of this class have attributes filename, lineno, offset, and text for easier access to the details; for string exceptions, the associated value is usually a tuple of the form (message, (filename, lineno, offset, text). For class exceptions, str() returns only the message. SystemError Raised when the interpreter finds an internal error, but the situation doesn't look so serious to cause it to abandon all hope. The associated value is a string indicating what went wrong (in low-level terms). You should report this to the author or maintainer of your Python interpreter. Be sure to report the version string of the Python interpreter (sys.version, also printed at the start of an interactive Python session), the exact error message (the exception's associated value), and, if possible, the source of the program that triggered the error. SystemExit Raised by the sys.exit() function. When it's not handled, the Python interpreter exits; no stack traceback is printed. If the associated value is a plain integer, it specifies the system exit status (passed to C's exit() function); if it's None, the exit status is zero; if it has another type (such as a string), the object's value is printed, and the exit status is one. When class exceptions are used, the instance has an attribute code that is set to the proposed exit status or error message (defaulting to None). Also, this exception derives directly from Exception and not StandardError, since it isn's absolutely necessary to exit immediately (e.g., after a fork() in the child process). TypeError Raised when a built-in operation or function is applied to an object of inappropriate type. The associated value is a string giving details about the type mismatch. ValueError Raised when a built-in operation or function receives an argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError ZeroDivisionError Raised when the second argument of a division or modulo operation is zero. The associated value is a string indicating the type of the operands and the operation The Python interpreter has a number of built-in functions that are always available. They are listed here in alphabetical order: __import__(name[, globals[, locals[, fromlist]]]) This function is invoked by the import statement. It exists so that you can replace it with another function that has a compatible interface, in order to change the semantics of the import statement. For examples of why and how you'd do this, see the standard library modules ihooks and rexec. See also the built-in module imp that defines some useful operations from which you can build your own __import__() function. For example, the statement import spam results in the call __import__ ( spam , globals(), locals(), []); the statement from spam.ham import eggs results in __import__ ( spam.ham , globals () , locals(), [ eggs ]). Even though locals() and [ eggs ] are passed in as arguments, the __import__() function doesn't set the local variable named eggs; this is done by subsequent code that's generated for the import statement. (In fact, the standard implementation doesn't use its locals argument at all, and uses its globals only to determine the package context of the import statement.) When the name variable is of the form package.module, normally, the top-level package (the name up to the first dot) is returned, not the module named by name. However, when a nonempty abs(x) Returns the absolute value of a number. The argument may be a plain or long integer or a floating-point number. If the argument is a complex number, its magnitude is returned. apply(function, args[, keywords]) The function argument must be a callable object (a user-defined or built-in function or method, or a class object), and the args argument must be a sequence (if it's not a tuple, the sequence is first converted to a tuple). The function is called with args as the argument list; the number of arguments is the length of the tuple. (This is different from just calling func(args), since in that case, there's always exactly one argument.) If the optional keywords argument is present, it must be a dictionary whose keys are strings. It specifies keyword arguments to be added to the end of the argument list. buffer(object[, offset[, size]]) The object argument must be an object that supports the buffer call interface (such as strings, arrays, and buffers). A new buffer object is created that references the object argument; that buffer object is a slice from the beginning of object (or from the specified offset). The slice extends to the end of object (or has a length given by the size argument). callable(object) Returns true if the object argument appears callable, false if not. If it returns true, it's still possible that a call fails, but if it's false, the calling object never succeeds. Note that classes are callable (calling a class returns a new instance); class instances are callable if they have a __call__() method. chr(i) Returns a string of one character whose ASCII code is the integer i, e.g., chr(97) returns the string a. This is the inverse of ord(). The argument must be in the range [0 255], inclusive. cmp(x, y) Compares the two objects x and y and returns an integer according to the outcome. The return value is negative if x < y, zero if x == y, and strictly positive if x > y. coerce(x, y) Returns a tuple consisting of the two numeric arguments converted to a common type, using the same rules used by arithmetic operations. compile(string, filename, kind) Compiles the string into a code object. Code objects can be executed by an exec statement or evaluated by a call to eval(). The filename argument should give the file from which the code was read; pass other than None will print). complex(real[, imag]) Creates a complex number with the value real + imag*j or converts a string or number to a complex number. Each argument may be any numeric type (including complex). If imag is omitted, it defaults to zero, and the function serves as a numeric conversion function like int(), long(), and float(); in this case it also accepts a string argument that should be a valid complex number. delattr(object, name). dir([object]) Without arguments, returns the list of names in the current local symbol table. With an argument, attempts to return a list of valid attribute for that object. This information is gleaned from the object's __dict__, __methods__, and __members__ attributes, if defined. The list is not necessarily complete; e.g., for classes, attributes defined in base classes aren't included, and for class instances, methods aren't included. The resulting list is sorted alphabetically. For example: >>> import sys >>> dir() ['sys'] >>> dir(sys) ['argv', 'exit', 'modules', 'path', 'stderr', 'stdin', 'stdout'] >>> divmod(a, b) Takes two numbers as arguments and returns a pair of numbers consisting of their quotient and remainder when using long division. With mixed operand types, the rules for binary arithmetic operators apply. For plain and long integers, the result is the same as (a / b, a % b). For floating-point numbers, the result is the same as (math.floor(a / b), a % b). eval (expression[, globals[, locals]]) The arguments are a string and two optional dictionaries. The expression argument is parsed and evaluated as a Python expression (technically speaking, a condition list) using the globals and locals dictionaries as global and local namespace. execute arbitrary code objects (e.g., created by compile()). In this case, it passes(). execfile(file[, globals[, locals]]) Similar to the exec statement, but parses a file instead of a string. It's different from the import statement in that it doesn't use the module administration; it reads the file unconditionally and doesn't create a new module.* The arguments are a filename and two optional dictionaries. The file is parsed and evaluated as a sequence of Python statements (similar. filter(function, list) Constructs a list from those elements of list for which function returns true. If list is a string or a tuple, the result also has that type; otherwise it's always a list. If function is None, the identity function is assumed, i.e., all elements of list that are false (zero or empty) are removed. float(x) Converts a string or a number to floating point. If the argument is a string, it must contain a possibly signed decimal or floating-point number, possibly embedded in whitespace; this behaves identically to string.atof(x). Otherwise, the argument may be a plain or long integer or a floating-point number, and a floating-point number with the same value (within Python's floating-point precision) is returned. When passing in a string, values for NaN and Infinity may be returned, depending on the underlying C library. The specific set of strings accepted that cause these values to be returned depends entirely on the C library and is known to vary. getattr(object, name) The arguments are an object and a string. The string must be the name of one of the object's attributes. The result is the value of that attribute. For example, getattr(x, 'foobar ) is equivalent to x.foobar. globals() Returns 1 if the string is the name of one of the object's attributes, 0 if not. (This is implemented by calling getattr(object, name) and seeing whether it raises an exception.) hash (object) Returns the hash value of the object (if it has one). Hash values are integers. They can quickly compare dictionary keys during a dictionary lookup. Numeric values that compare equal have the same hash value (even if they are of different types, e.g., 1 and 1.0). hex(x) Converts an integer number (of any size) to a hexadecimal string. The result is a valid Python expression. This always yields an unsigned literal, e.g., on a 32-bit machine, hex(-1) yields '0xffffffff . When evaluated on a machine with the same word size, this literal is evaluated as -1; at a different word size, it may be a large positive number or raise an OverflowError exception. id(object) Returns the identity of an object. This is an integer that's guaranteed to be unique and constant for this object during its lifetime. Two objects whose lifetimes don't overlap may have the same id() value. (Implementation note: this is the address of the object.) input([prompt]) Equivalent to eval(raw_input(prompt)). intern(string) Enters string in the table of interned strings and returns that hold module, class, or instance attributes have interned keys. Interned strings are immortal (i.e., never get garbage-collected). int(x)) Converts a string or number to a plain integer. If the argument is a string, it must contain a possibly signed decimal number representable as a Python integer, possibly embedded in whitespace; this behaves identically to string. atoi(x). Otherwise, the argument may be a plain or long integer or a floating-point number. Conversion of floating-point numbers to integers is defined by the C semantics; normally the conversion truncates towards zero.* isinstance(object, class) Returns true if the object argument is an instance of the class argument or of a (direct or indirect) subclass thereof. Also returns true if class is a type object and object is an object of that type. If object is not a class instance or an object of the given type, the function always returns false. If class is neither a class object nor a type object, a TypeError exception is raised. issubclass(class1, class2) Returns true if class1 is a subclass (direct or indirect) of class2. A class is considered a subclass of itself. If either argument isn't a class object, a TypeError exception is raised. len(s) Returns the length (the number of items) of an object. The argument may be a sequence (string, tuple, or list) or a mapping (dictionary). list(sequence) Returns a list whose items are the same and in the same order as sequence's items. If sequence is already a list, a copy is made and returned, similar to sequence[:]. For instance, list( abc ) returns [ a , b , c ], and list( (1, 2, 3) ) returns [1, 2, 3]. locals() Returns a dictionary representing the current local symbol table. Warning: the contents of this dictionary should not be modified; changes may not affect the values of local variables used by the interpreter. long(x) Converts a string or number to a long integer. If the argument is a string, it must contain a possibly signed decimal number of arbitrary size, possibly embedded in whitespace; this behaves identically to string.atol(x). Otherwise, the argument may be a plain or long integer or a floating-point number, and a long integer with the same value is returned. Conversion of floating-point numbers to integers is defined by the C semantics; see the description of int(). map (function, list, ) Applies function to every item of list and returns a list of the results. If additional list arguments are passed, function must take that many arguments and is applied to the items of all lists in parallel; if a list is shorter than another, it's assumed to be extended with None items.. max (s[, args ]) With a single argument s, returns the largest item of a nonempty sequence (e.g., a string, tuple, or list). With more than one argument, returns the largest of the arguments. min(s[, args ]) With a single argument s, returns the smallest item of a nonempty sequence (e.g., a string, tuple, or list). With more than one argument, returns the smallest of the arguments. oct(x) Converts an integer number (of any size) to an octal string. The result is a valid Python expression. This always yields an unsigned literal, e.g., on a 32-bit machine, oct(-1) yields '037777777777 . When evaluated on a machine with the same word size, this literal is evaluated as -1; at a different word size, it may be a large positive number or raise an OverflowError exception. open(filename[, model[, bufsize]]) Returns a new file object (described earlier in the section ''Built-in Types''). The first two arguments are the same as for stdio's fopen(): filename is the filename can't be opened, IOError is raised. If mode is omitted, it defaults to r . When opening a binary file, you should append b to the mode value for improved portability. (It. ord(C) Returns the ASCII value of a string of one character. For example, ord( a ) returns the integer 97. This is the inverse of chr() pow(x, y[, z]) Returns x to the power y; if z is present, return x to the power y, modulo z (computed more efficiently than pow(x, y) % z). The arguments must have numeric types. With mixed operand types, the rules for binary arithmetic operators apply. The effective operand type is also the type of the result; if the result isn't expressible in this type, the function raises an exception; e.g., pow(2, -1) or pow(2, 35000) isn't allowed. largest start + i * step greater than stop. step must not be zero (or else ValueError is raised). Here's an]) If the prompt argument is present, it() uses it to provide elaborate line-editing and history features. reduce (function, sequence [, initializer]) Applies function of two arguments cumulatively to the items of sequence, from left to right, so as to reduce the sequence to a single value. For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates ((((1+2)+3)+4)+5). If the optional initializer is present, it's placed before the items of the sequence in the calculation and serves as a default when the sequence is empty. reload (module) Reparses and reinitializes an already (i.e., the same as the module argument). There are a number of caveats: If a module is syntactically correct but its initialization fails, the first import statement for it doesn't bind its name locally, but does store a (partially initialized) module object in sys.modules. To reload the module you must first import it again (this binds the name to the partially initialized module object) before you can reload() it. When a module is reloaded, its dictionary (containing the module's global variables) is retained. Redefinitions of names override the old definitions, so this is generally not a problem. If the new version of a module doesn useful to reload built-in or dynamically loaded modules, except for sys, __main__, and __builtin__. In certain cases, however, extension modules aren't designed to be initialized more than once, and may fail in arbitrary ways when reloaded. If a module imports objects from another module using from import , calling reload() for the other module doesn't redefine the objects imported from it; one way around this is to reexecute the from statement, another is to use import and qualified names (module.name) instead. If a module instantiates instances of a class, reloading the module that defines the class doesn't affect the method definitions of the instances; they continue to use the old class definition. The same is true for derived classes. repr(object) Returns a string containing a printable representation of an object. This is the same value yielded by conversions (reverse quotes). It's sometimes useful to be able to access this operation as an ordinary function. For many types, this function makes an attempt to return a string that would yield an object with the same value when passed to eval(). round(x[, n]) Returns (e.g., round(0.5) is 1.0 and round(-0.5) is -1.0). setattr (object, name, value)]) Returns a slice object representing the set of indexes when extended indexing syntax is used, e.g., for a[start:stop:step] or a[start:stop, i]. str (object) Returns a string containing a nicely printable representation of an object. For strings, this returns the string itself. The difference with repr(object) is that str(object) doesn't always attempt to return a string that is acceptable to eval(); its goal is to return a printable string. tuple(sequence) Returns a tuple whose items are the same and in the same order as sequence's items. If sequence is already a tuple, it's returned unchanged. For instance, tuple( abc ) returns ( a , b , c ), and tuple ([1, 2, 3 ]) returns (1, 2, 3). type(object) Returns the type of an object. The return value is a type object. The standard module types defines names for all built-in types. For instance: >>> import types >>> if type(x) == types.StringType: print "It's a string" >>> vars([object]) Without arguments, returns.* xrange ([start,] stop [, step]) Similar to range(), but returns an xrange object instead of a list. This is an opaque sequence type that yields the same values as the corresponding list, without actually storing them all simultaneously. The advantage of xrange() over range() is minimal (since xrange() still has to create the values when asked for them) except when a large range is used on a memory-starved machine (e.g., MS-DOS) or when all of the range's elements are never used (e.g., when the loop is usually terminated with break). This module is always available and provides access to some variables used or maintained by the interpreter and to functions that interact strongly with the interpreter. argv The list of command-line arguments passed to a Python script. argv[0] is the script name (it's operating system-dependent, whether this is a full pathname or not). If the command is executed using the -c command-line option to the interpreter, argv[0] is set to the string -c. If no script name is passed to the Python interpreter, argv has zero length. builtin_module_names A tuple of strings giving the names of all modules that are compiled into this Python interpreter. (This information isn't available in any other way: modules.keys() lists only the imported modules.) A string containing the copyright pertaining to the Python interpreter. exc_info() Returns a tuple of three values that give information about the exception that) that encapsulates the call stack at the point where the exception originally occurred. Note that assigning the traceback return valueto a local variable in a function that is handling an exception causes acircular reference. This prevents anything referenced by a localvariable in the same function or by the traceback from beinggarbage doesn't itself handle an exception. exc_type, exc_value, exc_traceback Deprecated since Release 1.5. Use exc_info() instead. Since they are global variables, they aren't specific to the current thread, and their use is not safe in a multithreaded program. When no exception is being handled, exc_type is set to None, and the other two are undefined. exec_prefix A string giving the site-specific directory prefix where the platform-dependent Python files are installed; by default, this is also dynload , where version is equal to version[:3]. executable A string giving the name of the executable binary for the Python interpreter, on systems that support it. exit( [arg]) Exits from Python. This is implemented by raising the SystemExit exception, so cleanup actions specified by finally clauses of try statements are honored, and it's possible to intercept the exit attempt at an outer level. The optional argument arg can be an integer giving the exit status (defaulting to zero) or another type of object. If itsys. stderr and results in an exit code of 1. In particular, sys.exit( some error message ) is a quick way to exit a program when an error occurs. exitfunc This value is not actually defined by the module but can be set by the user (or by a program) to specify a cleanup action at program exit. When set, it should be a parameterless function. This function is called when the interpreter exits. The exit function is not called when the program is killed by a signal, when a Python fatal internal error is detected, or when os._exit() is called. getrefcount (object) Returns the reference count of the object. The count returned is generally one higher than you might expect, because it includes the (temporary) reference as an argument to getrefcount(). last_type, last_value, last_traceback These three variables aren't always defined; they are set when an exception is not handled, and the interpreter prints an error message and a stack traceback. Their intended use is to allow an interactive user to import a debugger module and engage in postmortem debugging without having to reexecute the command that caused the error. (Typical use is import pdb; pdb.pm() to enter the postmortem debugger.) The meaning of the variables is the same as that of the return values from exc_info(), as seen in the previous entry. (Since there is only one interactive thread, thread-safety is not a concern for these variables, unlike for exc_type, etc.) maxint The largest positive integer supported by Python's regular integer type. This is at least 231-1. The largest negative integer is -maxint-1: the asymmetry results from the use of 2's complement binary arithmetic. modules A dictionary that maps module names to modules that have already been loaded. This can be manipulated to force reloading of modules and other tricks. Removing a module from this dictionary is not the same as calling reload() on the corresponding module object. path A list of strings that specifies the search path for modules. Initialized from the environment variable $PYTHONPATH or an installation-dependent default. The first item of this list, path[0], is the directory containing the script that invoked the Python interpreter. If the script directory isn. platform Contains a platform identifier, e.g., sunos5 or linux1. This can append platform-specific components to path, for instance.]. ps1, ps2 Strings specifying the primary and secondary prompt of the interpreter. These are defined only if the interpreter is in interactive mode. Their initial values in this case are >>> and. If a nonstring object is assigned to either variable, its str() is reevaluated each time the interpreter prepares to read a new interactive command; this can implement a dynamic prompt. setcheckinterval (interval) Sets the interpreter's check interval. This integer value determines how often the interpreter checks for periodic things such as thread switches and signal handlers. The default is 10, meaning the check is performed every 10 Python virtual instructions. Setting it to a larger value may increase performance for programs using threads. Setting it to a value <= 0 checks every virtual instruction, maximizing responsiveness as well as overhead. setprofile (profilefunc) Sets the system's profile function, which allows you to implement a Python source code profiler in Python. The system's profile function is called similarly to the system's trace function (see settrace()), but it isn't called for each executed line of code (only on call and return and when an exception occurs). Also, its return value isn't used, so it can just return None. settrace (tracefunc) Sets the system's trace function, which allows you to implement a Python source code debugger in Python. stdin, stdout,__, __stdout__, __stderr__ Contain the original values of stdin, stderr, and stdout at the start of the program. They are used during finalization and can restore the actual files to known working file objects in case they have been overwritten with a broken object.. version A string containing the version number of the Python interpreter. This module defines some constants that can check character classes, and some useful string functions. See the module re for string functions based on regular expressions. The constants defined in this module are: digits The string '0123456789'. hexdigits The string '0123456789abcdefABCDEF'. letters The concatenation of the strings lowercase() and uppercase() (check their entries in this list). lowercase A string containing all characters considered lowercase letters. On most systems this is the string abcdefghijklmnopqrstuvwxyz . Don't change its definition: the effect on the routines upper() and swapcase() is undefined. octdigits The string 01234567 . uppercase A string containing all characters considered uppercase letters. On most systems this is the string 'ABCDEFGHIJKLMNOPQRSTUVWXYZ . Don't change its definition: the effect on the routines lower() and swapcase() is undefined. whitespace A string containing all characters that are considered whitespace. On most systems this includes the characters space, tab, linefeed, return, formfeed, and vertical tab. Don't change its definition: the effect on the routines strip() and split() is undefined. The functions defined in this module are: atof(s) Converts a string to a floating-point number. The string must have the standard syntax for a floating-point literal in Python, optionally preceded by a sign (+ or -). Note that this behaves identically to the built-in function float() when passed a string. When passing in a string, values for NaN and Infinity may be returned, depending on the underlying C library. The specific set of strings accepted that cause these values to be returned depends entirely on the C library and is known to vary. atoi (s[, base]) Converts string s to an integer in the given base. The string must consist of one or more digits, optionally preceded by a sign (+ or -). The base defaults to 10. If it's 0, a default base is chosen depending on the leading characters of the string (after stripping the sign): 0x or 0X means 16, 0 means 8, anything else means 10. If base is 16, a leading 0x or 0X is always accepted. When invoked without base or with base set to 10, this behaves identically to the built-in function int() when passed a string. (Also note: for a more flexible interpretation of numeric literals, use the built-in function eval().) atol (s[, base]) Converts string s to a long integer in the given base. The string must consist of one or more digits, optionally preceded by a sign (+ or -). The base argument has the same meaning as for atoi(). A trailing 1 or L isn't allowed, except if the base is 0. When invoked without base or with base set to 10, this behaves identically to the built-in function long() when passed a string. capitalize (word) Capitalizes the first character of the argument. capwords (s) Splits the argument into words using split(), capitalizes each word using capitalize(), and joins the capitalized words using join(). This replaces runs of whitespace characters by a single space and removes leading and trailing whitespace. expandtabs (s, [tabsize]) Expands tabs in a string, i.e., replaces them by one or more spaces, depending on the current column and the given tab size. The column number is reset to zero after each newline occurring in the string. This doesn't understand other nonprinting characters or escape sequences. The tab size defaults to 8. find(s, sub[, start[, end]]) Returns the lowest index in s where the substring sub is found such that sub is wholly contained in s[start:end]. Returns -1 on failure. Defaults for start and end, and interpretation of negative values is the same as for slices. rfind(s, sub[, start[, end]]) Like find() but finds the highest index. index(s, sub[, start[, end]]) Like find() but raises ValueError when the substring isn't found. rindex(s, sub[, start[, end]]) Like rfind() but raises ValueError when the substring isn't found. count(s, sub[, start[, end]]) Returns the number of (nonoverlapping) occurrences of substring sub in string s[start:end]. Defaults for start and end, and interpretation of negative values is the same as for slices. lower(s) Returns a copy of s, but with uppercase letters converted to lowercase. maketrans(from, to) Returns a translation table suitable for passing to translate() or regex. compile() that maps each character in from into the character at the same position in to; from and to must have the same length. Don't use strings derived from lowercase and uppercase as arguments; in some locales, these don't have the same length. For case conversions, always use lower() and upper(). split(s[, sep[, maxsplit]]) Returns then has one more item than the number of nonoverlapping occurrences of the separator in the string. The optional third argument maxsplit defaults to 0. If it's nonzero, at most maxsplit number of splits occur, and the remainder of the string is returned as the final element of the list (thus, the list has at most maxsplit+1 elements). splitfields (s[, sep[, maxsplit]]) This function behaves identically to split(). In the past, split() was used with only one argument; splitfields() was used with two. join (words[, sep]) Concatenates a list or tuple of words with intervening occurrences of sep. The default value for sep is a single space character. It's always true that string. join(string.split(s,, sep), sep) equals s. joinfields(words[, sep]) This function behaves identically to join(). In the past, join() was used with only one argument, while joinfields() was used with two arguments. lstrip(s) Returns a copy of s but without leading whitespace characters. rstrip(s) Returns a copy of s but without trailing whitespace characters. strip(s) Returns a copy of s without leading or trailing whitespace. swapcase(s) Returns a copy of s, but with lowercase letters converted to uppercase and vice versa. translate(s, table[, deletechars]) Deletes all characters from s that are in deletechars (if present) and then translates the characters using table, which must be a 256-character string giving the translation for each character value, indexed by its ordinal. upper(s) Returns a copy of s, but with lowercase letters converted to uppercase. ljust(s, width), rjust(s, width), center(s, width) Respectively left justifies, right justifies, and centers a string in a field of given width. They return a string that is at least width characters wide, created by padding the string s with spaces until the given width on the right, left, or both sides. The string is never truncated. zfill(s, width) Pads a numeric string on the left with zero digits until the given width is reached. Strings starting with a sign are handled correctly. replace(str, old, new[, maxsplit]) Returns a copy of string str with all occurrences of substring old replaced by new. If the optional argument maxsplit is given, the first maxsplit occurrences are replaced.. This module provides a more portable way to use operating system-dependent functionality than importing an OS-dependent built-in module such as posix or nt. This module searches for an OS-dependent built-in module such as. After os is imported for the first time, there's no performance penalty in using functions from os instead of directly from the OS-dependent built-in module, so there should be no reason not to use os. error Raised when a function returns a system-related error (e.g., not for illegal argument types). This is also known as the built-in exception OSError. The accompanying value is a pair containing the numeric error code from errno and the corresponding string, as would be printed by the C function perror(). See the errno module, which contains names for the error codes defined by the underlying operating system. When exceptions are classes, this exception carries two attributes, errno and strerror. The first holds the value of the C errno variable, and the latter holds the corresponding error message from strerror(). For exceptions that involve a filesystem path (e.g., chdir() or unlink()), the exception instance contains a third attribute, filename, which is the filename passed to the function. When exceptions are strings, the string for the exception is OSError . name The name of the OS-dependent module imported. The following names have currently been registered: posix , nt , dos , mac , os2 . path The corresponding OS-dependent standard module for pathname operations, e.g., posixpath or macpath. Thus, given the proper imports, os.path. split(file) is equivalent to but more portable than posixpath. split (file). This is also a valid module: it may be imported directly as os. path. These functions and data items provide information and operate on the current process and user: chdir(path) Changes the current working directory to path. Availability: Macintosh, Unix, Windows. environ A mapping representing the string environment. For example, environ[ HOME ] is the pathname of your home directory, equivalent to getenv( HOME ) in C. If the platform supports the putenv() function, this mapping can modify the environment as well as query the environment. putenv() is called automatically when the mapping is modified. If putenv() isn't provided, this mapping can be passed to the appropriate process-creation functions to cause child processes to use a modified environment. getcwd() Returns a string representing the current working directory. Availability: Macintosh, Unix, Windows. getegid() Returns the current process's effective group ID. Availability: Unix. geteuid() Returns the current process's effective user ID. Availability: Unix. getgid() Returns the current process's group ID. Availability: Unix. getpgrp() Returns the current process's group ID. Availability: Unix. getpid() Returns the current process ID. Availability: Unix, Windows. getppid() Returns the parent's process ID. Availability: Unix. getuid() Returns the current process's user ID. Availability: Unix. putenv(varname, value) Sets the environment variable, varname, to the string value. Such changes to the environment affect subprocesses started with os.system(), popen(), or fork() and execv(). Availability: most flavors of Unix, most flavors of Unix, Windows. When putenv() is supported, assignments to items in os.environ are automatically translated into corresponding calls to putenv(); however, calls to putenv() don't update os.environ, so it's actually preferable to assign to items of os.environ. setgid(gid) Sets the current process's group ID. Availability: Unix. setpgrp-() Calls the system call setpgrp() or setpgrp(0, 0) depending on which version is implemented (if any). See the Unix manual for the semantics. Availability: Unix. setpgid(pid, pgrp) Calls the system call setpgid(). See the Unix manual for the semantics. Availability: Unix. setsid() Calls the system call setsid(). See the Unix manual for the semantics. Availability: Unix. setuid(uid) Sets the current process's user ID. Availability: Unix. strerror(code) Returns the error message corresponding to the error code in code. Availability: Unix, Windows. umask(mask) Sets the current numeric umask and returns the previous umask. Availability: Unix, Windows. uname() Returns a five-tuple containing information identifying the current operating system. The tuple contains five strings: (sysname, nodename, release, version, machine). Some systems truncate the nodename to eight characters or to the leading component; a better way to get the hostname is socket. gethostname() or even socket.gethostbyaddr (socket.gethostname()). Availability: recent flavors of Unix. These functions create new file objects: fdopen(fd[, mode[, bufsize]]) Returns an open file object connected to the file descriptor fd. The mode and bufsize arguments have the same meaning as the corresponding arguments to the built-in open() function. Availability: Macintosh, Unix, Windows. popen(command[, mode[, bufsize]]) Opens a pipe to or from command. The return value is an open file object connected to the pipe, which can be read or written depending on whether mode is is returned. Availability: Unix, Windows. These functions operate on I/O streams referred to with file descriptors: close(fd) Closes file descriptor fd. Availability: Macintosh, Unix, Windows. This function is intended for low-level I/O and must be applied to a file descriptor as returned by open() or pipe(). To close a file object returned by the built-in function open(), by popen(), or fdopen(), use its close() method. dup(fd) Returns a duplicate of file descriptor fd. Availability: Macintosh, Unix, Windows. dup2(fd, fd2) Duplicates file descriptor fd to fd2,, closing the latter first if necessary. Availability: Unix, Windows. fstat(fd) Returns status for file descriptor fd, like stat(). Availability: Unix, Windows. fstatvfs(fd) Returns information about the filesystem containing the file associated with file descriptor fd, like statvfs(). Availability: Unix. ftruncate(fd length) Truncates the file corresponding to file descriptor fd, so that it is length bytes in size. Availability: Unix. lseek(fd, pos, how) Sets the current position of file descriptor fd to position pos, modified by how. 0 to set the position relative to the beginning of the file; 1 to set it relative to the current position; and 2 to set it relative to the end of the file. Availability: Macintosh, Unix, Windows. open(file, flags[, mode]) Opens the file file and sets various flags according to flags and, possibly, its mode according to mode. The default mode is 0777 (octal), and the current umask value is first masked out. Returns the file descriptor for the newly opened file. Availability: Macintosh, Unix, Windows. For a description of the flag and mode values, see the C runtime documentation; flag constants (such as O_RDONLY and O_WRONLY) are also defined in this module (see later in this section). This function is intended for low-level I/O. Normally, you should use the built-in function open(), which returns a file object with read() and write() methods (and many more). pipe() Creates a pipe. Returns a pair of file descriptors (r, w) usable for reading and writing, respectively. Availability: Unix, Windows. read(fd, n) Reads at most n bytes from file descriptor fd. Returns a string containing the bytes read. Availability: Macintosh, Unix, Windows. This function is intended for low-level I/O and must be applied to a file descriptor as returned by open() or pipe(). To read a file object returned by the built-in function open() or by popen(), fdopen(), or sys.stdin, use its read() or readline() methods. tcgetpgrp(fd) Returns the process group associated with the terminal given by fd (an open file descriptor as returned by open))). Availability: Unix. tcsetpgrp(fd, pg)) Sets the process group associated with the terminal given by fd(an open file descriptor as returned by open()). to pg. Availability: Unix. ttyname(fd) Returns a string that specifies the terminal device associated with file-descriptor fd. If fd isn't associated with a terminal device, an exception is raised. Availability: Unix. write(fd, str) Writes the string str to file descriptor fd. Returns the number of bytes actually written. Availability: Macintosh, Unix, Windows. This function is intended for low-level I/O and must be applied to a file descriptor as returned by open() or pipe(). To write a file object returned by the built-in function open() or by popen(), fdopen(), sys.stdout, or sys. stderr, use its write() method. The following data items are available for constructing the flags parameter to the open() function: O_RDONLY O_WRONLY O_RDWR O_NDELAY O_NONBLOCK O_APPEND O_DSYNC O_RSYNC O_SYNC O_NOCTTY O_CREAT O_EXCL O_TRUNC These can be bitwise OR'd together. Availability: Macintosh, Unix, Windows. These functions operate on files and directories. access(path, mode) Checks read/write/execute permissions for this process or file path. Returns 1 if access is granted, 0 if not. See the Unix manual for the semantics. Availability: Unix. chmod(path, mode) Changes the mode of path to the numeric mode. Availability: Unix, Windows. chown(path, uid, gid) Changes the owner and group ID of path to the numeric uid and gid. Availability: Unix. link(src, dst) Creates a hard link pointing to src named dst. Availability: Unix. listdir(path) Returns a list containing the names of the entries in the directory. The list is in arbitrary order. It doesn't include the special entries ''.'' and ".." even if they are present in the directory. Availability: Macintosh, Unix, Windows. lstat(path) Like stat(), but doesn't follow symbolic links. Availability: Unix. mkfifo(path[, mode]) Creates a FIFO (a named pipe) named path with numeric mode mode. The default mode is creates the rendezvous point. mkdir(path[, mode]) Creates a directory named path with numeric mode mode. The default mode is 0777 (octal). On some systems, mode is ignored. Where it's used, the current umask value is first masked out. Availability: Macintosh, Unix, Windows. makedirs(path[, mode]) Recursive directory creation function. Like mkdir(), but makes all intermediate-level directories needed to contain the leaf directory. Throws an error exception if the leaf directory already exists or can't be created. The default mode is 0777 (octal). New in version 1.5.2. readlink(path) Returns a string representing the path to which the symbolic link points. Availability: Unix. remove(path) Removes the file path. See the entry for rmdir() to remove directory. This is identical to the unlink() function, documented later. Availability: Macintosh, Unix, Windows. removedirs(path) Recursive directory removal function. Works like rmdir() except that, if the leaf directory is successfully removed, directories corresponding to rightmost path segments are pruned until either the whole path is consumed or an error is raised (which is ignored, because it generally means that a parent directory isn't empty). Throws an error exception if the leaf directory can't be successfully removed. New in Version 1.5.2. rename(src, dst) Renames the file or directory src to dst. Availability: Macintosh, Unix, Windows. renames(old, new) Recursive directory or file renaming function. Works like rename(),except that the creation of any intermediate directories needed to make the new pathname good is attempted first. After the rename, directories corresponding to rightmost path segments of the old name are removed using removedirs(). This function can fail after the new directory structure is created if you lack permissions needed to remove the leaf directory or file. New in Version 1.5.2. rmdir(path) Removes the directory path. Availability: Macintosh, Unix, Windows. stat(path) Performs a stat() system call on the given path. The return value. (On MS Windows, some items are filled with dummy values.) Availability: Macintosh, Unix, Windows. The standard module stat defines functions and constants that are useful for extracting information from a stat structure. statvfs(path) Performs a statvfs() system call on the given path. The return value is a tuple of 10 integers giving the most common members of the statvfs structure, in the order f_bsize, f_frsize, f_blocks, f_bfree, f_bavail, f_files, f_ffree, f_favail, f_flag, f_namemax. Availability: Unix. The standard module statvfs defines constants that are useful for extracting information from a statvfs structure. symlink (src, dst) Creates a symbolic link pointing to src named dst. Availability: Unix. unlink (path) Removes the file path. This is the same function as remove(); the unlink() name is its traditional Unix name. Availability: Macintosh, Unix, Windows. utime (path, (atime, mtime)) Sets the access and modified time of the file to the given values. (The second argument is a tuple of two items.) Availability: Macintosh, Unix, Windows. These functions can create and manage additional processes: execl(path, arg0, arg1, ) This is quivalent to execv (path, (arg0, arg1, )). Availability: Unix, Windows. execle (path, arg0, arg1, , env) This is equivalent to execve (path, (argo, arg1, ), env). Availability: Unix, Windows. execlp(path, arg0, arg1, ) This is equivalent to execvp (path, (arg0, arg1, )). Availability: Unix, Windows. execv(path, args) Executes the executable path with argument list args, replacing the current process (i.e., the Python interpreter). The argument list may be a tuple or list of strings. Availability: Unix, Windows. execve(path, args, env) Executes the executable path with argument list args, and environment env, replacing the current process (i.e., the Python interpreter). The argument list may be a tuple or list of strings. The environment must be a dictionary mapping strings to strings. Availability: Unix, Windows. execvp(path, args) Like execv(path, args) but duplicates the shell's actions in searching for an executable file in a list of directories. The directory list is obtained from environ[ PATH ]. Availability: Unix, Windows. execvpe(path, args, env) A cross between execve() and execvp(). The directory list is obtained from env[ PATH ]. Availability: Unix, Windows. _exit(n) Exits to the system with status n, without calling cleanup handlers, flushing stdio buffers, etc. Availability: Unix, Windows. The standard way to exit is sys.exit(n). _exit() should normally be used only in the child process after a fork(). fork() Forks a child process. Returns 0 in the child, the child's process ID in the parent. Availability: Unix. kill(pid, sig) Kills the process pid with signal sig. Availability: Unix. nice(increment) Adds increment to the process's "niceness." Returns the new niceness. Availability: Unix. plock(op) Locks program segments into memory. The value of op (defined in <sys/lock.h>) determines which segments are locked. Availability: Unix. spawnv(mode, path, args) Executes the program path in a new process, passing the arguments specified in args as command-line parameters. args may be a list or a tuple. mode is a magic operational constant. See the Visual C++ runtime library documentation for further information. Availability: Windows. New in Version 1.5.2. spawnve(mode, path, args, env) Executes the program path in a new process, passing the arguments specified in args as command-line parameters and the contents of the mapping env as the environment. args may be a list or a tuple. mode is a magic operational constant. See the Visual C++ runtime library documentation for further information. Availability: Windows. New in Version 1.5.2. P_WAIT P_NOWAIT P_NOWAITO P_OVERLAY P_DETACH Possible values for the mode parameter to spawnv() and spawnve(). Availability: Windows. New in Version 1.5.2. system(command) Executes the command (a string) in a subshell. This is implemented by calling the standard C function system() and has the same limitations. Changes to posix.environ, sys.stdin, etc., aren't reflected in the environment of the executed command. The return value is the exit status of the process encoded in the format specified for wait(). Availability: Unix, Windows. times() Returns a five-tuple of floating-point numbers indicating accumulated (CPU or other) times, in seconds. The items are: user time, system time, children's user time, children's system time, and elapsed real time since a fixed point in the past, in that order. See the Unix manpage times(2) or the corresponding Windows Platform API documentation. Availability: Unix, Windows. wait() Waits for completion of a child process and returns a tuple containing its process ID. waitpid(pid, options) Waits for completion of a child process given by process ID and returns a tuple containing its process ID and exit status indication (encoded as for wait()). The semantics of the call are affected by the value of the integer options, which should be 0 for normal operation. Availability: Unix. WNOHANG The option for waitpid() to avoid hanging if no child process status is available immediately. Availability: Unix. The following functions take a process status code as returned by waitpid() as a parameter. They can determine the disposition of a process. WIFSTOPPED(status) Returns true if the process has been stopped. Availability: Unix. WIFSIGNALED(status) Returns true if the process exited due to a signal. Availability: Unix. WIFEXITED(status) Returns true if the process exited using the exit(2) system call. Availability: Unix. WEXITSTATUS(status) If WIFEXITED(status) is true, returns the integer parameter to the exit(2) system call. Otherwise, the return value is meaningless. Availability: Unix. WSTOPSIG(status) Returns the signal that caused the process to stop. Availability: Unix. WTERMSIG(status) Returns the signal that caused the process to exit. Availability: Unix. The follow data values can support path-manipulation operations. These are defined for all platforms. Higher-level operations on pathnames are defined in the os.path module. curdir The constant string used by the OS to refer to the current directory, e.g., "." for POSIX or ":" for the Macintosh. pardir The constant string used by the OS to refer to the parent directory, e.g., ".." for POSIX or "::" for the Macintosh. sep The character used by the OS to separate pathname components, e.g., "/" for POSIX or ":" for the Macintosh. This character isn't sufficient to parse or concatenate pathnames (use os.path.split() and os.path.join()) but it's altsep An alternative character used by the OS to separate pathname components or None if only one separator character exists. This is set to "/" on DOS and Windows systems where sep is a backslash. pathsep The character conventionally used by the OS to separate search patch components (as in $PATH), e.g., ":" for POSIX or ";" for DOS and Windows. defpath The default search path used by exec*p*() if the environment doesn't have a PATH key. linesep The string that separates (or, rather, terminates) lines on the current platform. This may be a single character, e.g., /n for POSIX or /r for MacOS, or multiple characters, e.g., \r\n for MS-DOS and MS Windows. This module implements some useful functions on pathnames: abspath(path) Returns a normalized, absolute version of the pathname path. On most platforms, this is equivalent to normpath(join(os.getcwd()), path). New in Version 1.5.2. basename(path) Returns the base name of pathname path. This is the second half of the pair returned by split(path). commonprefix(list) Returns the longest string that is a prefix of all strings in list. If list is empty, returns the empty string (' '). dirname(path) Returns the directory name of pathname path. This is the first half of the pair returned by split(path). exists(path) Returns true if path refers to an existing path. expanduser(path) Returns the argument with an initial component of "~" or "~user" replaced by that user's home directory. An initial "~" is replaced by the environment variable $HOME; an initial "~user" is looked up in the password directory through the built-in module pwd. If the expansion fails, or if the path doesn't begin with a tilde, the path is returned unchanged. On the Macintosh, this always returns path unchanged. expandvars(path) Returns the argument with environment variables expanded. Substrings of the form $name or ${name} are replaced by the value of environment variable name. Malformed variable names and references to nonexisting variables are left unchanged. On the Macintosh, this always returns path unchanged. getatime(path) Returns the time of last access of a filename identified by path. The return value is an integer giving the number of seconds since the epoch (see the time module). Raise os.error if the file doesn't exist or is inaccessible. New in Version 1.5.2. getmtime(path) Returns the time of last modification of a filename identified by path. The return value is an integer giving the number of seconds since the epoch (see the time module). Raise os.error if the file doesn't exist or is inaccessible. New in Version 1.5.2. getsize(path) Returns the size, in bytes, of filename identified by path. Raise os.error if the file doesn't exist or is inaccessible. New in Version 1.5.2. isabs(path) Returns true if path is an absolute pathname (begins with a slash). isfile(path) Returns true if path is an existing regular file. This follows symbolic links, so both islink() and isfile() can be true for the same path. isdir(path) Returns true if path is an existing directory. This follows symbolic links, so both islink() and isdir() can be true for the same path. islink(path) Returns true if path refers to a directory entry that's a symbolic link. Always false if symbolic links aren't supported. ismount(path) Returns true if pathname path is a mount point: a point in a filesystem where a different filesystem has been mounted. The function checks whether path's parent, path/.., is on a different device than path, or whether path/.. and path point to the same i-node on the same device; this detects mount points for all Unix and POSIX variants. join(path1[, path2[, ]]) Joins one or more path components intelligently. If any component is an absolute path, all previous components are thrown away, and joining continues. The return value is the concatenation of path1, and optionally path2, etc., with exactly one slash (/) inserted between components, unless path is empty. normcase(path) Normalizes the case of a pathname. On Unix, this returns the path unchanged; on case-insensitive filesystems, it converts the path to lowercase. On Windows, it also converts forward slashes to backward slashes. normpath(path) Normalizes a pathname. This collapses redundant separators and up-level references, e.g., A//B, A/./B and A/foo/../B all become A/B. It doesn't normalize the case (use normcase() for that). On Windows, it converts forward slashes to backward slashes. samefile(path1, path2) Returns true if both pathname arguments refer to the same file or directory (as indicated by device number and i-node number). It raises an exception if an os.stat() call on either pathname fails. Availability: Macintosh, Unix. sameopenfile(fp1, fp2) Returns true if the file objects fp1 and fp2 refer to the same file. The two file objects may represent different file descriptors. Availability: Macintosh, Unix. samestat(stat1, stat2) Returns true if the stat tuples stat1 and stat2 refer to the same file. These structures may have been returned by fstat(), lstat(), or stat(). This function implements the underlying comparison used by samefile() and sameopenfile(). Availability: Macintosh, Unix. split(path) Splits the pathname path into a pair, (head, tail) where tail is the last pathname component, and head is everything leading up to that. The tail part never contains a slash; if path ends in a slash, tail is empty. If there is no slash in path, head is empty. If path is empty, both head and tail are empty. Trailing slashes are stripped from head unless it's the root (one or more slashes only). In nearly all cases, join (head, tail) equals path (the only exception being when there were multiple slashes separating head from tail). splitdrive(path) Splits the pathname path into a pair (drive, tail) where drive is either a drive specification or the empty string. On systems that don't use drive specifications, drive is always the empty string. In all cases, drive + tail is the same as path. splitext(path) Splits the pathname path into a pair (root, ext) such that root + ext == path, and ext is empty or begins with a period and contains at most one period. walk(path, visit, arg) Calls the function visit with arguments (arg, dirname, names) for each directory in the directory tree rooted at path (including path itself, if it's a directory). The argument dirname specifies the visited directory, the argument names lists the files in the directory (from os.listdir(dirname)). The visit function may modify names to influence the set of directories visited below dirname, e.g., to avoid visiting certain parts of the tree. The object referred to by names must be modified in place, using del or slice assignment.
https://flylib.com/books/en/1.116.1.2/1/
CC-MAIN-2018-51
refinedweb
12,917
57.47
Hi guys I'm new here and basically new to programming. Unfortuanetly i'm probably gonna be quite annoying to you just because you guys know what you're doing and I'm like the little slow train that's trying to catch up. Though hopefully you'll help... Here's my situation I'm supposed to write a program, which I've already started and am halfway done with, what the program is supposed to do is create an array from user input and either 1. just display their input. 2.Just sort the array. 3. show the sorted array (only if they've already sorted. 4. And then show the address of the first element of the array. I'm stuck on the sorting cause I thought I had the bubble sort alorithm right but it doesn't output when i try it... Any ideas? Or any way to make it simpler. //ECET 164 19926 //Lab 11 //This program takes 20 inputs from the user, and gives the choice to show sorted data, //sort the data into descending order, display the sorted data, and display the address of the first element. #include <iostream> using namespace std; #include <iomanip> #include <algorithm> int main() { int i,num[20],sec[20],j,choice;){ { int i,temp,thing; for(i=0; i < i-1; i++) { thing= i; for (j=i+1; j<i; j++) { if (num[j] < num[thing]) thing=j; } temp = num[i]; num[i]= num[j]; num[j] = temp; cout<<"Here are your numbers:"<<temp<<endl; } }}}}
https://www.daniweb.com/programming/software-development/threads/117419/help
CC-MAIN-2017-26
refinedweb
256
69.82
We are using all of our output bandwidth. Tcp stack allocates mbufs for output queues. When mbufs space is exhausted system freezes. Even 'numlock' on keyboard don't work. It looks like kernel is going into an infinite loop or something - system is dead and the only thing you can do is reboot. Fix: This problem doesn't exist in FreeBSD 4.x How-To-Repeat: Set max mbufs to a small number (only for testing). Generate a file about 300kB (i.e. dd if=/dev/urandom of=/tmp/file bs=1024 count=300). Add to inetd.conf line like this: telnet stream tcp nowait root /bin/cat cat /tmp/file Restart inetd with these options (maybe not all are necessary): inetd -c 0 -C 0 -R 0 -s 0 On remote machine run many processes that will open many connections to your serwer on telent port, but WILL NOT download the file. You can use these programs: In python: #!/usr/bin/env python from socket import socket from time import sleep socks = [] for i in xrange(240): try: s = socket() s.connect(('192.168.0.1',23)) socks.append(s) except Exception: pass print "opened" sleep(100) for i in socks: i.close() In php: <? for ($i=0;$i<1000;$i++) { if(!($file[$i]=fsockopen('192.168.0.1',23))) break; fgets($file[$i],1); } ?> FreeBSD 5.3 with default number of mbufs (about 17000) freezes after a few minutes, when both these programs work in several (6 pythons an 15 phps) copies. While these programs are active, "netstat -n" should be showing many "ESTABLISHED" connections with full output queue. In "netstat -m" output number of "mbufs in use" and "mbuf clusters in use" should increase. When current number of "mbuf clusters in use" reaches max value, system freezes in a few seconds. During normal work "netstat -m" looks like this: 334 mbufs in use 198/17088 mbuf clusters in use (current/max) 0/3/4528 sfbufs in use (current/peak/max) 479 KBytes allocated to network 0 requests for sfbufs denied 0 requests for sfbufs delayed 0 requests for I/O initiated by sendfile 104 calls to protocol drain routines I am seeing this too: The fxp driver is receiving a return of ENOBUFS from m_getcl() (no surprise there). However, since there are no clusters available and the fxp driver has frames to DMA from its FIFO; it keeps calling for clusters which never happens successfully. It appears the system is hung but; in fact, it is in a tight loop. I believe BSD 4.x had a patch that fixed a panic for the exhaustion situation since the code would be de-referencing a NULL pointer. So, now we have a test for NULL and the return of ENOBUFS. That's one way of not seeing the current problem ;-) Another way is to fix this. The test for NULL is certainly appropriate however, what happens next isn't. I am looking to contact the relevant committer(s) for a discussion as to how to resolve this problem. Regards, Ernie ;-) Responsible Changed From-To: freebsd-bugs->gnn This shoiuld be verified or closed. Responsible Changed From-To: gnn->freebsd-net 5.3 bug, probably no longer relevant. For bugs matching the following criteria: Status: In Progress Changed: (is less than) 2014-06-01 Reset to default assignee and clear in-progress tags. Mail being skipped
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=78968
CC-MAIN-2019-39
refinedweb
567
65.42
NAME chiark-named-conf - check and generate nameserver configuration SYNOPSIS chiark-named-conf [options] -n|-y|-f chiark-named-conf [options] zone ... DESCRIPTION chiark-named-conf is a tool for managing nameserver configurations and checking for suspected DNS problems. Its main functions are to check that delegations are appropriate and working, that secondary zones are slaved from the right places, and to generate a configuration for BIND, from its own input file. By default, for each zone, in addition to any warnings, the output lists the zone’s configuration type. If the zone is checked, the serial number at each of the nameservers is shown, with any unpublished primary having * after the serial number. OPTIONS MODE OPTIONS If one of the options -n, -y, or -f is supplied then chiark-named-conf will read its main configuration file for the list of relevant zones. It will then check the configuration and delegation for each zone and/or generate and install a new configuration file for the nameserver: -y|--yes Generate and install new nameserver config, as well as checking configuration, for all listed zones. -n|--no Check configuration, for all listed zones, but do not generate new nameserver config. -f|--force Generate and install new nameserver config, without doing any configuration cross-checking. (Syntax errors in our input configuration will still abort this operation.) --nothing Do nothing: do no checks, and don’t write a new config. This can be used to get a list of the zones being processed. --mail-first | --mail-middle | --mail-final Send mails to zone SOA MNAMEs reporting zones with problems. You must call chiark-named-conf at least twice, once with --mail-first, and later with --mail-final, and preferably with one or more calls to --mail-middle in between. All three options carry out a check and store the results; --mail-final also sends a mail to the zone SOA MNAME or local administrator, if too many of the calls had errors or warnings (calls before the most recent --mail-first being ignored). -mail-final-test just like --mail-final except that it always sends mail to the local server admin and never to remote zone contacts, adding (testing!) to the start of the To: field. Alternatively, one or more zone names may be supplied as arguments, in which case their delegations will be checked, and compared with the data for that zone in the main configuration (if any). In this case no new configuration file for the nameserver will be made. ADDITIONAL OPTIONS -A|--all Checks even zones known to be broken. Ie, ignores the ? zone style modifier in the configuration. -C|--config config-file Use config-file instead of /etc/bind/chiark-conf-gen.zones. Also changes the default directory. -D Enables debugging. Useful for debugging chiark-named-conf, but probably not useful for debugging your DNS configuration. Repeat to increase the debugging level. (Maximum is -DD.) -g|--glueless Do not warn about glueless referrals (strictly, makes the zone style modifier ~ the default). Not recommended - see the section GLUELESSNESS, below. -l|--local Only checks for mistakes which are the responsibility of the local administrator (to fix or get fixed). This means that for published and stealth zones we only check that we’re slaving from the right place and that any names and addresses for ourself are right. For primary zones all checks are still done. It is a mistake to specify -l with foreign zones (zones supplied explictly on the command line but not relevant to the local server); doing so produces a warning. -mgroup!*$@~? Overrides a modifiers directive in the configuration file. The modifiers specified in the directive are completely replaced by those specified in this command line option. (Note that modifiers specified in per-zone directives still override these per-group settings.) If more than one modifiers directive specifies the same group, they are all affected. modifiers directives which don’t specify a group cannot be affected. It is an error if the group does not appear in the config file. See ZONE STYLE MODIFIERS, below. The special group foreign is used for zones which don’t appear in the configuration file. -q|--quiet Suppress the usual report of the list of nameservers for each zone and the serial number from each. When specified twice, do not print any information except warnings. -r|--repeat When a problem is detected, warn for all sources of the same imperfect data, rather than only the first we come across -v|--verbose Print additional information about what is being checked, as we go along. USAGE The file /etc/bind/chiark-conf-gen.zones (or other file specified with the -C option) contains a sequence of directives, one per line. Blank lines are permitted. Leading and trailing whitespace on each line is ignored. Comments are lines starting with #. Ending a line with a joins it to the next line, so that long directives can be split across several physical lines. GENERAL DIRECTIVES These directives specify general configuration details. They should appear before directives specifying zones, as each will affect only later zone directives. admin email-address Specifies the email address of the local administrator. This is used in the From: line of mails sent out, and will also receive copies of the reports. There is no default. default-dir directory Makes directory be the default directory (which affects the interpretation of relative filenames). The default is the directory containing the main configuration file, ie /etc/bind if no -C option is specified. forbid-addr [ip-address ...] Specifies the list of addresses that are forbidden as any nameserver for any zone. The default is no such addresses. serverless-glueless domain ... Specifies a list of domains under which we do not expect to find any nameservers; for these zones it is OK to find glueless referrals. Each domain listed names a complete subtree of the DNS, starting at the named point. The default is in-addr.arpa ip6.arpa ip6.int. To avoid indefinitely long or even circularly glueless referrals (which delay or prevent lookups) it is necessary for all sites to effectively implement similar conventions; currently the author believes that only the reverse lookup namespaces are conventionally devoid of nameservers, and therefore fine to provide glueless referrals for. See GLUELESSNESS below. mail-state-dir directory Uses directory for storing information about recent failures for mailing to zone admins. See --mail-first et al. Old files in here should be cleaned up periodically out of cron. There is no default. mail-max-warnfreq percentage When --mail-final is used, a mail will be sent to all zones which had warnings or errors more than percentage% of the times --mail-* was used (since the last --mail-first). The default is 50%. modifiers !*$@~?] [group] Applies the specified zone style modifiers (see below) to subsequently declared zones (until the next modifiers directive), as if the modifiers specified were written out for each zone. You must specify at least one character for the modifiers; if you want to reset everything to the default, just say !. If style modifiers specified in the zone directive conflict with the modifiers directive, those specified in the zone directive take effect. group may contain alphanumerics and underscores, and is used for the -m command-line option. self-addr ip-address ... Specifies the list of addresses that this server may be known by in A records. There is no default. output format filename [format filename ...] Arranges that each filename will be overwritten when -y or -f are used; its new contents will be configuration directives for the zones which follow for the nameserver in question. Currently the only format supported is bind8 which indicates new-style BIND 8. If no zones follow, then each file will still be overwritten, by an effectively empty file. Default: if there is no output directive in the configuration then the default is to use bind8 chiark-conf-gen.bind8; otherwise it is an error for there to be any zones in the configuration before the first output directive. self-ns fqdn ... Specifies the list of names that this server may be known by in NS records. There is no default. Any trailing * is replaced by the name of the zone being checked, so for example self-ns isp.ns.* before the zone example.com would mean to expect us to be listed as isp.ns.example.com in the NS RRset. self-soa fqdn ... Specifies the list of names that this server may be known by in the ORIGIN field of SOA records. There is no default. Any trailing * is replaced by the name of the zone, as for self-ns. self fqdn ... Equivalent to both self-ns and self-soa with the same set of names. slave-dir directory [[prefix] suffix] Specifies the directory in which slave (published and stealth) zonefiles should be placed. The default directory is /var/cache/bind/chiark-slave. The default suffix and prefix are empty; they also will be reset to these defaults by a slave-dir directive which does not specify them. ZONE DIRECTIVES These directives specify one or more zones. primary[!*$@~?] zone filename Specifies that this server is supposed to be the primary nameserver for zone and that the zone data is to be found in filename. primary-dir[!*$@~?] directory[/prefix] [suffix[/subfile]] Search directory for files whose names start with prefix and end with suffix. Each such file is taken to represent a zone file for which this server is supposed to be the primary; the part of the filename between prefix and suffix is the name of the zone. If /subfile is specified, then instead of looking for files, we search for directories containing subfile; directories which do not contain the subfile are simply skipped. If directory[/prefix] exists as specified and is a directory then it is interpreted as directory with an empty prefix; otherwise the final path component is assumed to be the prefix. If no suffix/subfile is specified then the default is _db. published[!*$@~?] zone origin-addr Specifies that this server is supposed to be a published slave nameserver for the zone in question. stealth[!*$@~?] zone server-addr ... Specifies that this server is supposed to be an unpublished secondary (aka stealth secondary) for the zone in question. ZONE STYLE MODIFIERS Each of the zone directives may optionally be followed by one or more of the following characters (each at most once): ! Reverses the meaning of all style modifiers after the !. Only one ! must appear in the modifier list. In this list, other modifiers which default to ‘enabled’ are described by describing the effect of their inverse - see the description for !@ below. * Indicates that the zone is unofficial, ie that it is not delegated as part of the global Internet DNS and that no attempt should be made to find the superzone and check delegations. Note that unofficial, local zones should be created with caution. They should be in parts of the namespace which are reserved for private use, or belong to the actual zone maintainer. $ Indicates that any mails should be sent about the zone to the nameserver admin rather than to the zone SOA MNAME. This is the default unless we are supposedly a published server for the zone. !@ Indicates that no mails should be sent about the zone to anyone. ~ Indicates that the zone’s delegation is known to be glueless, and that lack of glue should not be flagged. Not recommended - see the section GLUELESSNESS, below. ? Indicates that the zone is known to be broken and no checks should be carried out on it, unless the -A option is specified. OTHER DIRECTIVES include file Reads file as if it were included here. end Ends processing of this file; any data beyond this point is ignored. CHECKS chiark-named-conf makes the following checks: Delegations: Each delegation from a server for the superzone should contain the same set of nameservers. None of the delegations should lack glue. The glue addresses should be the same in each delegation, and agree with the local default nameserver. Delegated servers: Each server mentioned in the delegation should have the same SOA record (and obviously, should be authoritative). All published nameservers - including delegated servers and servers named in the zone’s nameserver set: All nameservers for the zone should supply the same list of nameservers for the zone, and none of this authority information should be glueless. All the glue should always give the same addresses. Origin server’s data: The set of nameservers in the origin server’s version of the zone should be a superset of those in the delegations. Our zone configuration: For primary zones, the SOA origin should be one of the names specified with self-soa (or self). For published zones, the address should be that of the SOA origin. For stealth zones, the address should be that of the SOA origin or one of the published nameservers. GLUELESSNESS Glue is the name given for the addresses of nameservers which are often supplied in a referral. In fact, it turns out that it is important for the reliability and performance of the DNS that referrals, in general, always come with glue. Firstly, glueless referrals usually cause extra delays looking up names. BIND 8, when it receives a completely glueless referral and does not have the nameservers’ addresses in its cache, will start queries for the nameserver addresses; but it will throw the original client’s question away, so that when these queries arrive, it won’t restart the query from where it left off. This means that the client won’t get its answer until it retries, typically at least 1 second later - longer if you have more than one nameserver listed. Worse, if the nameserver to which the glueless referral points is itself under another glueless referral, another retry will be required. Even for better resolvers than BIND 8, long chains of glueless referrals can cause performance and reliability problems, turning a simple two or three query exchange into something needing more than a dozen queries. Even worse, one might accidentally create a set of circularly glueless referrals such as example.com NS ns0.example.net.uk example.com NS ns1.example.net.uk example.net.uk NS ns0.example.com example.net.uk NS ns1.example.com Here it is impossible to look up anything in either example.com or example.net.uk. There are, as far as the author is aware, no generally agreed conventions or standards for avoiding unreasonably long glueless chains, or even circular glueless situations. The only way to guarantee that things will work properly is therefore to always supply glue. However, the situation is further complicated by the fact that many implementations (including BIND 8.2.3, and many registry systems), will refuse to accept glue RRs for delegations in a parent zonefile unless they are under the parent’s zone apex. In these cases it can be necessary to create names for the child’s nameservers which are underneath the child’s apex, so that the glue records are both in the parent’s bailiwick and obviously necessary. In the past, the ‘shared registry system’ managing .com, .net and .org did not allow a single IPv4 address to be used for more than one nameserver name. However, at the time of writing (October 2002) this problem seems to have been fixed, and the workaround I previously recommended (creating a single name for your nameserver somewhere in .com, .net or .org, and using that for all the delegations from .com, .net and .org) should now be avoided. Finally, a note about ‘reverse’ zones, such as those in in-addr.arpa: It does not seem at all common practice to create nameservers in in- addr.arpa zones (ie, no NS RRs seem to point into in-addr.arpa, even those for in-addr.arpa zones). Current practice seems to be to always use nameservers for in-addr.arpa which are in the normal, forward, address space. If everyone sticks to the rule of always publishing nameservers names in the ‘main’ part of the namespace, and publishing glue for them, there is no chance of anything longer than a 1-step glueless chain might occur for a in-addr.arpa zone. It is probably best to maintain this as the status quo, despite the performance problem this implies for BIND 8 caches. This is what the serverless-glueless directive is for. Dan Bernstein has some information and examples about this at but be warned that it is rather opinionated. GLUELESSNESS SUMMARY I recommend that every nameserver should have its own name in every forward zone that it serves. For example: zone.example.com NS servus.ns.example.com servus.ns.example.com A 127.0.0.2 2.0.0.127.in-addr.arpa PTR servus.example.net servus.example.net A 127.0.0.2 Domain names in in-addr.arpa should not be used in the right hand side of NS records. SECURITY chiark-named-conf is supposed to be resistant to malicious data in the DNS. It is not resistant to malicious data in its own options, configuration file or environment. It is not supposed to read its stdin, but is not guaranteed to be safe if stdin is dangerous. Killing chiark-named-conf suddenly should be safe, even with -y or -f (though of course it may not complete its task if killed), provided that only one invocation is made at once. Slow remote nameservers will cause chiark-named-conf to take excessively long. EXIT STATUS 0 All went well and there were no warnings. any other There were warnings or errors. FILES /etc/bind/chiark-conf-gen.zones Default input configuration file. (Override with -C.) /etc/bind Default directory. (Override with -C or default-dir.) dir/chiark-conf-gen.bind8 Default output file. /var/cache/bind/chiark-slave Default location for slave zones. ENVIRONMENT Setting variables used by dig(1) and adnshost(1) will affect the operation of chiark-named-conf. Avoid messing with these if possible. PATH is used to find subprograms such as dig and adnshost. BUGS The determination of the parent zone for each zone to be checked, and its nameservers, is done simply using the system default nameserver. The processing of output from dig is not very reliable or robust, but this is mainly the fault of dig. This can lead to somewhat unhelpful error reporting for lookup failures. AUTHOR chiark-named-conf and this manpage were written by Ian Jackson <ian@chiark.greenend.org.uk>. They are Copyright 2002 Ian Jackson. chiark-named-conf and this manpage.
http://manpages.ubuntu.com/manpages/dapper/man8/chiark-named-conf.8.html
CC-MAIN-2013-48
refinedweb
3,109
56.35
Austin/317 Minutes of the 11th Plenary Meeting of the Austin Group 12-15 September 2006, The Open Group, Reading, UK Attendees Andrew Josey called the eleventh meeting (a.k.a. Austin/M12, since this counting includes a teleconference) of the Austin Group to order at 9:30 am Tuesday, September 12th at The Open Group offices, Reading, UK. Meeting Goals The goal of this meeting is to prepare for D2, including all of the aardvark processing from D1, and current standard. Also to address all of the new material to form editing instructions. All the participants introduced themselves. The agenda was approved as published, with the addition of a discussion on approved interpretations/interpretations status (under item 5, Status Reports). Minutes of the last plenary meeting ( Austin/281, February 21-24, 2006) were reviewed. Approved with no objections. SD1 No updates. SD2 - no updates. Matthew Rice responded to the issues raised at the last Plenary by Stephen Michelle on the difficulties that Canada has had holding together a group of experts to review documents. TABLED. An ISO editing group has been formed (and this meeting is a meeting of the official POSIX editing group). No updates. See attendance list above. Austin Group Status - verbal update by Andrew Josey (see also Austin/305 and Austin/306). Main change since last meeting is that Draft 1 has been produced and balloted. A project plan has been produced (Austin/284). October 31 is the next milestone, for D2. Completion still expected in April 2008. We are on track! Organizational Reps Status Austin/285r1 is current scope definition. Austin/284 describes the timeline. Need to review the four new TOG specs this week to understand how to integrate these. Document FormatNick has developed a new toolchain for document production. Editorial style issues related to this will be discussed later in the agenda. People are encouraged to provide feedback on. The four extended API sets are in the final throes of approval. We will review each of these documents in detail this week. CX shading needs to be applied to all changes to headers except dirent.h. 4.1 change UX shading to CX. 4.2 ENOMEM must be CX shaded. alphasort: add pointer page for scandir(). dirfd: no changes dprintf: (possible) merge this into fprintf. getdelim: possible requirment for forwarding page for getline (there are lots of get* functions, so there may be several pages between the two). mbsnrtowcs: (possible) merge this page with mbstowcs. mkdtemp: (possible) merge this page with mkstemp open_memstream: add see also's in the reverse direction. psiginfo: add a see also to perror to this page stpcpy: merge into strcpy stpncpy: merge into strncpy. ACTION Ulrich to provide words for merging stpncpy with strncpy strndup: merge with strdup. Both to be mandatory. strnlen: merge with strlen. strnlen is CX on new page. strsignal: delete see also to perror. wcpcpy: merge with wcscpy wcpncpy: merge with wcsncpy wcscasecmp: separate page, but ensure alignment with words in strcasecmp. wcsdup: no changes wcsncasecmp: merge with wcscasecmp. wcsnlen: merge with wcslen, CX shaded. wcsnrtombs: (possible) merge with wcsrtombs ISSUE for SD1: The standard needs a way to open a directory for searching. While the *at functions which are being added to SUSv4 were being discussed, a proposal was made on a way to open directories for searching; initial attempts to formulate this proposal showed that further thought was necessary, and it was not suitable for standardization at that time. Wording exists in all *at functions: The test for whether fd is searchable is based on whether fd is open for searching, not whether the underlying directory currently permits searches. However, the concept of opening a directory for search is no longer in the API set.Replace with: It is unspecified whether directory searches are permitted based on whether the directory was opened with search permission or on the current permissions of the directory underlying the file descriptor.Also add rationale to XRAT (A.4.4) for this. Since the current standard does not specify a method for opening a directory for searching it is unspecified whether search permission on the fd argument to openat() and related functions is based on whether the file was opened with search mode or on the current permissions allowed by a directory at the time a search is performed. When there is existing practise that supports opening directories for searching it is expected that these functions will be modified to specify that the search permissions will be granted based on the file access modes of the directory's file descriptor fd and not on the mode of the directory at the time the directory is searched. Also add new words to 4.12 and 4.4 (normative text) describing directory search permssions and the concept of a pathname relative to a file descriptor. Any changes to options need to be reflected in 2.1.3 and 2.1.4. Options to move to base ASYNCHRONOUS_IO: useful, can be implemented at user level. No reason not to demand it. Move AIO to base. Remove margin code/shading. Mandate macro value as 200xxxL. Also update Annex E to track this option. POSIX_BARRIERS: no concensus yet. POSIX_MAPPED_FILES (MF): mandatory for UNIX. Move this to base. Only needs an MMU to be able to implement this. Remove the option MF, mandate the value of the macro to 200xxxL. Also update Annex E to track this option. POSIX_MEMORY_PROTECT (MPR): similar to mapped files, do the same thing. POSIX_READER_WRITER_LOCKS already part of threads. Deal with this during THR. POSIX_REALTIME_SIGNALS (RTS): needed for AIO. Leave open for now. Default position is to move to base unless Larry objects soon. POSIX_SEMAPHORES (SEM): some overlap with system V semaphores. Not really a problem. Should go to base. POSIX_SPIN_LOCKS (SPI): this is only intra-process spin-locks, not inter-process. Go to base. THURSDAY: Larry has possible problems here. More research necessary, a lot more than 2 weeks. POSIX_SYNCHRONIZED_IO (SIO): no concensus yet. POSIX_THREAD_ATTR_STACKSIZE (TSS): no concensus. POSIX_THREAD_SAFE_FUNCTIONS (TSF): depends on THR. POSIX_THREADS (THR): Required in XSI. Seems to be required for all profiles. Make this base. This also brings in Reader-writer-locks and TSF. POSIX_TIMEOUTS (TMO): everyone has a clock! Lots of customer demand in all systems. Leave open for now, default position is move to base unless Larry objects soon. XOPEN_ENH_I18N (part of XSI): move to base - may be controversial. Remove XSI shading on these functions, and also on the gencat utility. Functions are catopen, catclose, catgets, nl_langinfo, nl_langinfo_l, setlocale (CX). POSIX_CLOCK_SELECTION (CS): (Thursday) default position is to move to base. Larry may have problems with the thread interfaces here. Conditional variables by default use CLOCK_REALTIME, which can be reset. Using pthread_condattr_setclock it is possible to specify a clock which does not have this problem, such as CLOCK_MONOTONIC. Options to be obsoleted POSIX_SPORADIC_SERVER (SS) and POSIX_THREAD_SPORADIC_SERVER (TSS): treat these together; either both stay or both go. Geoff believes some people are actively developing solutions in this area. Ulrich only knows of problems with it. It is a very specialized set of functions. Really deserves to spin off into its own book (like tracing and possibly even batch). The description of a sporadic server is very vague about what it means. Keep it as an option for now. If nobody is prepared to help support this (i.e. help handle aardvarks against it), then we may revisit this decision and obsolete it before final publication. Batch (BE): this should all be marked as obsolescent (BE OB). There is only one known implementation; not an issue for portability. Tracing (TRC) & suboptions (TEF, TRI & TRL): this should all be marked as obsolescent (OB and the original shading). XOPEN_STREAMS (XSR): should be obsolete. Functions to be deleted Legacy: Delete all legacy functions except utimes (which should not be legacy). XSI Functions to change state _setjmp and _longjmp. Should become obsolete. _tolower and _toupper. Should become obsolete. bsd_signal: marked as obsolete already; delete. No objection. dlopen, dlcose, dlerror, dlsym: all go to base (remove XSI shading) fchdir: move to base (i.e. remove XSI) fstatvfs and statvfs: move to base (remove XSI) (and the related header). ftw: obsolete. Application usage needs "Applications are encouraged to use nftw". Future directions state it wll be removed later. getcontext, setcontext, makecontext and swapcontext are already marked OB and should be withdrawn. And header file <ucontext.h>. getitimer: mark this obsolete in favor of the TMR option functions, and mark the TMR option functions as XSI|TMR. THURSDAY: Move TMR to Base. getpgid, getsid: move to base. getsubopt: move to base gettimeofday: add clock functions from TMR to See Also (clock_getres). Add clock_gettime to Application Usage. Mark interface as obsolete. iconv, iconv_open, iconv_close: move to base, with <iconv.h>. isascii: mark obsolete. Application Usage should note that this cannot be used portably in a localized application. lchown: move to base. mkstemp: move to base. nl_langinfo: move to base. poll: many sighs. Move to base. pread: move to base. pthread_attr_get_guardsize and pthread_attr_set_guardsize: move to base. A future aardvark may be submitted to make the default guardsize implementation defined. ACTION: Larry Dwyer to submit an aardvark against pthread_attr_get_guardsize to allow the default size to ne imp def. pthread_getconcurrency: Leave open for now, default position is obsolete these interfaces unless Don objects soon. ACTION: Don Cragun to submit objection to obsolescence of pthread_[gs]etconcurrency by 2006-10-02. pthread_mutexattr_gettype and settype: move to base along with PTHREAD_MUTEX*. pwrite: see pread. scalb: marked OB already; remove it. setpgrp: Mark this OB. Add additional Application Usage and/or rationale explaining that the behavior is unspecified whether it matches what sepgid(0,0) or setsid() does unless process is already a session leader. Apps should use one or the other of the alternative interfaces depending on what behavior they want. sighold etc: was already moved to OB for D1. siginterrupt: mark this OB. App Usage already says it all. strcasecmp: move to Base. strdup: move to base (see consent list). strfmon: move to base (along with <monetary.h>). tcgetsid: move to base. tempnam: mark obsolete. Application Usage should be strengthened to push users harder down the mkstemp, mkdtemp or tmpfile path. Also mark P_tmpdir in <stdio.h> OB. toascii: mark as OB (same as isascii). truncate: move to base. There are also several XSI shaded parts of ftruncate. First one: unshade and reword this (remove "XSI-conformant systems"). Second one: still XSI. ulimit: Obsolete in favor of [gs]etrlimit. Also uses a long rather than an rlim_t usleep: already OB, should go. utimes: remove LEGACY marking. vfork: already marked OB. Should go. waitid: move to base. Also move the constants etc in <sys/wait.h> from XSI to base. WCONTINUED and WIF_CONTINUED stay XSI. Remove rusage paragraph (only needed for the already removed wait3). Remove <sys/resource.h> throughout sys/wait.h. writev: same as readv. XSI Utilities gencat: move to base. hash: move to base. ls: move the following options to base from XSI: -m -n -p -x m4: move to base, remove from DEVELOPMENT. tsort: move to base. Move all UP to Base except fg, bg, jobs, more, talk, and vi. Rationale to explain why these are left: "UP is now an option for Interactive Utilities". Add UP shading to sh and mailx pages for Extended Description. Headers cpio.h: move to base The fcntl page in XSH should lose the first line of the synopsis (optional header unistd.h). fcntl.h remove XSI shading from "The values used for l_whence ..." and "The symbolic names for file modes ..." and "Inclusion of the ..." (D1 lines 7822 and 7842 and 7875). fnmatch.h: remove OB shaded text. glob.h: remove OB shaded text. iconv.h: move to base. langinfo.h: move to base. limits.h: ATEXIT_MAX move to base. Sort the list of numerical limits into alpha order. Remove XSI shading from WORD_BIT, LONG_BIT, (FLT_DIG, DBL_DIG, FLT_MAX, DBL_MAX). Note DBL_DIG, DBL_MAX, FLT_DIG, and FLT_MAX are mentioned only in an introductory paragraph, and not defined. Earlier editions (XSH5/SUSv2 had these as legacy). Looks like they were removed from SUSv3. Remove them from the intro list. "Other Invariant Values": move NL_SETMAX, NL_MSGMAX, NL_TEXTMAX to base (needed for gencat). In NL_ARGMAX change "Maximum value of digit in calls to the printf..." to "Maximum value of n in conversion specifications using the %n$ sequence in the printf and scanf families of functions". NL_NMAX appears to be an editorial error, and should have been dropped in SUSv2. Remove it now. fprintf rathole For fprintf, fwprintf, fscanf, fwscanf noted that the %n$ stuff in fprintf is XSI shaded. Also %' . Should move this to CX. D1 lines 13688-13701, 13708-9, 13729-13734 13736-13738, 13759-60. Also 13938 EILSEQ should be CX. And 13944-13955 should be CX. In fscanf, D1 lines 14866-877, 14886 14923-24, 15071, 15072, (the ENOMEM from %m aardvark) all CX. In fwprintf, same as fprintf. Also add a shall fail EOVERFLOW same as fprintf. In fwscanf, same as fscanf. math.h: MAXFLOAT should be OB and XSI (same as FLT_MAX). monetary.h: move to base. nl_types.h: move to base. poll.h: move to base. pthread.h: move to base. PTHREAD_MUTEX* move to base. All SPI to base. Note that "TSH|SPI" is now unshaded. HP Issuespthread_mutexattr_[gs]ettype: move to base. Larry has considerable problems with marking SPI as base. HP cannot implement spin locks efficiently. Needs more time to research. setjmp.h: Add OB to _longjmp and _setjmp. signal.h: SIGPOLL should be OB as well as XSR. SIGPROF: should be OB (but SIGSYS SIGTRAP just XSI). SA_RESETHAND: move to base. SA_RESTART: move to base. SA_SIGINFO: move to base. SA_NOCLDWAIT: move to base. SA_NODEFER: move to base. Move the definition of ucontext_t and mcontext_t from ucontext.h into signal.h (in order to support sigaction SA_SIGINFO). This can replace D1 lines 10935-5. This is not shaded. The siginfo_t si_errno field remains XSI, but other XSI only fields in this structure move to base. Noticed bug in current standard XSH 16, line 618. The si_ and SI_ namespace reservations should not be shaded RTS. ACTION: Ulrich to prepare an aardvark (or similar) for sigaction shading changes. D1 line 10957 XSI shading goes. Entire para become CX. In table on D1 page 311, all XSI only move to base. All XSR is "OB XSR". Line 11002 para goes from XSI to CX shading (up to 11016). In the lisdt of functions starting at D1 line 11021, follow same recommendations as made for the functions themselves in earlier pass. stdio.h: va_list should be CX shaded not XSI (D1 line 11622) and stddef.h symbols (line 11691) is CX not XSI. stdlib.h: block starting at line 11746 XSI changes to CX. Also D1 11828-829. string.h: allow stddef.h as CX on line 11904. strings.h: header moves to base. ffs() becomes XSI shaded. sys/time.h: D1 lines 12962-12967 become OB. sys/timeb.h: only function is legacy ... header should go. sys/types.h: remove shading from clock_t. id_t becomes base. The useconds_t type can be removed (not the suseconds_t) because the only functions that use it are obsolete being deleted. unistd.h: remove _XOPEN_LEGACY and _SC_XOPEN_LEGACY. Cathy has other shading changes. wchar.h: wctype_t should become OB XSI together isw*() functions. This is D1 lines 15198-15209 15222-3 (these are all in <wctype.h> ... we are only phasing out these being declared in this header). XSI shading 15177-9 goes from XSI to CX. Also 15182. wordexp.h: D1 line 15419 goes to base. Start with D1 aardvark: XBD ERN 1 Leave open, add to issues list (SD1) XBD ERN 2 Accept XBD ERN 3 Accept XBD ERN 4 Accept XBD ERN 5 Accept as marked; use "File descriptor value too large" in both places. XBD ERN 6 Accept XRAT ERN 1 Accept XCU ERN 1 Accept XCU ERN 2 Accept XCU ERN 3 Accept XCU ERN 4 Accept XCU ERN 5 Accept XCU ERN 6 Accept as marked, see mail 9733 with find . -exec pathck -p -P {} + Also, as suggested in that email, fix the pax example. XCU ERN 7 Accept XCU ERN 8 Accept XCU ERN 9 Accept XCU ERN 10 Accept XCU ERN 11 Accept XCU ERN 12 Accept XCU ERN 13 Accept XCU ERN 14 Accept XSH ERN 1 Accept XSH ERN 2 Accept XSH ERN 3 Accept XSH ERN 4 DUP of 5 (5 is a superset) XSH ERN 5 Accept as marked; change the xref to _exit. Add heading "Consequences of Process Termination" in _Exit(). Also change signal.h XBD 308 change "with all the consequences of _exit()" to "as if by a call to _exit()". XSH ERN 6 DUP of 5 XSH ERN 7 Accept XSH ERN 8 Accept XSH ERN 9 Accept XSH ERN 10 Accept XSH ERN 11 Accept XSH ERN 12 Accept XSH ERN 13 Accept XSH ERN 14 Accept XSH ERN 15 Accept XSH ERN 16 Accept XSH ERN 17 Accept as marked (Andrew has new text) XSH ERN 18 Accept Homework Review Ulrich reporting on changes required to sigaction as a consequence of option reorg yesterday. In D1 line 43087-90, delete "and the implementation supports the Realtime Signals Extension option of the XSI Extension option," and unshade entire para. Line 43106 and 43114 remove XSI shading. Line 43129-43140 remove XSI|RTS shading. Line 43144-43147 remove shading. Line 43150-57 remove shading. 43158-161 remove shading. Line 43165 and 43169-70 remove shading. Section 2.4.3, remove shading from D1 lines 1321-1338. Lines 1340-1 (SI_TIMER) should have been an TMR shading (which moves to base). Line 1342-3 should have been AIO (which moves to base). Line 1344-5 should change from RTS to MSG (which remains an option). Line 1354, SIGBUS is unshaded. Robust Mutexes Interaction with Thread Priority Inheritance and Protection Ulrich noted that there may be a problem with the interaction between Robust Mutexes and TPI/TPP. Non robust mutexes can have priority inheritance/protection, but in the glibc implementation TPP will be extremely hard to add for robust mutexes. Need a new option markings for the combination of RM and TPP or TPI (suggest RPP and RPI). For example, pthread_mutexattr_getprotocol D1 35743 on p1122, should change from TPI to "TRI|TPI". RPI = Priority Inheritance for Robust Mutexes. RPP Priority Protection for Robust Mutexes. Change TPI and TPP to mean "Non Robust Mutext Priority Inheritance/Protection" (this is used on D1 p410 line 14313 and 14317). D1 page 1122 lines 35737 becomes RPI|TPI. 35738 becomes RPP|TPP. Copy 35743-6; in first copy change "mutexes" to "robust mutexes" and shade para "RPI". Second copy change "mutexes" to "non-robust mutexes" and shade "RPP". Do the same thing for para 35747. Para starting at 35760 changes shading to "RPI|TPI". ACTION: Ulrich to file aardvark against pthread_mutexattr_getprotcol for propogation of inheritance for waiters on non-PI mutexes. pthread_mutex_timedlock (D1 p1112) 35397-35400 change TPI to RPI|TPI. In unistd.h change the _POSIX_THREAD_PRIO_INHERIT to use "non-robust mutex". Similarly for _POSIX_THREAD_PRIO_PROTECT. D1 page 410. Then duplicate them for the robust version. Names should be _POSIX_THREAD_ROBUST_PRIO_INHERIT and _POSIX_THREAD_ROBUST_PRIO_PROTECT (and sort appropriately). New paras are shaded RPI and RPP accordingly. Sysconf macros are also needed. Leave to editors. Also the sysconf page needs new macros in table. Undefined Definition In mail sequence 6721 Donn Terry suggested: (This is intended as "for the future" at the moment.) In browsing thru a copy of the C++ standard, I noted that their definition of "undefined" is a bit stronger than ours, in that (in effect) it requires "something reasonable" (although they don't use those words). The way I read it, it would disallow "Rogue-O-Matic" sorts of behaviors (good for the standard, bad for debates about what "undefined" means :-) ) We should probably put this on the list of things to look at in the next revision.We looked at the words in C++: 1.3.12 undefined behavior [defns.undefined]While there are some things we like about this definition, it does not seem to add anything that we really need or want. No action required at this time. without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message). Many erroneous program constructs do not engender undefined behavior; they are required to be diagnosed. ] Editing Notes There is a problem with synopses as noted in XCU ERN 97 against the 2004 edition (about grep). Change Utility Argument Syntax (D1 XBD 201 71491 change "the forms" to "the form", remove 7150. Remove last sentence on 7154-5. Also add at end of current text The form: utility_name -f option_argument [-f option_argument]... [operand...] indicates that the -f option is required at least once, and may occur multiple times.Change "one or more" to "zero or more" on D1 line 7147. Aardvark has been updated with new words. ACTION: Don Cragun to examine every XCU synopsis in the next draft to check for correctness. Cross book xrefs: these would be nice, but there is no funding for more work. Nick will fix a few known problems with xrefs, including turning off the "(on page xxx)" in the SEE ALSOs (selected by an argument to the .cX macro) Open Interpretations Austin AI-016: In the Dec 4th 2003 teleconference it was agreed that the formal interpretations response will say the standard is uncleasr and no conformance distinction can be made, and that the notes to the editor should be based on Don's proposed changes but with additional text derived from the suggestion Geoff made in ,mail sequence number 6337. In the email discussion preceding the teleconference, Geoff raised some minor problems with Don's text that will also need to be addressed. ACTION: Geoff to email the points needing consideration on AI-016 to Don. Geoff also pointed out that part of the proposed rationale change is no longer appropriate given that the interpretation response will say that the standard is unclear. Austin AI-112: Separating the XSI namespace. Geoff and Ulrich agreed to withdraw their objections to the proposed response. This interpretation can now move to APPROVED status. POSIX Certification Status Report Joint IEEE/Open Group effort to certify POSIX implementations. Andrew presented a status report on certification (Austin/278). Work Plan Update No update to Austin 284, other than marking achieved milestones, and addition of next meeting location and date. Draft 2 is expected by the end of October, with a review period of 3 months (till the end of January). Will need another face-to-face in February 2007. Propose Menlo Park, CA (courtesy Sun Microsystems). Week of 2/26-3/2. The meeting adjourned at 15:36, Friday September 15.
http://www.opengroup.org/austin/docs/austin_317.html
CC-MAIN-2014-10
refinedweb
3,797
69.28
Hi, View Complete Post I have found recently with some machines or some user accounts that calling some VB6 code from .Net fails to execute properly. I have two examples: I call a webservices from my javascript and im getting response as error. Hello people. I have a website where i call function in a web service. I am calling it asynchronously. The problem is the data which is returned from webservice is not shown in my label. See here protected void Page_Load(object sender, EventArgs e){ IAsyncResult asyncResult; AsyncCallback callback = new AsyncCallback(MyCallBack); asyncResult = serv.BeginGetImageList(req, callback, null);}private void MyCallBack(System.IAsyncResult asyncResult){ Label1.Text = "TEST";} Very simple but when the page is loaded, the label dont have any text. Please help. I created a very simple webservice (below) namespace PmtCommu { PmoWebService : System.Web.Services.WebService { [WebMethod] public string PmoSvc_Test(string strVal) { string retVal = string.Format("You sent me {0}.", strVal); // my code logging the strVal. return retVal; } I tested with .NET client. NO problem. BUT, my service need to be consumed by Java client. I have trouble passing the parameter in. I can see the "You sent me ." in my log and as a return value. Following is the Java Code: im I have a web method (.asmx): [WebMethod] public DataSet Item_GetAll() Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend
http://www.dotnetspark.com/links/49784-problem-seeing-webservice-functions.aspx
CC-MAIN-2016-44
refinedweb
231
61.02
A Correspondence Involving Getters and Setters One of the reasons that I love Haskell is that it leads you to fascinating thought experiments. Here’s one of mine. The conclusions aren’t particularly earth-shattering, but they are interesting. One of the most common things to do in an imperative programming language is to build getters and setters for the properties of an object. In Java, they may look like this: public X getFoo(); public void setFoo(X val); The obvious mapping from there into a purely functional approach gives you this, for a record type R and a field type F: getFoo :: R -> F setFoo :: F -> R -> R The fact that we have two separate functions here is unpleasing to me, though. Without being quite able to explain why, I’d really like to have just one type that completely describes the property “foo”. A product type is definitely cheating… but this would definitely satisfy me, if it works: foo :: forall t. (F -> (F,t)) -> (R -> (R,t)) I’m interested in this type for two reasons: first, because it’s fairly easy to embed both a getter and a setter together into such a type. Suppose you give me the functions getFoo and setFoo. Then I can certainly embed them into a foo, in such a way that they can be recovered. foo g r = let (f,v) = g (getFoo r) in (setFoo f r, v) getFoo’ = snd . foo (\x -> (x,x)) setFoo’ v = fst . foo (\x -> (v,())) It’s a straight-forward matter of substitution to see that getFoo’ and setFoo’ are identical to their original counterparts. So one can construct a value of the form of foo given any getter and setter combination, and given any such value of the type of foo, one can extract a getter and a setter. The second reason I care about that type, though, is that has a natural meaning aside from just embedding a getter/setter pair. Recall that the State monad (with state type S, for example) is a newtype wrapper around (forall t. S -> (S,t)). So this can be seen as a state transformer. It takes a stateful transformation, and changes the type of the state. Now, the rather more involved question is whether there exist state transformers (values of the type of foo) that do not arise in that way as the straightforward embedding of getter and setter functions. In other words, could foo be something more than just the encoding of a getter and a setter into a function? Alas, the answer is yes. It would be nice if the product type of getters and setters were isomorphic to the type of state transformers, and that is very nearly true… but not quite. To see the reasoning work, first note that the type (a -> (b,c)) is isomorphic to (a -> b, a -> c). (This is the type isomorphism version of distributing an exponent over a product). This lets use split up foo into two parts as follows: foo1 :: forall t. (F -> F) -> (F -> t) -> R -> R foo2 :: forall t. (F -> F) -> (F -> t) -> R -> t We can simplify a little by arguing based on the universal quantification. Note that foo1 is given as a parameter a function of type (F -> t), but it cannot possibly make any use of the value, since it does not know anything about the type t. Furthermore, foo2 must produce a value of type t, and can do so only through its parameter of type (F -> t), which can only be used for that purpose. So these turn out to be equivalent to the following simpler types: modifyFoo :: (F -> F) -> R -> R filteredGetFoo :: (F -> F) -> R -> F I’ve named them suggestively, because I have a bit of intuition for what these things tend to mean. Let’s now look at what happens to the getFoo and setFoo functions that we were able to define from the original foo: setFoo v = modifyFoo (const v) getFoo = filteredGetFoo id This all looks as you might expect… but remember that the question is whether modifyFoo and filteredGetFoo are completely determined by the getter / setter pair arising in that way. Clearly they are not. In particular, note that you can iterate a constant function 1 or more times, and always get the same answer no matter the number of iterations; and the identity function similarly for zero or more times. So some interesting logic can be built into modifyFoo or filteredGetFoo with respect to iterating the function passed as a parameter (a constant number of times, or maybe until some predicate holds, or perhaps something more complex), and though this would change the behavior of the modify and filteredGet operations for some inputs, it would have no effect on the get and put operations. Still, we’ve got something interesting here. I wonder if there are interesting “non-standard” definitions of modify and filteredGet for some common record type. If so, then they would lead to interesting transformations on values of the State monad, which don’t arise from get and set in the normal way. Makes you wonder, doesn’t it? The iterability has to do with the fact that you don’t actually have double negation elimination in intuitionistic logic. That is, the type T and the type (forall r. (T -> r) -> r) are not the same; the former can be weakened into the latter, but the latter cannot be strengthened into the former when T doesn’t happen to live in the classical fragment of intuitionistic logic. So if you take a function argument (T -> T) you are free to apply it as many times as you like before returning. But if you take an argument (T -> r) where r is universally bound, then you can only apply it at most once before returning (meaningfully, that is; if you assume a lazy language and ignore seq when defining “meaningful” application, thereby prohibiting side effects) and must call it at least once if the return type is r. You forget that even the finest imperative language in the world has not been able to eliminate _|_ from the language. Thus every function has an implicit forall r. r -> prepended to its type; in your case, this only matters for getFoo; its type is actually F -> R -> F. Thus I’d claim that the actual type of foo is just the State type: F -> R -> (F, R) I’m not sure exactly what you mean here. Certainly what I’ve written above works with the types I’ve given. If you think a different type is needed, perhaps you can explain how? Mathnerd: The problem with that version is that it requires you to supply a _|_ in order to use it as an accessor. The partiality of that approach is somewhat galling, it is used by Henning’s otherwise rather nice Data.Accessor package. Personally, I find the costate/store coalgebra approach advocated by Russell O’Connor to be more appropriate. A function of the form a -> (b, b -> a) — or equivalently a -> Store b a, subject to the side condition that it forms a comonad coalgebra captures the relationship of get to put. Another benefit of merging the a -> b, and a -> b -> a functions into one function a -> (b, b -> a) is that they can be composed more efficiently. The result of the function basically provides everything you need to ‘zip back up’ your a with a new b inside, while the two function version has to descend into the structure a second time. Efficiency aside, both your version and the version stated here miss the get/put laws that such a transformer should satisfy. You can extract such a State b r -> State a r from such a lens: but it strikes me that the use of a lens to transform state is an incidental capacity of a lens, and not its defining property, which seem better expressed in terms of how the primitive operations on a lens interact: put l (get l a) a = a stating that the result of putting the value you received from a lens back into the whole is the same as the original put l b1 (put l b2 a) = put l b1 a stating that putting is idempotent get l (put l b a) = b stating that you get what you put in. But as Russell noted, and Jeremy Gibbons recently posted those 3 laws can be replaced with the statement that you want any lens to be a costate coalgebra, which is to say that for a lens f: extract . f = id duplicate . f = fmap f . f On the other hand, the approach here uses a monad homomorphism between two state monads, which isn’t sufficient to capture the side conditions. Edward Kmett: I’ll agree that lenses could be much nicer. But in Java, State s a is what you’ve got: How would you use lenses from an imperative viewpoint? BTW, what happened to category-extras? Mathnerd: it has been split into about 20 packages. I started working with Brent Yorgey to document the broken out version of things. As for State ‘being what you have’, yes, you can use a lens to transform onestate monad into another by focusing on part of the state, and that is an important operation to offer via a lens, but that isn’t the most efficient representation, and doesn’t capture the side conditions you want a lens to satisfy. As for how I use lenses from an imperative standpoint, I have code in scala at work where I’ve replaced imperative methods with lenses focusing on part of the state. Then I just pass a lens to some portion of my state to my object. Consider the following tuple type describing a bump counter and a map of distinct values to numeric keys. case class Indexee[K](counter: Int, content: Map[K, Int]) We can build lenses out of getter/setter pairs. object Indexee { def counter[K]: Lens[Indexee[K], Int] = Lens ( _.counter, (x,y) => x copy (counter = y)) def content[K]: Lens[Indexee[K], Map[K, Int]] = Lens (_.content, (x,y) => x copy (content = y)) } Then pass that around class Memo[S, K]( indexee: Lens[S, Indexee[K]], … ) { // composing it with other lenses val counter: Lens[S, Int] = Indexee.counter compose indexee // and use it, seemingly, imperatively def fresh: State[S, Int] = counter += 1 … } The lenses used above are the ones I wrote for scalaz which are based on the separate getter/setter model.
https://cdsmith.wordpress.com/2011/04/15/a-correspondence-involving-getters-and-setters/?like=1&source=post_flair&_wpnonce=0772592eec
CC-MAIN-2014-15
refinedweb
1,769
65.35
Package sfnt Overview ▹ Overview ▾ Package sfnt implements a decoder for SFNT font file formats, including TrueType and OpenType. Index ▹ Index ▾ Package files cmap.go data.go postscript.go sfnt.go truetyColoredGlyph indicates that the requested glyph is not a monochrome // vector glyph, such as a colored (bitmap or vector) emoji glyph. ErrColoredGlyph = errors.New("sfnt: colored glyph") // ErrNotFound indicates that the requested value was not found. ErrNotFound = errors.New("sfnt: not found") ) type Buffer ¶ Buffer holds re-usable buffers that can reduce the total memory allocation of repeated Font method calls. See the Font type's documentation comment for more details. type Buffer struct { // contains filtered or unexported fields } type Collection ¶ Collection is a collection of one or more fonts. All of the Collection methods are safe to call concurrently. type Collection struct { // contains filtered or unexported fields } func ParseCollection ¶ func ParseCollection(src []byte) (*Collection, error) ParseCollection parses an SFNT font collection, such as TTC or OTC data, from a []byte data source. If passed data for a single font, a TTF or OTF instead of a TTC or OTC, it will return a collection containing 1 font. func ParseCollectionReaderAt ¶ func ParseCollectionReaderAt(src io.ReaderAt) (*Collection, error) ParseCollectionReaderAt parses an SFNT collection, such as TTC or OTC data, from an io.ReaderAt data source. If passed data for a single font, a TTF or OTF instead of a TTC or OTC, it will return a collection containing 1 font. func (*Collection) Font ¶ func (c *Collection) Font(i int) (*Font, error) Font returns the i'th font in the collection. func (*Collection) NumFonts ¶ func (c *Collection) NumFonts() int NumFonts returns the number of fonts in the collection. type Font ¶ Font is an SFNT font. Many of its methods take a *Buffer argument, as re-using buffers can reduce the total memory allocation of repeated Font method calls, such as measuring and rasterizing every unique glyph in a string of text. If efficiency is not a concern, passing a nil *Buffer is valid, and implies using a temporary buffer for a single call. It is valid to re-use a *Buffer with multiple Font method calls, even with different *Font receivers, as long as they are not concurrent calls. All of the Font methods are safe to call concurrently, as long as each call has a different *Buffer (or nil). The Font methods that don't take a *Buffer argument are always safe to call concurrently. Some methods provide lengths or coordinates, e.g. bounds, font metrics and control points. All of these methods take a ppem parameter, which is the number of pixels in 1 em, expressed as a 26.6 fixed point value. For example, if 1 em is 10 pixels then ppem is fixed.I(10), which equals fixed.Int26_6(10 << 6). To get those lengths or coordinates in terms of font units instead of pixels, use ppem = fixed.Int26_6(f.UnitsPerEm()) and if those methods take a font.Hinting parameter, use font.HintingNone. The return values will have type fixed.Int26_6, but those numbers can be converted back to Units with no further scaling necessary. type Font struct { // contains filtered or unexported fields } func Parse ¶ func Parse(src []byte) (*Font, error) Parse parses an SFNT font, such as TTF or OTF data, from a []byte data source. func ParseReaderAt ¶ func ParseReaderAt(src io.ReaderAt) (*Font, error) ParseReaderAt parses an SFNT font, such as TTF or OTF data, from an io.ReaderAt data source. func (*Font) Bounds ¶ func (f *Font) Bounds(b *Buffer, ppem fixed.Int26_6, h font.Hinting) (fixed.Rectangle26_6, error) Bounds returns the union of a Font's glyphs' bounds. In the returned Rectangle26_6's (x, y) coordinates, the Y axis increases down. func (*Font) GlyphAdvance ¶ func (f *Font) GlyphAdvance(b *Buffer, x GlyphIndex, ppem fixed.Int26_6, h font.Hinting) (fixed.Int26_6, error) GlyphAdvance returns the advance width for the x'th glyph. ppem is the number of pixels in 1 em. It returns ErrNotFound if the glyph index is out of range. func (*Font) GlyphIndex ¶ func (f *Font) GlyphIndex(b *Buffer, r rune) (GlyphIndex, error) GlyphIndex returns the glyph index for the given rune. It returns (0, nil) if there is no glyph for r. says that "Character codes that do not correspond to any glyph in the font should be mapped to glyph index 0. The glyph at this location must be a special glyph representing a missing character, commonly known as .notdef." func (*Font) GlyphName ¶ func (f *Font) GlyphName(b *Buffer, x GlyphIndex) (string, error) GlyphName returns the name of the x'th glyph. Not every font contains glyph names. If not present, GlyphName will return ("", nil). If present, the glyph name, provided by the font, is assumed to follow the Adobe Glyph List Specification: This is also known as the "Adobe Glyph Naming convention", the "Adobe document [for] Unicode and Glyph Names" or "PostScript glyph names". It returns ErrNotFound if the glyph index is out of range. func (*Font) Kern ¶ func (f *Font) Kern(b *Buffer, x0, x1 GlyphIndex, ppem fixed.Int26_6, h font.Hinting) (fixed.Int26_6, error) Kern returns the horizontal adjustment for the kerning pair (x0, x1). A positive kern means to move the glyphs further apart. ppem is the number of pixels in 1 em. It returns ErrNotFound if either glyph index is out of range. func (*Font) LoadGlyph ¶ func (f *Font) LoadGlyph(b *Buffer, x GlyphIndex, ppem fixed.Int26_6, opts *LoadGlyphOptions) ([]Segment, error) LoadGlyph returns the vector segments for the x'th glyph. ppem is the number of pixels in 1 em. If b is non-nil, the segments become invalid to use once b is re-used. In the returned Segments' (x, y) coordinates, the Y axis increases down. It returns ErrNotFound if the glyph index is out of range. It returns ErrColoredGlyph if the glyph is not a monochrome vector glyph, such as a colored (bitmap or vector) emoji glyph. func (*Font) Name ¶ func (f *Font) Name(b *Buffer, id NameID) (string, error) Name returns the name value keyed by the given NameID. It returns ErrNotFound if there is no value for that key. func (*Font) NumGlyphs ¶ func (f *Font) NumGlyphs() int NumGlyphs returns the number of glyphs in f. func (*Font) UnitsPerEm ¶ func (f *Font) UnitsPerEm() Units UnitsPerEm returns the number of units per em for f. type GlyphIndex ¶ GlyphIndex is a glyph index in a Font. type GlyphIndex uint16 type LoadGlyphOptions ¶ LoadGlyphOptions are the options to the Font.LoadGlyph method. type LoadGlyphOptions struct { } type NameID ¶ NameID identifies a name table entry. See the "Name IDs" section of type NameID uint16 const ( NameIDCopyright NameID = 0 NameIDFamily = 1 NameIDSubfamily = 2 NameIDUniqueIdentifier = 3 NameIDFull = 4 NameIDVersion = 5 NameIDPostScript = 6 NameIDTrademark = 7 NameIDManufacturer = 8 NameIDDesigner = 9 NameIDDescription = 10 NameIDVendorURL = 11 NameIDDesignerURL = 12 NameIDLicense = 13 NameIDLicenseURL = 14 NameIDTypographicFamily = 16 NameIDTypographicSubfamily = 17 NameIDCompatibleFull = 18 NameIDSampleText = 19 NameIDPostScriptCID = 20 NameIDWWSFamily = 21 NameIDWWSSubfamily = 22 NameIDLightBackgroundPalette = 23 NameIDDarkBackgroundPalette = 24 NameIDVariationsPostScriptPrefix = 25 ) type Segment ¶ Segment is a segment of a vector path. type Segment struct { // Op is the operator. Op SegmentOp // Args is up to three (x, y) coordinates. The Y axis increases down. Args [3]fixed.Point26_6 } type SegmentOp ¶ SegmentOp is a vector path segment's operator. type SegmentOp uint32 const ( SegmentOpMoveTo SegmentOp = iota SegmentOpLineTo SegmentOpQuadTo SegmentOpCubeTo ) type Units ¶ Units are an integral number of abstract, scalable "font units". The em square is typically 1000 or 2048 "font units". This would map to a certain number (e.g. 30 pixels) of physical pixels, depending on things like the display resolution (DPI) and font size (e.g. a 12 point font). type Units int32
http://docs.activestate.com/activego/1.8/pkg/golang.org/x/image/font/sfnt/
CC-MAIN-2019-04
refinedweb
1,259
57.37
In the words of the immortal Ken Wheeler: GraphQL is kind of like the s***. Actually, it's absolutely the s***. I tend to agree with this sentiment but that doesn't mean I think GraphQL is perfect. One of the most persistent challenges that has faced GraphQL since its introduction 5 years ago is client side caching. Does the GraphQL Specification Address Caching? The GraphQL specification aims to support a broad range of use cases. Caching has been considered out-of-scope for the spec itself since it wants to be as general as possible. Out of the roughly 30,000 words contained in the current working draft the word cache appears exactly once in section 3.5.5 on ID's: The ID scalar type represents a unique identifier, often used to refetch an object or as the key for a cache. In this article I'll try to answer a few high level questions around GraphQL caching including: - Why does GraphQL struggle with client side caching? - Why does this matter in GraphQL more so than REST? - What solutions do we currently have for this problem and what potential solutions are people working on? While the spec leaves caching to the imagination there is the next best thing to the spec, GraphQL.org. They have a page dedicated to explaining caching with GraphQL that I'll summarize after a quick primer on HTTP caching. HTTP Caching Before talking about strategies for GraphQL caching, it's useful to understand HTTP caching. Freshness and validation are different ways of thinking about how to control client and gateway caches. Client side and Gateway caches - Client side caches (browser caches) use HTTP caching to avoid refetching data that is still fresh - Gateway caches are deployed along with a server to check if the information is still up to date in the cache to avoid extra requests Freshness and Validation - Freshness lets the server transmit the time a resource should be considered fresh (through Cache-Controland Expiresheaders) and works well for data that doesn’t change often - Validation is a way for clients to avoid refetching data when they’re not sure if the data is still fresh or not (through Last-Modifiedand Etags) GraphQL Caching Clients can use HTTP caching to easily avoid refetching resources in an endpoint-based API. The URL is a globally unique identifier. It can be leveraged by the client to build a cache by identifying when two resources are the same. Only the combination of those two parameters will run a particular procedure on the server. Previous responses to GET requests can be cached and future requests can be routed through the cache. A historical response can be returned if possible. Globally Unique IDs Since GraphQL lacks a URL-like primitive the API usually exposes a globally unique identifier for clients to use. One possible pattern for this is reserving a field ( id). { starship(id:"3003") { id name } droid(id:"2001") { id name friends { id name } } } The id field provides a globally unique key. This is simple if the backend uses a UUID. But a globally unique identifier will need to be provided by the GraphQL layer if it is not provided by the backend. In simple cases this involves appending the name of the type to the ID and using that as the identifier. Compatibility with existing APIs How will a client using the GraphQL API work with existing APIs? It will be tricky if our existing API accepts a type-specific id while our GraphQL API uses globally unique identifiers. The GraphQL API can expose the previous API in a separate field and GraphQL clients can rely on a consistent mechanism for getting a globally unique identifier. Alternatives The client needs to derive a globally unique identifier for their caching. Having the server derive that id simplifies the client but the client can also derive the identifier. This can require combining the type of the object (queried with __typename) with some type-unique identifier. Dhaivat Pandya wrote and spoke extensively back in 2016 about how Apollo was tackling caching. We'll talk more about Apollo's cache later, but here is a high level summary of Dhaivat Pandya's thoughts. Query result trees represent a way to get trees out of your app data graph. Apollo Client applies two assumptions to cache query result trees. - Same path, same object — Same query path usually leads to the same piece of information - Object identifiers when the path isn't enough — Two results given for the same object identifier represent the same node/piece of information Apollo Client will update the query with a new result if any cache node involved in a query result tree is updated. Apollo Client Apollo Client stores the results of its GraphQL queries in a normalized, in-memory cache for responding sparingly to future queries for the same data. Normalization constructs a partial copy of your data graph on your client. The format is optimized for reading and updating the graph as your application changes state. You can configure the cache's behavior for other use cases: - Specify custom primary key fields - Customize the storage and retrieval of individual fields - Customize the interpretation of field arguments - Define supertype-subtype relationships for fragment matching - Define patterns for pagination - Manage client-side local state InMemoryCache import { InMemoryCache, ApolloClient } from '@apollo/client' const client = new ApolloClient({ cache: new InMemoryCache(options) }) Data normalization InMemoryCache has an internal data store for normalizing query response objects before the objects are saved: - Cache generates a unique ID for every identifiable object in the response - Cache stores objects by ID in a flat lookup table - Whenever an incoming object is stored with a duplicate ID the fields of those objects are merged - If incoming and existing object share fields, cached values for those fields are overwritten by incoming object - Fields in only existing or only incoming object are preserved InMemoryCache can exclude normalization for objects of a certain type for metrics and other transient data that's identified by a timestamp and never receives updates. Objects that are not normalized are embedded within their parent object in the cache. These objects can be accessed via their parent but not directly. readQuery readQuery enables you to run a GraphQL query directly on your cache. If the cache contains all necessary data it returns a data object in the shape of the query, otherwise it throws an error. It will never attempt to fetch data from a remote server. Pass readQuery a GraphQL query string const { todo } = client.readQuery({ query: gql` query ReadTodo { todo(id: 5) { id text completed } } `, }) Provide GraphQL variables to readQuery const { todo } = client.readQuery({ query: gql` query ReadTodo($id: Int!) { todo(id: $id) { id text completed } } `, variables: { id: 5, }, }) readFragment readFragment enables you to read data from any normalized cache object that was stored as part of any query result. Calls do not need to conform to the structure of one of your data graph's supported queries like with readQuery. Fetch a particular item from a to-do list const todo = client.readFragment({ id: 'Todo:5', fragment: gql` fragment MyTodo on Todo { id text completed } `, }) writeQuery, writeFragment You can also write arbitrary data to the cache with writeQuery and writeFragment. All subscribers to the cache (including all active queries) see this change and update the UI accordingly. Same signature as read counterparts except with additional data variable client.writeFragment({ id: '5', fragment: gql` fragment MyTodo on Todo { completed } `, data: { completed: true, }, }) Combining reads and writes readQuery and writeQuery can be combined to fetch currently cached data and make selective modifications. Create a new Todo item that is cached without sending it to the remote server.], }, }) cache.modify cache.modify of InMemoryCache enables you to directly modify the values of individual cached fields, or even delete fields entirely. This is an escape hatch you want to avoid. Although, as we'll see at the end of the article, some people think we should only have an escape hatch. urql Urql also modifies __typename like Apollo but it caches at the query level. It keeps track of the types returned for each query. If data modifications are performed on a type, the cache is cleared for all queries that hold that type. mutation { updateTask(id: 2, assignedTo: "Bob") { Task { id assignedTo } } } The metadata returned will show that a task was modified, and so all queries holding task results will be invalidated, and run against the network the next time they’re needed. But urql has no way of knowing what the query holds. This means that if you run a mutation creating a task that’s assigned to Fred instead of Bob, the mutation result will not be able to indicate that this particular query needs to be cleared. micro-graphql-react According to Adam Rackis, Urql's problem can actually be solved with a build step that manually introspects the entire GraphQL endpoint. Adam couldn't get other GraphQL client cache's to behave the way he wanted. He decided to build a GraphQL client with low-level control called micro-graphql-react. It provides the developer with building blocks for managing cache instead of adding metadata to queries to form a normalized, automatically-managed cache. Import client for global subscriptions to keep cache correct graphqlClient.subscribeMutation([ { when: /updateX/, run: (op, res) => syncUpdates(Y, res.update, "allX", "X") }, { when: /deleteX/, run: (op, r) => syncDeletes(Y, r.delete, "allX", "X") } ]) let { loading, loaded, data } = useQuery( buildQuery( Y, { publicUserId, userId }, { onMutation: { when: /(update|delete)X/, run: ({ refresh }) => refresh() } } ) ) Sync changes when relevant mutations happen let { loading, loaded, data } = useQuery( buildQuery( AllSubjectsQuery, { publicUserId, userId }, { onMutation: { when: /(update|delete)Subject/, run: ({ refresh }) => refresh() } } ) ) Cache Resetting micro-graphql-react was written with the assumption that managing cache invalidation should not be a framework concern. It should be easy to manage yourself with a set of primitives for different types of cache resetting. - Hard reset to clear cache and reload the query - Soft reset to clear cache, but update, and leave current results on screen - Can also update the raw cache It does not parse your queries or mutations on the client-side like Apollo and Urql. This keeps the library small and omits the GraphQL queries from your bundle. Section and Distributed GraphQL I know nothing about this and this article's length is already out of control but I found one nascent approach that seems worth mentioning. A company called Section is trying to build a distributed GraphQL solution. It is fully configurable to address caching challenges without having to maintain a distributed system as the distributed system would be managed by them. They say that it's simultaneously similar to Apollo Federation but also solving a problem Apollo Federation doesn't solve, so I'm curious how exactly that works. On first look it seems like they are taking the approach of micro-graphql-react and giving more cache control back to the developers. Persistent Queries One more thing getting thrown around in this conversation that I'll need an addition article to cover is persistent queries. The idea is to send a query id or hash instead of an entire GraphQL query string. This reduces bandwidth utilization and speeds up loading times for end-users. Resources Caching GraphQL - Mark Nottingham - Caching Tutorial for Web Authors and Webmasters - GraphQL.org - Caching - Sam Silver - GraphQL Client-Side Caching - Scott Walkinshaw - Caching GraphQL APIs - Tanmai Gopal - An approach to automated caching for public & private GraphQL APIs Apollo - Dhaivat Pandya - GraphQL Concepts Visualized - Marc-André Giroux - GraphQL & Caching: The Elephant in the Room - Blessing Krofegha - Understanding Client-Side GraphQl With Apollo-Client In React Apps - John Haykto - GraphQL Client-Side Caching with Apollo Links - Marc-André Giroux - Caching & GraphQL: Setting the Story Straight - Ben Newman - Fine Tuning Apollo Client Caching for Your Data Graph - Khalil Stemmler - Using Apollo Client 3 as a State Management Solution urql - Kurt Kemple - Intro to Urql - Ben Awad - Urql - a new GraphQL Client - Ken Wheeler - Introduction to urql - A new GraphQL Client for React - Gerard Sans - Comparing Apollo vs Urql - Phil Pluckthun, Jovi De Croock - Client-Side GraphQL Using URQL - Ryan Gilbert - Taking Flight with URQL micro-graphql-react - Adam Rackis - A Different Approach to GraphQL Caching - Adam Rackis - An Alternate Approach to GraphQL Caching Discussion (2) In case you don't just want to cache in the client, but also in a CDN, you may wanna check out graphcdn.io Ooooo, I'm all about that CDN life, so if someone stuck GraphQL on a CDN I'm definitely game for that. Added myself to the waiting list.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/ajcwebdev/graphql-caching-42ep
CC-MAIN-2021-17
refinedweb
2,110
57.5
Let's talk about an unnecessary but popular Vue plugin heftyhead ・1 min read A few days ago some news about a popular npm package containing malicious code went viral. The whole incident is a reminder that we should think twice before adding another package to our dependencies. It also reminded me of an unnecessary Vue plugin that I’ve seen pop up a few times. Vue‘s gentle learning curve makes it a popular choice with beginner developers, for whom it is even harder to figure out what to write themselves and what to install. The offender The package/plugin that I want to talk about is vue-axios. If you google “vue axios” it’s the first result. And I think that’s the main reason of it's popularity. imcvampire / vue-axios A small wrapper for integrating axios to Vuejs vue-axios A small wrapper for integrating axios to Vuejs How to install: CommonJS: npm install --save axios vue-axios And in your entry file: import Vue from 'vue' import axios from 'axios' import VueAxios from 'vue-axios' Vue.use(VueAxios, axios) Script: Just add 3 scripts in order: vue, axios and vue-axios to your document. Usage: This wrapper bind axios to Vue or this if you're using single file component. You can axios like this: Vue.axios.get(api).then((response) => { console.log(response.data) }) this.axios.get(api).then((response) => { console.log(response.data) }) this.$http.get(api).then((response) => { console.log(response.data) }) Let’s see what a plugin with 1000+ Github stars and 23,000 weekly downloads does. We can start by reading a description: Usage: This wrapper bind axios to Vue or this if you're using single file component. There's also a code example which makes the use of the plugin even more clear: Vue.axios.get(api).then((response) => { console.log(response.data) }) this.axios.get(api).then((response) => { console.log(response.data) }) this.$http.get(api).then((response) => { console.log(response.data) }) Basically, this package allows you to import axios once and then use it in every component. It’s actually quite useful. Not only don't you have to import axios in every component but also you can create an axios instance with a custom config and use it in all of them. However, it’s not really mentioned in the plugin's description, therefore I’m not sure if people installing the plugin are even aware of that. An alternative We determined that this plugin can be really useful. So what is the problem? Let's code the same functionality without using the plugin: import Vue from 'vue' import axios from "axios"; Vue.prototype.$http = axios; Vue.prototype.axios = axios; Let's compare it with the code required to configure the plugin: import Vue from 'vue' import axios from 'axios' import VueAxios from 'vue-axios' Vue.use(VueAxios, axios) As we can see it takes the same amount of lines to write the whole functionality ourselves as it takes to configure the plugin. Let's finish by showing slightly supercharged version of this approach of using axios with Vue: import Vue from 'vue' import axios from "axios"; const instance = axios.create({ baseURL: '' }); const instanceUserApi = axios.create({ baseURL: '' }); instanceUserApi.defaults.headers.common["Authorization"] = "Token" + localStorage.getItem("authToken"); Vue.prototype.$http = instance; Vue.prototype.$httpUserApi = instanceUserApi; We can create several axios instances each with a different configuration. Not only the plugin doesn't provide any value but it also is less flexible than our code. Just to make it clear the plugin allows you to create many axios instances by passing an object during configuration. The difference and the excuse As described in this Github issue: the different between Vue.prototype and vue-axios? #18 The plugin makes properties(axios and $http) immutable. Which for some may be an advantage over approach described in the previous paragraph. Nevertheless, I'm quite confident that the significant majority of developers using this plugin doesn't really care about immutability. Conclusion Vue-axios plugin does what it's description says. There's no dishonesty or anything malicious here in my opinion. Just some uninformed developers that don't think twice about what they add to their projects. What do you think about such small plugins/packages? Do you think that creator of such plugins should disclose the alternative? What are the least intuitive fundamentals and best practices in software development? Some things we do kind of make sense in and of themself. Some things have evolv... Unfortunately I feel it might be very popular to beginners. People who learn from videos, blogs first before official docs. I've had a different but similar scenario with VeeValidate. Unlike vue-axios, it offers so much more. It has so many useful features. But it comes at a price, its bundle size, which is larger than vue itself. A plugin like VeeValidate though can be avoided/skipped in favour of just regular expressions and v-bind tricks. You can also use vuelidate which is good enough for most cases an smaller than veevalidate... I started with it. But it doesn't play fair with typescript. Issues filed on the repo and PRs submitted as solutions but the repo owners also didn't meet devs halfway. On top of that, the site that introduced me to VeeValidate had a section after its intro that rants about the latter. Here just scroll to the last section. Read the rant, he does have a point but I feel like also TypeScript can be a burden on maintainers that don't know/use/like it. He should have probably accepted the patch after making sure that the contributor was available on maintaining TS support from then on. Very true. I wonder if anyone cared to fork the project just for typescript intergation 🤔 It's always tricky. Everytime you accept a big patch you're also going to maintain it in the future. If you fork a project like this because of the lack of a "small" feature, you're also going to need to keep it in sync, updating the types definition everytime there's something new. There's no perfect solution I use both VeeValidate and Vuelidate on several production apps. I love that Vuelidate is model-based and not DOM-based... But it's significantly harder to use for more complex things. The documentation for anything past the basics is utterly worthless and they keep changing validators so that, apps that worked just fine in the past, now fail validations all over the place if updated because of changed behavior. Because of this, I've mostly converted back to VeeValidate. I'd prefer to stick with Vuelidate, as its API makes more sense IMO and it's ridiculously simple to add custom validations to... But inconsistencies in functionality and the fact that writing conditional validations can be an utter mess has me saying away from it. I wish there were more good options out there, honestly. After trying both for my side project. I ended up with just this A little bit more work but I believe it's worth it I don't agree entirely but I can understand your frustration. Lately though I'm thinking more about the cost of open source from all sides. Vuelidate is a perfectly fine library and as you say, its API is nice and users can use it easily. But it has limitations, because it's aimed at simple scenarios that can be mapped on top of it. If you want full control you'll end up looking for alternatives. This doesn't make VueValidate bad, it just makes it geared to some cases but not all. At the same time as you said you have VeeValidate which is more complicated to use but can scale a little better than Vuelidate. Unfortunately it's also massive in size. Definitely, or maybe VeeValidate needs to be split up in plugins? Don't know, just an idea. The issue with custom tailored validation logic (reminds me of the hacks on top of jQuery back in the day :D) is that it doesn't scale as in: it will get complicated quickly, especially if your validation is complex (and mine was, that's why I had to leave Vuelidate for VeeValidate) or if you don't want to start from scratch everytime you start a new app. I'm not saying your case should use a library, but that a library has its purposes. Again, there's no perfect solution :D Or the way how i described it - shamelessly plugging my own article :d Small details make a difference André Excellent subject-choice! As a novice, I've been particularly guilty of this "Plugin First" approach, only to find that wrapping my head around the use of the plugin is often no time-saver compared with the unwrapped lib. Some great insights here and as far as your questions go, my opinions are: 1) Small plugins/packages can be really useful, and I think this aspect of web development is a carry-over from the jQuery days where you could easily import a script to handle some kind of UI feature without writing your own. It has an appeal in the sense that it is a time saver as well as having some sort of "credibility" as a published open source tool that others have used in their own projects. However, this leads to relying on plugins being a crutch, and I think that it can hinder developers from improving their skills by leaning on them too much. I know when I first started, I tended to use these scripts a lot, and became frustrated when they didn't do exactly what I needed it to do. This led me to start writing my own solutions and I think that exercise is what helped me improve my own skills. 2) I don't think the onus of offering alternatives should be placed on the plugin developer. Most well-curated plugins have some kind of documentation, and a lot of the times, it has some high-level description of what it does, and perhaps why it was created in the first place. It's often phrased like a sales pitch, but I can understand why. The plugin developer wants people to use their tool; after all, that's why they open sourced it in the first place, right? I think a "buyer beware" policy should be followed by any who want to use these types of tools. Omg, "jQuery plugins" make my blood boil. Like: - here's a vanilla-DOM library - do you have a version for jQuery? Or even: - here's a library of pure functions that don't touch the DOM. - so modern, so modular! But how do I use it with jQuery? That's because so many people learned jQuery without learning JS, so they were scared of if. Dark times... Well, I was so scared of jQuery, I just avoided the clientside all together until it left. Don't worry, you should just use nuxt-plugin-axios, which wraps vue-plugin-axiosfor nuxt;) What do you think about this approach? In my opinion, it's a valid approach. Declaring global variables is generally considered a bad coding practice. However, I feel like if you don't abuse it and you are not working on a really big project it doesn't matter as much. I think that the advantage of using the prototype approach is that usually, you want to make API calls in relation to the component life cycles and events. So making axios available only by components code ensures that API calls are made only in the right moment(you can still pass axios as an argument to a function or a class). This is a quite broad topic so you can easily argue that what I wrote is not always true. Nevertheless, I hope that you can see some advantages of using the prototype approach. Thank you for clarification. Plugins: as many as you need, but no more than necessary. Before looking for a plugin, I ask myself: 1) How do you code it yourself? Learn something! 2) Is it worth the time to learn a fix? I usually put a time limit, if I'm not getting somewhere in 30 minutes with this problem, move on. 3) Will this plugin fix other problems? I recently used a date plugin because dates are nasty to work with. Life got better. 4) Does the plugin really save time? If you spend hours fighting with a plugin, find another solution. 5) How many moving parts does this project have, should we add more? The more plugins you have, the more likely there is to be a conflict... with code you didn't write. I agree with your last sentence - authors of this kind of pretty trivial package, even when they create their stuff with the best of intentions, would do good to point out the alternative to using their package, rather than selling it as the best thing since sliced bread (which, I'm hasty to add, the author of vue-axios is NOT doing). Authors doing so are benefiting their potential users by educating them. Heh. I hadn't heard of that one, even though I've used axios. Will definitely stick with plain axios. At work, we create an Axios object, and just do all the API work in VueX - that way we don't have to worry about writing the axios element all over the code! :D Couldn't agree with this article more. I've seen Vue Axios used so much, I'm just confused as to why anyone needs it. It doesn't really make setting up Axios any easier and it's just unnecessary. Just went looking into my personal project, thankfully I am using the base axios package. However I did notice I am using vue-lodash. I wonder if that is a similar example. Wow! I had no idea adding axios would be so simple. I'm done with this "plugin". :P Your examples are misleading as that is not the correct way to construct and load a Vue plugin Can you provide a working/correct example? It's not claiming to be a 'plugin'. But if you want to attach a utill, he says it's fine to use the base lib and just attach it to the prototype. And I tend to agree in the case of Axios. Vue.use()just checks if it wasn't already registered and calls the install()method. And in the docs (vuejs.org/v2/guide/plugins.html) is mentioned; "want to bind something to the instance?" -> Vue.prototype.$myMethod = .... So I would argue there isn't a big difference. Just use it with care. I'm not sure what you mean. Could you explain?
https://practicaldev-herokuapp-com.global.ssl.fastly.net/heftyhead/lets-talk-about-an-unnecessary-but-popular-vue-plugin-1ied?utm_source=additional_box&amp;utm_medium=internal&amp;utm_campaign=regular&amp;booster_org=
CC-MAIN-2019-22
refinedweb
2,495
64.2
IDL (and therefore tlb) file generation for COM shared libraries Hi, I'm trying to use Qt/C++ for a project which will be used for .NET and Delphi. As C++/CLI wrapper is only good for .NET I need to find a better way, which seems like ActiveQt. I've investigated the examples and almost get everything about it. But there is a strange thing. If I build a simple QAxServer example with the settings "TEMPLATE = lib" for creating a dll, the compiler whines about the missing idl file, which could not be created due to missing "DumpIDL" meta object information. If I simply change the template to "TEMPLATE = app". I get an exe and a fully successful compile with both idl and tlb files. Here's the code I'm testing. QT -= gui QT += axserver TARGET = Foo TEMPLATE = app SOURCES += foo.cpp HEADERS += foo.h RC_FILE = Foo.rc target.path = . INSTALLS += target Foo.h #pragma once #include <QObject> class Foo : public QObject { Q_OBJECT Q_CLASSINFO("ClassID", "{A4C17D65-7896-4214-8679-74BE1CE2B35E}") Q_CLASSINFO("InterfaceID", "{B743C617-81D8-4015-BD38-4E1F671388F3}") public: Foo(QObject *parent = nullptr); ~Foo(); }; Foo.cpp #include <QAxFactory> #include "foo.h" QAXFACTORY_BEGIN("{A17FF66F-97EB-42F8-A95F-16E0395A0B7E}", "{D2385374-5F51-4950-BEB7-C5EC4D0F527C}") QAXCLASS(Foo) QAXFACTORY_END() Foo::Foo(QObject *parent) { (void)parent; } Foo::~Foo() { } Foo.rc 1 TYPELIB "Foo.rc" I know I can use the exe as a dll but there are some caveats. AFAIK importing exe methods are slower than importing dll methods. Thanks in advance. Looks like qmake is not running idc automatically. can you try running it manually and see if that works? Just a note (never tested) documentation states that Borland Delphi is not supported as client don't know if it's just the pre-embarcadero or even the new one but be aware of it qmake is running the idc tool fine. I had approved that with calling the tool manually. Got the same error message. The size of the final image is also quite different for a simple test code. Probably compiler is not putting some relevant information (metadata, i guess) into the image, but why? I'm just changing the template and rerunning the build process cleanly (otherwise the idc tool finds the old idl file and completes successfully). Regarding to the other statement, I'm also curious about the result with Delphi. As I couldn't export the methods and types properly yet, not have seen the final result (I'm open to the ideas. Doing almost the same thing with 'comapp' example but can not get them on the otherside. comapp is fine). Maybe the "not working" side was the widgets side. I won't be using any UI therefore may get away from it. Do you know how to exclude QWidgets dependency? (Although, I've investigated the ActiveQt code and seen the dependency to QWidgets inside, no matter necessary or not.) Also I've confirmed it with opening the final images via a hex editor (exe and dll). The information is really not there (dll). Smells like a bug, isn't it? Note: using "Q_INVOKABLE" macro for the methods made them visible in the COM assembly. Let me answer my question. You need to add a def file to your build process: Add this line to Foo.pro: DEF_FILE = Foo.def And create a Foo.def including these lines: EXPORTS DllCanUnloadNow PRIVATE DllGetClassObject PRIVATE DllRegisterServer PRIVATE DllUnregisterServer PRIVATE DumpIDL PRIVATE In the "app" template, these information is automatically put in the final image, but if you choose to create a dynamic library, you have to tell the compiler that you need them explicitly.
https://forum.qt.io/topic/67581/idl-and-therefore-tlb-file-generation-for-com-shared-libraries
CC-MAIN-2017-51
refinedweb
605
67.76
Advanced JBoss Class Loading Introduction One of the main concerns of a developer writing hot re-deployable JBoss applications is to understand how JBoss? or even the most fundamental of them all Why do I need to mess with all this class loading stuff anyway? This article tries to provide the reader with the knowledge required to answer these questions. It will start by trying to answer the last one, and then it will present several often encountered use cases and explain the behavior of the JBoss class loading mechanism when faced with those situations. The Need for Class Loaders and Class Loading Management Class Namespace Isolation An application server should ideally give its deployed applications the freedom to use whatever utility library and whatever version of the library they see fit, regardless of the presence of concurrent applications that want to use the same library. This is mandated by the J2EE specifications, which calls it class namespace isolation (Java EE 5 Specifications, Section EE.8.4). The fact that different applications load their classes in different class name spaces or class loading domains allow them to do just that: run whatever class version they like, oblivious to the fact that their neighbors use the same class. Java doesn't provide the formal notion of class version. So how is it possible to implement a class loading domain? The runtime identity of a class in Java 2 is defined by the fully qualified class name and its defining class loader. This means that the same class, loaded by two different class loaders, is seen by the Virtual Machine as two completely different types. If you like history, you probably know that this wasn't always the case. In Java 1.1 the runtime identity of a class was defined only by its fully qualified class name. That made Vijay Saraswat declare in 1997 that "[Java is not type-safe|]" and Sheng Liang and Gilad Bracha fixed it by strengthening the type system to include a class's defining class loader in addition to the name of the class to fully define the type. This is good and ... not so good. It is good because now its not possible anymore that a rogue class loader would re-define your "java.lang.String" class. The VM will detect that and throw a ClassCastException. It is also good because now it is possible to have class loading domains within the same VM. Not so good however, is the fact that passing an object instance by reference between two class loading domains is not possible. Doing so results in the dreaded ClassCastException. If you would like to know more details about how this happens, please follow this link. Not being able to pass an Object by reference means you have to fall back on serialization and serialization means performance degradation. Hot Redeployment Returning to application servers, the need for class loaders becomes probably obvious: this is how an application server implements class namespace isolation: each application gets its own class loader at deployment, and hence, its own "version" of classes. To extend this even more, it would be nice if we could re-deploy an application (i.e. instantiate a newer version of a class), at run-time, without necessarily bringing down the VM and re-starting it to reload the class. That would mean 24x7 uptime. Java 4 doesn't intrinsically support the concept of hot re-deployment. Once a dependent class reference was added to the runtime constant pool of a class, it is not possible to drop that reference anymore. However, it is possible to trick the server (or the VM) into doing this. If application A interacts with application B, but doesn't have any direct references to the B classes, and B changes, let's say a newer and better version becomes available, it is possible to create a new class loader, load the new B classes and have the invocation bus (the application server) route all invocations from A to the new B classes. This way, A deals with a new version of B without even knowing it. If the application server is careful to drop all explicit references to the old B classes, they will eventually be garbage collected and the old B will eventually disappear from the system. Sharing Classes Isolating class loading domains is nice. Our applications will run happily and safe. But very slowly, when it comes to interacting with each other. This is because each interaction involves passing arguments by value, which means serialization, which means overhead. We're sometimes (actually quite often) faced with the situation where in we would like to allow applications to share classes. We know precisely, for example, that in our environment, application A and B, otherwise independent, will always use the same version of the utility library and doing so, they could pass references among themselves without any problem. The added benefit in this case is that the invocations will be faster, given the fact serialization is cut out. One word of caution though. In this situation, if we hot redeploy the utility library, we also must re-deploy the application A and B: the current A and B classes used direct references to the utility classes, so they're tainted forever, they won't ever be able to use the new utility classes. JBoss makes possible for applications to share classes. JBoss 3.x does that by default. JBoss 4.0 does this for the "standard" configuration, but maintains class namespace isolation between applications for its "default" configuration. JBoss 4.0.1 reverts to the 3.x convention. Class Repositories or How JBoss Class Loading Works JBoss makes sharing classes possible by introducing the concept of class loader repository. The central piece of the class loading mechanism is the org.jboss.mx.loading.UnifiedClassLoader3 (UCL). UnifiedClassLoader3 extends URLClassLoader. Each UCL is associated with a shared repository of classes and resources, usually an instance of org.jboss.mx.loading.UnifiedLoaderRepository3. Every UCL is associated with a single instance of UnifiedLoaderRepository3, but a repository can have multiple UCLs. A UCL may have multiple URLs associated with it for class and resource loading. By default, there is a single UnifiedLoaderRepository3 shared across all UCL instances. UCLs form a single flat class namespace. The class loader parent for each UCL is a NoAnnotationClassLoader instance. The NoAnnotationClassLoader extends URLClassLoader. A singleton NoAnnotationClassLoader instance is created during the server's boot process and its job is to define classes available in the $JBOSS_HOME/lib libraries (ex: commons-logging.jar, concurrent.jar, dom4j.jar, jboss-common.jar, jboss-jmx.jar, jboss-system.jar, log4j-boot.jar, xercesImpl.jar, etc.). The NoAnnotationClassLoader's parent is the system class loader (sun.misc.Launcher$AppClassLoader). When a new UCL is created and associated with the repository, it contributes to the repository a map of packages it can potentially serve classes from. It doesn't add any class to the repository's class cache yet, because nobody has requested any class at this stage. The repository just walks through the class loader's URL to see what packages that UCL is capable of handling. So, the UCL just declares that it can potentially serve classes from the packages that are present in its classpath. When requested to load a class, a UCL overrides the standard Java2 class loading model by first trying to load a class from its associated repository's cache. If it doesn't find it there, it delegates the task of loading the class to the first UCL associated with the repository that declared it can load that class. The order in which the UCLs have been added to the repository becomes important, because this is what defines "first" in this context. If no "available" UCL is found, the initiating UCL falls back to the standard Java2 parent delegation. This explains why you are still able to use "java.lang.String", for example. At the end of this process, if no class definition is found in the bootstrap libraries, in the $JBOSS_HOME/lib libraries nor among the libraries associated with the repository's UCL, the UCL throws a ClassNotFoundException. However, if one of the pair UCL is able to load the class, the class will be added to the repository's class cache and from this moment on, it will be returned to any UCL requesting it. Even if the Java bootstrap packages or $JAVA_HOME/lib packages are not added to the repository's package map, the classes belonging to those packages can be loaded through the process described above and they are added to the repository too. This explains why you'll find "java.lang.String" in the repository. Class sharing can be turned off. J2EE-style class namespace isolation is available. You get an "isolated" application by scoping the application's deployment. At the JBoss class loading management system level, scoping translates into creating a child repository. A scoped application still can load the classes present in the classpaths of the UCLs or the root repository. Depending on whether repository's "Java2ParentDelegation" flag is turned on or off, a scoped application even has access to the class instances available in the root repository's cache. However, sibling child repositories can never share classes. Note: Even if an HierarchicalLoaderRepository3$NoParentClassLoader instance has its parent set to be an instance of NoAnnotationURLClassLoader, as represented above, the NoParentClassLoader implementation of loadClass() always throws a ClassNotFoundException to force the UCL to only load from its URLs. We will look closer at how NoParentClassLoader works and how a scoped application loads a class available in the system's bootstrap libraries when we present the Cases 3 and 4, below. Real World Scenarios We will explore the complex interactions presented above based on concrete use cases. We start by assuming that we want to deploy our own application (be it a JBoss service, a complex enterprise archive or a simple stateless session bean), and this application relies on an external library. For simplicity, we could assume that the utility library contains only a single class, org.useful.Utility. The Utility class can be packed together with the application classes inside the application archive, or it could be packed in its own archive, utility.jar. We could also assume that we always use the JBoss' default configuration. Our hypothetical application consists of a single class, org.pkg1.A. We will consider several common situations: Case 1. The Utility.class is present in the application's archive, but nowhere else on the server. The short story: The current UCL will become the defining class loader of the class, and the class will be added to the repository's class cache. The details of the process are presented below. First time the application needs to use a strong-typed Utility reference, the VM asks the current UCL to load the class. The UCL tries to get the class from the repository's cache (1). If it is found, the class is returned and the process stops right here. If the class is not found, the UCL queries the repository for UCLs capable to load classes from the package the unknown class is part of (3). Being the single UCL able to define the class, the control returns to it and load manager calls loadClassLocally() on it (4). loadClassLocally() first tries to call super.loadClass() (5), which ends by involving the NoAnnotationClassLoader in the loading process. If the class is present in the bootstrap libraries or $JBOSS_HOME/lib (the URLs associated with the NoAnnotationClassLoader instance), it is loaded. Otherwise, the class is loaded from the URLs associated with the current UCL. Finally, the class is added to the repository's class cache (6). This is the configuration of the UnifiedLoaderRepository after the class loading takes place. Case 2. The Utility.class is present both in the application's archive AND server/default/lib. The deployment is non-scoped. The short story: The version of the class available in server/default/lib/utility.jar will be used by the new deployment. The version of the class packed with the deployment will be ignored. The key element here is that when getPackageClassLoaders() is invoked on the repository, the method calls returns two potential classloaders that can load org.useful.Utility: UCL0 and UCL1. The UCL0 is chosen, because it was added to the repository before UCL1 and it will be used to load org.useful.Utility. This is the configuration of the UnifiedLoaderRepository after the class loading takes place. Case 3. The Utility.class is present both in the application's archive AND server/default/lib. The deployment is scoped and Java2ParentDelegation is turned off (default). The short story: The utility class is loaded from the application's archive. Because Java2ParentDelegation is turned off by default, the Step (1.1) is never executed, parentRepository.getCachedClass() never gets called, so the UCL doesn't have access to the repository's cached classes. Within the scope of the call to getPackageClassLoaders() at Step 3, the child repository also calls getPackageClassLoaders() on its parent, and also includes into the returned class loader set a UCL (constructed on the spot and associated to the child repository) that has among its ancestors an instance of NoAnnotationURLClassLoader, which ultimately can reach the system class loader. Why is that? Remember that the UCL's parent, HierarchicalLoaderRepository3$NoParentClassLoader, overrides loadClass() to always throw a ClassNotFoundException, thus forcing the UCL to only load from its URLs. If the UCL relies only on its class loader parent to load bootstrap classes, it will throw ClassNotFoundException and fail when your application wants to load "java.lang.String", for example. The NoAnnotationURLClassLoader-delegating UCL instance included in the return set provides a way load bootstrap library classes. Always. Case 4. The Utility.class is present both in the application's archive AND server/default/lib. The deployment is scoped, but java2ParentDelegation is turned on. When Java2ParentDelegation is turned on, the Step (1.1) is executed, and if a cached class is found in the parent repository, it is returned and the process stops here. Within the scope of the call to getPackageClassLoaders() at Step (3), the child repository also calls getPackageClassLoaders() on its parent, but does not include into the returned class loader set a UCL with a parent to the system class loader. If there are no class loaders in the repository capable of handling the request ask the class loader itself in the event that its parent(s) can load the class (repository.loadClassFromClassLoader()). Question: What happens if the parent delegation is true and a classloader already loaded the class in the parent repository's class cache? Answer: My scoped application will use the already loaded class from the parent repository's class cache. - TO_DO Create similar use cases for WARs. Deploying a WAR involves an extra class loader (WebappClassLoader) created by the servlet container, that can be independently configured using the WAR's jboss-web.xml. Explain how UCL3 are created, by whom and why. Explain the relationship between UCL created during an EAR deployment. Multiple UCLs are created (the one corresponding to the EAR and then for the embedded JARs, WARs, etc). More details on Java2ParentDelegation flag. Make it clear that it only affects the relationship between repositories not classloaders. If a class is not found in any repository, then the standard delegation model is invoked (i.e. the parent classloader is asked to find the class). - Referenced by: ClassloadedByOneUCLIsVISIBLEToAnotherAlthoughTheyAreSIBLINGSHOW
https://community.jboss.org/wiki/jbossclassloadingusecases
CC-MAIN-2015-22
refinedweb
2,588
54.02
(For more resources related to this topic, see here.) Meet AngularJS. Getting familiar with the framework AngularJS is a recent addition to the client-side MVC frameworks list, yet it has managed to attract a lot of attention, mostly due to its innovative templating system, ease of development, and very solid engineering practices. Indeed, its templating system is unique in many respects: - It uses HTML as the templating language - It doesn't require an explicit DOM refresh, as AngularJS is capable of tracking user actions, browser events, and model changes to figure out when and which templates to refresh - It has a very interesting and extensible components subsystem, and it is possible to teach a browser how to interpret new HTML tags and attributes The templating subsystem might be the most visible part of AngularJS, but don't be mistaken that AngularJS is a complete framework packed with several utilities and services typically needed in single-page web applications. AngularJS also has some hidden treasures, dependency injection (DI) and strong focus on testability. The built-in support for DI makes it easy to assemble a web application from smaller, thoroughly tested services. The design of the framework and the tooling around it promote testing practices at each stage of the development process. Finding your way in the project AngularJS is a relatively new actor on the client-side MVC frameworks scene; its 1.0 version was released only in June 2012. In reality, the work on this framework started in 2009 as a personal project of Miško Hevery, a Google employee. The initial idea turned out to be so good that, at the time of writing, the project was officially backed by Google Inc., and there is a whole team at Google working full-time on the framework. AngularJS is an open source project hosted on GitHub () and licensed by Google, Inc. under the terms of the MIT license. The community At the end of the day, no project would survive without people standing behind it. Fortunately, AngularJS has a great, supportive community. The following are some of the communication channels where one can discuss design issues and request help: - angular@googlegroups.com mailing list (Google group) - Google + community at - #angularjs IRC channel - [angularjs] tag at AngularJS teams stay in touch with the community by maintaining a blog () and being present in the social media, Google + (+ AngularJS), and Twitter (@angularjs). There are also community meet ups being organized around the world; if one happens to be hosted near a place you live, it is definitely worth attending! Online learning resources AngularJS has its own dedicated website () where we can find everything that one would expect from a respectable framework: conceptual overview, tutorials, developer's guide, API reference, and so on. Source code for all released AngularJS versions can be downloaded from. People looking for code examples won't be disappointed, as AngularJS documentation itself has plenty of code snippets. On top of this, we can browse a gallery of applications built with AngularJS (). A dedicated YouTube channel () has recordings from many past events as well as some very useful video tutorials. Libraries and extensions While AngularJS core is packed with functionality, the active community keeps adding new extensions almost every day. Many of those are listed on a dedicated website:. Tools AngularJS is built on top of HTML and JavaScript, two technologies that we've been using in web development for years. Thanks to this, we can continue using our favorite editors and IDEs, browser extensions, and so on without any issues. Additionally, the AngularJS community has contributed several interesting additions to the existing HTML/JavaScript toolbox. Batarang Batarang is a Chrome developer tool extension for inspecting the AngularJS web applications. Batarang is very handy for visualizing and examining the runtime characteristics of AngularJS applications. We are going to use it extensively in this article to peek under the hood of a running application. Batarang can be installed from the Chrome's Web Store (AngularJS Batarang) as any other Chrome extension. Plunker and jsFiddle Both Plunker () and jsFiddle () make it very easy to share live-code snippets (JavaScript, CSS, and HTML). While those tools are not strictly reserved for usage with AngularJS, they were quickly adopted by the AngularJS community to share the small-code examples, scenarios to reproduce bugs, and so on. Plunker deserves special mentioning as it was written in AngularJS, and is a very popular tool in the community. IDE extensions and plugins Each one of us has a favorite IDE or an editor. The good news is that there are existing plugins/extensions for several popular IDEs such as Sublime Text 2 (), Jet Brains' products (), and so on. AngularJS crash course Now that we know where to find the library sources and their accompanying documentation, we can start writing code to actually see AngularJS in action. Hello World – the AngularJS example Let's have a look at the typical "Hello, World!" example written in AngularJS to get the first impression of the framework and the syntax it employs. <html> <head> <script src = " 1.0.7/angular.js"> </script> </head> <body ng-app <h1>Hello, {{name}}!</h1> </body> </html> First of all, we need to include the AngularJS library to make our sample run correctly in a web browser. It is very easy as AngularJS, in its simplest form, is packaged as a single JavaScript file. AngularJS library is a relatively small one: a minified and gzipped version has a size of around 30 KB. A minified version without gzip compression has a size of around 80 KB. It doesn't require any third-party dependencies. For the short examples in this article we are going to use an un-minified, developer-friendly version hosted on Google's content delivery network (CDN). Source code for all versions of AngularJS can be also downloaded from. Including the AngularJS library is not enough to have a running example. We need to bootstrap our mini application. The easiest way of doing so is by using the custom ng-app HTML attribute. Closer inspection of the <body> tag reveals another non-standard HTML attribute: ng-init. We can use ng-init to initialize model before a template gets rendered. The last bit to cover is the {{name}} expression which simply renders model value. Even this very first, simple example brings to light some important characteristics of the AngularJS templating system, which are as follows: - Custom HTML tags and attributes are used to add dynamic behavior to an otherwise static HTML document - Double curly braces ({{expression}}) are used as a delimiter for expressions outputting model values In the AngularJS, all the special HTML tags and attributes that the framework can understand and interpret are referred to as directives. Two-way data binding Rendering a template is straightforward with AngularJS; the framework shines when used to build dynamic web applications. In order to appreciate the real power of AngularJS, let's extend our "Hello World" example with an input field, as shown in the following code: <body ng-app Say hello to: <input type="text" ng- <h1>Hello, {{name}}!</h1> </body> There is almost nothing special about the <input> HTML tag apart from the additional ng-model attribute. The real magic happens as soon as we begin to type text into the <input> field. All of a sudden the screen gets repainted after each keystroke, to reflect the provided name! There is no need to write any code that would refresh a template, and we are not obliged to use any framework API calls to update the model. AngularJS is smart enough to detect model changes and update the DOM accordingly. Most of the traditional templating system renders templates in a linear, one-way process: a model (variables) and a template are combined together to produce a resulting markup. Any change to the model requires re-evaluation of a template. AngularJS is different because any view changes triggered by a user are immediately reflected in the model, and any changes in the model are instantly propagated to a template. The MVC pattern in AngularJS Most existing web applications are based on some form of the well-known model-view-controller (MVC) pattern. But the problem with the MVC is that it is not a very precise pattern, but rather a high-level, architectural one. Worse yet, there are many existing variations and derivatives of the original pattern (MVP and MVVM seem to be the most popular ones). To add to the confusion, different frameworks and developers tend to interpret the mentioned patterns differently. This results in situations where the same MVC name is used to describe different architectures and coding approaches. Martin Fowler summarizes this nicely in his excellent article on GUI architectures ():. The AngularJS team takes a very pragmatic approach to the whole family of MVC patterns, and declares that the framework is based on the MVW (model-view-whatever) pattern. Basically one needs to see it in action to get the feeling of it. Bird's eye view All the "Hello World" examples we've seen so far didn't employ any explicit layering strategy: data initialization, logic, and view were all mixed together in one file. In any real-world application, though, we need to pay more attention to a set of responsibilities assigned to each layer. Fortunately, AngularJS provides different architectural constructs that allows us to properly build more complex applications. All the subsequent examples throughout the article omit the AngularJS initialization code (scripts inclusion, ng-app attribute, and so on) for readability. Let's have a look at the slightly modified "Hello World" example: <div ng- Say hello to: <input type="text" ng-<br> <h1>Hello, {{name}}!</h1> </div> The ng-init attribute was removed, and instead we can see a new ng-controller directive with a corresponding JavaScript function. The HelloCtrl accepts a rather mysterious $scope argument as follows: var HelloCtrl = function ($scope) { $scope.name = 'World'; } Scope A $scope object in AngularJS is here to expose the domain model to a view (template). By assigning properties to scope instances, we can make new values available to a template for rendering. Scopes can be augmented with both data and functionality specific to a given view. We can expose UI-specific logic to templates by defining functions on a scope instance. For example, one could create a getter function for the name variable, as given in the following code: var HelloCtrl = function ($scope) { $scope.getName = function() { return $scope.name; }; } And then use it in a template as given in the following code: <h1>Hello, {{getName()}}!</h1> The $scope object allows us to control precisely which part of the domain model and which operations are available to the view layer. Conceptually, AngularJS scopes are very close to the ViewModel from the MVVM pattern. Controller The primary responsibility of a controller is to initialize scope objects. In practice, the initialization logic consists of the following responsibilities: - Providing initial model values - Augmenting $scope with UI-specific behavior (functions) Controllers are regular JavaScript functions. They don't have to extend any framework-specific classes nor call any particular AngularJS APIs to correctly perform their job. Please note that a controller does the same job as the ng-init directive, when it comes to setting up initial model values. Controllers make it possible to express this initialization logic in JavaScript, without cluttering HTML templates with code. Model AngularJS models are plain, old JavaScript objects. We are not obliged to extend any of the framework's base classes nor construct model objects in any special way. It is possible to take any existing, pure JavaScript classes or objects and use them in the same way as in the model layer. We are not limited to model properties being represented by primitive values (any valid JavaScript object or an array can be used). To expose a model to AngularJS you simply assign it to a $scope. AngularJS is not intrusive and lets us keep model objects free from any framework-specific code. Scopes in depth Each $scope is an instance of the Scope class. The Scope class has methods that control the scope's lifecycle, provide event-propagation facility, and support the template rendering process. Hierarchy of scopes Let's have another look at the simple HelloCtrl example, which we've examined already: var HelloCtrl = function ($scope) { $scope.name = 'World'; } The HelloCtrl looks similar to a regular JavaScript constructor function, there is absolutely nothing special about it apart from the $scope argument. Where might this argument might be coming from? A new scope was created by the ng-controller directive using the Scope.$new() method call. Wait a moment; it looks like we need to have at least one instance of a scope to create a new scope! Indeed, AngularJS has a notation of the $rootScope (a scope that is a parent of all the other scopes). The $rootScope instance gets created when a new application is bootstrapped. The ng-controller directive is an example of a scope-creating directive. AngularJS will create a new instance of the Scope class whenever it encounters a scope-creating directive in the DOM tree. A newly-created scope will point to its parent scope using the $parent property. There can be many scope-creating directives in the DOM tree and as a result many scopes will be created. Scopes form a parent-child, tree-like relationship rooted at the $rootScope instance. As scopes' creation is driven by the DOM tree, it is not surprising that scopes' tree will mimic the DOM structure. Now that we know that some directives create new child scopes you might be wondering why all this complexity is needed. To understand this, let's have a look at the example that makes use of a ng-repeat repeater directive. The controller is as follows: var WorldCtrl = function ($scope) { $scope.population = 7000; $scope.countries = [ {name: 'France', population: 63.1}, {name: 'United Kingdom', population: 61.8}, ]; }; And the markup fragment looks in the following manner: <ul ng- <li ng- {{ country.name }} has population of {{ country.population }} </li> <hr> World's population: {{ population }} millions </ul> The ng-repeat directive allows us to iterate over a collection of countries and create new DOM elements for each item in a collection. The syntax of the ng-repeat directive should be easy to follow; a new variable country is created for each item and exposed on a $scope to be rendered by a view. But there is a problem here, that is, a new variable needs to be exposed on a $scope for each country and we can't simply override previously exposed values. AngularJS solves this problem by creating a new scope for each element in a collection. Newly created scopes will form a hierarchy closely matching the DOM tree structure, and we can visualize this by using the excellent Batarang extension for Chrome as shown in the following screenshot: As we can see in the screenshot, each scope (boundaries marked with a rectangle) holds its own set of model values. It's possible to define the same variable on different scopes without creating name collisions (different DOM elements will simply point to different scopes and use variables from a corresponding scope to render a template). This way each item has its own namespace, in the previous example every <li> element gets its own scope where the country variable can be defined. Scopes hierarchy and inheritance Properties defined on one scope are visible to all child scopes, provided that a child scope doesn't redefine a property using the same name! This is very useful in practice, since we don't need to redefine over and over again properties that should be available throughout a scope hierarchy. Building on our previous example, let's assume that we want to display the percentage of the world's population that lives in a given country. To do so, we can define the worldsPercentage function on a scope managed by the WorldCtrl as given in the following code: $scope.worldsPercentage = function (countryPopulation) { return (countryPopulation / $scope.population)*100 ; } And then call this function from each scope instance created by the ng-repeat directive as follows: <li ng- {{country.name}} has population of {{country.population}}, {{ worldsPercentage(country.population )}} % of the World's population </li> Scope's inheritance in AngularJS follows the same rules as prototypical inheritance in JavaScript (when we try to read a property, the inheritance tree will be traversed upwards till a property is found). Perils of the inheritance through the scopes hierarchy Inheritance through the scopes hierarchy is rather intuitive and easy to understand when it comes to the read access. When it comes to the write access, however, things become a little bit complicated. Let's see what happens if we define a variable on one scope and omit if from a child scope. The JavaScript code is as follows: var HelloCtrl = function ($scope) { }; And the view code is as follows: <body ng-app <h1>Hello, {{name}}</h1> <div ng- Say hello to: <input type="text" ng- <h2>Hello, {{name}}!</h2> </div> </body> If you try to run this code, you will observe that the name variable is visible across the whole application; even if it was defined on the top-most scope only! This illustrates that variables are inherited down the scope hierarchy. In other words, variables defined on a parent scope are accessible in child scopes. Now, let's observe what will happen if we start to type text into the <input> box, as shown in the following screenshot: You might be a bit surprised to see that a new variable was created in the scope initialized by the HelloCtrl controller, instead of changing a value set up on the $rootScope instance. This behavior becomes less surprising when we realize that scopes prototypically inherit from each other. All the rules that apply to the prototypical inheritance of objects in JavaScript apply equally to scopes prototypical inheritance. Scopes are just JavaScript objects after all. There are several ways of influencing properties defined on a parent scope from a child scope. Firstly, we could explicitly reference a parent scope using the $parent property. A modified template would look as follows: <input type="text" ng- While it is possible to solve an issue using this example by directly referencing a parent scope, we need to realize that this is a very fragile solution. The trouble is that an expression used by the ng-model directive makes strong assumptions about the overall DOM structure. It is enough to insert another scope-creating directive somewhere above the <input> tag and $parent will be pointing to a completely different scope. As a rule of thumb, try to avoid using the $parent property as it strongly links AngularJS expressions to the DOM structure created by your templates. An application might easily break as a result of simple changes in its HTML structure. Another solution involves binding to a property of an object and not directly to a scope's property. The code for this solution is as follows: <body ng-app <h1>Hello, {{thing.name}}</h1> <div ng- Say hello to: <input type="text" ng- <h2>Hello, {{thing.name}}!</h2> </div> </body> This approach is much better as it doesn't assume anything about the DOM tree structure. Avoid direct bindings to scope's properties. Two-way data binding to object's properties (exposed on a scope) is a preferred approach. As a rule of thumb, you should have a dot in an expression provided to the ng-model directive (for example, ng-model="thing.name"). AngularJS core services and directives make use of this event bus to signal important changes in the application's state. For example, we can listen to the $locationChangeSuccess event (broadcasted from the $rootScope instance) to be notified whenever a location (URL in a browser) changes, as given in the following code: $scope.$on('$locationChangeSuccess', function(event, newUrl, oldUrl){ //react on the location change here //for example, update breadcrumbs based on the newUrl }); The $on method available on each scope instance can be invoked to register a scope-event handler. A function acting as a handler will be invoked with a dispatched event object as its first argument. Subsequent arguments will correspond to the event's payload and are event-type dependent. Similar to the DOM events, we can call the preventDefault() and stopPropagation() methods on event object. The stopPropagation() method call will prevent an event from bubbling up the scopes' hierarchy, and is available only for events dispatched upwards in the hierarchy ($emit). While AngularJS event system is modeled after the DOM one, both event propagation systems are totally independent and have got no common parts. While events propagated through the scopes' hierarchy are very elegant solutions to several problems (especially when it comes to notifications related to global, asynchronous state changes), those should be used sparingly. Usually we can rely on the two-way data binding to end up with a cleaner solution. In the entire AngularJS framework, there are only three events being emitted ($includeContentRequested, $includeContentLoaded, $viewContentLoaded), and seven events being broadcasted ($locationChangeStart, $locationChangeSuccess, $routeUpdate, $routeChangeStart, $routeChangeSuccess, $routeChangeError, $destroy). As you can see, scope events are used very sparingly and we should evaluate other options (mostly the two-way data binding) before sending custom events. Don't try to mimic the DOM event-based programming model in AngularJS. Most of the time there are better ways of structuring your application, and you can go very far with the two-way data binding. Scopes lifecycle Scopes are necessary to provide isolated namespaces and avoid variable name collisions. Scopes which are smaller and organized in a hierarchy help in managing memory usage. When one of the scopes is no longer needed, it can be destroyed. As a result, model and functionality exposed on this scope will be eligible for garbage collection. New scopes are usually brought to life and destroyed by the scope-creating directives. It is also possible to manually create and destroy scopes by calling the $new() and $destroy() methods, respectively (both methods are defined on the Scope type). View We've seen enough examples of AngularJS templates to realize that it is not yet another templating language, but quite a different beast. Not only does the framework rely on the HTML for its template syntax and allow us to extend the HTML vocabulary, but it has the unique ability to refresh parts of the screen without any manual intervention! In reality, AngularJS has even more intimate connections to HTML and the DOM as it depends on a browser to parse the template's text (as a browser would do with any other HTML document). After a browser is done transforming the markup's text to the DOM tree, AngularJS kicks in and traverses the parsed DOM structure. Each time it encounters a directive, AngularJS executes its logic to turn directives into dynamic parts of the screen. Since AngularJS depends on a browser to parse templates, we need to ensure that the markup represents valid HTML. Pay special attention to close the HTML tags properly (failing to do so won't produce any error messages, but the view won't be rendered correctly). AngularJS works using the live, valid DOM tree! AngularJS makes it possible to enrich HTML's vocabulary (we can add new attributes or HTML elements and teach a browser how to interpret them). It is almost similar to creating a new domain-specific language (DSL) on top of HTML and instructing a browser on how to make sense of the new instructions. You can often hear that AngularJS "teaches browsers new tricks". Declarative template view – the imperative controller logic What is probably more important, however, is not the syntax and functionality of individual directives but rather the underlying AngularJS philosophy of building UIs. AngularJS promotes a declarative approach to UI construction. What it means in practice is that templates are focused on describing a desired effect rather than on ways of achieving it. It all might sound a bit confusing, so an example might come in handy here. Let's imagine that we were asked to create a form where a user can type in a short message, and then send it by clicking on a button. There are some additional user-experience (UX) requirements such as message size should be limited to 100 characters, and the Send button should be disabled if this limit is exceeded. A user should know how many characters are left as they type. If the number of remaining characters is less than ten, the displayed number should change the display style to warn users. It should be possible to clear text of a provided message as well. A finished form looks similar to the following screenshot: The preceding requirements are not particularly challenging and describe a fairly standard text form. Nevertheless, there are many UI elements to coordinate here such as we need to make sure that the button's disabled state is managed correctly, the number of remaining characters is accurate and displayed with an appropriate style, and so on. The very first implementation attempt looks as follows: <div class="container" ng- <div class="row"> <textarea ng-{{message}}</textarea> </div> <div class="row"> <button ng-Send</button> <button ng-Clear</button> </div> </div> Let's use the preceding code as a starting point and build on top of it. Firstly, we need to display the number of remaining characters, which is easy enough, as given in the following code: <span>Remaining: {{remaining()}}</span> The remaining() function is defined in the TextAreaWithLimitCtrl controller on the $scope as follows: $scope.remaining = function () { return MAX_LEN - $scope.message.length; }; Next, we need to disable the Send button if a message doesn't comply with the required length constraints. This can be easily done with a little help from the ng-disabled directive as follows: <button ng-disabled="!hasValidLength()"...>Send</button> We can see a recurring pattern here. To manipulate UI, we only need to touch a small part of a template and describe a desired outcome (display number of remaining characters, disable a button, and so on) in terms of the model's state (size of a message in this case). The interesting part here is that we don't need to keep any references to DOM elements in the JavaScript code and we are not obliged to manipulate DOM elements explicitly. Instead we can simply focus on model mutations and let AngularJS do the heavy lifting. All we need to do is to provide some hints in the form of directives. Coming back to our example, we still need to make sure that the number of remaining characters changes style when there are only a few characters left. This is a good occasion to see one more example of the declarative UI in action, as given in the following code: <span ng- Remaining: {{remaining()}}</span> where the shouldWarn() method is implemented as follows: $scope.shouldWarn = function () { return $scope.remaining() < WARN_THRESHOLD; }; The CSS class change is driven by the model mutation, but there is no explicit DOM manipulation logic anywhere in the JavaScript code! UI gets repainted based on a declaratively expressed "wish". What we are saying using the ng-class directive is this: "the text-warning CSS class should be added to the <span> element, every time a user should be warned about exceeded character limit". This is different from saying that "when a new character is entered and the number of characters exceeds the limit, I want to find a <span> element and change the text-warning CSS class of this element". What we are discussing here might sound like a subtle difference, but in fact declarative and imperative approaches are quite opposite. The imperative style of programming focuses on describing individual steps leading to a desired outcome. With the declarative approach, focus is shifted to a desired result. The individual little steps taken to reach to this result are taken care of by a supporting framework. It is like saying "Dear AngularJS, here is how I want my UI to look when the model ends up in a certain state. Now please go and figure out when and how to repaint the UI". The declarative style of programming is usually more expressive as it frees developers from giving very precise, low-level instructions. The resulting code is often very concise and easy to read. But for the declarative approach to work, there must be machinery that can correctly interpret higher-level orders. Our programs start to depend on these machinery decisions and we need to give up some of the low-level control. With the imperative approach, we are in full control and can fine tune each and every single operation. We've got more control, but the price to pay for "being in charge" is a lot of lower-level, repetitive code to be written. People familiar with SQL language will find all this sounding familiar (SQL is a very expressive, declarative language for adhoc data querying). We can simply describe the desired result (data to be fetched) and let a (relational) database figure out how to go about retrieving specified data. Most of the time, this process works flawlessly and we quickly get what we have asked for. Still there are cases where it is necessary to provide additional hints (indexes, query planner hints, and so on) or take control over data-retrieval process to fine tune performance. Directives in AngularJS templates declaratively express the desired effect, so we are freed from providing step-by-step instructions on how to change individual properties of DOM elements (as is often the case in applications based on jQuery). AngularJS heavily promotes declarative style of programming for templates and imperative one for the JavaScript code (controllers and business logic). With AngularJS , we rarely apply low-level, imperative instructions to the DOM manipulation (the only exception being code in directives). As a rule of thumb, one should never manipulate the DOM elements in AngularJS controllers. Getting a reference to a DOM element in a controller and manipulating element's properties indicates imperative approach to UI - something that goes against AngularJS way of building UIs. Declarative UI templates written using AngularJS directives allow us to describe quickly complex, interactive UIs. AngularJS will take all the low-level decisions on when and how to manipulate parts of the DOM tree. Most of the time AngularJS does "the right thing" and updates the UI as expected (and in a timely fashion). Still, it is important to understand the inner workings of AngularJS, so that we can provide appropriate hints to the framework if needed. Using the SQL analogy once again here, most of the time we don't need to worry about the work done by a query planner. But when we start to hit performance problems, it is good to know how query planner arrived at its decisions so that we can provide additional hints. The same applies to UIs managed by AngularJS: we need to understand the underlying machinery to effectively use templates and directives. Modules and dependency injection Vigilant readers have probably noticed that all the examples presented so far were using global constructor functions to define controllers. But global state is evil, it hurts application structure, makes code hard to maintain, test, and read. By no means is AngularJS suggesting usage of global state. On the contrary, it comes with a set of APIs that make it very easy to define modules and register objects in those modules. Modules in AngularJS Let's see how to turn an ugly, globally-defined controller definition into its modular equivalent, before a controller is declared as follows: var HelloCtrl = function ($scope) { $scope.name = 'World'; } And when using modules it looks as follows: angular.module('hello', []) .controller('HelloCtrl', function($scope){ $scope.name = 'World'; }); AngularJS itself defines the global angular namespace. There are various utility and convenience functions exposed in this namespace, and module is one of those functions. A module acts as a container for other AngularJS managed objects (controllers, services, and so on). As we are going to see shortly, there is much more to learn about modules than simple namespacing and code organization. To define a new module we need to provide its name as the very first argument to the module function call. The second argument makes it possible to express a dependency on other modules (in the preceding module we don't depend on any other modules). A call to the angular.module function returns an instance of a newly created module. As soon as we've got access to this instance, we can start defining new controllers. This is as easy as invoking the controller function with the following arguments: - Controller's name (as string) - Controller's constructor function Globally-defined controller's constructor functions are only good for quick-code examples and fast prototyping. Never use globally-defined controller functions in larger, real life applications. A module is defined now, but we need to inform AngularJS about its existence. This is done by providing a value to the ng-app attribute as follows: <body ng- Forgetting to specify a module's name in the ng-app attribute is a frequent mistake and a common source of confusion. Omitting a module name in the ng-app attribute will result in an error indicating that a controller is undefined. Collaborating objects As we can see, AngularJS provides a way to organize objects into modules. A module can be used not only to register objects that are directly invoked by the framework (controllers, filters, and so on) but any objects defined by applications' developers. Module pattern is extremely useful to organize our code, but AngularJS goes one step further. In addition to registering objects in a namespace, it is also possible to declaratively describe dependencies among those objects. Dependency injection We could already see that the $scope object was being mysteriously injected into controllers' instances. AngularJS is somehow able to figure out that a new scope instance is needed for a controller, and then creates a new scope instance and injects it. The only thing that controllers had to do was to express the fact that it depends on a $scope instance (no need to indicate how a new $scope object should be instantiated, should this $scope instance be a newly created one or reused from previous calls). The whole dependency management boils down to saying something along those lines: "To function correctly I need a dependency (collaborating object): I don't know from where it should be coming or how it should be created. I just know that I need one, so please provide it". AngularJS has the dependency injection (DI) engine built in. It can perform the following activities: - Understand a need for a collaborator expressed by objects - Find a needed collaborator - Wire up objects together into a fully-functional application The idea of being able to declaratively express dependencies is a very powerful one; it frees objects from having to worry about collaborating objects' lifecycles. Even better, all of a sudden it is possible to swap collaborators at will, and then create different applications by simply replacing certain services. This is also a key element in being able to unit test components effectively. Benefits of dependency injection To see the full potential of a system integrated using dependency injection, let's consider an example of a simple notification service to which we can push messages and retrieve those messages later on. To somewhat complicate the scenario, let's say that we want to have an archiving service. It should cooperate with our notifications service in the following way, as soon as the number of notifications exceeds a certain threshold the oldest notifications should be pushed to an archive. The additional trouble is that we want to be able to use different archiving services in different application. Sometimes dumping old messages to a browser's console is all that is needed; other times we would like to send expired notifications to a server using XHR calls. The code for the notifications service could look as follows: var NotificationsService = function () { this.MAX_LEN = 10; this.notificationsArchive = new NotificationsArchive(); this.notifications = []; }; NotificationsService.prototype.push = function (notification) { var newLen, notificationToArchive; newLen = this.notifications.unshift(notification); if (newLen > this.MAX_LEN) { notificationToArchive = this.notifications.pop(); this.notificationsArchive.archive(notificationToArchive); } }; NotificationsService.prototype.getCurrent = function () { return this.notifications; }; The preceding code is tightly coupled to one implementation of an archive (NotificationsArchive), since this particular implementation is instantiated using the new keyword. This is unfortunate since the only contract to which both classes need to adhere to is the archive method (accepting a notification message to be archived). The ability to swap collaborators is extremely important for testability. It is hard to imagine testing objects in isolation without the ability to substitute real implementations with fake doubles (mocks). On the following pages of this article, we are going to see how to refactor this tightly-coupled cluster of objects into a flexible and testable set of services working together. In the process of doing so, we are going to take full advantage of the AngularJS dependency injection subsystem. Registering services AngularJS is only capable of wiring up objects it is aware of. As a consequence, the very first step for plugging into DI machinery is to register an object with an AngularJS module. We are not registering the object's instance directly, rather we are throwing object-creation recipes into the AngularJS dependency injection system. AngularJS then interprets those recipes to instantiate objects, and then connects them accordingly. The end effect is a set of instantiated, wired-up objects forming a running application. In AngularJS there is a dedicated $provide service that allows us to register different recipes for objects creation. Registered recipes are then interpreted by the $injector service to provide fully-baked, ready-to-be-used object instances (with all the dependencies resolved and injected). Objects that were created by the $injector service are referred to as services. AngularJS will interpret a given recipe only once during the application's lifecycle, and as a result will create only a single instance of an object. Services created by $injector are singletons. There will be only one or instance of a given service per instance of a running application. At the end of the day, AngularJS module just holds a set of object instances but we can control how those objects are created. Values The easiest way of having AngularJS to manage an object is to register a pre-instantiated one as follows: var myMod = angular.module('myMod', []); myMod.value('notificationsArchive', new NotificationsArchive()); Any service managed by AngularJS' DI mechanism needs to have a unique name (for example, notificationsArchive in the preceding example). What follows is a recipe for creating new instances. Value objects are not particularly interesting, since object registered via this method can't depend on other objects. This is not much of the problem for the NotificationArchive instance, since it doesn't have any dependencies. In practice, this method of registration only works for very simple objects (usually expressed as instances of built-in objects or object literals). Summary We've covered a lot in this article. Our journey started by getting familiar with the AngularJS project and the people behind it. We've learned where to find the library's sources and documentation, so that we could write our first "Hello World" application. It is a pleasant surprise that AngularJS is very light-weight and easy to start with. Most of this article, though was about building solid foundations for the rest of this article, we saw how to work with the AngularJS controllers, scopes and views, and how those elements play together. A big chunk of this article was devoted to the way AngularJS services can be created in AngularJS modules and wired up using dependency injection. Resources for Article : Further resources on this subject: - Introducing the Ember.JS framework [Article] - Understanding Backbone [Article] - Top two features of GSON [Article]
https://www.packtpub.com/books/content/angular-zen
CC-MAIN-2016-50
refinedweb
6,687
51.78
- Hi All, I hope that this is the correct group for this kind of question. I m having problems using the tag in XSL. The code below is the kind ofMessage 1 of 2 , May 13, 2001View SourceHi All, I hope that this is the correct group for this kind of question. I'm having problems using the <attribute> tag in XSL. The code below is the kind of thing that I'm trying to use, however with no luck. <?xml version="1.0"?> . Thanks for any help, Marc - ... Yes, that s the correct namespace declaration. (However, you seem to be transforming to (X)HTML rather than to XSL-FO. If that s the case, the messageMessage 2 of 2 , May 13, 2001View SourceAt 01:59 PM 05/13/2001 +0000, evans_marc@... wrote: ><?xml version="1.0"?>Yes, that's the correct namespace declaration. (However, you seem to be >. transforming to (X)HTML rather than to XSL-FO. If that's the case, the message should probably have gone to either XSL-List or XHTML-L. Be that as it may....) There are two possible problems with the XSLT fragment you posted. In descending order of likelihood (I think): (1) The select attribute to that xsl:value-of element says (in English) to transfer to the result tree the ColumnScaleValue child of a Column child of the context node, as long as that Column child is in position #0 among *all* Column children of the context node. I'm having a difficult time imagining the 0th child of a context node, and I'll bet IE5.5 is, too. :) Positions (such as used in the [0] predicate in your example) are not 0-based; they start with 1 and go up to the value returned by the last() function at that point. You probably want [1] instead. (2) IE5.5 has in the past had problems with newlines embedded in attribute values. In your sample code, the content of the xsl:attribute element consists of a newline (immediately following the start tag), followed by the value of the xsl:value-of element, followed by a newline (preceding the end tag). Try getting rid of the newlines. Assuming that my assumption in (1) is correct -- that you want the first child, not the 0th (which doesn't exist) -- then I think you've got a couple choices. First, you can turn the xsl:attribute element into an *empty* element with a select attribute of its own. Thus: <xsl:attribute Or you can avoid using xsl:attribute altogether, using instead an attribute value template (AVT) in the <IMG...> start tag. Thus: <IMG src="{Column[1]/ColumnScaleValue}" /> Note that those are curly braces within the value of the src attribute; they're what makes it an AVT. You can use an AVT in the value of any result-tree attribute, which obviates the need to use the xsl:attribute element in such cases (except where you must compute the the attribute's *name*). ================================================================ John E. Simpson | "I can levitate birds. No one cares." | (Steven Wright) XML Q&A: | Your message has been successfully submitted and would be delivered to recipients shortly.
https://groups.yahoo.com/neo/groups/XSL-FO/conversations/topics/382?l=1
CC-MAIN-2014-10
refinedweb
528
72.46
<DDOStatement>. DDOStatement is the main class to handle the results from the database. There are 3 main ways to retrieve information from a DDOStatement: fetch(), fetchAll(), and fetchColumn(). needs to exist, the connection parameters need to be correct and there needs to be a 'user' table. import 'package:ddo/ddo.dart'; import 'package:ddo/drivers/ddo_mysql.dart'; main() { Driver driver = new DDOMySQL('127.0.0.1', 'example', 'root', ''); DDO ddo = new DDO(driver); ddo.query('select * from user').then((DDOStatement stmt){ DDOResults results = (stmt.fetchAll(DDO.FETCH_ASSOC) as DDOResults); for(DDOResult row in results.results) { for(String cName in row.row.keys) { print("Column '${cName}' has value '${row.row[cName]}'"); } } }); }
https://www.dartdocs.org/documentation/ddo/0.2.5/index.html
CC-MAIN-2017-13
refinedweb
110
55.4
I have some json data that are key value pairs with ints as the keys and a lists of ints as the values. I want to read this data into a map and then broadcast it so it can be used by another RDD for quick lookup. I have code that worked on a 1.6.1 spark cluster that is in a data center, but the same code won't work in a 2.0.1 spark cluster in AWS. The 1.6.1 code that works: import scala.collection.mutable.WrappedArray sc.broadcast(sqlContext.read.schema(mySchema).json(myPath).map(r => (r.getInt(0), r.getAs[WrappedArray[Int]].toArray)).collectAsMap) val myData = sqlContext.read.schema(mySchema).json(myPath).map(r => (r.getInt(0), r.getSeq[Int].toArray)) org.apache.spark.sql.Dataset[(Int, Array[Int])] = [_1: int, _2: array<int>] sc.broadcast(myData.rdd.collectAsMap) I figured out that my problem was with spark shell in 2.0.1. The code I posted works fine if I use the existing sc and sqlContext that is part of the spark session that the shell creates. If I call stop and create a new session with a custom config, I will get the strange error above. I don't like this as I want to change the spark.driver.maxResultSize. Anyway, the lesson is: if you are testing code using spark shell, use the existing session or it may not work.
https://codedump.io/share/Rl5hZxZ30qNq/1/can39t-collect-data-from-datasetdataframe-in-spark-201-get-classcastexception
CC-MAIN-2016-50
refinedweb
241
65.32
The following function implements the fixed point iteration algorithm: from pylab import plot,show from numpy import array,linspace,sqrt,sin from numpy.linalg import norm def fixedp(f,x0,tol=10e-5,maxiter=100): """ Fixed point algorithm """ e = 1 itr = 0 xp = [] while(e > tol and itr < maxiter): x = f(x0) # fixed point equation e = norm(x0-x) # error at the current step x0 = x xp.append(x0) # save the solution of the current step itr = itr + 1 return x,xpLet's find the fixed point of the square root funtion starting from x = 0.5 and plot the result f = lambda x : sqrt(x) x_start = .5 xf,xp = fixedp(f,x_start) x = linspace(0,2,100) y = f(x) plot(x,y,xp,f(xp),'bo', x_start,f(x_start),'ro',xf,f(xf),'go',x,x,'k') show()The result of the program would appear as follows: In a similar way, we can compute the fixed point of function of multiple variables: # 2 variables function def g(x): x[0] = 1/4*(x[0]*x[0] + x[1]*x[1]) x[1] = sin(x[0]+1) return array(x) x,xf = fixedp(g,[0, 1]) print ' x =',x print 'f(x) =',g(xf[len(xf)-1])In this case g is a function of two variables and x is a vector, so the fixed point is a vector and the output is as follows: x = [ 0. 0.84147098] f(x) = [ 0. 0.84147098] This comment has been removed by a blog administrator.
http://glowingpython.blogspot.com/2012/01/fixed-point-iteration.html
CC-MAIN-2016-30
refinedweb
254
62.11
I'm trying to plot performance metrics of various assets in a back test. I have imported the 'test_predictions.json' into a pandas data frame. It is a list of dictionaries and contains results from various asset (listed one after the other), Here is a sample is the data: trading_pair return timestamp prediction [u'Poloniex_ETH_BTC' 0.003013302628677 1450753200L -0.157053292753482] [u'Poloniex_ETH_BTC' 0.006013302628677 1450753206L -0.187053292753482] ... [u'Poloniex_FCT_BTC' 0.006013302628677 1450753100L 0.257053292753482] Each backtest starts and ends at different times. Here' is the data for the assets of interest ''' #These are the assets I would like to analyse Poloniex_DOGE_BTC 2015-10-21 02:00:00 1445392800 Poloniex_DOGE_BTC 2016-01-12 05:00:00 1452574800 Poloniex_XRP_BTC 2015-10-28 06:00:00 1446012000 Poloniex_XRP_BTC 2016-01-12 05:00:00 1452574800 Poloniex_XMR_BTC 2015-10-21 14:00:00 1445436000 Poloniex_XMR_BTC 2016-01-12 06:00:00 1452578400 Poloniex_VRC_BTC 2015-10-25 07:00:00 1445756400 Poloniex_VRC_BTC 2016-01-12 00:00:00 1452556800 ''' So i'm trying to make an new array that contains the data for these assets. Each asset must be sliced appropriately so they all start from the latest start time and end at earliest end time (other wise there will be incomplete data). #each array should start and end: #start 2015-10-28 06:00:00 #end 2016-01-12 00:00:00 So the question is: How can I search for an asset ie Poloniex_DOGE_BTC then acquire the index for start and end times specified above ? I will be plotting the data via numpy so maybe its better turn into a numpy array, df.valuesand the conduct the search? Then i could use np.hstack(df_index_asset1, def_index_asset2) so it's in the right form to plot. So the problem is: using either pandas or numpy how do i retrieve the data for the specified assets which fall into the master start and end times? On a side note here the code i wrote to get the start and end dates, it's not to most efficient so improving that would be a bonus. EDIT: From Kartik's answer I tried to obtain just the data for asset name: 'Poloniex_DOGE_BTC' using the follow code: import pandas as pd import numpy as np preds = 'test_predictions.json' df = pd.read_json(preds) asset = 'Poloniex_DOGE_BTC' grouped = df.groupby(asset) print grouped But throws this error EDIT2: I have changed the link to the data so it is test_predictions.json` EDIT so that it starts and ends from the above timestamps ? Firstly, why like this? data = pd.read_json(preds).values df = pd.DataFrame(data) You can just write that as: df = pd.read_json(preds) And if you want a NumPy array from df then you can execute data = df.values later. And it should put the data in a DataFrame. (Unless I am much mistaken, because I have never used read_json() before. The second thing, is getting the data for each asset out. For that, I am assuming you need to process all assets. To do that, you can simply do: # To convert it to datetime. # This is not important, and you can skip it if you want, because epoch times in # seconds will perfectly work with the rest of the method. df['timestamp'] = pd.to_datetime(df['timestamp'], unit='s') # This will give you a group for each asset on which you can apply some function. # We will apply min and max to get the desired output. grouped = df.groupby('trading_pair') # Where 'trading_pair' is the name of the column that has the asset names start_times = grouped['timestamp'].min end_times = grouped['timestamp'].max Now start_times and end_times will be Series. The index of this series will be your asset names, and the value will be the minimum and maximum times respectively. I think this is the answer you are looking for, from my understanding of your question. Please let me know if that is not the case. EDIT If you are specifically looking for a few (one or two or ten) assets, you can modify the above code like so: asset = ['list', 'of', 'required', 'assets'] # Even one element is fine. req_df = df[df['trading_pair'].isin(asset)] grouped = req_df.groupby('trading_pair') # Where 'trading_pair' is the name of the column that has the asset start_times = grouped['timestamp'].min end_times = grouped['timestamp'].max EDIT for that it starts from the above starts and ends at the above timestamps ? As an aside, plotting datetimes from Pandas is very convenient as well. I use it all the time to produce most of the plots I create. And all of my data is timestamped.
http://m.dlxedu.com/m/askdetail/3/8fa3936fe59b87193e9ed99a34fa487a.html
CC-MAIN-2019-22
refinedweb
767
74.39
This article explains how to create and use external helper methods in an ASP.NET MVC application. My article How to Create Custom Inline Helper Methods in ASP.NET MVC explained what helper methods are and how to create and use inline helper methods in MVC applications. (Note: if you have not read the article regarding custom inline helpers in MVC applications, I recommend reading it first since it is related to this article). Before getting started with how to create custom external helper methods, let`s just explain external helper methods and when we should use them.External Helper MethodHelper methods are of two types, inline helper methods and external helper methods. Inline helper methods are convenient, since they are declared in the view and the complexity of the code is low. But if the code is becoming complex, then it will be difficult to read and understand the code in our view. The alternative for such a situation is to create an external helper method. External helper methods are expressed as C# extension methods. Now let`s get started with an example.Getting StartedWe will use the same example that we used in my article How to Create Custom Inline Helper Methods in ASP.NET MVC of displaying the list of fruits and flowers (for simplicity purposes I am using the same example). Let`s recall our example. We added the following code to the index action of the home Controller. The result of our helper method is the MvcHtmlString object, the content of which are directly written to the response. It takes the HTML markup that we generated using the TagBuilder class as a parameter and encodes that string so that it is safe to display. Now let`s update the index view to use our external helper method. The content of the updated index view is as shown below. There is an alternative solution for adding the namespace for our helper method. We can add that namespace to the Views/web.config file. So that our helper method will always be available for use and we don`t need to add the using statement in each and every view. The content of the Views/web.config file after adding the namespace is as shown below (as in Figure 1).Add the namespace to the Views/web.config file (as in Figure 1).ConclusionWe can encapsulate the complex code in an external helper method (extension method) and use it in multiple views. This also makes our view simpler and more readable. We can also create a class library project and include these external helper methods in that project. So that next time if we need the same functionality we just need to add the reference to our class library project. I hope this helps you.I hope you enjoyed reading the article.Happy Coding! View All
https://www.c-sharpcorner.com/UploadFile/fa5c67/how-to-create-custom-external-helper-methods-in-Asp-Net-mvc/
CC-MAIN-2020-40
refinedweb
479
65.83
: Persistent Cells and Worksheets Contents Introduction Some calculations take a long time, but produce the same result every time or only change occasionally. There is another recipe, the cache, showing how to store values and data between recalcs. Even better for long calculations, would be to store the results rather than having to repeat the calculation every time the spreadsheet is loaded. These examples show you how to persist values in cells, or a whole worksheet, to a file saved alongside the spreadsheet. It is based on code submuitted by Johannes Kersten (many thanks!). You can download both examples in a single zip file: Serialization Resolver One stores 'objects' in worksheets. An object is a techie term meaning 'anything'! Usually these objects will be values, numbers or text, but they could in theory be whatever you wanted. Taking the computers internal representation of objects and storing them on disk is called 'serialization', and unfortunately it isn't straightforward. In the .NET world of private and protected members [1] we can only serialize objects that we know how to handle. Because of this, serialization is a job for libraries. As IronPython is a faithful implentation of Python, we have a choice between using the .NET libraries or the Python ones. As most data in spreadsheets tend to be basic datatypes (numbers or text), the Python ones are slightly easier to work with. There are two basic Python serialization libraries, marshal and pickle. Both are easy to use, and for this example Johannes chose to work with marshal. Robert Smithson, one of the Resolver founders, has also posted an example of persisting worksheets to the Resolver Discussion Forum. Robert has chosen to implement his own serialization for worksheets using the XmlTextWriter and XmlTextReader. Persistent Cells The persistent cells spreadsheet will load and save the values stored in specific cells. You provide the worksheet name and a list of the cells you want to persist. You also specify a filename (which will be loaded and saved from the same directory as the spreadsheet). At the start of the spreadsheet (in the pre-constants user code), the values will be loaded into the cells (if the file exists). At the end of the spreadsheet, post-formulae user code, the values are written back out. The code for loading cell values: import os import marshal sheetname = 'Sheet1' memcells = ['A1'] #list of cells you like to save filename = 'persistent-cells.txt' filepath = os.path.join(os.path.dirname(__file__), filename) # --- # --- data = {} if os.path.isfile(filepath): try: savefile = open(filepath, 'rb') try: data = marshal.load(savefile) except Exception, e: print 'Failed to load data. Error:', e savefile.close() except IOError, e: print 'Failed to open file for load.', e for key, value in data.iteritems(): if value is None: value = Empty workbook[sheetname][key] = value The worksheet we are using is specified in the sheetname variable. The cells we want to store is the list memcells. The data is stored as a dictionary of cell names to values. The important code is the line that loads the dictionary back from the marshaled file: data = marshal.load(savefile). After this, the data is put back into the worksheet by iterating over the cell locations and values stored in the dictionary. Saving Cell Values Of course loading cell values only works if you already have some persisted data. The code for saving the values in the cells is: for key in memcells: val = workbook[sheetname][key] if val is Empty: val = None backup[key] = val try: savefile = open(filepath, 'wb') try: marshal.dump(backup, savefile) except Exception, e: print 'Failed to save data. Error:', e savefile.close() except IOError, e: print 'Failed to open file for save.', e # --- The line that does the hard work is marshal.dump(backup, savefile). One important thing to notice, is how we handle empty cells. Cells with no values use a value called Empty. This can't be marshaled, so we store None instead. The code for loading (above) knows about this, and replaces None` with the Empty value when loading back in. Persistent Worksheet The second example uses the same technique, but instead of loading/saving a list of cells it works with all the values in a worksheet. The code for loading values is very similar to the first example (except it stores values as a list of tuples - (location, value)). The code for saving values has to be different, it needs to store all the populated cells in a worksheet. There is not yet a convenient API for doing this on Resolver worksheets (this will probably change soon), but we can iterate over every location in the bounds (the populated area) of the worksheet. sheet = workbook[sheetname] for col in range(sheet.MinCol, sheet.MaxCol + 1): for row in range(sheet.MinRow, sheet.MaxRow + 1): val = sheet[col, row] if val is Empty: val = None data.append(((col, row), val)) try: savefile = open(filepath, 'wb') try: marshal.dump(data, savefile) except Exception, e: print 'Failed to save data. Error:', e savefile.close() except IOError, e: print 'Failed to open file for save.', e Conclusion These are obviously basic examples, but provide a good foundation for you to build something specific to your needs. This implementation loads and saves the values fro mdisk with every calculation. A more sophisticated solution could combine this with the cache, to only load from disk on the first calculation and only save when some of the data changes. A more advanced version could also knows about worksheet and cell properties (the traits), like boldness, backcolor, column height and so on. This would be a truly persistent worksheet rather than just storing values. Last edited Mon Feb 16 00:16:59 2009.
http://www.resolverhacks.net/persistence.html
crawl-002
refinedweb
957
65.32
Introduction to Form Validation in Django The following article provides an outline on Form Validation in Django. Django is a framework which provides built in methods to verify the data within in forms. And this verification can be done automatically using these Django methods. These can be only done by using and submitting CSRF tokens. There are functions that can be used to handle these validations. We can here take many inputs from user based on requirement and make our form validations accordingly. Form Validations For validating the forms, we first need to create a form and then use different validation techniques to validate the data written in the forms. For creating the form, each label would correspond to a data type with respect to the format of data that has to be loaded in. Given few of them below: - CharField - BooleanField - DateField - DecimalField - ChoiceField etc. Example of Form Validation in Django A small example of creating a form below: forms.py class MyForm(forms.Form): Name=forms.CharField() Gender=forms.ChoiceField(choices=[(' ','Choose'),('M','Male'),('F','Female')]) Description=forms.CharField(widget=forms.Textarea,required=False) def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.helper=FormHelper self.helper.form_method = 'post' self.helper.layout = Layout( 'Name','email','Gender','Description', Submit('submit','Submit',css_class='btn-success') ) views.py from django.shortcuts import render from django.http import HttpResponse from .forms import MyForm # Create your views here. def first_form(request): if request.method=='POST': form=MyForm(request.POST) if form.is_valid(): Name=form.cleaned_data['name'] Email=form.cleaned_data['email'] Gender=form.cleaned_data['gender'] Description=form.cleaned_data['description'] print(Name,Gender) form=MyForm() return render(request,'form.html',{'form':form}) form.html {% load crispy_forms_tags %} <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width-device-width, initial-scale=1.0"> <meta http- <title>My Form </title> </head> <body style="padding: 20px;"> {% crispy form form.helper %} </body> </html> Urls.py from django.contrib import admin from django.urls import path,include from . import views urlpatterns = [ path('first_form',views.first_form,name='first_form'), ] In the main project of settings.py we need to add our new app name and in the urls.py file also, we need to have a link of our new app that is created. Below we are adding the code snippets of those files also. Urls.py: Project level from django.contrib import admin from django.urls import path,include urlpatterns = [ path('',include('Third.urls')), path('admin/', admin.site.urls), ] Settings.py : We have added our app and the crispy forms module that we have used in forms.html for styling purpose. That library can be added into Django using pip install method. INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'Third', 'crispy_forms' ] Forms can be created using form models and the forms html files and at the same time with the models functionality also. Here we have done it using forms. When observed, we can see that in views file that is written, there were form validator methods that we have used here. cleaned_data[‘field_name’] This is one of the important methods of validating. This would contain the cleaned data which are received from the form. This means that we are specifying a particular data type for a particular variable, which help in storing of the data. We can even raise customized errors. The main part, in which cleaned data function, is incorporated under function is_valid. Syntax: form.is_valid() This function is used to validate the whole form data. Whatever is uploaded in the form can be validated using this function that is declared above. This validation would be returning the Boolean expressions of validating the complete form data type. Output of the above code: Let us check the different errors that are being raised in this simple form. After giving the name, if we simply try to click on submit as shown below: System is going to alert error like below: As we have mentioned the above field mandatory we get that alert that those fields has to be filled. Now with respect to the email functionality, if we give some random letters directly let us see what we get. And now if we include the ‘@’ symbol also and if we do not specify any domain name(not necessarily the correct one), we can still get an error. So, after giving all the details properly as below: It is okay to either write anything or leave the description part empty as it is not the mandatory required field. After clicking on submit we can have the below output in our python module as written in the code. As written the code, we wanted to print the name and gender – so that is what that got printed in our python development shell. We have link between all those python files. Form.py file has link with views.py file and in turn both are to be linked with the html file in the templates. As already known we must have the link with urls.py and views.py to display the UI level output through the http web URL. This is the basic flow for all those python files that are written in generating a form and performing it’s vaLidations. For handling forms, we even have attributes like validators. These are used in case of model level changes to a form. And even with respect to raising a customized error can be done by raising a ValidationError function. These are the functions that are in default available in Django for different form validations. And with all these we can even have separate functions written to raise different validations with respect to forms. Conclusion This is how we can have different form validations in Django. We have in turn created a simple form added our in built validations using those respective python files and link each of them to create perfect form validations. There are many level validations that can be done. But here we have done by creating a form and making basic validations for it. We have covered two methods in the same example which is mentioned above. As already listed, we can create forms using Models by models.py python file and include all the customized validations that are required. Recommended Articles This is a guide to Form Validation in Django. Here we discuss the introduction, form validations and example. You may also have a look at the following articles to learn more –
https://www.educba.com/form-validation-in-django/
CC-MAIN-2020-45
refinedweb
1,092
59.5
- 33 expert $99.00 USD “Very nice guy and overall very competent in Coldfusion. He is an affordable and honest expert with a large scope of skills.”mayeul20 1 month ago. Various Bug Fixes $1120.00 USD “Travis is a very professional and skilled developer. ColdFusion Guru. Will definately hire again.”ddoble 4 months ago. Minimum zip code list that covers US in 100 mile radius - Repost $100.00 USD “great to work with!”gsommers 5 months ago. Build Web Scraping Tool $130.00 AUD “Went above and beyond to deliver!”mccprofits 9 months ago. import data $30.00 USD “What a pleasure to deal with!”terryevian123 10 months ago. UIDs Dentists $100.00 USD “This freelancer provides very high quality work. He did an excellent job for me with a laser precision. I highly recommend him.”grinfocus 10 months! Exams - Freelancer Orientation90% - SEO Level 193% - Coldfusion85% - WordPress85% - HTML Level 175% - US English Level 293% My Top Skills - PHP 74 - Software Architecture 71 - MySQL 69 - Software Testing 67 - Website Management 55 - Website Testing 42 - Web Hosting 42 - Cold Fusion 40 - Adobe Flash 34 - SQL 17 - Microsoft SQL Server 12 - Database Development 9 - Web Security 8 - Computer Security 8 - Web Scraping 5 - Data Mining 4 - Data Processing 3 - Database Administration 3 - Java 2 - Javascript 2 - Project Management 2 - Mac OS 2 - Cartography & Maps 2 - XML 1 - Visual Basic 1 - .NET 1 - Internet Marketing 1 - Data Entry 1 - Sales 1 - Marketing 1 - IIS 1 - Microsoft 1 - HTML5 1 - Game Development 1 - C Programming 0 - Graphic Design 0 - Logo Design 0 - SEO 0 - Link Building 0 - Telemarketing 0 - AJAX 0 - Photoshop 0 - iPhone 0 - Game Design 0 - Wordpress 0 - Illustrator 0 - Twitter 0 - Facebook Marketing 0 - MySpace 0 - Editing 0 - Freelance 0 - CMS 0 - Report Writing 0 - eCommerce 0 - Paypal API 0 - Forum Software 0 - YouTube 0 - Electronic Forms 0 - Leads 0 - Academic Writing 0 - Photo Editing 0 - Apache 0 - DNS 0 - vBulletin 0 - PSD to HTML 0 - Article Rewriting 0 - Photoshop Design 0 - Google Adsense 0 - Apache Solr 0 - C++ Programming 0 - Geolocation 0 - Mathematics 0 - HTML 0 - Market Research 0 - ActionScript 0 - Article Submission 0 - Firefox 0 - Google Chrome 0 - Computer Graphics 0 - Viral Marketing 0 - Real Estate 0 - Linkedin 0 - Affiliate Marketing 0 - Landing Pages 0 - Proposal/Bid Writing 0 - Big Data 0 - Pinterest 0 - Geospatial 0 - English (US) 0 - Social Media Marketing 0 - Database Programming 0 - Business Writing 0 - Internet Research 0 - Google Webmaster Tools 0 - Google Website Optimizer 0 - Internet Security 0 - Search Engine Marketing 0 - Data Warehousing 0 - Conversion Rate Optimisation 0 - Google Maps API 0
https://www.freelancer.com/u/twalters84.html?ttref=HireMe_PVBuyer
CC-MAIN-2015-32
refinedweb
429
53.92
Fill in contour bounded regions in slices of 3D image. More... #include "vil3d_fill_boundary.h" #include <vil3d/vil3d_image_view.h> #include <vcl_vector.h> #include <vcl_stack.h> #include <vil3d/vil3d_convert.h> #include <vil3d/algo/vil3d_threshold.h> Go to the source code of this file. Fill in contour bounded regions in slices of 3D image. Definition in file vil3d_fill_boundary.cxx. Fill interior of current boundary. Definition at line 126 of file vil3d_fill_boundary.cxx. Follow the current boundary in the current slice. labeling boundary pixels and background pixels that border the boundary. Definition at line 58 of file vil3d_fill_boundary.cxx. Reset background pixels to 0. Definition at line 184 of file vil3d_fill_boundary.cxx. Compute a mask where the regions in each slice of a 3D image bounded by contours are set to "on". Definition at line 14 of file vil3d_fill_boundary.cxx.
http://public.kitware.com/vxl/doc/release/contrib/mul/vil3d/html/vil3d__fill__boundary_8cxx.html
crawl-003
refinedweb
136
56.21
View Tutorial By: Bhaskaran at 2013-01-21 11:23:19 2. hi all ! This is the error im getti View Tutorial By: vjy at 2008-05-26 04:30:59 3. In 6th point .....there is a spelling mistake...mi View Tutorial By: Gnani at 2009-09-22 12:25:59 4. You should show how to implement a custom sorting View Tutorial By: AngleWyrm at 2012-12-13 19:35:30 5. good explanation View Tutorial By: Vishu Sharma at 2013-03-13 13:57:11 6. import java.util.*; public class de View Tutorial By: Virudada at 2012-05-05 06:27:22 7. when i am trying to compile your program one error View Tutorial By: anindya at 2011-11-23 03:00:49 8. I have a computer science lab final on tuesday (09 View Tutorial By: kudas at 2009-06-07 10:55:06 9. thanks alot steven for the code. i'm try to automa View Tutorial By: Roger at 2011-05-17 17:33:07 10. Hello, this is good example, but that is on View Tutorial By: Zubair at 2011-11-10 20:46:39
http://java-samples.com/showcomment.php?commentid=33416
CC-MAIN-2019-04
refinedweb
194
76.62
_wes are these Arduino commands correct? I am using Arduino 1.8.5 and CoDrone library 1.5.1 When I wrote the following #include <CoDrone.h> void setup() { // put your setup code here, to run once: CoDrone.begin(115200); CoDrone.AutoConnect(NearbyDrone); CoDrone.takeoff(); CoDrone.hover(5); CoDrone.land(); } For instance, I get the following error Arduino: 1.8.5 (Mac OS X), Board: "Rokit-SmartInventor-mega32_v2" /Users/vette99/Documents/Arduino/first_drone_program/first_drone_program.ino: In function 'void setup()': first_drone_program:8: error: 'class CoDroneClass' has no member named 'takeoff' CoDrone.takeoff(); ^ exit status 1 'class CoDroneClass' has no member named 'takeoff' This report would have more information with "Show verbose output during compilation" option enabled in File -> Preferences. Sorry, this is my first time using Arduino and I have lots of questions. Thanks for your time! -()"?
http://community.robolink.com/topic/119/flight-commands-movement-docs-update/5
CC-MAIN-2019-30
refinedweb
136
52.76
This topic contains information related to dynamic report creation. The XtraPrinting Library provides events that occur on report creation, which you can handle and implement through your own logic. So, it is possible to print out any information you want. The XtraPrinting Library also gives you the ability to create reports for custom controls. These reports are static. For example, if you need to create reports for controls of a specific class, then it is best to apply this method. Here, you have to implement the IPrintable interface for this class. Every report consists of several areas: Marginal Header, Marginal Footer, Inner Page Header, Inner Page Footer, Report Header, Detail Header, Detail, Detail Footer, Report Footer. The following image demonstrates what report regions these areas occupy. You can refer to the Document Sections topic to learn more about a report's structure. The XtraPrinting Library provides events which occur when creating the report areas detailed above. You can access them via a link (the Link class or its descendants). They are: When a link is constructing a report, it generates the events listed above in this exact order. So, you are able to handle these events to render a printed report. To do this, you have to create a link object within the PrintingSystem.Links collection. For instance, at design time this task can be performed with the help of the Link Collection Editor for the PrintingSystem component. To add a link to the collection, click the Add button, and choose the link type from the list. Assume we want to create a report for a Link instance. Click the Events button to display the link object's events. Double-click those events you want to handle. After closing the window, the declaration of the chosen events will be added to the code. The following code lists empty LinkBase.CreateDetailHeaderArea and LinkBase.CreateDetailArea event handlers added to a class: using DevExpress.XtraPrinting; // ... private void link1_CreateDetailHeaderArea(object sender, CreateAreaEventArgs e) { // Place your code here. } private void link1_CreateDetailArea(object sender, CreateAreaEventArgs e) { // Place your code here. } Imports DevExpress.XtraPrinting ' ... Private Sub Link1_CreateDetailHeaderArea(ByVal sender As System.Object, _ ByVal e As CreateAreaEventArgs) Handles Link1.CreateDetailHeaderArea ' Place your code here. End Sub Private Sub Link1_CreateDetailArea(ByVal sender As System.Object, _ ByVal e As CreateAreaEventArgs) Handles Link1.CreateDetailArea ' Place your code here. End Sub The CreateAreaEventArgs class describes data passed to event handlers. The CreateAreaEventArgs.Graph property specifies the BrickGraphics surface on which to create report elements (bricks). The sender of events is a link itself. So, you can cast this object to the corresponding Link class to access the required properties and methods. The following tutorial demonstrates a complete example of report creation with the help of events: How to: Use Link Events (Complete Sample).
https://documentation.devexpress.com/WindowsForms/166/Controls-and-Libraries/Printing-Exporting/Examples/Using-Printing-Links/How-to-Use-Link-Events-Introduction
CC-MAIN-2019-35
refinedweb
463
59.19
I'm trying to write a simple 'Conditions' combat system that allows for conditions to be applied to characters such as stun, poison, etc. I have the following generic class in C#: public class Condition<T> where T : ICondition, new() { T condition = new T(); public void ApplyCondition() { condition.ApplyCondition(); } } And an implementation of ICondition for stun: public class Stun : MonoBehaviour, ICondition { private Material material = (Material)Resources.Load ("../Materials/Stunned"); public void ApplyCondition() { ApplyStun(); } private void ApplyStun() { renderer.material = material; } } Basically I just want the character to change material when he is hit with stun. But of course, I'm doing something wrong here cause I get the following error: You are trying to create a MonoBehaviour using the 'new' keyword. This is not allowed... You are trying to create a MonoBehaviour using the 'new' keyword. This is not allowed... Which I know is because Stun inherits from MonoBehaviour and it is getting Newed in the Condition class, BUT I need it to inherit from MonoBehaviour so that it has access to the renderer component of the object it's attached to. How should I be doing this instead? I've stopped using new now and tried changing it to gameObject.AddComponent\ but still hitting some confusing issues. Is AddComponent the better way to go at least? Yes, AddComponent is the desired way to add monobehaviours. $$anonymous$$eep in mind you can grab a reference at the same time you addcomponent. I'm trying to add the component this way (won't show up correctly in these comments so I pasted here) except it's now giving me "Can't add script behaviour ". Generic $$anonymous$$onoBehaviours are not supported" Not really sure how to add the component correctly you added one '>' too many at the end What do you mean? There's exactly two opening brackets and two closing brackets Answer by Bunny83 · Jan 03, 2013 at 07:40 PM Sure it doesn't work because "Generic MonoBehaviours are not supported". It's easy as that. Also to be able to use a MonoBehaviour derived class it has to be placed in a script file with the same name. Of course a generic class can't work, Component types are determined by the class / filename. A generic type represents endless different "types". Besides that you can only add classes as components that are derived from MonoBehaviour. Your generic class isn't derived from MonoBehaviour. Also i don't really get for what you want this generic? It does nothing "generic". It just encapsulates an ICondition object (which is just an interface so it could be anything that implements that interface). You can simply add your Stun class as component since it actually is a component. Thanks to the interface you could access all components that implements that interface like this: var conditions = (ICondition[])GetComponents(typeof(ICondition)); Note: You can't use the generic version of Getcomponent because UT put a constraint on the generic parameter and only allows types that are derived from Component which interfaces are not. However this "type" version works pretty well, so just add your class like this: gameObject.Addcomponent<Stun>(); Yeah you're right, it doesn't do anything 'generic'. I really want to do this by inheritance ins$$anonymous$$d (Having Condition as an abstract class and then writing a bunch of classes that inherit/extend Condition, such as Stun, Poison, etc) however I've been instructed that Condition $$anonymous$$UST be a generic class. Not really sure why they want me to do it like this, but yeah It makes no sense to make something a generic class without any use of the generic parameter. What are your exact requirements? Do you have to use Unity? Do they have to be Components? $$anonymous$$eep in mind that you can implement your own Strategy. You don't have to use the one that's built into Unity. Unity's systems is more restricted since Unity wants the control of when and how components are created / added. I have to use Unity, and I must make use of a generic class named 'Condition'. It does not have to be a component, I just made it into a component since I was running into issues with making an instance of Stun in Condition (which didn't end up solving anything). I think I'm just going to use inheritance after all. It may be the guy who wrote the instructions just chose the wrong word with 'generic' or he could be testing my design decisions and actually expecting me not to use it. It just makes no sense otherwise Don't be afraid to email your Instructor know about the $$anonymous$$onobehaviour generic's limitation -- he might change the requirements. Answer by jogo13 · Jan 03, 2013 at 06:55 PM New answer from a better understanding of the question.:) This is not generics, but you could make Condition an abstract 'base' class: public abstract class Condition : MonoBehaviour { And make stun, poison etc inherit from Condition: public class Stun : Condition{ Any class that had a Condition variable could store stun, poison etc in that variable. Thanks, that's actually the way I was thinking about doing it too, except my programming brief states that Condition $$anonymous$$UST be a generic class (I'm doing a take home test). It's a little strange design wise but it must be so. Well, in this case you have to change one of those things: Don't use a generic class Don't attach it as component but keep your own list of objects Don't use Unity3d because it doesn't support generic. An OS design issue: File types associated with their appropriate programs 1 Answer C# Generics not constraining correctly? 1 Answer Multiple Cars not working 1 Answer Distribute terrain in zones 3 Answers C# Generic Type Inheritance 2 Answers EnterpriseSocial Q&A
https://answers.unity.com/questions/373783/c-inheriting-from-monobehaviour-but-still-be-able.html?sort=oldest
CC-MAIN-2021-17
refinedweb
984
61.87
BuildRequires for 2to3 If your specfile manually invokes /usr/bin/2to3, add an explicit build-time requirement on the /usr/bin/2to3 This is currently provided by the python-tools package, but might in the future be provided by the python3-tools package. Remember to add it within the 0%{with_python3} conditional: %if 0%{?with_python3} BuildRequires: /usr/bin/2to3 BuildRequires: python3-setuptools, python3-devel %endif # with_python3 You don't need to do this if you're using lib2to3 (e.g. implicitly using python-setuptools, a.k.a Distribute: we ship lib2to3 within the core python and python3 subpackages. Note however that in some cases the setup.py may need 2to3 to be run on it before it is valid Python 3 code). Questions - Where to add this on the Python Guidelines page?
http://fedoraproject.org/w/index.php?title=BuildRequires_for_2to3(draft)&oldid=151987
CC-MAIN-2018-09
refinedweb
132
57.27
Developing a Basic C# Application for the iPad Using Monotouch Introduction Developing applications for Apple's iOS devices including the iPhone, iPod Touch and iPad has, for the most part, meant using the Objective C language. For C# developers, this is a big turn off. Moving from C# to Objective C is something like going from Visual Basic to GW Basic. The languages are similar but different. Novell's Monotouch brings the power and elegance of C# to the table, making it possible for Microsoft developers to code in a language they're comfortable with. You'll find the process even easier if you like to build forms in code rather than using a drag-and-drop designer like Microsoft Visual Studio. Monotouch provides bindings for all the form elements in the CocoaTouch class library. Admittedly, they are different from your typical Windows Forms app, but there are similarities just the same. All the standard elements like buttons and text boxes are there along with a whole host of new elements specifically designed for the touch environment on small screens. Getting Started Novell offers a trial version of Monotouch that does everything you need except deploy onto a physical device. You can write all the code you want as long as you use the device emulator for testing. Be sure you follow the installation guide as the order of installation is important. Monotouch relies on the latest Mono runtime for OSX that must be installed first. There's also a special version of Monodevelop with additional tools for Monotouch. If you're comfortable with using Microsoft Visual Studio, you'll feel right at home with Monodevelop. There are a multitude of resources to help you get up to speed quickly with developing using Monotouch. The tutorials web page has links to how-to articles, sample code and screen casts. If you learn better from a book there's an excellent recently-released book titled iPhone Programming with MonoTouch and .NET/C# by Wally McClure, Martin Bowling, Craig Dunn, Chris Hardy and Rory Blyth. Both Wally McClure and Craig Dunn have active blogs with lots of good information as well. If you decide to try out code samples from either the book or from the Monotouch site, you'll need to know a few things. First and foremost is the SDK version number. Many sample apps were built with earlier versions of the SDK and will have that set in the project options. If you compile an app and you don't have the target SDK installed, you'll get an error. To change the target SDK, simply open application options under the Project menu and change SDK version under the iPhone Build entry as shown below. Figure 1 Code Basics One of the things you should do before you get too far along in coding is read Apple's user interface (UI) document. If you have any plans of actually publishing and selling an application, it will need to comply with their guidelines or you'll never get it approved for the Apple store. You'll want to pay close attention to the screen rotation requirement. All apps must support screen rotation, so you'll need to know how to adjust your screen elements when that happens. Coding for the iPhone family frequently utilizes the Model-View-Controller (MVC) design pattern. You can see it in many of the class names and supporting code. Apple's iOS development introductory tutorial builds a basic "Hello World" app for the iPhone and uses MVC as a part of the design. Understanding these patterns and how they apply to different programming tasks will help you build better applications. The Cocoa Fundamentals Guide is a good read to help get you oriented to the Apple way of user interface design, and it includes a chapter on design patterns. Building a UI around the CocoaTouch UIViewController is a pretty standard approach to creating a basic interface. Adding the ability to autorotate with a UIViewController is accomplished with the following snippet of code: public class AppViewController : UIViewController { public AppViewController () {} public override bool ShouldAutorotateToInterfaceOrientation (UIInterfaceOrientation toInterfaceOrientation) { return true; } } This code will cause all the elements in the UIViewController to reorient when the operating system rotates the screen. It doesn't do anything other than change things like text labels and buttons so that they render properly based on how the user is holding the device. You'll have to write more code if you want to do something like add a navigation pane on the left hand side of the screen when the device is in landscape orientation. Building an app for the iPad is fundamentally no different from the iPhone. The only real difference is the hardware, meaning screen size and iPhone specific capabilities such as GPS and the magnetometer. Creating a new iPad solution in Monodevelop looks something like the screen capture below: Figure 2 Once you have the solution created you'll see the basic components of your app appear in the Solution pane including Main.cs and MainWindow.xib. These are the two necessary pieces to any basic app. Main.cs contains all the code necessary to initialize the app and display the UI described in MainWindow.xib. This should look like the following: Figure 3 If you double-click on the MainWindow.xib file, you'll open up Interface Builder and have the opportunity to build your user interface using the Apple tools. This will take some getting used to as it is quite different from the Microsoft Visual Studio approach. For this demo we put a text label and a button on the main surface and saved the file. Selecting Run from the Monodevelop menu will launch the iPhone (iPad) simulator and run your application. For the simple text label and button you should see the following: Figure 4 Wrap Up Building apps with Monotouch should look and feel a lot like Microsoft Visual Studio. You'll have to adjust your thinking somewhat to the Apple way of doing things, but the transition isn't that difficult. The demo version of Monotouch lets you get your feet wet without spending a lot of cash. You will have to buy the full version ($399 for the Professional Edition) when you get ready to actually deploy to the device. There are no comments yet. Be the first to comment!
http://www.codeguru.com/csharp/article.php/c17923/Developing-a-Basic-C-Application-for-the-iPad-Using-Monotouch.htm
CC-MAIN-2014-41
refinedweb
1,066
60.95
Large-cap biotech stocks have been on a tear Millennials' influence has important business implications Join the NASDAQ Community today and get free, instant access to portfolios, stock ratings, real-time alerts, and more! Kinnaras Capital Management ("KCM", "Kinnaras", or the "Firm") Separately Managed Accounts ("SMAs") returned -9.4% in Q2 2010. Table I presents KCM SMA performance relative to key indices. Investors should note that individual returns will vary based on the time one invested and that the composite return presented below is net of fees and is the time-weighted return ("TWR") of Kinnaras SMAs. TWR is one of the most comprehensive and accurate ways of gauging investment performance for managed accounts but can be skewed at times due to the timing of new portfolio openings. As a reference, SMAs that were open since the start of the year are down approximately 7.5% through Q2 2010. Table I excludes performance of smaller, highly concentrated SMA portfolios. Exclusion is based on two main factors. The first is that these accounts are not reflective of the total deep-value framework used in the main strategy. Some of these accounts literally hold 1-3 stocks which can lead to results that do not accurately reflect the performance of the main strategy. These portfolios were not included in Q1 2010 figures because they skewed performance upwards to a degree not reflective of the overall strategy and are excluded in Q2 2010 for the same reason (albeit because the results in Q2 2010 would skew downwards). TABLE I: 2010 Managed Account Performance The first half of 2010 could be characterized as manic-depressive with sharp rallies and drawdowns occurring in nearly every month. Similar to Q1 2010, Q2 2010 started off nicely for the first few weeks with equity markets possibly impounding a V-shaped recovery only to quickly reverse course as market participants questioned valuations in the context of expected outcomes for the economy. Perhaps more importantly, the market eco-system received a massive shock during the "Flash Crash" in May which coincided with the European sovereign debt crisis. We also were faced with the [[BP]] disaster which understandably would add to our collective negative psyches. By the end of the quarter, expectations for a double-dip recession were beginning to be broadly discussed by a number of economists and market observers. These fears continue to manifest themselves in fund flows. According to Trim Tabs, $10.7B fled US Equity mutual funds through July 2010 (at the time of the report, July was still in progress), following $3.8B that was liquidated in June. Those funds are stampeding into the perceived safety of bonds with $27.8B of inflows through July and $25.7B in June. While retail investors are doing this, PIMCO's Bill Gross is launching a set of equity mutual funds. In a recent Bloomberg article, Gross noted that he felt that the thirty year bond run may be coming to an end and equities may be a better place to be. Without question, there are many challenges confronting the global economy. Every item I read and every person I speak with can present a litany of obstacles. However, since everyone knows about these problems and fund flows are mainly headed towards bond funds, I feel a bit more constructive about investing. In Q2 2010, a lot of good news was snuffed out of certain stock valuations and that gave us the chance to add to existing positions as well as establish new ones. For example, Kinnaras did well in 2009 through investing in various consumer discretionary companies, particularly some mall-based retailers. These retailers were able to rapidly close underperforming stores, bargain with landlords for rent relief, achieve lower sourcing and transportation costs (cotton, oil, etc.), and aggressively rationalize their workforce. These cost controls led to positive cash flow and earnings surprises despite a massive drop in sales - and those earnings propelled shares. Heading into 2010, I felt the consumer discretionary space was played out/fairly valued and spent my time on other opportunities. However, in Q2 2010 a few of these consumer discretionary companies had 50-60% of their market capitalization wiped out in a matter of weeks. We started to buy one of these in Q2 after the majority of its decline (knock on wood) and also have been nibbling on another in Q3. The first retailer ("Retailer 1") is a $400MM market cap mall-based women's retailer catering to a specific age demographic. One of its competitors has been shifting its focus to a younger age group which actually improves Retailer 1's competitive dynamic and opportunity set. Kinnaras actually owned its competitor in 2009 so there was some good familiarity/sector intelligence that could be leveraged, and it didn't take long to get up to speed on Retailer 1. Retailer 1 lost about 33% of its value from April to early June before reporting earnings that outperformed Street estimates. However, due to some concerns regarding Q2 sale softness and market pressures in June, shares further sank another 40% or so from that point. Kinnaras began buying in mid to late June and we're still accumulating shares for managed accounts and the Fund, which is why I haven't unveiled its name yet. Retailers can be interesting investments when they trade at high forward multiples. This may sound counter-intuitive but what happens is that the Street and investors tend to get very negative in terms of sentiment. When a retailer has current depressed earnings, investors sometimes extrapolate the growth expectations and persistently low margins are impounded into valuations. So for 2011, the Street may assume no sales growth and maybe even lower margins - 0.5% net income margins, leading to $0.10 per share. The stock may trade at 30.0x those depressed 2011 EPS estimates or trade at $3.00 per share. Now if this retailer has a solid balance sheet and an investor can get comfortable with the company's competitive dynamics and strategy, there could be an interesting opportunity. As previously mentioned, I had followed and invested in this specific niche last year and I think the problems facing Retailer 1 are short-term in nature. This means there's a lot of room for earnings upside which could lead to impressive stock performance in the next year or two. CHART I: Retailer 1 [click to enlarge] One thing to note is that many mall-based retailers have largely made their IT infrastructure upgrades and supplier changes over the past two years, leading to streamlined corporate costs and supply chains. This means that the real driver for earnings lies mostly in merchandising and inventory management. Inventory management will probably be the biggest challenge facing discretionary-focused retailers for the next few years given the reining in of US households. Order too much of a product and risk an inventory write-off which results in a charge against earnings in the following quarter. Conversely, order too little off a popular product and you lose out on crucial sales that can leverage earnings during a key selling period for the year. Retailers can be risky from this standpoint but if you buy them when there's a lot of negative sentiment and the valuation is attractive enough, they can make for worthwhile investments. With our prior example, perhaps sales which are expected to languish in 2011 will actually move up 10%. This sales growth is leveraged through the company's tight cost structure leading to, say, 2% net income margins (which is still below this company's historical average). This would yield EPS of $0.44. With just a 10.0x EPS multiple, this company could be worth $4.40 down the road or nearly 50% above the current price. The challenge is riding out the volatility and being willing to be long potential good news. This approach basically encompasses most of our holdings whereby an investment is made in companies and sectors with cheap valuations/negative sentiment and accompanying skepticism for the investment thesis ("yeah, there's value there but..."). While I think the broader market is fair to slightly overvalued, we've seen a number of sectors and industries decouple in terms of correlations. For example, while a number of technology sectors have been trading at 52 week highs, industries like refiners are not that far removed from their 52 week lows. I like the refiners because they are cheap across a number of metrics. Refiners scrubbed their balance sheets over the past few years as oil prices declined. With the decline in oil prices, the carrying values of a number of their intangible and hard assets were impaired. Yet even with aggressive write downs of their asset values, share prices of some refiners are still very cheap. Reviewing the balance sheet of Western Refining ( WNR ) can help illustrate this embedded value. EXHIBIT I: WNR Balance Sheet Exhibit I applies a Liquid Adjustment whereby WNR's assets as of Q1 2010 are written down similar to a liquidation scenario. It's important to note that like many refiners, WNR had already gone through asset write-downs in recent years due to the economic decline so the above liquidation value estimates may be a bit aggressive. In addition, most of WNR's inventories are oil products. Oil products, like many commodities, are pretty liquid and can be sold at prevailing market values. It's unlikely in a liquidation scenario that if WNR had $460MM of oil or refined products like jet fuel and diesel in inventories, these assets would be sold at a huge discount. Nonetheless, I still assumed only 70% of the carrying value would be realized. This exercise yields about $4.36 in liquid book value yet the stock trades for about $5 per share right now. There are challenges facing the refining industry but many of the stocks in this sector are at five and six year lows, suggesting that the market has more than impounded these problems into their valuations. This makes me feel optimistic about the longer term performance of stocks such as WNR. Further, with capacity continuing to be stripped out, eventually even a slight uptick in demand will yield outsized EPS results which can drive share price appreciation. Many cyclical industries are characterized by over investment in boom periods and under investment and overzealous capacity reduction in bad times. During the peak oil years, refiners were making significant investments in more expensive refineries that had the ability to process sour and heavier crude. This was because at a certain price it was cost effective to refine what were typically less desirable classes of crude oil. However, since oil prices have collapsed, refiners have been closing a number of refineries including some of the most recent, high cost ones. The industry has been downsizing itself over the past two years and while there is still work to be done, I suspect that refiner executives have made the 180 degree shift to the view that ultra high oil prices are the "new normal" to the new "new normal" whereby demand will be so tepid there are far too many refiners out there. Both cases are probably wrong, and what I anticipate is that the industry reduces capacity to the point where acceptable utilization rates are achieved based on the current, weak economy. This will lead to little excess capacity if the economy has even a slight uptick in the coming years, which will result in much higher capacity utilization - leading to big EPS figures which could result in meaningful share price appreciation. The concept of having a solid level of asset protection and riding out some tough current times also led us to an investment in Seahawk Drilling ( HAWK ). HAWK was spun off from Pride International (PDI) in August 2009. Kinnaras investors know that I am a big fan of spin-offs, as are most investors that are familiar with Joel Greenblatt's 'You Can Be a Stock Market Genius'. Spin-offs can be excellent investment opportunities which is why I track upcoming spin-offs. Kinnaras has invested in its fair share of spin-offs over the years and once HAWK was spun out, I kept an eye on it. HAWK is a shallow water natural gas driller in the Gulf of Mexico ("GOM"). Like many GOM drillers, HAWK saw its share price crumble once the BP disaster and drilling moratorium occurred. Shares peaked shortly after the spin-off at roughly $35 per share and commensurately lost about 50% before the BP oil spill. Once the BP spill and moratorium occurred, shares plunged from $18 to as low as the mid $8s. Shares are now at about $10 and Kinnaras began buying in the $10-12 range. HAWK has attracted the attention of other value investors such as Corsair Capital but the best analysis of HAWK comes from Toby Carlisle, founder of Eyquem Fund Management. Anyone that wants to get fully up to speed on HAWK should check out his , which contains a number of excellent posts on HAWK. The basic thesis with HAWK is that it provides tremendous operating leverage to an upswing in natural gas prices. Natural gas prices are low due to economic weakness and a lot of supply that has been discovered in recent years. Right now, the world is awash in natural gas due to weak economic activity and new gas discoveries. However, if one can look past a one year time period, HAWK can begin to look really exciting. Dayrates - or what drillers charge for their services - are correlated with natural gas prices. So with natural gas prices so low, dayrates for drilling rigs are low. The cost of operating these rigs is fairly fixed, however, so assuming HAWK can make it through a weak economic period, there should be the potential for upside over the years. If natural gas prices increase, the drilling rates will as well. The challenge is making it through a slow period. Right now, the GOM drilling market has been very depressed but there are signs of some life with more rig activity occurring and more inquiries regarding rig availabilities being made. But even in the event that drilling activity remains extremely poor in the Gulf region, HAWK should be able to make it through this slow period. HAWK has a very attractive balance sheet with about $67MM in net cash. More importantly, HAWK's drilling rigs have considerable value but are being severely discounted by the market. In fact, the market value of drillings rigs has declined to levels below even 2004 market values. There have also been some recent transactions by Hercules Offshore ( HERO ) and Ensco ( ESV ) that suggest HAWK has much more value solely based on the value of its rigs than its current share price implies. At $10 per share, HAWK's market cap is $118MM. HAWK has about $164MM in total liabilities so the market is saying HAWK's total assets are worth $282MM. Out of the $282MM in asset value implied by the market, $73MM is cash, leaving $209MM in implied asset value. Another $67MM is accounted for by current and other assets so that leaves $142MM in asset value for its rigs. HAWK has 20 rigs and its current share price implies that the average value of its rigs is about $7MM each. HAWK's CEO has mentioned in the past that the scrap value of a rig is about $8-9MM each and rig transactions over the past year have implied rigs similar to HAWK's support higher rig values. As with Retailer I, WNR, and HAWK, our approach is really about reconciling valuation to expected future outcomes. Sure, things are bad but if they remain status quo, then we still have a valuation cushion. But If things improve even slightly, we have the chance to make some pretty solid returns. That also applies to our larger holdings like Citigroup ( C ) and Sprint Nextel ( S ). C has explicit government support whereby it cannot fail and still trades at a discount to its P/B and P/normalized earnings power. Trends continue to move in the right direction for C with net credit losses declining sequentially and on a comparable quarterly basis per C's Q2 2010 earnings report . In addition, C realized a Loan Loss Reserve Release in Q2 2010. This is a point of criticism by some skeptics that will say the earnings are juiced to some extent by these reversals. These critics say the swipe of an accountant's pen is leading to these reserve releases which artificially boost earnings. This is an unfair and inconsistent criticism because these same skeptics in prior quarters would point to the high level of loss reserves these banks were setting aside in anticipation of future losses as a warning to investors. What I had noted when Kinnaras first started looking at financials was that banks were likely over-reserving for future losses. For example, one regional bank that Kinnaras owns has about $100MM in non-performing loans ("NPLs"). However, $31MM of those NPLs were still current on principal and interest through Q1 2010. If one assumes that the period from late 2008 through 2009 was the zenith of the financial crisis, as the economy recovers, even at this anemic pace, those $31MM in NPLs could eventually be classified as performing. Basically, if those loans which are current on principal and interest did not kick out during the worst of the recession, they are likely good money. What this means is that a number of loss provisions for banks could be reversed across the board. This is consistent with historical bank crises, whereby loss provision reversals are recognized as the realized credit losses come in well below prior expectations. This expectation was what led me in January to present the notion of a bank-only investment vehicle to some investors, with a specific emphasis on regional banks. While C is a money center bank, the same phenomenon of loan loss reserve reversals will occur throughout the year and have the most pronounced impact on regional banks. We own a few regional banks and I think we will be pleased with their performance. S is also continuing its impressive turnaround. In late Q2 2010 it released its new phone, the HTC EVO 4G. I bought one, electing to pay the penalty to cancel my T-Mobile bill and drop my Blackberry to sign up for S and get the new phone. This phone is GOOG ) Android Operating System ("Android OS") and is the real deal. It's a major iPhone competitor and I would know considering my household has gone through two iPhones. S released its Q2 2010 results which were very strong. Churn was under 2%, with S reporting its best churn figures in its history. It added postpaid subscribers for the first time in three years and CEO Dan Hesse mentioned that this would be the case even without the EVO. The EVO was released late in Q2 so I expect the full EVO impact will likely be felt in Q3 2010. S continues to generate strong cash flow and has no significant near term debt maturities. In addition, its customer service metrics continue to improve. The company is doing just about everything right and I expect the stock to eventually reflect these positive developments, albeit with the typically high volatility associated with S. These quarterly missives can sometimes be a challenge because the reality is three months is a very short time period. There may be a lot of noise within 90 days but not a lot of news. It's also difficult to reflect on what's happened over just a few months when most of your time is focused on the fundamentals of the companies and a 1-2 year time horizon for catalysts to materialize. In addition, my sensitivity threshold to market volatility is pretty low whereby our holdings may swing heavily, but it doesn't really register with me. For example, June was a vicious month for Kinnaras but that was largely due to S, one of our largest positions, declining about 20% on no news. In fact, S reported impressive Q2 2010 results a few days before the end of July and the stock but sold off about 12% in the last two days of the quarter - which dinged what would have been stronger July numbers. S is doing the right things but short-term market fluctuations are part of the game. Overall, I feel pretty good about what we hold. For marketing benefits I'd certainly love it if Mr. Market hurried up and assigned better valuations to our holdings but I feel that what we have is well positioned to do some great things for us. In fact, I would love to deploy more capital in some of our regional bank names as well as one very interesting microcap. I'll save that for the next quarterly letter. Disclosure: Long C, S, WNR, HAWK See also Durable Goods Report: Mixed Signals from?
http://www.nasdaq.com/article/western-refining-seahawk-drilling-reconciling-valuation-to-expected-future-outcomes-cm31179
CC-MAIN-2015-32
refinedweb
3,525
59.74
scottschram's blog Why are Getty Images and Flickr teaming up? It may cost you. If. espn360.com - You *will* pay, visit or not. The Wall Street Journal is reporting that if you're using Verizon Communications or Charter Communications as your ISP, you're paying for espn360.com whether you use it or not. Sarah Nassauer reports that Walt Disney's ESPN360 requires ISPs to pay for the right to offer their service. "Verizon Communications Inc. and Charter Communications Inc. -- have signed up to offer ESPN360. Daniel Steinberg - Dear Elena The. O'Reilly offers pre-publication access to manuscripts O'Reilly has introduced a new service called "Rough Cuts" that gives pre-publication access to books as they are being written. It's an opportunity for early adopters to use the material and offer feedback to the author and editor. The books are priced reasonably for online access, with an option to purchase the print version when it is released. Of the initial four titles, boston.com reports that "AT&T Inc. and BellSouth Corp. Bellsouth "Tiered" Internet service doomed boston.com reports that "AT&T Inc. and BellSouth Corp. JavaOne Technical Sessions in multimedia, free with registration Some time ago, Sun released the PDFs of the 2005 JavaOne technical sessions. Eclipse 3.1.1 / site redesign Eclipse released the 3.1.1 maintenance release today with no API changes, and over 200 bug fixes. (Nice release notes.) I have been using 3.2M1, but I think I'll drop back and get a little more stability... EclipseCon 2006 Adopts Transparent Call for Participation EclipseCon scheduled for March 20-23, 2006 has adopted an open and transparent call for participation policy. All submissions are being handled via a modified Bugzilla system known as Eclipsezilla. "Anyone in the community (including The XStream library offers clean, easy XML serialization of POJOs. XStream serializes and restores very clean, readable XML from POJOs, like this: public class Publisher { private String publisherID; private String name; // getters and setters... } Becomes: <Publisher> <publisherID>lorenz</publisherID> <name>Lorenz Corporation</name> </Publisher> Using two lines of code (although y Curious about Ruby and Ruby on Rails? Blogging coherently from a conference is incredibly difficult to do, but Matt Raible has done a great job with his notes on Dave Thomas' Intro to Ruby. Phil Windley reports on David Heinemeier Hansson's (Google/O'Reilly
https://www.java.net/blogs/scottschram
CC-MAIN-2015-32
refinedweb
396
59.8
This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project. Microchip distribute a modified GCC which targets their dsPIC series of microcontrollers. It took me a while to figure out how to build it properly on GNU/Linux, so I'm documenting the patches required for posterity. These patches apply against Microchip's version 1.30, available from <>. For both the binutils and the gcc, I recommend running the whole lot through dos2unix before patching. Apply this patch for binutils: Microchip delete all the testsuite subdirectories from their source archive, so we have to take care not to run the tests. --- acme/libiberty/Makefile.in.orig 2005-02-22 11:35:08.324184816 +1030 +++ acme/libiberty/Makefile.in 2005-02-22 11:34:58.923613920 Apply the following five patches for gcc: By John Steele Scott, 22/2/2005 Without this patch, the build fails with the error: /home/john/debpic30/gcc/pic30-gcc-1.30/build_dir/src/gcc-3.3/gcc-3.3/gcc/cppinit.c: In function `path_include': /home/john/debpic30/gcc/pic30-gcc-1.30/build_dir/src/gcc-3.3/gcc-3.3/gcc/cppinit.c:191: error: assignment of read-only location --- gcc-3.3/gcc-3.3/gcc/cppinit.c~ 2005-02-22 15:40:42.021322392 +1030 +++ gcc-3.3/gcc-3.3/gcc/cppinit.c 2005-02-22 15:43:29.178910576 +1030 @@ -55,7 +55,7 @@ each path in the directory will be appended to the full pathname of the current driver executable */ #ifdef DEFAULT_INCLUDE_PATH -const char *executable_path_name; +char *executable_path_name; extern char **save_argv; #endif By John Steele Scott, 22/2/2005 Microchip delete all the testsuite subdirectories from their source archive, so we have to take care not to run the tests. --- gcc-3.3/gcc-3.3/libiberty/Makefile.in~ 2005-02-22 13:24:48.340870232 +1030 +++ gcc-3.3/gcc-3.3/libiberty/Makefile.in 2005-02-22 13:27:52.249911808 By John Steele Scott, 22/2/2005. Without this patch, pic30-elf-gcc complains with: pic30-elf-gcc: installation problem, cannot exec `pic30-elf-cc1': No such file or directory --- gcc-3.3/gcc-3.3/gcc/config/pic30/pic30.h~ 2005-02-22 16:30:58.686719544 +1030 +++ gcc-3.3/gcc-3.3/gcc/config/pic30/pic30.h 2005-02-22 16:53:56.587246824 +1030 @@ -210,14 +210,6 @@ #undef STARTFILE_SPEC #define STARTFILE_SPEC "" -/* making STANDARD_EXEC_PREFIX and STANDARD_BINDIR_PREFIX point to the same - directory will cause make_relative_paths to make no change - ie look in the - gcc executable's directory. */ -#undef STANDARD_EXEC_PREFIX -#undef STANDARD_BINDIR_PREFIX -#define STANDARD_EXEC_PREFIX "/bin" -#define STANDARD_BINDIR_PREFIX "/bin" - /* By default, the GCC_EXEC_PREFIX_ENV prefix is "GCC_EXEC_PREFIX", however in a cross compiler, another environment variable might want to be used to avoid conflicts with the host any host GCC_EXEC_PREFIX */ --- gcc-3.3/gcc-3.3/gcc/config/pic30/t-pic30~ 2005-02-22 13:30:40.266369424 +1030 +++ gcc-3.3/gcc-3.3/gcc/config/pic30/t-pic30 2005-02-22 15:36:48.125879904 +1030 @@ -1,8 +1,7 @@ -LIBGCC1 = libgcc1.null -CROSS_LIBGCC1 = libgcc1.null - -# forget the libgcc1... -LIBGCC1_TEST = - -LIBGCC2 = libgcc1.null +# Replacement t-pic30, by John Steele Scott +# 22/2/2004 +# The original t-pic30 file from Microchip C30 1.30 has no effect, this one +# lets the build succeed by disabling the libgcc target. +LIBGCC = +INSTALL_LIBGCC = By John Steele Scott, 22/2/2005. This changes the original to embed the version number corresponding to the Microchip release, and to indicate that it is not an official version. If this is to be redistributed by someone with the appropriate infrastructure, the bug_report_url should of course be filled in. --- gcc-3.3/gcc-3.3/gcc/version.c~ 2005-02-22 15:56:56.652155984 +1030 +++ gcc-3.3/gcc-3.3/gcc/version.c 2005-02-22 16:12:22.286438184 +1030 @@ -9,7 +9,7 @@ please modify this string to indicate that, e.g. by putting your organization's name in parentheses at the end of the string. */ -const char version_string[] = "3.3 (dsPIC30, Microchip " version(MCHP_VERSION) +const char version_string[] = "3.3 (dsPIC30, Microchip 1.30, modified for GNU/Linux" ") Build date: " __DATE__; /* This is the location of the online document giving instructions for @@ -19,4 +19,4 @@ forward us bugs reported to you, if you determine that they are not bugs in your modifications.) */ -const char bug_report_url[] = "<URL:>"; +const char bug_report_url[] = ""; I also have debian/rules files modelled along the lines of the mingw-* packages in Debian, if someone wants these they can email me. cheers, John
http://gcc.gnu.org/ml/gcc/2005-02/msg01144.html
crawl-002
refinedweb
762
52.46
Embedding Python inside a multithreaded C++ program Posted by Michael Medin at 2012-02-12 This is a tutorial for how to embed python correctly inside a multi threaded program. Python is a very neat language which is very easy to embed inside C++ thanks to the boost::python library. But there are some crucial parts which is missing from boost:python in regards to how to manage GIL and thread state which I introduce here. Since this is my first C++ tutorial I will include boost:thread as well as a quick hello world application as well. Hello World First I’d thought I d mention that the full source can be found on github. Then since this is a project which uses CMake figured I would start with a simple Hello World in case you are not familiar with CMake. First of all CMake is a tool which generates Makefiles similary to automake so the actual build will be performed with normal make. But CMake has the added benefit of being able to also produce build files for Windows (Visual Studio) as well as other tools and build systems. So for me beeing a windows developer automake is not really an option. The CMake build information resides in files called CMakeLists.txt which you will find in each directory. Now this post is not about CMake so I will not go into details about this but to make sure “CMake” works make sure that the project Hello World builds and works. At the end of the root CMakeList.exe file you can edit which subdirectories are built so if you run into issues feel free to comment out the latter ones to make sure CMake is configured correctly. ADD_SUBDIRECTORY(00_hello_world) ADD_SUBDIRECTORY(01_threads) ADD_SUBDIRECTORY(02_embed_python) ADD_SUBDIRECTORY(03_python_callins) ADD_SUBDIRECTORY(04_thread_safe) If you are on Windows you also need to point CMake to your boost lib and include folder as well as python this is done by setting options: SET(BOOST_INCLUDEDIR "D:/source/include/boost-1_47" CACHE PATH "Path to boost includes") SET(BOOST_LIBRARYDIR "D:/source/lib/x64" CACHE PATH "Path to boost libraries") ... if(CMAKE_CL_64) MESSAGE(STATUS "Detected x64") SET(PYTHON_ROOT c:/python/27x64) ELSEIF(WIN32) MESSAGE(STATUS "Detected w32") SET(PYTHON_ROOT c:/python/27) ENDIF() Looking at the source from Hello World we have two files hello_world.hpp and hello_world.cpp. The first one merely has some defines for this to compile on both windows and linux. The cpp file has the usual “Hello World” sample. #pragma once #include <iostream> #ifdef WIN32 #define MAIN wmain typedef wchar_t unicode_char; #else #define MAIN main typedef char unicode_char; #endif #include "hello_world.hpp" int MAIN(int argc, const unicode_char\* argv[]) { std::cout << "Hello World\\n"; } Multithreaded The first real step is to write a simple multi threaded C++ program using boost. This will be in line with keeping it simple so this is a very minimal multi threaded program. We add a simple background thread and we start that in a series of threads. The thread has to do some work and for this rather simple sample we simply wait a bit and then log a few messages in a loop. void thread_proc(const int id, const int delay) { for (int i=0;i<thread_loops;i++) { boost::posix_time::millisec time_to_sleep(rand()\*delay/RAND_MAX); std::stringstream ss; ss << ">>> proc: " << id << "\\n"; safe_cout << ss.str(); boost::this_thread::sleep(time_to_sleep); } } The other part we need is to start the threads which is done like so: int MAIN(int argc, const unicode_char\* argv[]) { boost::thread_group threads; for (int i=0;i<thread_count;i++) { threads.create_thread(boost::bind(&thread_proc, i, 5)); } safe_cout << "main: waiting for threads to join\\n"; threads.join_all(); } Now the observing reader will notice that we have replaced *std::cout* with *safe_cout*. This is a rather important step as std::cout is not thread safe! And this program uses multiple threads which means the console will become gibberish if we do not replace cout with a thread safe alternative. Unfonrtunetly our implementation is rather naïve so each printed chunk will be thread safe but not the entire statements (this as we protect the call to <<). TO work around this I am using a string stream to first construct the string and the just print the output. class logger { boost::recursive_mutex cout_guard; public: template <typename T> logger & operator << (const T & data){ boost::lock_guard<boost::recursive_mutex> lock(cout_guard); std::cout << data; return \*this; } }; logger safe_cout; To see the code in its entirety go to the git hub project at Embedding Python Now that we have a working multi threaded program we need to embed python inside the program. Initially we will do so without using the threads. Since I was using boost for threads I will also use boost for Python but this is fairly straight forward so it should be easy enough to adapt without boost. The first step is to expose our interface to the python code. The interface we provide to Python is a function called hello_cpp() contained inside a module called TEST. void hello(int id) { std::cout << "hello_cpp(" << id << ")\\n"; } BOOST_PYTHON_MODULE(TEST) { bp::def("hello_cpp", hello); } [/sourcecode] Then we also need to load and initialize Python in our main procedure like so. The second function is something generated for us by the BOOST_PYTHON_MODULE macro. Py_Initialize(); initTEST(); And finally we need to run some Python code I have for simplicity opted to include the actual Python snippet as a string in the C++ code. The other thing we do here (apart from catching exceptions) is to populate a copy of the global dictionary using a copy here is strictly not necessary but normally I allow each script to have its own “context” and then it is required to create isolation. try { bp::object main_module = bp::import("__main__"); bp::dict globalDict = bp::extract<bp::dict>(main_module.attr("__dict__")); bp::dict localDict = globalDict.copy(); bp::object ignored = bp::exec( "from TEST import hello_cpp\\n" "\\n" "hello_cpp(1234)\\n" "\\n" , localDict, localDict); } catch(const bp::error_already_set &e) { std::cout << "Exception in script: "; print_py_error(); } catch(const std::exception &e) { std::cout << "Exception in script: " << e.what() << "\\n"; } catch(...) { std::cout << "Exception in script: UNKNOWN\\n"; } A final piece of the puzzle is to simply print errors from Python. To do this I have implemented a catch bp::error_already_set for which in turn calls a function print_py_error(); which prints the error to stdout. Unfortunately the error_already_set exception does not out of the box provide information from the Python script so we cant (as we normally do) call the what() member function. void print_py_error() { try { PyErr_Print(); bp::object sys(bp::handle<>(PyImport_ImportModule("sys"))); bp::object err = sys.attr("stderr"); std::string err_text = bp::extract<std::string>(err.attr("getvalue")()); std::cout << err_text << "\\n"; } catch (...) { std::cout << "Failed to parse python error\\n"; } PyErr_Clear(); } That pretty much sums up our python embedding which is very simple thanks to boost::python. To see the code in its entirety go to the git hub project at Calling Python from C++ Calling into Python from C++ is pretty straight forward as well what we will do here is (again for simplicity) simply call a predefined function called hello_python() from the C++ application. Adding this is very simple we need two things a function exposed in our Python script. from TEST import hello_cpp def hello_python(id): hello_cpp(id) And then we just need to call that function. void call_python(bp::dict &localDict, int id) { try { bp::object scriptFunction = bp::extract<bp::object>(localDict["hello_python"]); if(scriptFunction) scriptFunction(id); else std::cout << "Script did not have a hello function!\\n"; } catch(const bp::error_already_set &e) { std::cout << "Exception in script: "; print_py_error(); } catch(const std::exception &e) { std::cout << "Exception in script: " << e.what() << "\\n"; } catch(...) { std::cout << "Exception in script: UNKNOWN\\n"; } } Simple enough right? Again much thanks to boost python which makes everything simple and straight forward. I guess the most complicated parts is the error handling Next up is making this thread safe but first feel free to review the code in its entirty at git hub Multi threaded Python: GIL Python is unfortunately single threaded this means only a single thread (ish) can access python at a given time. To manage this Python has something called GIL: Global Interpreter Lock. This is something we need to acquire when we enter python (and very importantly functions accessing Python state). To manage this we are using a fairly common RAII concept by having a class to manage our state for us. struct aquire_py_GIL { PyGILState_STATE state; aquire_py_GIL() { state = PyGILState_Ensure(); } ~aquire_py_GIL() { PyGILState_Release(state); } }; This function use construction/destruction to manage the state automatically meaning to use this all we need to do is define a variable of this type. try { aquire_py_GIL lock; ... ... } ... The other thing we need to do is to release the GIL when we no longer need it and I am not referring to after calling into Python (as that is handled by our manager) I mean when Python leaves Python calling in to C++. This means whenever the Python script calls a C++ function (which takes time) we need to hand over GIL to whomever might need it. To help we also have a similar function which does the reverse of the previous function. struct release_py_GIL { PyThreadState *state; release_py_GIL() { state = PyEval_SaveThread(); } ~release_py_GIL() { PyEval_RestoreThread(state); } }; Then we need to switch all std::cout to use our safe_cout which we introduced previously. We also want to change our hello function to actually pretend to do some work. The resulting code for hello_cpp looks like this: void hello(int id) { release_py_GIL unlocker; std::stringstream ss; ss << ">>> py: sleep: " << id << "\\n"; safe_cout << ss.str(); boost::this_thread::sleep(boost::posix_time::millisec(rand()\*delay/RAND_MAX)); } As you can see we have now added the *release_py_GIL unlocker;* to allow other threads to call into python while we are “working”. We have also done some minor but significant change in the *call_python* function. void call_python(bp::dict &localDict, int id) { try { aquire_py_GIL lock; try { bp::object scriptFunction = bp::extract<bp::object>(localDict["hello_python"]); if(scriptFunction) scriptFunction(id); else safe_cout << "Script did not have a hello function!\\n"; } catch(const bp::error_already_set &e) { safe_cout << "Exception in script: "; print_py_error(); } } catch(const std::exception &e) { safe_cout << "Exception in script: " << e.what() << "\\n"; } } As we now have to aquire the GIL before we can access any Python related functions we need to re-scope our error handling. This is important as if we get a *error_already_set* we still require GIL to retrieve the error message. The simplest way to achieve this is to have nested catches. The init code looks something like this: int MAIN(int argc, const unicode_char\* argv[]) { Py_Initialize(); PyEval_InitThreads(); initTEST(); try { bp::object main_module = bp::import("__main__"); bp::dict globalDict = bp::extract<bp::dict>(main_module.attr("__dict__")); bp::dict localDict = globalDict.copy(); try { bp::object ignored = bp::exec( "from TEST import hello_cpp\\n" "\\n" "def hello_python(id):\\n" " hello_cpp(id)\\n" "\\n" , localDict, localDict); } catch(const bp::error_already_set &e) { safe_cout << "Exception in script: "; print_py_error(); } PyThreadState \*state = PyEval_SaveThread(); boost::thread_group threads; for (int i=0;i<thread_count;i++) threads.create_thread(boost::bind(&thread_proc, i, localDict)); safe_cout << ":::main: waiting for threads to join\\n"; threads.join_all(); } catch(const std::exception &e) { safe_cout << "Exception in script: " << e.what() << "\\n"; } } The main change from our previous attempt is the rescoping of the error handling (again to accommodate GIL) as well as a very very important often left out piece of the puzzle. Namely releasing GIL! Once we have initialized Python we leave processing over to our threads (the main thread which now own GIL has no further use for it) so we need to release the GIL which we automatically receive when we start Python. To do this we add the *PyThreadState *state = PyEval_SaveThread();*. The actual value of the save state function is not really necessary as we never intend to reacquire GIL in this thread. Download the Source This is pretty much it. We now have a bi directional Python program embedded in our multi threaded C++ program. The full source can be found on github.
https://www.medin.name/blog/2012/02/12/embedding-python-inside-a-multithreaded-c-program/
CC-MAIN-2020-05
refinedweb
2,011
60.14
Create the Interaction Model for Your Skill With the Alexa Skills Kit, you can create skills with a custom interaction model.. Intents are specified in a JSON structure called the intent schema. - - in the travelDate slot. The service can then save this information and send back text to convert to speech.. To learn how to create intents and slots in the developer console, see Create Intents, Utterances, and Slots. For more about designing skills and identifying your intents and slots, see the Alexa Design Guide.} To learn how to create utterances in the developer console, see Create Intents, Utterances, and Slots. For more about writing good sample utterances, see the Alexa Design Guide., validate, and confirm slot values. The conversation continues until all slots needed for the intent are filled and confirmed according to the rules defined in the dialog model. You can use the developer console to define a dialog model. The dialog model is a structure that. -. - Check for utterance conflicts that may cause Alexa to send the wrong intent to your skill. -. No numbers, spaces or special characters are allowed. Note that the built-in intents use the AMAZONnamespace, so the name for an intent uses a period. For example: AMAZON.HelpIntent. This notation is valid for specifying the AMAZONnamespace only. Periods can't be used in custom intent names. - Intent and slot names can't overlap with each other in the the interaction model. If you use a name for an intent, you can't use that same name for a slot. - Intent and slot names are case insensitive, so the intent name "ABC" can't be used with the slot name "abc". - The same slot name can be used
https://developer.amazon.com/pt-BR/docs/alexa/custom-skills/create-the-interaction-model-for-your-skill.html
CC-MAIN-2021-10
refinedweb
284
67.15
Tutorial: Machine Learning¶ Machine learning methods can be used to train a trainable classifier to detect features of interest. In the tutorial below we describe how to train and use the first trainable classifier we have made available in PlantCV. See the naive Bayes classifier documentation for more details on the methodology. Naive Bayes¶ The naive Bayes approach used here can be trained to label pixels as plant or background. In other words, given a color image it can be trained to output a binary image where background is labeled as black (0) and plant is labeled as white (255). The goal is to replace the need to set binary threshold values manually. To train the classifier, we need to label a relatively small set of images using a binary mask. We can use PlantCV to create a binary mask for a set of input images using the methods described in the VIS tutorial. Alternatively, you can outline and create masks by hand. For the purpose of this tutorial, we assume we are in a folder containing two subfolders, one containing original RGB images, and one containing black and white masks that match the set of RGB images. First, use plantcv-train.py to use the training images to output probability density functions (PDFs) for plant and background. plantcv-train.py naive_bayes --imgdir ./images --maskdir ./masks --outfile naive_bayes_pdfs.txt --plots The output file from plantcv-train.py will contain one row for each color channel (hue, saturation, and value) for each class (e.g. plant and background).. from plantcv import plantcv as pcv # Set global debug behavior to None (default), "print" (to file), # or "plot" (Jupyter Notebooks or X11) pcv.params.debug = "print" # Read in a color image img, path, filename = pcv.readimage("color_image.png") # Classify the pixels as plant or background masks = pcv.naive_bayes_classifier(img, pdf_file="naive_bayes_pdfs.txt") See the naive Bayes classifier documentation for example input/output. Naive Bayes Multiclass¶ The naive Bayes multiclass approach is an extension of the naive Bayes approach described above. Just like the approach above, it can be trained to output binary images given an input color image. Unlike the naive Bayes method above, the naive Bayes multiclass approach can be trained to classify two or more classes, defined by the user. Additionally, the naive Bayes multiclass method is trained using colors sparsely sampled from images rather than the need to label all pixels in a given image. To train the classifier, we need to build a table of red, green, and blue color values for pixels sampled evenly from each class. You need a minimum of 2 classes. The idea here is to collect a relevant sample of pixel color data for each class. The size of the sample needed to build robust probability density functions for each class will depend on a number of factors, including the variability in class colors and imaging quality/reproducibility. To collect pixel color data we currently use the Pixel Inspection Tool in ImageJ. To collect pixel samples, open the color image in ImageJ. Use the Pixel Inspector Tool to select regions of the image belonging to a single class. Clicking on a pixel in the image will give you a set of R,G,B values for a window of pixels around the central pixel. In this example, nine pixels are sampled with one click but the radius is adjustable in "Prefs". From the "Pixel Values" window you can copy the values and paste them into a text editor such as Notepad on Windows, TextEditor on MacOS, Atom or VS Code. The R,G,B values for a class should be proceeded with a line that contains the class name preceded by a #. The file contents should look like this: #plant 93,166,104 94,150,101 82,137,91 86,154,102 87,145,94 79,137,95 116,185,135 103,172,126 96,166,126 #postule 216,130,52 217,129,51 221,132,53 218,131,53 223,132,54 221,132,53 219,131,54 221,132,54 225,135,56 #chlorosis 255,242,89 255,241,90 255,239,87 254,239,87 255,241,90 254,238,88 255,241,88 253,238,87 255,240,90 #background 31,42,54 42,52,60 40,49,58 28,38,51 32,43,55 36,47,59 24,35,45 30,40,50 37,49,66 Next, each class needs to be in its own column for the plantcv-train. You can use a utility script provided with PlantCV in plantcv-utils.py that will convert the data from the Pixel Inspector to a table for the bayes training algorithm. python plantcv-utils.py tabulate_bayes_classes -i pixel_inspector_rgb_values.txt -o bayes_classes.tsv A note if you are using Windows you will need to specify the whole path to plantcv-utils.py. For example with an Anaconda installation it would be python %CONDA_PREFIX%/Scripts/plantcv-utils.py tabulate_bayes_classes -i pixel_inspector_rgb_values.txt -o bayes_classes.tsv where pixel_inspector_rgb_values.txt is the file with the pixel values you created above and bayes_classes.tsv is the file with the table for plantcv-train.py. An example table built from pixel samples for use in plantcv-train.py looks like this: plant postule chlorosis background 93,166,104 216,130,52 255,242,89 31,42,54 94,150,101 217,129,51 255,241,90 42,52,60 82,137,91 221,132,53 255,239,87 40,49,58 86,154,102 218,131,53 254,239,87 28,38,51 87,145,94 223,132,54 255,241,90 32,43,55 79,137,95 221,132,53 254,238,88 36,47,59 116,185,135 219,131,54 255,241,88 24,35,45 103,172,126 221,132,54 253,238,87 30,40,50 96,166,126 225,135,56 255,240,90 37,49,66 Each column in the tab-delimited table is a feature class (in this example, plant, pustule, chlorosis, or background) and each cell is a comma-separated red, green, and blue triplet for a pixel. Like the naive Bayes method described above, use plantcv-train.py to use the pixel samples to output probability density functions (PDFs) for each class. plantcv-train.py naive_bayes_multiclass --file pixel_samples.txt --outfile naive_bayes_pdfs.txt --plots The output file from plantcv-train.py will contain one row for each color channel (hue, saturation, and value) for each class. using the same function described in the naive Bayes section above. A plotting function pcv.visualize.colorize_masks allows users to choose colors for each class. Parallelizing a Workflow that uses a Bayes Classifier¶ To parallelize the naive Bayes methods described above, construct a workflow script following the guidelines in the workflow parallelization tutorial, but with an additional argument provided for the probability density functions file output by plantcv-train.py. For example: #!/usr/bin/env python import os import argparse from plantcv import plantcv as pcv # Parse command-line arguments def options(): parser = argparse.ArgumentParser(description="Imaging processing with opencv") parser.add_argument("-i", "--image", help="Input image file.", required=True) parser.add_argument("-o", "--outdir", help="Output directory for image files.", required=False) parser.add_argument("-r", "--result", help="result file.", required=False) parser.add_argument("-w", "--writeimg", help="write out images.", default=False, action="store_true") parser.add_argument("-D", "--debug", help="Turn on debug, prints intermediate images.", default=None) parser.add_argument("-p", "--pdfs", help="Naive Bayes PDF file.", required=True) args = parser.parse_args() return args def main(): # Get options args = options() # Initialize device counter pcv.params.debug = args.debug # Read in the input image vis, path, filename = pcv.readimage(filename=args.image) # Classify each pixel as plant or background (background and system components) masks = pcv.naive_bayes_classifier(rgb_img=vis, pdf_file=args.pdfs) colored_img = pcv.visualize.colorize_masks(masks=[masks['plant'], masks['pustule'], masks['background'], masks['chlorosis']], colors=['green', 'red', 'black', 'blue']) # Print out the colorized figure that got created pcv.print_image(colored_img, os.path.join(args.outdir, filename)) # Additional steps in the workflow go here Then run plantcv-workflow.py with options set based on the input images, but where the naive Bayes PDF file is input using the --other_args flag, for example: plantcv-workflow.py \ --dir ./my-images \ --workflow my-naive-bayes-script.py \ --db my-db.sqlite3 \ --outdir . \ --meta imgtype_camera_timestamp \ --create \ --other_args="--pdfs naive_bayes_pdfs.txt" - Always test workflows (preferably with -D flag set to 'print') on a smaller dataset before running over a full image set.* You can create a sample of your images with plantcv-utils.py sample_images.
https://plantcv.readthedocs.io/en/stable/machine_learning_tutorial/
CC-MAIN-2021-10
refinedweb
1,425
57.27
-1.9.tar.gz cd minitube At this point we have to pause before running the next commands because there is an issue with gcc 4.7 and we need to make a patch and apply the patch. Special thanks to Vbrummond for this patch! Open a text editor in your terminal. leafpad, gedit, pluma whatever you use and add this text into the ~/tmp/minitube directory - Code: Select all diff -crB minitube-orig/src/qtsingleapplication/qtlocalpeer.cpp minitube/src/qtsingleapplication/qtlocalpeer.cpp *** minitube-orig/src/qtsingleapplication/qtlocalpeer.cpp 2012-09-27 06:17:03.000000000 -0400 --- minitube/src/qtsingleapplication/qtlocalpeer.cpp 2012-10-28 13:18:02.836364666 -0400 *************** *** 46,51 **** --- 46,52 ---- #include "qtlocalpeer.h" + #include "unistd.h" #include <QtCore/QCoreApplication> #include <QtCore/QTime> #include <QDebug> Then save the text as qtlocalpeer.patch and exit Now apply the patch - Code: Select all patch -p1 < ./qtlocalpeer.patch Now we are ready to build minitube - Code: Select all qmake-qt4 dh_make -p minitube_1.9 --createorig dh_make will prompt you what to build, press 's' for single. then run the next command - Code: Select all dpkg-buildpackage This will build the package for you and make the debian installable file for your machine's architecture. Now just install the package: - Code: Select all cd ~/tmp sudo dpkg -i minitube_1.9-1_amd64.deb Now go Play and Have FUN! (and share this with your friends)
http://forums.linuxmint.com/viewtopic.php?p=651362
CC-MAIN-2014-41
refinedweb
233
52.56
How to resize an extremely large image Hi, I'm using Python 3.4.3 and OpenCV 3.0.0. While I'm trying to resize a very large RGB image (107162,79553,3) using the following code: import cv2 image = cv2.resize(image, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA) I got the error message "cv2.error: C:\opencv-3.0.0\source\modules\imgproc\src\imgwarp.cpp:3208: error: (-215) ssize.area() > 0 in function cv::resize" I did some further testing and realized this is an integer overflow problem because the code would work on image of the size (46340,46340,3) but not (46341,46341,3).I understand I can perform bloc processing but I'm still interested in knowing if there is a direct solution to this problem. My naive thought is that if I can identify where exactly this int that gave me the trouble is, I can then go in and change it to int64. So my questions are : 1) Is this approach plausible? 2) If so, exactly what should I change to solve this? If not, how can I solve this problem?
https://answers.opencv.org/question/68571/how-to-resize-an-extremely-large-image/?sort=votes
CC-MAIN-2019-39
refinedweb
193
66.64
Hello! Week 3 of profiler-building is over! My main goal for last week was to release an alpha, and I did! You can download the project & try it out at. If you’re interested in how the project is organized, I wrote an architecture document this week. If you do find it useful, I’d be really interested to hear about what you’re using it for – you can email me, leave a message on gitter, tweet at me – anything! Also I very much appreciate bug reports :) For example, somebody said on Twitter that they used rbspy snapshot (which prints a single stack trace from the program) to figure out why their tests were running slowly! This made me super happy =). I used it to profile a test run on CI: some tests suddenly became very slow; I connected to the container thru SSH, downloaded rbspy and took a couple of snapshots while tests were running; that was enough to find the cause of the problem) name: rbspy! On Tuesday I polled people on Twitter for name ideas. I wanted something that was a little bit fun (profiling is fun!), but not too clever – I want people to be able to actually tell what the project does from the name. Hopefully rbspy will be that! Also I drew a quick logo! It is not super fancy but I like it anyway. An alpha logo for an alpha release :) refactoring! Last week I also refactored the project significantly. I probably spent 2-3 whole days on trying to organize the project better – at the beginning of the week it was all basically one 1000-line file, and at the end of the week, I had files for - initialization code (what happens every time you start the profiler) - operating-system-specific code (want to add support for a new OS? it goes in address_finder.rs!) - ruby-version-specific code (want to add Ruby 2.5.1 support? That goes here in ruby_version.rs) - UI code (all in main.rs, right now) My most useful strategy for refactoring was to write an architecture document (which you can read!). Basically I tried to explain to an outsider how the project was put together, found parts that really didn’t make sense, and then refactored until those parts were easier to explain. I don’t think it’s “perfect” (what is?) but the organization was easier for me to work with at the end of the week, and Kamal said it made more sense to him too. better testing with core dumps! This week we also got some significantly better testing implemented – now there are a bunch of core dumps in the rbspy-testdata repo (). During the tests, we - load the core dumps - try to read a stack trace from those core dumps as if it was a real Ruby process - compare the stack trace we read to the expected output Kamal wrote the code to make a core dump mimic a real process and it’s really simple and clever. This whole testing strategy is Kamal’s idea and he actually implemented the key ideas 1.5 years ago. Also it was his idea to keep the core dumps in a separate rbspy-testdata repository so that we can keep several megabytes of coredumps for testing without making the main repo huge. I’m very happy to have these tests and they make me feel a lot more confident that the project is actually doing the right thing. And they let me make improvements! For example – right now I have a core dump of a process where rbspy gives me an error if I try to get a stack trace out of it. Once I fix the issue (to do with calling C functions), I can check that core dump into rbspy-testdata, add a test, and make sure it stays fixed! One more example of a thing these tests helped me do – I needed to get both the relative and the absolute path to a file in Ruby 1.9.3. Figuring out how to do this was pretty simple (I did a little git blame and then this commit showed me the way). With the Ruby 1.9.3 core dump, I could add code to get the relative & absolute path, run get_stack_trace on the core dump, and assert that I got the expected answer! Really easy! contributors! I published my first release last night. So far 3 people have created issues and I’ve merged a pull request from one of those people! This is exciting because one of my major goals is to get more people contributing to rbspy so it’s a sustainable project and not just me. this week: Mac support & container support This week I’m hoping to add Mac support! I don’t own a Mac, but my plan is to rent a cloud VM for a week or so and develop on that. I also have a bug to do with C function-calling support that I’m hoping to fix. Also container support: right now if you try to profile a process running in a container from outside the container it won’t work because because the process is in a different filesystem namespace. That shouldn’t be too hard to fix. At some point I also want to start investigating memory profilers – maybe I can add a memory profiler to rbspy? I have no idea what’s involved in that yet! We’ll see!
https://jvns.ca/blog/2018/01/22/profiler-week-3--refactoring--better-testing--and-an-alpha-release/
CC-MAIN-2019-09
refinedweb
922
70.13
Re: C/C++ Syntax Folding - Special treatment for functions/methods Expand Messages - Around about 08/02/05 15:17, Jean-Sebastien Trottier typed ... > What are your folding requirements now? Can you clearly state them orAh ... that might've helped, mightn't it :-) ... although I was (at > will you keep us guessing? ;^) > Please also post code examples and explain how you want them folded... the time) more interested in why my config. generated fake parenthesis errors, but here you go: I don't want to make it /too/ complex at the outset, so this isn't *quite* what I'm after, but it's a step: a) in a namespace block, fold only on '^\s\+[{}]' indented blocks; the extension for this would be to only do the first one block; the extension for *that* (final) would be to only do it for the first brace-pair inside 'class {}' itself inside 'namespace {}'. So I was thinking along the lines of defining a 'namspace-block' syntax region, then a 'class-block' region which is only allowed in a namespace-block and can't contain other class-blocks, then finally a brace-fold-block which (ditto) can only be inside class blocks and cannot contain other brace-fold-blocks. b) not in a namespace block, fold only on '^[{}]' (column-0 blocks). Primarily, (a) is for our headers [and is v. tricky], (b) for our cpp's [and is trivial & I have it working]. namespace CODEBASE { class jobby { public: int stuff; void do_it() { // some inline code { // some block that'll not be folded } } }; } // body CODEBASE::jobby::do_it() { // main code // lives here { // another inline block that'll not be folded } } .. goes to: namespace CODEBASE { class jobby { public: int stuff; void do_it() +-- 6 lines: {----------------------------- }; } // body CODEBASE::jobby::do_it() { // main code // lives here +-- 3 lines: {----------------------------- } This is what currently doesn't work right; one thing that's obviously wrong is that it prematurely ends cFoldSimpleInner blocks, and it think that's also corrupting the parenthesis checks. syn region cFoldSimpleOuter start='^{' end='^}' transparent fold syn region cFoldSimpleNamespace start='^namespace\>.*\n{' end='^}' \ transparent contains=ALLBUT,cFoldSimpleOuter syn region cFoldSimpleInner start='^\s\+{' end='^\s]+}' \ transparent fold contains=ALLBUT,cFoldSimpleInner \ containedin=cFoldSimpleNamespace > For completeness, I would use:Yes, I've tried a few variants of this. I'll try that one > start='^namespace\>.*\n{' specifially in a bit, though. > You probably should specify end='^[ \t]\+}'Oops ... > By the way, you can easily replace '[ \t]' with a simple '\s', they meanYou're right; too many regex syntaxes ... :) > the same thing... and tend to read better -- [neil@fnx ~]# rm -f .signature [neil@fnx ~]# ls -l .signature ls: .signature: No such file or directory [neil@fnx ~]# exit Your message has been successfully submitted and would be delivered to recipients shortly.
https://groups.yahoo.com/neo/groups/vim/conversations/topics/56434?o=1&d=-1
CC-MAIN-2015-35
refinedweb
455
62.88
As my job search is slowly coming to a close, my life for the past couple months has been filled with… trivial programming problems. It seems that the hiring practices of HR departments has become a draconian screening process where by applicants are asked to sit down and write out solutions to deviously “simple” problems on a piece of paper or a white board. While I’m not a fan of the process, there are few options for getting around the gate keepers. So I’ve learned to just get on with solving these problems. Actually I must say that I do like programming challenges. I just don’t tend to solve them well off the top of my head on a piece of paper while the clock is ticking down. I tend to solve them better with a compiler/interpreter/virtual machine, a debugger, and a decent editor. I happened upon one that isn’t terribly interesting, but for which the brute-force solution is so sufficiently obvious that I was immediately struck by the desire to optimize it (if possible) upon finishing it. The problem description is as follows: (Some of you may recognize this problem, I found it from Grepplin). Anyway, my solution was: import itertools def largest_sub(nums): """ """ combinations = 0 sorted_nums = sorted(nums) for i in range(2, len(sorted_nums) + 1): for combination in itertools.combinations(sorted_nums, i): if sum(combination) in sorted_nums: combinations += 1 return combinations if __name__ == '__main__': with open('numbers.csv') as f: nums_str = f.read() nums = map(lambda x: int(x.strip()), nums_str.split(',')) print largest_sub(nums) On my laptop it takes about 7 seconds to run on the following input set: 3, 4, 9, 14, 15, 19, 28, 37, 47, 50, 54, 56, 59, 61, 70, 73, 78, 81, 92, 95, 97, 99 I’m basically summing every possible combination of sub-set from two to the length of the input set and checking whether it is in the input set. For small inputs this method might work well enough to get the job done, but every number we add to the input set we increase the number of sums and checks we do exponentially (note that I only just cracked into Concrete Mathematics a couple of months ago, please correct me if I’m wrong about this). Now, I haven’t taken a stab at optimizing this code yet, but I wanted to get the conversation going if anyone out there wants to share their ideas. Can you make it faster? [**Note:** My laptop is a little old, being a Core 2 Duo T5500 @ 1.66Ghz, 1G DDR2, running Ubuntu 10.04]
https://agentultra.com/blog/can-you-make-this-toy-python-program-faster/
CC-MAIN-2017-30
refinedweb
442
57.81
Hydrated provides a BehaviorSubject. int double bool String List<String>. The goal of Hydrated is to make persistence of BLoC classes as simple as possible for Flutter projects. PRs are welcome, but be warned that I am committed to simplicity. shared_preferencesversion to ^0.5: hydrated: ^1.2.2 You can install packages from the command line: with Flutter: $ flutter packages get Alternatively, your editor might support flutter packages get. Check the docs for your editor to learn more. Now in your Dart code, you can use: import 'package:hydrated/hydrated.dart'; We analyzed this package on Apr 16, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using: Detected platforms: Flutter References Flutter, and has no conflicting libraries.
https://pub.dartlang.org/packages/hydrated
CC-MAIN-2019-18
refinedweb
124
52.26
Renaming an Element The Rename command changes the pathname at which an element is accessed. You can use this command to perform a simple renaming, to move an element to another directory, or both. You cannot move an element to another depot. Invoking the Rename Command Select an element in the Details pane of the File Browser. Then choose Rename from the selection's context menu. In the Rename dialog box, type a new pathname for the element:\ - Type a simple name to rename the element, leaving it in the same directory. - Type a relative pathname to move the element to another directory in the depot (and possibly rename the element, too). (You cannot type a depot-relative pathname.) In either case, a single move transaction records the changing of the element's pathname in your workspace. The renamed object is activated (it is included in the workspace's default group), and the (kept) and (member) flags are added to its status. Renaming a Modified File Before Keeping It You can create a new version of a file element in your workspace, recording either a content change or a namespace change. But you can't do both at once. For example, suppose you: - Edit the contents of file color.java, so that it gets (modified) status. - Use the Rename command to change the filename to hue.java. This creates a new version of the file element in your workspace, recording the name change, but not the content change. To preserve the content change, you must use a separate Keep command to create a second new version. Caution on Reusing the Name of a Renamed Element The ability to reuse the name of a renamed element provides significant flexibility for project refactoring tasks. But it also introduces a complication: what happens if you rename an element, create a new element at the same pathname, then invoke the Revert to Basis command on the renamed element? The renamed element cannot revert to its old pathname, because there’s a new element at that pathname. The original element simply disappears from your workspace. You might wonder “Why does the element not get (stranded) status?” The Revert to Basis command makes an element inactive in the workspace. The (stranded) status applies only to active elements. At this point, your workspace contains a new element at the given pathname, and the parent stream contains the original element at that pathname. Attempting to promote the new element would produce a 'name already exists in parent stream' error. Use one of the following procedures to return this pathname to a consistent state: If you want to return to using the original element - Rename the new element, to a name like myfile.java.DISCARDED. - Defunct the new element. - Promote the new element to the backing stream. - At this point, the original element reappears in the Details pane, with (missing) status. - Invoke the Populate command on the original element. If you want to discard the original element and use the new element These steps must be performed with the AccuRev CLI, which supports defuncting of the "disappeared" element using its element-ID. - Defunct the original element in the workspace, using the command accurev defunct –e <element-ID>. - Promote the defuncted version to the backing stream.
https://admhelp.microfocus.com/accurev/en/latest/online/Content/AccuRev-OnLine-Help/Rename_Dlg.html
CC-MAIN-2022-05
refinedweb
546
55.54
Microsoft Turning Screws on Customers 432 Mitch Wagner submitted his own story about Microsoft cracking down on big customers who it thinks aren't playing fair on their licenses. ."" Sounds only fair... (Score:3) If the companies in question signed an agreement with Microsoft, surely they can't complain when the other party actually wants what is due to them. It's high time everyone learned what making deals with the devil actually means. Eventually he will collect, in blood... Helping Linux Out (Score:4) I guess the outlook for alternative OSs and office suites is VERY good. I've got yer plan right here! (Score:2) Linux? BSD? Rising costs = opening for Linux or *BSD? (Score:2) -- Give a man a fish and he eats for a day. Give M$ a Shovel (Score:2) After Virginia Beach, this shouldn't be news (Score:4) Now, I could get into the idea that MS waited until there was ample evidence that some governments were dependant on it's products before starting this, but that would sound like a Linux zelot. Still begs the issue, why now? Why did they not start on day one and come down on pirates? Why have there been posts on MS bulletin boards saying that they don't care if you take the OS you use at work home with you to use. Unless they knew this day would come and only now the boom is lowering. Does this really surprise anyone? Ensure everyone is dependant on it, saturate the market, then suddenly decide to play hardball with licenses. Gee, sounds like a decent business practice, but only works if you're a monopoly. DanH Cav Pilot's Reference Page [cavalrypilot.com] Why audit? (Score:2) So does anyone know what happens if a company refuses to audit? Microsoft Kills (Score:3) New York, N.Y. March 30th,2001 In an independent study conducted this month by staff at AntiOffline.com, and MacroShaft.org, it was revelead that Microsoft is killing people on a daily basis, with the evidence verified by statisticians at New York University's Mike Hunt. "Based on these estimated projections, it seems the Justice Department needs to begin a prompt investigation into this matter." states Mike. Judging on data gathered on a one month term this is the output: Windows users crash an estimated two times a day which requires an estimated 3 minutes to reboot. Result? (Rough estimates) 100 million Windows users x 120 seconds == 507 years lost. 6 deaths a day are attributed to this product. This alone does not include any estimates from those users who have to reboot upon installing programs. Nor does this include time spent configuring TCP/IP reboots. With an estimated dollar amount of about 22 million dollars lost weekly (this is a generous amount) due to these reboots, its strange that no company has gone bankrupt. "If anyone would care to break these figures down into dramatic fashion, their would probably be global catastrophes." states Sil of AntiOffline The difference between life and death on the workplace is no longer restricted to psychotic Postal workers, but rather a more chilling enemy known as the Blue Screen of Death. We've yet attempted to solidly document that *actual* numbers out of fears our calculator could not reach the given amount, so we actually have given Microsoft what could be an actual death toll of 20-30 people daily. Staff at Microsoft declined to return our e-mails repeatedly but we will continue to pursue the numbers as time goes by. President George W. Bush today also intervened on Microsoft's behalf stating, AntiOffline's numbers are fuzzy math. Sil could not be contacted for comment. "Windows -- When do you want to reboot today?" who'd a thought [antioffline.com] This is ridiculous (Score:2) Cost Burden (Score:5) If a software company wants to, they could audit your licence compliance monthy and put you out of business _EVEN IF YOU DON'T USE A SINGLE PIRATED PROGRAM_. The fact that they are taking a week out of every one of your months will probably kill you. Rising Costs (Score:2) I know this has been beaten like a dead horse but, Linux. One copy, one license, 10,000 desktops, it does th office productivity and internetworking that the windows machines do just fine, A good desktop (Gnome, KDE) is intutive enough that retraining would be minimal, not to mention the costs that could be saved. On the flip side, it would take more on the technician end, but I think dropping the cost of 10,000 M$ Windows licenses would more than make up for it. I have this same problem. (Score:3) Legally licensed and Operated...The Linux Pimp [thelinuxpimp.com] They're auditing us (Score:4) Some of it is our fault because we trusted the wrong folks internally to keep track (long story and trust me, you don't care to hear it) but there is a lesson to be learned in making sure someone keeps track of these things. Preferably someone involved with computers... Of course I'm having a very hard time biting my tongue about how we could avoid this problem in the future. (*cough* linux *cough*) sounds familiar (Score:4) Uh-huh. Talk about a thinly veiled threat. We had just done a software audit a couple weeks beforehand, so we were cool. But still, the damn thing read like some Mafia protection letter. Dirk What happened to Microsoft's 'Blind Eye'? (Score:3) I guess with a company that is as large as the one mentioned, with as many Win32 desktops, Microsoft values extracting as many dollars as they can through extortion tacticts rather than turning the other cheek and increasing their good karma with 'Microsoft Shops'. Dammit (Score:5) I first read it as: "Turning Microsoft On Screws Customers" This really isn't news (Score:2) So please CmdrTaco, please don't do the knee jerk response and post EVERYTHING that goes against MS, we already KNOW how full of shit Gate and co are...and anyways, after a certain point it just makes you look like a troll. some companys change money for "services" when (Score:2) BTW, about a year ago i interviewed for the "Anti Piracy" group at MS. They we're very interested in encyption, and my JavaScript skills (which i had none of). Bunch of weird scary looking guys, not the normal breed of geek you find at MS. They didn't seem to bright either (hey they made me an offer). They also wanted a second interview to see what kind of "person" i was.. i think because i would be the only guy there who was under 40 and didn't live with there mother. but anyway. -Jon Streamripper [sourceforge.net] How Microsoft licenses isn't too straightforward. (Score:2) (Sigh!) No, you can't be surprised at that. However, one point raised in the article is (if I may be allowed to paraphrase) is that trying to understand the terms of the MS license for your software is somewhat akin to trying to derive a sommon sense meaning from a Scientology manual. (Sigh, sigh!) Just because something is legal, doesn't mean that is moral - or practical - or good business sense - or reasonable! Re:And this is why... (Score:3) By scaring people now, corporations will buy licenses. They will continue buying MS to stay legal. This will force home users to also buy the latest software, as the corporations are distributing everything using MS Word 2004 Shiney Professional with Sprinkle Power. The question will become, how fast will people be able crack the activation scheme? -------------------------- Re:After Virginia Beach, this shouldn't be news (Score:2) Can microsoft be sued to pay for lost time? (Score:5) They want a license for every re-install... (Score:4) Re:Why audit? (Score:3) Because licenses are binding contracts and they can be fined for breaking them. It seems to me that if a company owns the hardware, and knows that they at least got an OEM license for Windows with the machine, they should be able to tell Micro$oft to take their audit request and shove it. This assumes that they only license stand alone operating systems and don't have any kind of applications or services requiring client access like SQL Server or Exchange. So does anyone know what happens if a company refuses to audit? You pray you can't be sued in a state that has passed UCITA. Maryland and Virginia, I think. Why Must Linux ALWAYS be the answer? (Score:4) Picture putting Linux on one of your sales force's desk. They wouldn't know what to do with it. Linux (or in my case FreeBSD) is the answer for people like US. All of the techies, kernel hackers, coders and network admins that understand how to use Unix. You would spend more money retraining your people, and higher support costs running around answering questions, than you would spending to make your company M$ license compliant. Get a site license and don't worry about it. You'll sleep better tonight. Brad Re:Dammit (Score:2) "Microsoft Screws Turning Customers" All your data (and biz plans) are belong to M$! (Score:3) -- It's all about keeping the stock price up. (Score:5) Microsoft has been able to keep it's stock price stratospheric for years by posting record earnings. However, with slumping hardware sales, a slowing economy, lethargic adoption of Windows 2000 and Office 2000 and a emergence of a real threat on the low end server from Linux and BSD Microsoft can no longer afford to look the other way when it comes to licensing issues. Microsoft needs the revenues, and it needs them now. After all, employee options are a huge part of the average Microsoftie's employment package. If their stock doesn't go up (or worse, if it goes down), then working at Microsoft is not really that nifty a job. In the past Microsoft realized that casual sharing of their software actually served as a very effective free advertising campaign. It helped maintain their position by making sure that their software was ubiquitous. Now that they have the market tied up, they are looking to reel in all the freeloaders. Microsoft's plan will backfire, especially if they continue pestering companies that are honestly trying to comply. Bastards (Score:2) It's sad to say, but this is why Microsoft is so successful. Bill Gates is both a computer guy AND a businessman. He probably knows, but wont admit, that windows is unstable as hell and that the things he does are evil. But he doesn't care, because it gets him more money. A company isn't going to switch from windows to something like linux because microsoft harrasses them about licenses. It's just a way for microsoft to squeeze money out of its customers who can't or wont use another product. That's why it's called a MONOPOLY. Re:What happened to Microsoft's 'Blind Eye'? (Score:2) Why now? Keep the growth going (Score:2) Typical Slashdot Hypocrisy (Score:2) The FSF starts a GPL crackdown and the person that broke the license is the bad guy, not the FSF. Perhaps you people need to know that people in glass houses shouldn't throw stones? Re:Sounds only fair... (Score:5) The biggest problem is that no two people at MS give you the same answer to the same question. I have spent many hours on the phone with MS sales people and they are in general, smart, competent folks. But one guy interprets the contract-speak one way, another sales guy interprets it another way, and I read it a completely different way. When nobody is on the same page, things get screwed up. What I'm really afriad of is how they're going to license the new -B I have mixed feelings about this (Score:2) --- Thanks again CmdrTaco! (Score:2) :) Seriously though, why should we be feeling sorry for these people? So they didn't bother to document how many licenses they have and how many desktops they have running which software.... how exactly is that some sort of Microsoft problem? It would appear that CmdrTaco is attempting to scare people by giving the impression that Microsoft runs around with a club trying to beat people over the head for more money (that may or may not be the case.) I know that we keep exact records of how many licenses we have for each piece of software, and how many of those licenses are currently in use. Microsoft could walk in tomorrow and we can present the proof that we have x copies installed and we own y licenses, end of story. Any IT/PC support department worth their salt would be doing the same. Cost is another issue entirely. Sure, the initial price for a Linux system is little to nothing, but when you factor in other issues that corporations face every day, the Linux value isn't quite the deal it once appeared to be. First of all, there is no MS Access equivalent. That would mean we'd have to switch over all these little programs that have maybe 10 users to another system. There really isn't any RAD programming system for Linux (Klyx ain't there yet.), so that means a lot of time and effort for something pretty small. There is also the cost of retraining all of our users and staff. We would have to try and track down and support lots of Linux apps for various tasks, if they even exist. If not, we'd have to write and support our own from scratch. I would also say anywhere from 20% to 50% of the peripherals and components in the systems we have out there don't have any Linux support whatsoever, which means replacing a lot of hardware. The lack of any standard Directory Services client also hurts. The only real options without spending an insane amount of money are NDS and AD, neither of which have Linux clients. Oh, and any time any person on the company wants a software application, we would have to go scour the net to try and find a Linux-compatible one, or try and write out own. When you compare all that to the cost of Windows 2000 (less than $10,000 for 7 copies of server and 1000 user CALs under our select contract), and it really doesn't make sense to switch. ------- -- russ "You want people to think logically? ACK! Turn in your UID, you traitor!" Re:This is ridiculous (Score:2) That isn't actually true. The GPL is a license (in other words an implicit contract), but it rests on the right of distribution (a copyright) without which agreement you would be unable to distribute GPL'd software. Use of GPL'd software doesn't come into it. First of all using a product isn't a copyright, so you don't need to agree to anything in order to do it (with the exception of public performances). Secondly, prohibiting specific uses would be inimical to the free software community. FWIW, I doubt that use clauses of a standard shrinkwrap license would be enforcable if you made it clear that you didn't intend to be bound by them, and were using the software without a license under the general use provisions of copyright law. Maybe a better investment would be ... (Score:2) Maybe a better investment would be, to train the staff to use another operating system, instead of always trying to figure out how to make the best of Microsoft licensing terms, only to have it in pieces again, when Microsoft decides to change their licensing again. At least retraining has to be done only once. Also they may expect that with the event of XP (which means eXPerience as we all now learned) they're in for a totally new (but not better) licensing eXPerience. Licensing - the App Killer (Score:2) Microsoft's licensing scheme would have killed us. We would have to buy a client license for every client machine, a server license for every connection to the server, and a Citrix license on top of all this. We would have paid these, but even without charging for our application and services we would have been unable to compete on price. There must be another way. [Enter stage left: Linux.] We were already a Unix shop. Some of our programmers were playing with RedHat 5.x. Then, it hit us: no client license fees for Linux. Would Linux prove robust enough for mission critical applications? Yep. This is a compelling business reason for choosing Linux (or other OS/FS alternative). Yes, we had technical reasons, too (having the source is terrific), but the business realities sealed the deal. Microsoft may have changed its technologies to focus on the Internet, but its pricing strategies are stuck in a 1983 standalone time warp. Panspermia (Score:2) Hey man, copies of Win2K just blew across the road and sprouted on my desktop Re:Rising Costs (Score:2) Re:After Virginia Beach, this shouldn't be news (Score:2) The problem isn't making other companies follow the law, it's what constitutes "following the law." Apparently, the issue is what constitutes a valid Windows license. As a result, there is significant confusion as to whether companies have valid licenses, need upgraded licenses, or how many licenses they need for a particular software installation. Microsoft seems to be in no hurry to clean up confusion, leading to people paying double for software, or outright discontinuing software installation plans when it turns out that they need some outrageously large number of licenses. Read the article, not the summary, before posting. -- Re:Rising Costs (Score:2) Sorry, that sounds like a major flame, but I'm just trying to make a point. Switching to Linux would work for a technically proficient, computer programming only company, but any service oriented company with customer service reps is going to have a hard time doing so. You must remember that non-programmer types (which are more prevelant than programmers) use Windows everyday, but they don't have a clue what Linux is. That having been said, I'm sick of using Windows and would love to use Linux for everyday use, but that's not how my company works. Re:After Virginia Beach, this shouldn't be news (Score:2) Well, there is one important difference: the GPL licensing really only deals with distribution, not use, but a Microsoft license is primarily concerned with your use of the product. That's why Bruce Perens (who seems to come to mind as someone likely to point out GPL violations, although my apologies to him if that is an unfair characterization) can't come into your place of business and audit your Linux boxes the way that Microsoft can come in and audit your Windows boxes. What Microsoft is doing is entirely legal, but I think that overall they're creating more problems for themselves than they're solving. Re:Nobody is "screwing" anybody! (Score:2) Were you ever a marketing director for a failed .com media company? This sounds a bit too much like "mindshare is our biggest asset" for me to be comfortable with. _____________ Good 'ole MS (Score:2) When Microsoft heard about the application, it demanded that the airline pay for a full-time license for every computer that would access the app, Reeder said. "I told them that was ridiculous," he said. "I can't license every computer in the world." This is pretty damn funny, but am I missing something here? Why should the airline be responsible for licensing remote users? Is this "mainframe work-scheduling application" a Microsoft app that has to be licensed (which I can almost understand), or are they saying that any computer simply accessing a remote NT box has to be licensed to do so? Somehow, I can't help but think of the Star Wars quote, "The more you tighten your grip, the more systems will slip though your fingers". And yes all you quote geeks, I realize that probably isn't exact Alt. OSes.. sure, but what about apps? (Score:2) Re:Helping Linux Out (Score:3) Of course, I could be biased because I happen to agree with the poster. The simple truth is that this tactic is nothing but good for Linux and friends. Take for instance the Alaska Airlines bit. The overall cost of the project was going to exceed their acceptable budget by $250,000. For a small airline, still suffering from a tarnished image that is just way too much money. I fully expect that we will see this scenario replayed many more times with different companies and I'd bet that most aren't going to be willing to shelf a good idea, when there is a more economical solution. Re:What happened to Microsoft's 'Blind Eye'? (Score:2) About the only way they can increase market share is if the market itself is growing or if they can use their thumbscrews to extract more seats from that market. Re:Thanks again CmdrTaco! (Score:5) Pardon me sir, Haywood Jablome here. I'm chief auditor for Microsoft, and I'm troubled by the figures you present in your analysis here. You mentioned "X copies installed and Y licenses", pointing to the fact that there is a DISCREPANCY between the number of copies installed and the number of licenses you have purchased. Please stay where you are; an auditing strike team will be arriving within 3 hours to verify that your values of X and Y are equal, or even better, that Y is greater than X. Thank you for your time, Heywood Jablome Chief Auditor, Microsoft Corp. "All your license are belong to us" Re:I know I'm missing something here... (Score:2) In the Alaska Airlines situation you describe, the clients in question are connecting directly to the NT servers and using their resources. According to MS, that means they have to pay for client licenses. we were audited (Score:2) Re:Sounds only fair... (Score:2) So if i have a license which allows my business to run 200 instances of program "foo" worldwide, and do the necessary installing on the machines it's supposed to run on, then i might even install it on 600 machines if only 200 of them use it at a time (think license server, applications only used during daytime, worldwide business and timezones). Now *that* might be something to lower license costs. Sounds like... (Score:5) Gosh, these licenses sure are hard to keep track of! Oh I know If only Microsoft had some kind of product for me... Tracking Licenses (Score:2) Novell's ZENWorks is supposed to do that, but the inventory functions are pure S***. Microsoft's SMS will do it as well, among the many things it also does. If you need Remote Control, Software distribution, Inventory, etc... and you are on a Windows network, go with SMS. If you just need Inventory, go with Tally Systems. Hope this helps those out there in the IT world that cannot afford to use Open Source software for everything, and still need to keep track of licenses. ------- -- russ "You want people to think logically? ACK! Turn in your UID, you traitor!" Re:Knowledge of plan HURTS (Score:2) We got targeted (Score:2) When my girlfriend was in a car accident, the idiot who caused it hired a lawyer. The weasel lawyer sent out official-looking, registered mail stating that he needed her immediate written responses to the contained survey and questions. Her insurance company said to forward it to them and forget about it, as the lawyer had no right to any of that information. A similar tactic was used when my mother was rear-ended at a stoplight. Simple fact is that we aren't required to give Microsoft diddly. They are not a federal agency, they don't have authority to demand the info, and we aren't going to give it to them. Simple solution is to quietly make sure, should the occassion arise that we need to give the proper authority proof, we are up-to-date on our licensing. Sending the information places you in a much more dangerous situation, because Microsoft knows you're scared and ready to cooperate with them. Incidentally, we were contacted very shortly after by a Microsoft employee who congratulated us on our recent growth (no, I don't know how he knew) and asked if we needed any more licenses to keep us legal. Coincidence... I think not. Redistribution not a provision in the GPL (Score:2) The only "intuitive" interface is the nipple. After that, it's all learned. Maybe in Texas (Score:5) Question (Score:2) Does Microsoft inform you in their EULA about these audits? The truth about /. (Score:2) Well, now that I understand true subversive tactics from "1984", it's clear to me that CmdrTaco == Bill Gates. Identify deviants, recruit them, gain their trust, then burn and 're-educate' them. How do you think Old (evil grin) ----- D. Fischer And this is a BAD Thing? (Score:2) Re:I have this same problem. (Score:2) Sheesh, the M$ FUD is getting more subtle all the time, isn't it? The answer to your question is no. There will be no such sweeps. Why? Because no organization, certainly not the FSF, has the right to demand that you divulge internal records or allow their access to your equipment like Microsoft gains when you "sign" one of their licensing agreements. Pretty much compliance with GPL and other licenses will depend on informants, which BTW is probably the primary way that the SPA finds out about corporations cheating on licensing agreements now. --- Re:Nobody is "screwing" anybody! (Score:5) First of all, Microsoft's licensing terms and conditions are unbelievably vague, and not just for the operating system licenses, but for the applications and client access licenses as well. Try developing a custom application using Exchange 2000, Conferencing Server, and SQL Server 2000 to be accessed by internal users, business partners, and transient consultants. Now imagine the project has a dedicated MS salesperson, and a squad of MS consultants who all have completely differing opinions on what requires a license and what does not. Now take it one step further, and imagine that someone at Microsoft thinks you're missing some licenses and demands a license audit. You spend the next two days trying to piece together what you have, what MS thinks you need, and what you really do need. It happened to my previous company, and after a week of arguing with MS were ultimately vindicated, when the know nothing in licensing was proved wrong. Now I'm not saying that it isn't within MS's right to do so, but you should seriously consider the impact such a position will have on your customers. That situation so infuriated our CTO, that are next big _similar_ project used Domino and Sametime. Re:Rising Costs (Score:2) Capitalism for Dummies (Score:2) Picture two little companies, competing against each other, one uses Windows, the other uses Linux. Microsoft has to do everything it can to milk as much cash as possible out of the first one, and cost of production for that company will inevitalbly be higher than for the other company. (Even accounting for the fact that the Linux-using company might need to hire a guru as its IT manager. Its pure Darwin folks. The smarter comapnies will use the free OS, the dumber ones will stick with Bill & Co, and run themselves right out of business. During the boom, this wasn't a problem because everyone was raking in the cash, but as soon as the coming Depression get really bad, people will be looking for ways to cut costs, and getting rid of the MS in a company is the best way to do that. Microsoft is doomed, but they are far too arrogant to realize it, and they might not until its too late. Re:This is an outrage (Score:2) I didn't mispell it. I left a 'b' out to symbolize all the pain and suffering my people have had to endure. So fuck off you smarmy little retard Re:After Virginia Beach, this shouldn't be news (Score:3) I think this get back at your employer tatic of advertising on the radio is about as slimeball a thing you can do. It's worse than ambulance chasers. Did anyone know the more litigation in a society, the lower the GNP? It's a proven fact. Productivity drops sharply. Quality of life goes down. Re:I know I'm missing something here... (Score:2) Except when Microsoft say you must pay client licenses for each unique user who may possibly connect, as happened with Alaska in the story. Microsoft's doom (Score:3) Is it just me, or does it sound to anyone else like microsoft is finally dying? Dying may be a bit harsh. I'm certain that they'll always be around in one form or another. Even Novell is still with us. But there really seems to be serious issues with nearly every one of their products. Does anyone know anybody who likes the idea of renting their software? It sounds to me like .NET will be the last nail in the coffin for MS. I can see entire companies leaving microsoft in droves over this one. Which is good for me. I'm a consultant who specializes in MS/Unix interoperability and porting from one to the other. And what about becomming a license nazi? MS has already been caught collecting info from users machines and sending it back to MS. I read a newsgroup post saying that even some of their games were doing this. They're going after corporate customers now, when will they send a bomb to private users? Maybe it's not a coinsidence that this outlook/activex bug won't seem to die. And has anyone actually looked at OS X? I played with it at compusa the other day. For the first time ever, I'm actually considering buying a macintosh. I'm telling you, it's unix, I was shocked. I opened a tcsh shell and looked around. With the MACH kernel and the aqua interface, it's everything that linux should be. And they're taking a beating on the server front as we all know, especially with IIS. If I were doing a new web development project, I would certainly hesitate to go the IIS/ASP route. And is anyone really using C#? All we need now is a champion for Star Office so that it's as polished as Office, yet still free/open-source. It looks to me like they've dug their own grave, and now it's time for us to dance on it. Microsoft's evolving license terms (Score:5) (I submitted this InternetWeek story yesterday morning and it was rejected. How come it's accepted a day late?) Re:More Knee-Jerk News (Score:2) Moreover, the BSA (not the Boy Scouts) encourage employees to report their employers for non-compliance. Sounds innocent enough, until you have to deal with BSA representatives at your door because your ex-employee was ticked and told them you have pirated Windows installations. You could be completely legit, but you'll waste time and money proving it whenever some software company decides to ask. Wow, I think I've slipped into rant mode, so I'll wrap up. I think illegal copying of software is wrong, but I have issues with companies that want to own me because I use their software. Re:After Virginia Beach, this shouldn't be news (Score:2) Let's see Msft Audit the IRS (Score:2) Need contingency plans for migration away from MS? (Score:2) Microsoft software is arguably a single point-of-failure. Desktops preferentially all run one version of MS-Windows, mailservers all run another, and fileservers are similarly uniform. Technically, this is very dangerous because an entire category of service could be lost to a bug/virus. Now MS playing hardball is adding a legal failure mechanism. One or all MS software may become unrunnable due to legal issues. In negotiations with MS, a CEO needs alternatives if he is to have any power at all. ERP should give him some so he doesn't have to "bend over ..." Re:^^^ IGNORE #66, MISPRINT ^^^ (Score:2) It should be a cost/benefit analysis -- if you can't afford the lawyers and the accountants, don't select option #2. Businesses make these decisions all the time, chosing to pay out one large sum of money for low risk in favor of many small sums of money with unknown risk. One of the worst mistakes is to put the techies in charge of licence compliance (because they usually have a totally lax attitude towards such things, and they are not exactly organizational geniuses). I lived through a MS audit a few years ago with the kinder, gentler Microsoft. We had our shit in order and had bought certain selective site licences (such as for Office), so it was no problem. Re:They're auditing us (Score:4) For the past several years our firm was receiving shipments (100's at a time) of computers from various vendors (lowest price) which I was in charge of setting up and delivering to various users/desktops/cubes/etc. I always saved the documentation that came with these units (warranty/licenses/CD's/etc) and set them aside for safe keeping. About a year ago, my boss asked what I did with this stuff. I showed him full monitor boxes stuffed with these goodies. Each box was clearly marked with what was inside (i.e. Office97: 200, Win95B: 200, etc). He promptly asked me to load them into his SUV so he could take them to our offsite storage building. While loading his truck, the shipping manager asked what I was doing. I explained myself. The manager then asked my boss to sign manifest/paperwork of some sort showing what was being removed from his shipping area. My boss signed it, then threw his copy into the trash. After loading his vehicle, I walked back thru shipping, stopping at the can my boss threw the paperwork into. For some reason, I picked up the slip he discarded into the trash and placed it into my pocket. Eight months ago, Microsoft came calling. A meeting was held which I attended. Finance asked my boss where the licenses were. My boss then turned to me. Right then and there my career flashed before my eyes... then I remembered the slip I had picked up lazily out of the trash container that one day. I spoke up and said "Let me get the paperwork on that". I came back with the paperwork that the shipping manager made the boss sign and showed it to the CFO. I'm typing this from my bosses old office. Re:Maybe in Texas (Score:2) Interesting point! As an owner of an OEM, I'm all for the fact that the are going after those who do not comply. I have to make sure that systems I sell, have licenses, so should everyone else. HOWEVER, it does seem strange that they are going after small local governments, that probably have little organization and poor record keeping, as far as IS is concerned anyway. So why don't they go after the larger offenders, rather than pick on the small governments? Why don't they go after the 31337 h4x0rz that have CD images of Win 2k and the like on their FTP sites? RAD on Linux (Score:2) There really isn't any RAD programming system for Linux (Klyx ain't there yet.), so that means a lot of time and effort for something pretty small. Au Contrairy!!! Check out RadBuilder 3.0 from Emediat Solutions Inc. [emediat.com]. I really like this RAD platform and have written a couple of client applications. Excellent string manipulations, a complete widget set (with the ability to extend), an integrated IDE, cross-platform with Windows, and, most importantly, comprehensive HTML documentation. Sorry if I sound like too much of a booster, but its sad to see good products fall by the wayside due to a lack of exposure. On the down side, I've heard that they are going to go Open Source but they are not currently... though it is pretty inexpensive (~ $100 for linux I think) They have a support site at [radbuilder.org] Re:Why Must Linux ALWAYS be the answer? (Score:2) -- Dr. Eldarion -- Re:After Virginia Beach, this shouldn't be news (Score:2) Mass Audit = change to .net (Score:2) Microsoft I think believes, "If we want to do something annoying and to take away the privacy of people, do something legal that is worse so that our "new alternative" looks better and is accepted.". --Brandon Bullshit (Score:2) Could Microsoft audit IBM? Sure! Would it bankrupt them? I doubt it highly, knowing how the shop is run there. Microsoft is now resorting to harassing customers with lawyers to extract profit growths. This is good. It means they're putting themselves increasingly into a very unpopular position with large corporations and governments, which may prompt some of the "victims" to lobby (throw money at) lawmakers. It's bad for customers, but that's par for the course. Microsoft has never been good for the consumer, I don't expect them to change now. Re:Rising Costs (Score:2) -- Dr. Eldarion -- There are plenty of licenses... (Score:3) ~afniv "Man könnte froh sein, wenn die Luft so rein wäre wie das Bier" Re:Simple solution are often the best. (Score:2) winblows over Linux. You mean other than application support? Yeah, it's pretty easy to miss that. Re:Can microsoft be sued to pay for lost time? (Score:2) Re:What happened to Microsoft's 'Blind Eye'? (Score:3) You can also see the attitude change between Gates and Ballmer. Gates, since the doomed hobbiest letter, hasn't ever really sweated if someone somewhere was ripping him off, as long as he knew he'd eventually get paid. On the other hand, rampant MS piracy probably keeps Ballmer awake at night. Playing Both Ends -- (Score:2) We have a couple WinME machines where I work and its an accomplishment if they don't crash once or twice during a workday ... but *I* would be the bad guy if I grabbed a NT WKS disk and downgraded to a stable os? Re:Bullshit (Score:2) contracts, and SLA agreements on file, yadda yadda. " You are presuming that all this has no cost. It costs money to keep track of documents it costs money to prove you have the documents. For many companies this could add a several full time staff in and of itself. Re:I have this same problem. (Score:2) Re:Rising Costs (Score:4) As far as your 10,000 user example, I wouldn't want to retraing 10,000 users for anything. Re:Thanks again CmdrTaco! (Score:2) Sure you keep track of how many license you own and how many are in use. You think that MS is gonna come and ask you for these number, you're going to tell them, and then they're going to say "Okidoki... Thanks very much, have a nice day!" Fat chance is hell. What they want is proof. On one hand you'd better have a big room with thousands of those holograms (typically glued on top of a manual) that come with your pre-installed Dell PCs. And (if you read the article) the proof of purchase for everyone of those holograms. On the other hand, you're gonna have to prove that you really have all of these installed machine and not more that you're just not declaring. Now the whole idea behind an audit is that they're probably going to want to verify the information you provided. That's kind of the idea behind the word "audit". Who knows how they do that, they may walk around in your organization and count machines for all I know... The point is, regardless on how organized you might be, someone (and probably more than one person) at your company will be busy for a while. Since I assume that person gets paid by your company, that's money your company is spending on completely unproductive work. It is very disruptive - the level of disruptivity might be slightly alleviated if your IT people have their act together, but it will nevertheless be disruptive. And my last point is that no corporation should have the right to barge in your company and "demand" anything - regardless on how easy it might be to give an answer. The government can't do it (not without "probably cause") why should microsoft be allowed to? We got m$ screwed (Score:3) Anyway, if you want to avoid this situation, just pirate everything. In our case, we were trying to do the right thing. We called to get estimates on some exchange licenses. The sales lady asked a bunch of questions... how many clients... do they all need it... how many servers. All the questions seemed innocent enough. In the end, they took our answers, looked at the number of licenses they knew we had, and they decided we needed to buy more. Re:Nobody is "screwing" anybody! (Score:2) Microsoft is moving to per seat licences for almost all their new software. This is harder than hell to keep up with, espically if the licencses aren't transferrable. Don't be suprised if the first Re:Thanks again CmdrTaco! (Score:2) Personally, I find Python/TK much easier to develop with than VB. I use it in Windows and Linux. I don't know if there's a GUI IDE for it because it's so easy I've never felt the need to even look for one. And Python is a much nicer language than Basic. Re:^^^ IGNORE #66, MISPRINT ^^^ (Score:3) Is that like installing it twice ? Sucks to your EULA (Score:4) these are the reasons I think EULA's are not legal: They're not avaliable prior to purchase. No retailer allows the return of software if you don't like the license. If a retailer *DID* allow the return, MS should bare the cost of that return (restocking fees, shipping etc), but they don't. A contract is an agreement between two parties ... usually both parties recieve some benifit from the contract ... in the EULA, theres no agreement its "take it or leave it." And the Eula provides no benifit (IE waranty, fitness of purpose) and seeks only to benifit the software company. Last but not least, a legally enforcable contract has to have a minimum of 3 signatures, the notary and the two parties ... The notary serves several purposes -- she authenticates both parties, can be called upon in a legal dispute, and establishes that both parties are aware of the contents of the contract, which I believe is called [IANAL] "communication." It is my belief that "press f8 to continue" [NT4 installer] is not a sufficent "notary". Can you prove I read and understood the entire agreement then pressed f8 ? What if I gave someone 5 bucks to install a MS os on my machine ... would I be then bound by the EULA ? I didn't agree to it, someone else did ... is this situation is analgous to purchasing a computer with preinstalled software? bait and switch (Score:3) So everybody is clear, DON'T PAY FOR SQL SERVER! (Score:4) Instead, go download Sybase 11.0.3.3 for Linux or FreeBSD. It works just the same, and it is free for almost all commercial use. MS SQL server and Sybase were once the same product. MS ODBC drivers work with Sybase, and the SQL syntax is pretty much identical. If you need support, just upgrade. No, you aren't buying a product with the spectacular benchmarks of SQL Server 2000, but then again, you aren't buying anything at all, so why complain? Re:Simple solution are often the best. (Score:3) 2) throwing all of your Microsoft holograms in one file cabinet with a sheet of paper attached to each that shows the PC's manufacturer and serial number And keeping the install disks locked away with the key held by the most anal person in the company. And searching employees on their way in to make sure they don't bring software from home to install, make sure that all software purchases be handled exclusively through the above anal person (no more running to Office Depot with petty cash), having your legal staff study the licenses carefully in a vain attempt to come up with the same interpretation that MS will use, and finally: get audited and screwed anyway. It seems that even if you buy an unlimited site license, MS will argue about what constitutes 'your site'. On the other hand, Linux and the BSDs all effectively have an unlimited universe wide no questions asked site license.
https://slashdot.org/story/01/03/30/1527201/microsoft-turning-screws-on-customers
CC-MAIN-2017-17
refinedweb
7,645
71.95
In this section, you will learn how to parse the time using custom format. In the previous section, we have discussed the formatting of time. Parsing is just reverse of it. It converts a string into an internal representation of different objects. Here we are going to parse the time with custom format. You can see in the given example, we have used a pattern of special characters inside the constructor of class SimpleDateFormat to specify the format of the time to parse. Then we have invoked the method parse(). This method parses the text to produce an object. Now, in order to convert this date object into string, we have used toString() method and display it on console. Here is the code: import java.util.*; import java.text.*; public class ParsingTime { public static void main(String[] args) throws Exception { DateFormat df = new SimpleDateFormat("hh.mm.ss a"); Date date = (Date) df.parse("05.50.33 PM"); System.out.println(date.toString()); } } Output:
http://www.roseindia.net/tutorial/java/corejava/javatext/ParsingTimeUsingCustomFormat.html
CC-MAIN-2014-52
refinedweb
163
59.8
In my introduction to computer programming course i was a given a project asking to create a change maker. Specifically, i must write a program that will make change in coins for any amount up to 99 cents using the fewest possible coins. The program should prompt the user to enter an amount, and then print out the number of each type of coin to make that amount of change. If there are no coins of a particular type, then the program should not print a line for that coin. Examples: Enter the change amount: 93 Quarters: 3 Dimes: 1 Nickels: 1 Pennies: 3 Enter the change amount: 89 Quarters: 3 Dimes: 1 Pennies: 4 This is what i have so far. The program works unless the "change" value falls under 25 so i'm assuming the problem involves the "if" statements. I have been stuck for a while and was hoping a misc brah could help me out with some suggestions. #include <stdio.h> main () { int change; int coins; int remainder; printf ("Please enter an amount of change from 0-99.\n"); scanf ("%d", &change); if (change>=25) { coins = change/25; printf ("Quarters: %d\n", coins); remainder = change%25; } if (remainder>=10) { coins = remainder/10; printf ("Dimes: %d\n", coins); remainder = remainder%10; } if (remainder>=5) { coins = remainder/5; printf ("Nickles: %d\n", coins); remainder = remainder%5; } if (remainder>0) { printf ("Pennies: %d\n", remainder); } }
http://forums.devshed.com/beginner-programming/917889-beginner-programming-question-last-post.html
CC-MAIN-2017-34
refinedweb
236
72.29
A namespace for describing Access Control Lists $Revision: 1.10 $ $Date: 2002/01/28 16:18:07 $ Access Rule An assertion of access privileges to a rdf:resource. Identity Any entity to which access may be granted to a rdf:resource. Principle An Identity to which credentials or other uniquely distinguishing characteristics may be assigned. Group Collection of Principles. accessor The rdf:resource identifying an entity (for intance, a user) to whom access privileges have been granted. access The access privileges extended to an accessor. has access to Relates an Access Rule to the rdf:resources to which the rule applies. The inverse relation is 'accessedBy' member of The relationship of a member of a group to that group. time interval The time interval over which an ACL rule is declared. ChACL change ACLs method ChACL read ACLs method HTTP HEAD method HTTP GET method HTTP PUT method HTTP POST method HTTP DELETE method HTTP TRACE method HTTP CONNECT method
http://www.w3.org/2001/02/acls/ns
crawl-002
refinedweb
160
57.57
The Community REST SDK is a framework for third party .NET developers to use in their sites or projects to interact with a Verint Community instance using its REST API. It makes using REST easier by handling the authentication flow automatically via Oauth, giving a consistent unified interface for making requests, allows you to handle the response in its XML or JSON form, or interact with the response using some more .NET friendly methods such as streams, XElements and the dynamic object. The most powerful of these is dynamic since it means you can interact with the response like it was an object without the need to create your own classes and deserialize the response. While the SDK handles many operations for you, to ensure you understand what is being done it is recommended that you review the REST API, REST Authentication, and Making Requests topics. Table of Contents Open Source The SDK is also completely open source meaning if you want to know how it works you can head over to GitHub to view the source. All of the technical documentation can be found in the GitHub wiki and if you have problems you can file a bug there as well. We also accept community contributions so if there is something broken you want to take a try at fixing, or add a little something you think is valuable, fork the repository and submit your change as a pull request. Community Rest SDK on Github Requirements Since this SDK is based on .NET, it will only run in a .NET environment so if you currently have a non-windows environment or a site running a technology other than a .NET web technology, unfortunately at this time the SDK is not for you. But you can still interact with your community using the REST API directly. Here is a list of the minimum system requirements: - A windows environment or .NET based web technology running .NET 4.5 or later - Verint Community (formerly Telligent Community) version 8 or higher (Any Edition)* *In general we recommend running the most recent version of Verint Community available. Community REST SDK was designed to be forward compatible starting with Verint Community (formerly Telligent Community/Zimbra Social) version 8. However, since the SDK is just using REST, it can possibly work unsupported with versions as early as Telligent Community 7. Compatibility It is important to understand that the SDK does not understand the differences between community versions so even though the SDK may allow you to make a call to any endpoint, if that endpoint is only supported in a version of community later than the version installed you will receive a not found or error response. From time to time we may add SDK features that rely on newer functionality than your community may currently have. In those cases we will make every effort to have those features unavailable or fail gracefully on earlier versions so that you can continue to utilize the latest SDK version. The SDK also has a supported API and we use the same logic as Verint Community. These are classes located in any Telligent.*.Extensibility.* namespace. We reserve the right to alter anything as needed not in this namespace. If it is an API we will not make breaking changes if its avoidable, instead we will issue new versions of the API in question. This rule will also be applied to any pull request. How to Get It There are 2 ways to get it. As already mentioned you can download the source from GitHub and compile it from there into a dll. The easier method is to install it directly into your Visual Studio solution using NuGet. You can install it by searching for Telligent SDK in the NuGet Package manager GUI or use the command line pacakage manager: PM> Install-Package CommunityServerSDK Sitecore Developers For Sitecore developers integrating with Sitecore specifically, we have a Sitecore specific SDK. Really its an implementation of the same SDK with some additional Sitecore specific items. You should download the Sitecore version from GitHub or via NuGet. It is important to point out that the Sitecore version when obtained from NuGet will first install the Community REST SDK as a prerequisite so all of the functionality discussed in this and future topics is full available. PM> Install-Package CommunityServerSDKSitecore Configuring the SDK The installation and configuration of the SDK can vary depending on what you want to do. In the Hosts topic you will be introduced to the main API object and depending on that host there may be more or less configuration needed. Consult the topic on the individual host you are using for its specific configuration. Single Sign On Single Sign On (SSO) is an option available only when using the Default REST Host. It also functions differently than the SSO modules in Verint Community. You do not configure any of the SSO modules in community, instead its is handled by the SDK and a user synchronization with community. This will be covered in depth in the Default REST Host topic, but in general the process is handled by utilizing signed urls that community uses to log users in and out. While it involves cookies on both the community and SDK side, these cookies are not shared. This means the SSO can be handled across domains. For more information on installation on configuration, visit the GitHub Wiki
https://community.telligent.com/community/11/w/developer-training/63123/rest-sdk
CC-MAIN-2021-39
refinedweb
906
50.67
This is not the first time that we have worked with Thunderhead around DB2 pureXML. Earlier this year, we worked with them to develop a solution for derivative trade confirmations. And before that, we worked with them on a solution for insurance correspondence automation. So if you don't want to wait until the Context Engine comes out later this year to see Thunderhead with DB2 pureXML, check out these solutions. In fact, this week at Sibos (the world's premier financial services event, according to their website), the solution for the derivatives trade confirmations is being demoed at the IBM booth. So if you just happen to be there, stop by and take a look! (Oh, did I mention that Sibos was in Vienna this year?) On this past Monday and Tuesday, I attended the first DB2 9 for z/OS pureXML Proof of Technology (PoT) event in St. Louis, organized by Paul Bartak, one of our Senior Executive IT Specialists in the field, at the IBM Customer Briefing Center facility near St. Louis International Airport. I stayed there for two days. The first day is for a group of IBMers (with one customer who couldn't come the second day). We listened to Paul's presentation and exercised through the labs, learning pureXML and testing the PoT. I had fun setting up the network for ThinkPad's with VMWare, and testing connections to the DEMOnet DB2 9 for z/OS on a native z machine at Austin in the morning. I was amazed that a bunch of software guys could handle the hardware and network wiring without too much trouble (well, with some remote assistance). We even identified a malfunctioning 5-port Ethernet switch. On the second day, there were 12 participants, from nearby three DB2 for z/OS clients, including a university from Illinois. They are DBAs, application architects and developers, and data architects. Paul gave a presentation, about 75 minutes, covering the basics of XML and pureXML features, to prepare for the labs. Then everyone went through the labs on one's own pace. I was impressed how fast the participants did. It took me about 3.5 hours to finish the lab on my first-day trial. But I saw many of these customers finished the lab in a shorter time than I did! One DBA told me that she really liked it. She had the concept and now with the lab, it's really flowing. Oh, she thanked me for delivering XML capabilities in DB2 for z/OS. I had chance to chat with some of the participants on their intended use of pureXML. They were thinking to use XML to keep logs for auditing, store diverse system configuration information, or store purchase orders, or even for displaying data structures during debugging, etc. And I also learned some features that would be nice to have in the future. This pureXML PoT uses a real DB2 9 for z/OS system, and is unique in the following aspects: We are lucky to have someone like Paul, who knows what customers need, understands the technology, and put lot of efforts to deliver a PoT that people can really learn to understand the technology. I'd like to thank Mark Wilson for his contribution to this PoT also. This pureXML PoT will be offered in different areas. If you are interested in (having your folks) attending such pureXML PoT, please contact your local DB2 advisor, or contact me or Paul. It will also be offered as a Hands-on Lab during IOD 2008 Conference at Las Vegas,Session: HOL-2584A DB2 9 for z/OS pureXML for DBAs Time: Wed, 29/Oct, 10:00 AM - 01:00 PM Location: Mandalay Bay South Convention Center - Lagoon DTake note if you are attending IOD and interested in playing pureXML on DB2 9 for z/OS there. Since.. Last. Last week, just before the Thanksgiving holiday in the US, Cindy Saracco from our team, along with Tad Worley, one of our Infosphere Warehouse experts, published a new paper on developerWorks titled: "Create business reports for XML data with Cognos 8 BI and DB2 pureXML: Two techniques to help you get started." We have been getting more and more questions about using Cognos with DB2 pureXML and thought it would be good to get some information out there on the subject. Because XML messages frequently contain important business data, companies are increasingly interested in querying and reporting on this data. The paper takes you through two different methods, step by step, showing examples along the way. Cindy is one of the best technical writers that I know of (inside and outside IBM), so if this topic is of interest, be sure to check out the article. And in January, we will have a related article coming out titled "Reporting on pureXML data with QMF/DataQuant" so look for that as well. Kate Riley Tennant Last week, December 8 and 9, 2008, I visited Kansas City and Topeka with the help of Jeff Mucher, a DB2 advisor from IBM at Dallas. On the 12/8 afternoon, I presented Introduction to pureXML in DB2 9 for z/OS to the Heart of America DB2 Regional User Group at a hotel in Overland Park, with an audience of over 30 people. In the middle, I also did a live demo using CLP (Command Line Processor), connecting to the DB2 9 on a DemoNet native machine at Austin. I have put the CLP scripts and their output capture called XMLQuickDemo online so you can download to try out yourself. The presentation was well received with quite some interesting questions and discussions. At least one friend from DST told me right after the presentation that the DB2 for z/OS XML features are very impressive. I was happy to hear that! Here are a few words I've heard from customers describing XML in DB2 9 for z/OS or the developers (including me :-) ): quite impressed/very impressive, brilliant, genius, clever. Impressive, huh?! On the 12/9, Jeff and I visited a DB2 client at Topeka, KS. We met with about a dozen developers, DBAs and System Programmers to introduce DB2 XML features and discuss their application scenarios. We saw DB2 XML well fit in three scenarios for their applications: The common theme in these usage scenarios is flexibility, flexibility, and flexibility. We saw more and more of these XML application patterns. Beth wanted me to confirm if her description of XML is correct: it's more like a delivery truck, it could be UPS, FedEX, or any other truck, we don't care. We handle it based on its content. YES! This trip had special meaning for me. I was attending KU for 2 years over 14 years ago, and this was the first time I got back to Kansas after so many years. And I was able to meet with one of my classmates not seen for over 14 years, and also one of our former interns, now a CS faculty member at KU. Also I seldom experienced snowy weather these years, and it was snowing on 12/9! What a colorful trip! I'd like to describe one performance result at our lab of a workload that simulates an auditing application. It uses one XML column to store all kind of events in small XML documents. There are 210 XML indexes created on the XML column. For each XML document, there are about 10 indexes that will have keys generated, and the rest does not have hit. This large number of indexes enable efficient diverse queries on the event log. The result is that the overhead of 210 XML indexes caused only 40% degradation compared with only 10 XML indexes that generates XML keys always. This is a pretty good result! We have an APAR PK75613 (overriding PK66218) to improve XML index keygen performance. Applying this will help reduce XML value index overhead in general. By the way, if you'd like to know what APARs to apply for XML features in DB2 9 for z/OS, look into the info APAR II14426. Yes, you almost need nothing to set up before you try out pureXML in DB2 9 for z/OS if you are in DB2 9 NFM and have SPUFI for DB2 9. Here is how: download XMLQuickDemo I just posted in my previous blog entry. And log on to TSO and get to SPUFI, and copy the SQL statements from the download, and run from there. Within the downloaded zip file, statements from the files with a name starting with "1" to "8" can be tried out in sequence from SPUFI without WLM/Java/XSR setup. "ADropTable.CLP" is used to clean up after yourself. Chances are that you will run successfully. Possible problems you may encounter: When you try your own XML documents and queries, especially with large XML documents or results, you may encounter sqlcode: -904 sqlstate: 57011sqlerr Message: UNSUCCESSFUL EXECUTION CAUSED BY AN UNAVAILABLE RESOURCE. REASON 00C900D1, TYPE OF RESOURCE 00000907, AND RESOURCE NAME... which means you need to increase LOBVALA zparm setting, default was 1024KB. DB2 uses LOB functionality for XML data bind-in and bind-out, but no LOB used for XML database storage. DB2 9 XML column type and built-in functions on XML type do not require XML schemas. Therefore, if you don't use XML schemas, you don't need XSR, which requires Java stored procedure setup and WLM setup. Only when you need XML schemas for schema validation or annotated-schema decomposition, you need the additional setup. Once you go through these, you can try to embed SQL/XML queries in your COBOL or PL/I applications. You can just use LOB or VARCHAR host variables to hold XML data. Happy playing with pureXML! No matter how familiar one is with SQL, DB2, and relational data model, the first time getting exposed to XML and XPath, she would likely feel overwhelmed. Getting into the XML world is not a small step. But here I'd like to demystify XPath for SQL people who is just getting started with XML and XPath. I will give you a very gentle introduction to XPath and how you can use it to query XML data in SQL/XML. It is actually very easy to start: we need to understand thatXML data model is a natural but significant extension to the relational model, it features two very powerful structural capabilities - nesting and repeating. First, it allows nesting, to any level (DB2 allows 128 levels of nesting maximum). Nesting is very common in data structures. For example, you want to separate a name into first name and last name, then you have two levels. You can view a table to have three levels: table level, row level, and column level. When you reference a column with tablename.columnname, you actually use tablename to reference each row in the table, then use the column name to get to a column in the row. In most programming languages, you use dot (.) to separate the field names at each level, such as name.firstname. You can have an XML document as simple as name with two sub-elements - firstname and lastname in an XML column like this: <name> <firstname>Guogen</firstname> <lastname>Zhang</lastname></name> Now how can you get the firstname and lastname in the document? XPath is the only solution in SQL: You can use /name/firstname to get the firstname, and /name/lastname to get the lastname. So what's the difference of XPath /name/firstname from name.firstname? You start with a slash (/), and replace the dot with slash. Each slash gets you to the next level. You can use that to get any field or structure in XML data. The second feature is repeating, like an array, but very flexible. Take a simple example, you can list multiple phone numbers in one XML column, and also associate an attribute for each phone number element to tell it's a wired phone or cell phone, or work phone or home phone: <phones name="Guogen Zhang"> <!-- fake numbers --> <phonenumber type="home">408-555-1234</phonenumber> <phonenumber type="cell">408-555-2345</phonenumber> <phonenumber type="work">408-463-2012</phonenumber></phones> To get a specific phone number, you can use array index notation:/phones/phonenumber[2]. [2] is a short-hand for [fn:position()=2]. You can generalize this predicate to other search condition: /phones/phonenumber[@type ="cell"]. This will get the second phone number also. Here @type is to refer to attribute "type". Attributes are those things in the start tag of an element (between < and >). And you use quotes around "cell" to mean a string comparison. In contrast, for a numeric comparison you would use numeric values without quotes. For example, size = 5. Two more commonly seen "operators" are "//" and ".." (abbreviated axes). "//" is to look at any level under a certain level (roughly descendant-or-self). For example, //phonenubmer is looking for phonenumber in the entire document. ".." is going back up one level (parent axis). Now you should know why it's called XPath, as it's like file paths in a hierarchical file system. Now let's look at where XPath is used in SQL/XML. XPath is used in the XMLEXISTS predicate, and the XMLQUERY and XMLTABLE functions. For example, the following query retrieves the quantities of "Baby Monitor" items from purchase order documents: SELECT XMLQUERY('declare namespace ipo="";/ipo:purchaseOrder/items/item[productName = "Baby Monitor"]/quantity' PASSING XMLPO)FROM PURCHASEORDERSWHERE XMLEXISTS('declare namespace ipo="";/ipo:purchaseOrder/items/item[productName = "Baby Monitor"]' PASSING XMLPO)# In the above, we assume we have a table PURCHASEORDERS containing XMLPO column of type XML. In XMLQUERY and XMLEXISTS, PASSING is the keyword to pass the XML column from the SQL world into the XML (XPath/XQuery) world. The XMLEXISTS in the above example is to search for purchaseOder with an item whose productName is "Baby Monitor", while XMLQUERY is a scalar function to extract quantity for the baby monitor items. XMLEXISTS is the only main predicate on XML data in the SQL world (the other predicate on XML is IS [NOT] NULL). You cannot use other SQL comparisons on an XML value, but you get many comparisons within the XML (XPath) world. Also in the above query example, I illustrated the use of XML namespaces. To see purchase order document examples and more query examples, see the XMLQuickDemo I referenced in the previous two blog entries. Isn't XPath easy to start with? For more details and tutorials on XPath, you can Google "XPath tutorial" to get very good web sites on the top of the list, such as w3schools.com and zvon.com. Try some examples, you will soon become comfortable with XML and XPath! (well - you can pick up the terminology gradually.) If you are a System Programmer or System DBA, you would find nothing significant that is really new in pureXML for DB2 9 for z/OS, thanks to our architectural design principle: leveraging mature optimized infra-structure in DB2 for XML data management. We use familiar table spaces (pages and records), indexes, buffer pools, and locking scheme for XML data, and provide DBAs the same utilities and tools to administrate the familiar objects, even though XML data model is hierarchical and even though we provide application developers "revolutionary" new weapons (SQL/XML with XPath) to process XML data in DB2. If this is the first time you looked at XML in DB2 9 for z/OS, you can take a quick look at XMLQuickDemo, where the second CLP 2ListObjects.CLP finds the DB objects involved in storing an XML column data, for background. You probably would be a bit surprised (and feel relieved?!) when you realized (or you are told now) that there are no new utilities specific for XML, and little setup to start with XML in DB2 9 for z/OS. I've looked again and again to see what's new specific to XML. I've found the following list. In the following, I will briefly describe what is new in pureXML for system programmers and DBAs, and what remains the same. Setup and Configuration As I indicated in a previous posting, getting started with pureXML requires almost nothing to set up once you get to DB2 9 for z/OS NFM. You probably only need to take care of some authorization issues, business as usual. When the application folks get serious, you may need to take care of the zparms that are related to XML: XMLVALA, XMLVALS, LOBVALA, LOBVALS - these four are related to virtual storage limit. LOB zparms are involved because LOB manager is used for XML data bind-in and bind-out. Also the default buffer pool for XML data (BP16k0) needs to be changed. XML Schema Repository (XSR) setup is only needed if your applications use schema validation or annotated-schema-based decomposition. Only then will you need to take care of Java stored proc and WLM setup. You would deal with SDSNLOAD and SDSNLOD2 also. Utilities and Tools No new utilities for XML, and no new tools for XML performance monitoring. You use all the existing and familiar utilities and tools to cover XML objects. You include XML objects using LISTDEF for backup and recovery operations. Some minor restrictions may apply to XML objects in some utilities. XML performance problems can be analyzed through accounting traces and performance traces. Business as usual. XML indexes and XPath, and Access Types Indexes are critical for query performance. No difference for XML queries, they require XML indexes. This is something new to application DBAs and architects. XPath is used to index XML data that is searched frequently. Even the new XPath is involved, the index infra-structure remains the same. One XML document may generate zero, one, or more index entries for an XML index. Also some minor new access types are introduced when using XML indexes. They are DX/DI/DU for DOCID list access (single index), DOCID list ANDing, and ORing. You still get "R" for DOCSCAN evaluation of XMLEXISTS predicate. XML Schema Registration In case your applications would like to use XML schemas, you need to set up XSR, and register XML schemas. Something new to application DBAs. There are tools to help, such as IBM Data Studio, in registering XML schemas. If you just want to use a simple tool, I recommend CLP (Command Line Processor). I'd like also to emphasize that knowledge in XML, XPath, and SQL/XML will make you much more valuable at this SOA age, and in this tough economy. Take advantages of XML and pureXML in DB2 as many other people do! Let me know if you have any questions and concerns.. To make it easier for DB2 for z/OS friends to find pureXML information specific to DB2 for z/OS, we've just created a DB2 for z/OS pureXML wiki page under DB2 XML wiki page to list z/OS-specific information. It may contain cross-references to other areas, and has initial content right now. We expect the content to accumulate over the time. In the new wiki page, I've just uploaded DB2 9 for z/OS pureXML podcast series, recorded by Guogen Zhang (that's me :-) ) in 2008. This is a 10-part introductory podcast series on pureXML business value and technical knowledge. The first part covers overview and business value, and the follow-on 8 parts cover different aspects of pureXML, from query, schema, to utilities and performance. The last part talks about best practices. Each part is about 10-15 minutes. The podcast is in mp3 audio format together with transcripts in PDF. Give us feedback on whether this introductory podcast series is useful to you. If so, we will prepare follow-on podcast for more advanced topics of your interests. Thank you. Check back the DB2 for z/OS pureXML wiki page often. You never know what will pop up there. One of the key differences of XML from the Large Object (LOB) types in DB2 is that XML data can be indexed. The XML indexes supported in DB2 9 are also called XML value indexes, and queries on XML can use these indexes for performance. An XML index example An XML index is used to provide a mapping from a node value to its location. An XPath expression is required to specify the nodes within a document to be indexed. Two data types are supported for XML indexes in DB2 9 for z/OS: DECFLOAT for numeric values and VARCHAR(n) for string values. For example, the following CREATE INDEX DDL creates an XML index on TransRefGUID of the REQUESTXML column of ACORD data, as VARCHAR(24): CREATE INDEX ACORD.ACORDINDEX1 ON ACORD.REQUEST(REQUESTXML)GENERATE KEYS USING XMLPATTERN'declare default element namespace ""; /TXLife/TXLifeRequest/TransRefGUID' as SQL VARCHAR(24) DB2 will take the value of /TXLife/TXLifeRequest/TransRefGUID as the key, and map to its logical location (DOCID, NODEID), and physical location (RID), using the existing B+-tree index infrastructure.This index can be used for queries that search on TransRefGUID. For example, the following XMLEXISTS predicate can potentially use this index: XMLExists('declare default element namespace ""; /TXLife/TXLifeRequest[TransRefGUID="2004-1217-141016-000012"]'PASSING REQUEST.REQUESTXML) From queries, it is easy to figure out what kind of XML indexes can speed up the queries - by concatenating path steps from within a predicate to the steps outside the predicate, and using data types consistent between the queries and indexes. Before explaining the index access procedure, I'd like to review the basic XML storage scheme so you can understand it better. Basic XML storage scheme in DB2 9 for z/OS The following picture depicts the high-level storage scheme for XML data. At high-level, the XML data is stored in a separate table space, just like LOB data. The real XML data is stored in the XMLDATA column of the internal XML table. It contains the hierarchical data in records that can fit in 16KB pages. In order to support free movement of data records, logical links using NODEIDs are used. That's why we need a NODEID index to link records for a document. Similarly, since utilities, such as REORG, can be applied to the base table space and the XML table space independently, XML indexes do not contain base table RIDs, but XML table RIDs. In order to get to base table rows from XML indexes, we need the DOCID index on the base table (see below). That's why DB2 always creates a DOCID index on the base table and a NODEID index on the XML table as part of the storage scheme, although they are for totally different purposes. By XML indexes, we refer to XML value indexes created by users. Basic XML index access plans If you use EXPLAIN for a query and select some key columns from the PLAN_TABLE, you will see some new access type in the ACCESSTYPE column for SQL/XML queries. They are the following: If you see "R" (R-Scan) in the ACCESSTYPE for a table with an XMLEXISTS predicate, then DOCSCAN is applied for the XML column. No new type was introduced for the scan. Here is an example for an index ANDing plan: +---------------------------------------------------------------------+ | PLANNO | ACCESSTYPE | MATCHCOLS | ACCESSNAME | MIXOPSEQ | +---------------------------------------------------------------------+1_| 1 | M | 0 | | 0 |2_| 1 | DX | 1 | ACORDINDEX2 | 1 |3_| 1 | DX | 1 | ACORDINDEX1 | 2 |4_| 1 | DI | 0 | | 3 | +---------------------------------------------------------------------+ The following diagram illustrate the process of using the index ANDing plan. Step 4 is where a DOCID index is always used. These are the basic XML index access plans. For example, NODEID and RID from XML indexes are not used for queries today. We are enhancing plans for better query performance, so expect more methods in the future. Somethings specific to XML indexes The same principles for relational indexes apply to XML indexes, such as create indexes only needed by queries and use REBUILD INDEX. The following are some unique features of XML indexes: To produce consistent query results with or without XML indexes, DB2 tries to tolerate cast errors during XMLEXISTS predicate numeric comparison. For example, if a node size contains "XL", comparison [size > 10] will tolerate the "XL" value, which is equivalent to evaluate to false. DECFLOAT is used instead of DOUBLE for the numeric index type due to its precision. Date and time are not yet supported, but you can use string indexes if you use ISO format (which is required in XML) without timezone or always use the same timezone for the data, and use string comparison in the queries to search the documents. If you never search inside XML documents, but get XML data in and out as a whole, you probably don't need to use the XML type, since VARCHAR or VARBINARY or LOB types can serve the purpose. To summarize, XML indexes use XPath to identify nodes to be indexed, and can be used for queries with XMLEXISTS and XMLTABLE predicates. XML Schema is a W3C recommendation to specify "schema"s for XML data, used to put constraints on otherwise extremely flexible XML data. The constraints include basic data types, structures, occurrences, uniqueness, and referential integrity, etc. DB2 9 for z/OS pureXML provides XML schema validation for XML data through a user-defined function (UDF) called DSN_XMLVALIDATE. The following provides some guidelines in using XML schema validation. INSERT INTO MYTABLE VALUES(10, XMLPARSE(DOCUMENT DSN_XMLVALIDATE(xmllob, 'SYSXSR.PURCHASEORDER') ) ); Contact us if you have any questions related to XML schema validation.
https://www.ibm.com/developerworks/mydeveloperworks/blogs/purexml/tags/purexml?order=asc&maxresults=15&sortby=0&lang=en
CC-MAIN-2015-11
refinedweb
4,286
61.87
public class Covariance { public static double sampleCovariance(double[] x, double[] y) { // means of each data set double xMean = mean(x); double Mmean = mean(y); double result = 0; for(int i = 0;i < x.length; i++) { // sum the product of the deviations with the means result += ((x[i] - xMean) * (y[i] - yMean)); } // divide by data size - 1 to get the estimated covariance from a sample result /= (x.length - 1); return result; } // Get the mean of a data set private static double mean(double[] data) { double sum = 0; for (int i = 0; i < data.length; i++) { sum += data[i]; } return sum / data.length; } public static void main(String[] args) { double[] x = { 2, 3, 4, 5, 6, 8, 10, 11.53542 }; double[] y = { 21.05, 23.51, 24.23, 27.71, 30.86, 45.85, 52.12, 55.98 }; double result = Covariance.sampleCovariance(x, y); System.out.println("Sample Covariance = " + result); } } // OUTPUT // Sample Covariance = 46.45166951071428 Calculate the Sample CovariancePage 1 of 1 0 Replies - 1795 Views - Last Post: 03 April 2012 - 05:58 AM #1 Calculate the Sample Covariance Posted 03 April 2012 - 05:58 AM Description: See Example Usage in snippetIn probability theory and statistics, covariance is a measure of how much two random variables change together. More information - Page 1 of 1
https://www.dreamincode.net/forums/topic/364072-Calculate-the-Sample-Covariance/
CC-MAIN-2019-43
refinedweb
212
54.73
Contains the event distributor. More... #include "gui/core/event/dispatcher.hpp" #include "gui/core/event/handler.hpp" #include "sdl/point.hpp" #include "video.hpp" #include <string> #include <vector> Go to the source code of this file. Contains the event distributor. The event distributor exists of several classes which are combined in one templated distributor class. The classes are closely tight together. All classes have direct access to each others members since they should act as one. (Since the buttons are a templated subclass it's not possible to use private subclasses.) The mouse_motion class handles the mouse motion and holds the owner of us since all classes virtually inherit us. The mouse_button classes are templated classes per mouse button, the template parameters are used to make the difference between the mouse buttons. Although it's easily possible to add more mouse buttons in the code several places only expect a left, middle and right button. distributor is the main class to be used in the user code. This class contains the handling of the keyboard as well. Definition in file distributor.hpp.
http://devdocs.wesnoth.org/distributor_8hpp.html
CC-MAIN-2018-26
refinedweb
182
60.82
tag:blogger.com,1999:blog-26670947047263399562016-02-17T16:14:04.283-08:00Daravuth BlogMy Personal Blog | News Sharing | Blogging | TechnologyAdminnoreply@blogger.comBlogger38125blogspot/Ecoou Friend 藤井康治 Fujii Kouji in J />Adminnoreply@blogger.com0tag:blogger.com,1999:blog-2667094704726339956.post-22845924908835646392012-03-12T08:44:00.000-07:002012-03-12T08:53:32.101-07:00The Reason why study Japanese by Nao Daravuth<div style="text-align: justify;"></div><div class="MsoNormal" style="font-family: Times,"Times New Roman",serif; text-align: justify;"><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="" width="320" /></a></div><div style="font-family: Arial,Helvetica,sans-serif;"><span style="font-size: small;">J.</span></div><;">Finally, Study Japanese I will make friends from many different countries in Japan. If we don’t understand something we can help each other and especially Japanese people are often extremely interested in meeting and becoming friends with foreigners as well. </span></div>Adminnoreply@blogger.com2tag:blogger.com,1999:blog-2667094704726339956.post-17124040525140177612012-02-07T23:40:00.000-08:002012-02-07T23:47:46.945-08:00Why Do I Study English?<div style="text-align: justify;"></div><div class="MsoNormal" style="text-align: justify;"><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="" width="320" /></a></div>All human beings were born in the world need to study, although they find it difficult. Actually, they have to study not only subjects in a school system but also everything around them. In addition, they try to learn so as to reach their future goals or to make their dreams become true someday. For me, I decide to choose one subject to learn so that I can get success in my life. In fact, it is English, and there are many reasons why I study English such as, I need it to study at university, it’s the international language for most people, I can understand more about the world, and it helps me to get a good job.</div><div class="MsoNormal" style="text-align: justify;"><br /></div><div class="MsoNormal" style="text-align: justify;">First,.</div><div class="MsoNormal" style="text-align: justify;"><br /></div><div class="MsoNormal" style="text-align: justify;".</div><div class="MsoNormal" style="text-align: justify;"><br /></div><div class="MsoNormal" style="text-align: justify;".<br /><br /><br /><div class="MsoNormal".</div><div class="MsoNormal"><br /></div><div class="MsoNormal".</div></div>Adminnoreply@blogger.com3tag:blogger.com,1999:blog-2667094704726339956.post-19600878885360033232011-11-17T03:43:00.000-08:002012-02-07T23:47:31.686-08:00My Opinions about Reading and Writing<a href=""><img alt="" border="0" id="BLOGGER_PHOTO_ID_5675932338956764402" src="" style="cursor: hand; cursor: pointer; display: block; height: 265px; margin: 0px auto 10px; text-align: center; width: 400px;" /></a><br /><div class="MsoNormal" style="text-align: justify;">There are the two most important of academic are writing and reading. Some subjects are required me need to read and write a lot. For example, when I Study about cultural or Khmer Study, I need to read and then I need to write about something what I know. There are a lot of importance of reading and writing, so I will try to explain about the importance of reading and writing based on my experience.</div><div class="MsoNormal" style="text-align: justify;"><o:p> </o:p></div><div class="MsoNormal" style="text-align: justify;">First of all, I would like to tell about the importance of reading. For me, reading is one activity of my learning. I can make a critical thinking after I read. It can make me to know what is right and what is wrong, and I can know a lot of vocabularies, improve my reading skill, and understand the meaning in the text quickly. Therefore, reading is my leisure activity. I usually read books when I have free time. Reading can cut down the stress and sadness out of my mind. For instance, reading comic books have more fun. Furthermore, the more I read, the more I enjoy myself. When I read a lot, I will be able to make good decisions. It is useful to read more in order to exercise my mind, get information around the world, Know instructions and directions, and plan for my future. Therefore, reading is a good activity for me to develop my mind, my ideas, and my knowledge.</div><div class="MsoNormal" style="text-align: justify;"><o:p> </o:p></div><div class="MsoNormal" style="text-align: justify;">Second, I would like to describe about the importance of writing. Writing is also important for me. I can’t remember all lessons, so I have to write them down on the books. It’s easy to remember, and when I forget some lessons, I can take and review it. It is the benefit of writing. In addition, I can share my ideas with my writing, and I can describe about something that I like. I write because I want to tell how I feel and how I think. Also, writing is a part of fun. I write to enjoy myself. When I write a lot, I can know grammar structures and words spelling. Writing exercises my brain to show opinions, ideas and feeling. Almost every day, I like to write about my special activity into my books. I write in order to show my feeling to my parents, my relative, and my friends when I miss them. I write because I want to distribute my knowledge to others. Indeed, writing is the process that I can improve my writing skill and my studies.</div><div class="MsoNormal" style="text-align: justify;"><o:p> </o:p></div><div class="MsoNormal" style="text-align: justify;">In conclusion, reading and writing are the two important components of academic. They are also very important for me to according to the reasons above. There are a lot of advantages that I can get from reading and writing. I will have enough ability to read and write very well, if I try to do it. So, it is motivating me to study and I wish to be a good writer and reader.</div>Adminnoreply@blogger.com2tag:blogger.com,1999:blog-2667094704726339956.post-164612335312662562011-05-18T17:27:00.000-07:002011-05-18T17:40:17.541-07:00What is Real Hide IP and How to use it<a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 214px;" src="" alt="" id="BLOGGER_PHOTO_ID_5608212388714104242" border="0" /></a><br /.<br /><br />Now we can use software instead of these proxy websites. Type of the software has a lot, but now I choose one that Free and easy to use is "Real Hide IP" <a href="">Click Here to Download.</a><br /><br />After downloadind, first extract from file rar put it in C:\Program file\ and use RealHideIP.exe is Desktop shortcut.Adminnoreply@blogger.com1tag:blogger.com,1999:blog-2667094704726339956.post-7352955134478999152011-05-17T08:58:00.000-07:002011-05-17T09:03:31.519-07:00Parents are My First Teachers<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 214px;" src="" alt="" id="BLOGGER_PHOTO_ID_5607715483515572130" border="0" /></a><br /><span style="font-size:12pt;"><span _msthash="2546284" _mstchunk="true" lang="en">Everyone always have many teachers in their life. </span><span _msthash="12092899" _mstchunk="true" lang="en">It is not only the teachers in the class, but it is including classmate, friends in the work, neighbor, boss, and especially our parents. </span><span _msthash="6896279" _mstchunk="true" lang="en">Parents are the first teachers in our lives, and they always give many advices for us.</span><br /></span> <p class="MsoNormal"><span style="font-size:12pt;"><span _msthash="8633287" _mstchunk="true" lang="en">Our beginning of parents to teach us while we was born and taught us, they shared experiences in their lives to us. </span><span _msthash="3834402" _mstchunk="true" lang="en">On the other hand they teach us also language and other knowledge.</span><br /></span></p> <p class="a"><span _msthash="2399566" lang="en" style="font-size:12pt;">So, our parents are the best teacher in our life.</span></p>Adminnoreply@blogger.com3tag:blogger.com,1999:blog-2667094704726339956.post-37953923645943379962011-02-26T19:05:00.000-08:002011-02-26T19:16:05.275-08:00Cambodia Marriage Custom<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 261px;" src="" alt="" id="BLOGGER_PHOTO_ID_5578202630996161666" border="0" /></a><br /><div class="modfloat full"><div id="mod_13120854" class="module moduleText color0"><div style="" class="txtd" id="txtd_13120854"><p>Traditional.</p> <p.<br /></p><div id="mod_13120822" class="module moduleText color0"><div style="" class="txtd" id="txtd_13120822"> <p.</p> <p.</p> <p.</p> <p.</p></div></div></div></div></div>Adminnoreply@blogger.com4tag:blogger.com,1999:blog-2667094704726339956.post-31056894360004886782011-02-24T22:01:00.000-08:002011-02-24T22:04:53.922-08:00How To Inspire Exercise<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 300px; height: 400px;" src="" alt="" id="BLOGGER_PHOTO_ID_5577503904465401138" border="0" /></a><br /><div id="mod_13101510" class="module moduleText color0"><div style="" class="txtd" id="txtd_13101510"><p>Exercise is very important for us. Nowadays, scientific discipline has confirmed the connection, with main show that some bodies who take effective life styles are less likely to die early, or to see major illnesses such as heart disease, diabetes and colon cancer.</p> <p>You need to exercise everyday, you can significantly improve your general health, welfare and excellent of living. And the wellness does good of physical exercise can be reached by virtually everybody, careless of age, sexuality, work or physical ability.<br /></p><div id="mod_13101509" class="module moduleText color0"><div style="" class="txtd" id="txtd_13101509"><p>Make Time to exercise daily. Somehow, you have got to get the program, or you are starting to take medical troubles. And it will need much more time to consider with them than it requires practicing physical exercise. </p> <p><strong>I. Arouse every morning.</strong> </p> <p> Make your plan to exercise every morning. And making the road at down does not mean you will miss out on rest. Investigators at Physical study received that humans who began physical exercise in the sunrise slept better than they had before they began working out. </p> <p><strong>II. House work. </strong></p> <p> Don't just stay at home you must spend time to exercise. And always give priority to activities that serve the greatest aim. Those affecting work, family, and physical exercise. For instance, twenty-five minute walk a twenty-five minute sitcom every time. It’s that easy! </p> <p><strong>III. Multitasking. </strong></p> <p> Do house work. However most of us are either inactive or simply minimally involved. Confusion may keep many couch potatoes from getting into shape. But it’s never too late. No matter what your personal status, you can agree physical exercise into your life-style. At home, playing with your partner or children is quality time to exercise also. </p> <p>Schedule your physical exercise and track your advance, and you will soon be paid back with better health, more energy, and a feel of welfare that will keep you affected for years to come. </p> <p style="font-weight: bold;">Indeed, beginning physical exercise and save a life, your life!</p></div></div><br /></div></div>Adminnoreply@blogger.com0tag:blogger.com,1999:blog-2667094704726339956.post-44387107692897988042010-11-26T23:01:00.000-08:002010-11-26T23:20:31.160-08:00Blog Profits Blueprint<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 195px; height: 224px;" src="" alt="" id="BLOGGER_PHOTO_ID_5544125881437796066" border="0" /></a><br />Yaro gave a free report on how he made money from blogging. Download and read it today. It's a free report, you don't have to pay anything. If you want to become a successful blogger you must to read this E-book "Blog Profits Blueprint" Written by Yaro Starak Download <a href="">Here</a>Adminnoreply@blogger.com1tag:blogger.com,1999:blog-2667094704726339956.post-4085934040710799162010-11-21T20:12:00.000-08:002011-02-15T20:42:18.670-08:00Walk on Water Festival 2010These are some pictures of water festival in Cambodia 2010<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 285px;" src="" alt="" id="BLOGGER_PHOTO_ID_5542223653739007074" border="0" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 281px;" src="" alt="" id="BLOGGER_PHOTO_ID_5542223571126733842" border="0" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 287px;" src="" alt="" id="BLOGGER_PHOTO_ID_5542223361588097570" border="0" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 282px;" src="" alt="" id="BLOGGER_PHOTO_ID_5542223264892574466" border="0" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 399px; height: 266px;" src="" alt="" id="BLOGGER_PHOTO_ID_5542223008687145410" border="0" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 254px;" src="" alt="" id="BLOGGER_PHOTO_ID_5542222656259581378" border="0" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 398px; height: 178px;" src="" alt="" id="BLOGGER_PHOTO_ID_5542222225254273458" border="0" /></a>Adminnoreply@blogger.com0tag:blogger.com,1999:blog-2667094704726339956.post-34685480964074113652010-11-20T19:53:00.000-08:002010-11-26T23:37:18.179-08:00Second day Festival of boat racing in Cambodia<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 300px;" src="" alt="" id="BLOGGER_PHOTO_ID_5544129187594236210" border="0" /></a><br /><br /><p style="text-align: justify;">For the people of Cambodia, the water Festival (Bonn Om Teuk) in Phnom Penh is the most magnificent traditional festival. This morning, I was driving to the riverfront with my family members to see the view of the second day of water festival. And there are also different kinds of people in the period of this time. I noticed that happy goers always lost their valuable jewelry and money and sometimes their kids also lost their way because their parents are careless of their kids. By the way, we really don’t determine who is good or bad person. And million of people coming for sure in the afternoon as I can feel more and more people are coming straight to the river side now.<br /></p><p style="text-align: justify;">During the water festival, in order to avoid losing your valuable belongs and money, you should not wear it because we do not know who is thief or not. During this time, there are lots of people and it is the good opportunity for thief to steal your valuable belongs and money in your pocket.</p><div style="text-align: justify;"> </div><p style="text-align: justify;">The first thing, I think that all parents should be careful of their children and they have to hold their kids’ hands all the time. In addition, you should write your <span class="IL_AD" id="IL_AD3">phone</span> number, address, and your name on a small paper to put in your kids’ pocket in case your kids lose their way accidentally. And then it is easy for authorities or other people help contact you.</p><div style="text-align: justify;"> </div><div style="text-align: justify;"> </div><p style="text-align: justify;">Best wishes and bless to everyone with great time with family and friends during the water festival holiday! Cheers</p>Adminnoreply@blogger.com5tag:blogger.com,1999:blog-2667094704726339956.post-21176341853947842992010-11-18T00:43:00.000-08:002010-11-18T01:06:10.470-08:00Cambodia water festival, Phnom Penh water festival<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 300px; height: 400px;" src="" alt="" id="BLOGGER_PHOTO_ID_5540811293804249762" border="0" /></a><br />Bonn Om Teuk (the Festival of Boat Racing) is an annual boat rowing contest which has become the largest spectator event in Cambodia as well as a national festival. And Most of Cambodian people flock from different provinces and cities into Phnom Penh capital in order to <span class="IL_AD" id="IL_AD3">watch</span> boat racings on Mekong River. <p style="text-align: justify;">Bonn Om Teuk lasts three days so that boats from near and far provinces can join the contest. Therefore the Water Festival Ceremony is the army training to do attest of the army for preparing to do a battle. By the way, this festival is prepared very year to choose Champion of sailing battle, as in Bayon temple, Batteay Chhmar in the Preah Bat Jayvarman VII.</p><p style="text-align: justify;">In Cambodia, there are many statues depicting sailing battles under the leadership of King Jayvarman VII, originally carved on the wall of Angkor Thom temple. Because of this Bonn Om Teuk has become a very important traditional festival in Cambodia, and an opportunity to admire the military exercises of the naval forces.</p> <p style="text-align: justify;">In addition, holding the festival is to mark the changing of the flow of the Tonle Sap River and is also seen as thanksgiving to the Mekong River for providing the country with fertile land and abundant fish. And at the time, the Tonle Sap River reverts to its normal down-stream direction.</p> <p style="text-align: justify;">In Cambidia, during the Water Festival, most of Cambodian people come to Phnom Penh but during the Khmer New Year and P’chum Ben Day, those flock to provinces in order to meet their families and relatives. And it is so quiet in Capital. I am waiting for thisfestival….!!!</p><p style="text-align: justify;">Oh, I almost forgot the Water Festival itself – which takes place on about 1.7km of river (the competition course) with over 400 rowing boats and approximately 20,000 rowers from all the different provinces across the country. </p>Adminnoreply@blogger.com1tag:blogger.com,1999:blog-2667094704726339956.post-16624310275629294722010-06-22T19:47:00.000-07:002010-06-22T19:53:13.083-07:00English Proverb part 2<div style="text-align: justify;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 196px; height: 228px;" src="" alt="" id="BLOGGER_PHOTO_ID_5485796426789141442" border="0" /></a><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />51. Trouble brings experience and experience brings wisdom.<br />52. Half a loaf is better than none.<br />53. Have lost the goods is not so important than have lost the respect.<br />54. The more one gets, the more one wants.<br />55. Poor in wealth is better than poor in opinion.<br />56. Live now, pay later.<br />57. If you don't help row, don't slow the boat down.<br />58. If you love someone don't forget yourself.<br />59. If a man thinks himself great, he can not know his own wrongdoings.<br />60. Honesty is is the best policy.<br />61. Speech is the picture of mind.<br />62. Life without industry is guilt, industry without art is brutality.<br />63. A mere madness to live like a wretch and die rich.<br />64. No man can be the patriot on an empty stomach.<br />65. The unexpected always happen.<br />66. Diligence is the mother of good fortune.<br />67. One law for the rich and another for the poor.<br />68. A poor spirit is poorer than a poor purse.<br />69. In prosperity caution, in adversity patience.<br />70. Love is ever the beginning of knowledge as fire of light.<br />71. It's love that makes the world go round.<br />72. Reason is the guide and light of life.<br />73. Living without an aim is like a sailing without compass.<br />74. Reason teaches young men to lie well, and prepare old men to die well.<br />75. Life is half spent before we know what it is.<br />76. Who lives by the sword shall die by the sword.<br />77. A clear conscience can bear any trouble.<br />78. Fire is test of gold, adversity of strong man.<br />79. The enemy of the best is not the worst but the good enough.<br />80. If you can not speak good of men, refuse to speak ill of them.<br />81. Difficulties strengthen the mind, as labor does the body.<br />82. Little is done where many command.<br />83. Better untaught than ill taught.<br />84. Philosophy is the microscope of thought.<br />85. Knowledge is power.<br />86. Don't be disappointed with fate, don't make protest against the predestination.<br />87. Knowledge is a treasure, but practice is a key to it.<br />88. If you want to reduce poverty and ignorance, you must go to bed late and get up early.<br />89. If I advance, follow me! If I retreat, cut me down! If I die, avenge me!<br />90. We must leave exactly on time... From now on everything must function to perfection.<br />91. The rich experience of history teaches that up to now not a single class has voluntarily made way for another class.<br />92. Belief is harder to shake than knowledge.<br />93. The broad mass of a nation... Will more easily fall victim to a big lie than a small one.<br />94. It is a mistake to look far ahead. Only one link it the chain of destiny can be handled at a time.<br />95. Politics is more dangerous than war, for in war you are only killed once.<br />96. The empire of the future is the empire of the mind.<br />97. I'm not built for academic writings. Action is my domain.<br />98. There are limits to self-indulgence, none to self-restraint.<br />99. Don't cut my feel, don't kill my love.<br />100. Being a leader, you must be aware of poor people's illness.<br />101. We need to have a sense of past and future, not just present.<br />102. Everything can be solved, but dying can't be solved.<br />103. Struggle in life will bring a bright future.<br />104. Courage is rightly esteemed the first of human qualities because it is the quality which guarantees all others.<br />105. To a man with an empty stomach food is god.<br /><br /></div>Adminnoreply@blogger.com1tag:blogger.com,1999:blog-2667094704726339956.post-38571523980569459292010-06-22T02:20:00.000-07:002010-06-22T19:56:32.848-07:00English Proverb<div style="text-align: justify;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 400px; height: 308px;" src="" alt="" id="BLOGGER_PHOTO_ID_5485527817645789842" border="0" /></a><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />1. Prevention is better than cure.<br />2. No interference is the sinew of peace.<br />3. For loan loses both itself and friend.<br />4. there is no success without patience.<br />5. Blood is thicker than water.<br />6. Good children reflect their parents, good students reflect their teachers.<br />7. Cut your coat according to your cloth.<br />8. Do not throw rice and pick up husks.<br />9. Do not whisper in front of guest.<br />10. Don't judge a man by his looks.<br />11. Still waters run deep.<br />12. Lock the barn door after the horse is stolen.<br />13. Charity begins at home.<br />14. Look before you leap.<br />15. There is no perfectness in the world.<br />16. None so blind as those who will not see.<br />17. There's none so deaf as those who will not hear.<br />18. No one is too old to learn.<br />19. No one is perfect.<br />20. No man is born wise.<br />21. No guts no glory.<br />22. A teacher can never truly teach unless he is still learning himself.<br />23. Out of sight, out of mind.<br />24. Associating with a blacksheep always bring sorrow but a wise man always brings happiness.<br />25. Great oaks from little acorns grow.<br />26. There's safety in numbers.<br />27. Think today and speak tomorrow.<br />28. Learn from your mistakes.<br />29. Love all, trust a few, do wrong to none.<br />30. When in Rome do as the Romans do.<br />31. Jack of all trades and master of none.<br />32. Experience without learning is better than learning without experience.<br />33. To know many things is not as good as to be an expert in one field.<br />34. Don't blow your own horn.<br />35. Let bygones be bygones.<br />36. A miss is as good as a mile.<br />37. Life is struggle.<br />38. No success without hard work.<br />39. Fingers are all thumbs.<br />40. Strike while the iron is hot.<br />41. Let sleeping dogs lie.<br />42. You can not use a winnowing basket to hide a dead elephant.<br />43. Little by little, a bird can make its nest. Drop by drop water fills the container.<br />44. When wine is in, wisdom is out.<br />45. Better buy than borrow.<br />46. Even if they are sagacious but they would become darkness.<br />47. All gotten gains never proper (Thinks wrongly obtained don't bring success or happiness).<br />48. All work and no play makes Jack a dull boy.<br />49. Never put off till tomorrow what you can do not today.<br />50. When a man is angry, he can not be in the right.<br /><br /><a href="">More Proverb</a><br /><br /><br /></div>Adminnoreply@blogger.com1tag:blogger.com,1999:blog-2667094704726339956.post-85002595511886864922010-06-18T22:25:00.000-07:002010-06-18T22:29:42.755-07:00E-book: Ultimate Blog Profit Model<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 210px; height: 270px;" src="" alt="" id="BLOGGER_PHOTO_ID_5484352046645156210" border="0" /></a><br /><div style="text-align: justify;">I am one of readers of John Chow blog which lately declared to free his online report. I don't understand this E-book in details so far after downloading it. So I will spend my free times to read the full report next time. Now I want to give you download link of this E-Book If one of you needs to read about "Ultimate Blog Profit Model" , you can <a href="" target="_blank">download it Here</a>.<br /><br /><br /><br /></div>Adminnoreply@blogger.com1tag:blogger.com,1999:blog-2667094704726339956.post-82140358379828594762010-06-18T22:19:00.000-07:002010-06-18T22:25:20.346-07:00Download Free E-book: The Roadmap to Become A Blogger<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 200px; height: 297px;" src="" alt="" id="BLOGGER_PHOTO_ID_5484351031435169298" border="0" /></a><br />This is the third E-book for you to download. If you want to become a successful blogger you must to read this E-book " Roadmap to Become a Blogger" Written by Gideon Shalwick and Yaro Starak. <a href="" target="_blank">Download Here</a>Adminnoreply@blogger.com2tag:blogger.com,1999:blog-2667094704726339956.post-11233153013226396032010-06-17T19:24:00.000-07:002010-06-17T19:49:52.631-07:00Temples in Cambodia<div style="text-align: justify;">Ang<br /></div><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 350px; height: 233px;" src="" alt="" id="BLOGGER_PHOTO_ID_5483936435093691122" border="0" /></a><br /><br />483935967806944066" border="0" /></a.<br /></div><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 320px; height: 213px;" src="" alt="" id="BLOGGER_PHOTO_ID_5483934503800700482" border="0" /></a><div style="text-align: justify;".<br /><br /></div>Adminnoreply@blogger.com4tag:blogger.com,1999:blog-2667094704726339956.post-35767000282184531162010-06-15T18:06:00.000-07:002010-06-16T20:55:18.879-07:00FIFA World Cup 2010 Live<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 400px; height: 262px;" src="" alt="" id="BLOGGER_PHOTO_ID_5483172975002314338" border="0" /></a><br /><br /><br /><div style="text-align: justify;"><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />Just staying and connected to the internet you can access your computer and watch football online. For many reasonableness like checking the world cup football match for watching it is very easy for those who don't need to sit in front of their Television at house or someplace in building. With a small laptop computer you can honest make it in seeing it on internet everywhere.<br /><br />Only for some reasons that paying online. The internet stream live show is costing a bit only responsible and convenience for understand individuals or as referred previous. But in some countries, many people cannot view the world cup connected by internet yet but they can watch it on Television and it is the starting ceremonial occasion began around half an hour equally of i'm publishing this article. I like football game since I was young so until today it is about ten years as a football admirer just I don't play with football game though.<br /><br />For the end I hope you will enjoy with this match.<br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 400px; height: 225px;" src="" alt="" id="BLOGGER_PHOTO_ID_5483585812804152146" border="0" /></a><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /></div>Adminnoreply@blogger.com2tag:blogger.com,1999:blog-2667094704726339956.post-13741618546865806282010-06-09T01:57:00.000-07:002010-06-09T02:16:08.417-07:00How to write good blog posts more quickly<div style="text-align: justify;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 350px;" src="" alt="" id="BLOGGER_PHOTO_ID_5480699698914132178" border="0" /></a><br /><br />Its well known that the best way to make your Blog or Website popular is by writing good Articles. It is the subject and the primary words that time and again appeal possible readers. Fresh blog or website message literally ways setting something various on any other blog.<br /><br /.<br /><br />Writing is communication and communication is about people, especially if you are trusting to bring available traffic from SEO and social bookmarking sites; as well you will make to practice many keywords research to check what search terms people are typing into their computers.<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 320px; height: 216px;" src="" alt="" id="BLOGGER_PHOTO_ID_5480699523437208738" border="0" /></a><br /.<br /><br /.<br /><br />Finally, you need to know how readers want and be gratitude to your readers for supporting you. The key is to attract your readers is quality not quantity, and it is easier to write about things that you know about, so it's a good idea.<br /><br /><br /></div>Adminnoreply@blogger.com7tag:blogger.com,1999:blog-2667094704726339956.post-7419688290958117462010-06-04T19:49:00.000-07:002010-06-04T20:03:52.795-07:00How to promote your new blog<div style="text-align: justify;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 300px; height: 300px;" src="" alt="" id="BLOGGER_PHOTO_ID_5479119836196357026" border="0" /></a><br />Many new bloggers always want they blog popular but they don't know how to promote it? or how to make blog well-known? It takes long time to make your blog popular, not a day, a week, a month and sometime not a year I think! At the mean time, I would like to share my little knowledge that I used to do it and it is simple ways for new bloggers to practice.<br /><br />I. You must to think about Content: Quality content, writing unique and good content is very powerful for appealing your audiences to read your blog post. If it is nice, the readers will take it and they will take to see it again. And you have to update it on a regular basis if workable. If you practice it, it means that your blog is alive. Readers will arrive to read it any longer and then you have more readers.<br /><br />II. Leave comment: you should take time to visit some blogs and last comment with positive idea or opinion in their blog. At the time, blog admin pass on arrive back to see your blog via your comment. Commenting other blogs is likely to present yourself to them to see you by your comment.<br /><br />III. Request link to some other bloggers: you should contact to other bloggers, even though, they are your friends or not. While you linked their blogs, they will essentially mark and link back to yours in the next. That is the style to aid promote your blog done with link. I ever link to new bloggers to my blog link.<br /><br />IV. Use mail signature: you should practice email signature by attaching your blog's URL to social message.<br /><br />V. Tell friends about your blog: you can tell your friends or relationship about your blog's URL through e-mail, or private or public meeting. And tell them to help comments on your blog post as well if possible.<br /><br />VI. Use blog directory to promote it, you should bring your blog into free blog directories such as or others . It is essentially type of available directories in place to allow members to find and communicate one other.<br /><br />These are the easy way for new beginner to show their blog to the world. Then Again, there are other ways to promote blog. Simply I am looking fresh to advices from master bloggers to contribution with us.<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 334px; height: 228px;" src="" alt="" id="BLOGGER_PHOTO_ID_5479119552297358290" border="0" /></a><br /><br /></div>Adminnoreply@blogger.com2tag:blogger.com,1999:blog-2667094704726339956.post-38120107937312371002010-06-02T21:44:00.000-07:002010-06-16T02:51:07.322-07:00Meaning of Relationship<div style="TEXT-ALIGN: justify"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img id="BLOGGER_PHOTO_ID_5478404571835301298" style="FLOAT: right; MARGIN: 0pt 0pt 10px 10px; WIDTH: 400px; CURSOR: pointer; HEIGHT: 400px" alt="" src="" border="0" /></a>Family and friends are necessary parts of a person's life. Naturally, communicate with another people is very important. They act as companion in our voyage through life. With them we experience and share many things. We learn how to enjoy, hate, laugh, joke, and be a normal functioning human being.<?xml:namespace prefix = o /><o:p></o:p><br /><o:p></o:p></div><p class="MsoNormal" style="TEXT-ALIGN: justify">Everytime with my friends together, we feel happy and hopefull. we almost say or do anything we like without afraid of reproach or embarrassment. It mean that we understand one another very well. I do not feel shy in front of my friends. None of us feel that you are superior or inferior. We are just friends, doing things we like together, studying together, laughing together, eating together and generally having a good time together.<o:p><br /></o:p></p><div style="TEXT-ALIGN: justify"></div><p class="MsoNormal" style="TEXT-ALIGN: justify">Has a lot of good friends is very wonderful. When I have a problem, they can help me. Sometimes when I am a bit richer, they help me spend money. After all what are friends for? They are there to share and enjoy things with. So when we are together, we really let ourselves go. In doing so the bonds of our friendships grow stronger. having good friends is like having treasures rich beyond comparison.<o:p><br /></o:p></p><div style="TEXT-ALIGN: justify"></div><p class="MsoNormal" style="TEXT-ALIGN: justify">But I don't think every thing is good with friends for me. Some friends always invited me to see a midnight film-show or go out at night. That's I would not like to see. When I told my mother about invitation she immediately said that I could not go. I asked why? She said she did not want me to go out so late at night because there had been a spate of criminal activities occurring recently and she did not want me to be exposed to any of these activities.<o:p><br /></o:p></p><div style="TEXT-ALIGN: justify"></div><p class="MsoNormal" style="TEXT-ALIGN: justify">So, it has good and bad points about friends. We should choose friends that we think they are good people. As the saying goes "Associating with a black sheep always bring sorrow but a wise man always brings happiness" that my parent and my teacher always told me. And if we know clearly about this slogan, relation and friendship certainly bring us bright future.</p><p class="MsoNormal" style="TEXT-ALIGN: justify"><br /></p>Adminnoreply@blogger.com1tag:blogger.com,1999:blog-2667094704726339956.post-64114397755547927302010-06-02T21:34:00.000-07:002010-06-02T21:37:21.401-07:00What is the different between giving someone money and showing the way of living?<div style="text-align: justify;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 400px; height: 340px;" src="" alt="" id="BLOGGER_PHOTO_ID_5478401670762991554" border="0" /></a>In our live Money is an important thing in everybody's life. We require money for almost thing in everything that we want. We need money to buy food, to have a place to stay in, to buy dresses, to go to school to sit on a bus...<span style=""> </span><o:p></o:p> </div><p style="text-align: justify;" class="MsoNormal">In other speech we work to get money in order to live. Many of us try to get by with what ever money we have but all of us would like to have more. We experience that money can bring us many luxuries and pleasures. We buy almost any thing so much so we may think it can even buy happiness. That is where we make the blunder and allow money to rule our lives. That is when money goes the root of many evils.<o:p><br /></o:p></p><div style="text-align: justify;"> </div><p style="text-align: justify;" class="MsoNormal">Some people think that giving the way of living to someone is good. Because they know by the saying that "Poor in wealth is better than poor in opinion”. man works for money when she or he gets the way of living from them and then he or she try to follow them, like find a job or other works that he or she can do... After that they can earn the money by his or her-self.<o:p><br /></o:p></p><div style="text-align: justify;"> </div><p style="text-align: justify;" class="MsoNormal">But the other people don't think like this, for someone that poor or very poor, they think that giving money is very important, because it's very necessary for they living, First they need the food, and another things for living. Money can give what they want. For Example if they was ill and very poor and can't do any thing for earn money, so what should they do? That person certainly needs the money first.<o:p><br /></o:p></p><div style="text-align: justify;"> </div><p style="text-align: justify;" class="MsoNormal">So, there are two opinions that different for helping someone, the first opinion they showing the way of living to a poor man and the second they giving money. But by my opinion I think that sometime we must take the first option and sometime we must take the second for practicing. And for the special case we need take both to practice. It's mean that we give money and after that we showing the way of living to them.</p>Adminnoreply@blogger.com0tag:blogger.com,1999:blog-2667094704726339956.post-20219074044366306702010-06-01T19:40:00.000-07:002010-06-02T21:43:52.661-07:00Reasons I chose English for my blog<div style="text-align: justify;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 265px;" src="" alt="" id="BLOGGER_PHOTO_ID_5478002080653623474" border="0" /></a><br /></div><p style="text-align: justify;" class="MsoNormal">In the world, there are many deferent languages that people used for communication with another people in their countries like: English, Chinese, French, Japanese...But English is a language that has popularity and more important than any other languages.<o:p></o:p></p><div style="text-align: justify;"> </div><p style="text-align: justify;" class="MsoNormal"><o:p> </o:p></p><div style="text-align: justify;"> </div><p style="text-align: justify;" class="MsoNormal">The statistics answer the question. About four hundred million people speak English as a first language and another three hundred million use English as a second language. It is the official or semi-official language in a lot of countries and of many international organizations.<o:p><br /></o:p></p><div style="text-align: justify;"> </div><p style="text-align: justify;" class="MsoNormal">For example the International Olympic Committee always holds meeting in English. Air Traffic control and communication at sea around the world is always in English. About seventy five percent of all the letters and texts are in English and eighty percent of all the information in the world's computer is in English.<o:p><br /></o:p></p><div style="text-align: justify;"> </div><p style="text-align: justify;" class="MsoNormal">By the way it is easy for searching blog or website through Search Engine. Because the internet users always use English language for searching.</p><p style="text-align: justify;" class="MsoNormal"><br /></p>Adminnoreply@blogger.com0tag:blogger.com,1999:blog-2667094704726339956.post-88296395290916896452010-06-01T19:26:00.000-07:002010-06-02T21:43:35.975-07:00Life in the city477999797212828498" border="0" /></a><br /></div><p style="text-align: justify;" class="MsoNormal">A city is a place where many people live, work and play in. <st1:cityPhnom Penh</st1:city> is the city in <st1:country-region<st1:placeCambodia</st1:place></st1:country-region>. To accommodate all these people, skyscrapers and super-market are built. Some trees are grown to offer greenery and shade but they are few compared to the thousands of buildings that stretch as far as the eye can look.<o:p></o:p></p><div style="text-align: justify;"> </div><p style="text-align: justify;" class="MsoNormal"><o:p> </o:p></p><div style="text-align: justify;"> </div><p style="text-align: justify;" class="MsoNormal">For most people has works to do, the daily task of going to work and coming home from work is a much practiced and tolerated routine. Beside these jobs, there are hawkers, taxi-drivers, road-side barbers, pavement artists, businessmen, policemen... Many of them do good business and make a great deal of money while some are not so fortunate. Practically all of them seem to be active. Life is a speed. It is a never-ending per suit of money to make ends receive, or to buy a most coveted product or to put into the bank.<o:p></o:p></p><div style="text-align: justify;"><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 300px; height: 400px;" src="" alt="" id="BLOGGER_PHOTO_ID_5477998725598285330" border="0" /></a><br /></div><p style="text-align: justify;" class="MsoNormal">The roads of <st1:city<st1:placePhnom Penh</st1:place></st1:city> are sometimes very fantastic with lots of traffic. Many people drives on them daily. There are lots of cyclos, cars and trucks full of passengers. Often the roads are jammed with cars. People from province can also come to <st1:city<st1:placePhnom Penh</st1:place></st1:city> by train, but buses are safer than trains. Some of the roads are great, but some are getting old now. People don't like to drive on the oldest roads, they want to repair them and also want more traffic lights. With traffic lights, the roads are much safer.<o:p></o:p></p><div style="text-align: justify;"><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 266px;" src="" alt="" id="BLOGGER_PHOTO_ID_5477998127428585826" border="0" /></a><br /></div><p style="text-align: justify;" class="MsoNormal">When night comes, Phnom Penh Becomes alive in a different way. The night-clubs, snooker parlors and message parlors ...open for business. These places are never short of patrons who arrive to release some of the tension built up during the day. These nocturnal activities go on deep into the night. <o:p></o:p></p><div style="text-align: justify;"> </div><p style="text-align: justify;" class="MsoNormal"><o:p> </o:p></p><div style="text-align: justify;"> </div><p style="text-align: justify;" class="MsoNormal">Thus life in the city goes on, filled with excitement and struggle, hopes and dreams.</p>Adminnoreply@blogger.com0tag:blogger.com,1999:blog-2667094704726339956.post-41390434125575184492010-05-31T21:28:00.000-07:002010-05-31T21:45:19.576-07:00My Dream Life<div style="text-align: justify;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 398px; height: 400px;" src="" alt="" id="BLOGGER_PHOTO_ID_5477660038748597234" border="0" /></a><br />I knew all the people in the world certainly want to be a better life. And I think that one man's idea of a better life will be entirely different from another man's. It's hard to decide what makes a better life, for we cannot agree on a definition of better life that is acceptable to all. But for me, there are five factors in my dream life that I need.<o:p></o:p><br /><o:p></o:p> </div><p style="text-align: justify;" class="MsoNormal">First I think my family. Because family is a necessary part of a person's life. My parents are very kind people. They tried to do everything for me to be a good person. They look after me and gave education to me. I'm very happy, warm when I live together with my family.<o:p><br /></o:p></p><div style="text-align: justify;"> </div><p style="text-align: justify;" class="MsoNormal">Then I need a good health. Health is very importance in our lives. Because it can make us: good feeling, happy and can do another working that we want. For my health I always eat good food, do exercise very morning and some time I play sports with my friends. As the saying goes "All work and no play makes Jack a dull boy".<o:p><br /></o:p></p><div style="text-align: justify;"> </div><p style="text-align: justify;" class="MsoNormal">And after that is Education. It's very good for the people. Certainly all children must to study at school because for to be a good person in society. At university, if I know clearly about skill in my subject I can find a good job and get high salaries.<o:p><br /></o:p></p><div style="text-align: justify;"> </div><p style="text-align: justify;" class="MsoNormal">In our lives we always communicate with another people, usually with our friends. So, I need a lot of good friends. When I am with my friends, I feel free and easy. I can say or do almost anything I like without fear of reproach or embarrassment. This is because we understand one another very well. Good friend that I choose is kind, friendly and can help me when I am in a problem. Because when I have good friends I don't think that I live alone.<o:p><br /></o:p></p><div style="text-align: justify;"> </div><p style="text-align: justify;" class="MsoNormal">At the end I think a good society. I want to live in a society that has peace justice and especially development. It means that when I live in a good society I certain to be very good in my life and I think it's good for another people, too.<o:p><br /></o:p></p><div style="text-align: justify;"> </div><p style="text-align: justify;" class="MsoNormal">So, for all these I think it good enough for me to be a better life. And I hope that I will have everything that I want.</p><p style="text-align: justify;" class="MsoNormal"><br /></p>Adminnoreply@blogger.com0
http://feeds.feedburner.com/blogspot/Ecoou
CC-MAIN-2017-13
refinedweb
8,419
56.45
jGuru Forums Posted By: Cristian_Caprar Posted On: Tuesday, July 16, 2002 12:36 AM Hi everybody. Here is my problem: I want to create a class with only 4 instances as public static members and use them instead of normal constants (ViewMode.TEXT, VieMode.DETAILS, etc.). So I declared those 4 instances as public static members of their class, made the constructor private and now I need to initialize the 4 instances. I can do it from a static initializer, but I need to pass a config object to the constructor. So this is not a very good solution. Another option is to have a static method for init that receives the config object and constructs the 4 instances, but how can I protect the instances being accesses before init has been called? Any design suggestions? Thanks, Cristian Re: Safe Constants pattern question Posted By: Christopher_Koenigsberg Posted On: Tuesday, July 16, 2002 08:28 AM If they are just "constants", then what "config object" do you need for the constructors? Can you have the constructor just take a string arg, and then call it for each instance in the declaration? (e.g. public class ViewMode { private final String name; private ViewMode(String name) {this.name = name; } public String toString() {return name; } public static final ViewMode TEXT = new ViewMode('text'); public static final ViewMode DETAILS = new ViewMode('details'); ......} In Josh Bloch's maximally cool book "Effective Java" (ISBN 0-201-31005-8, Sun Java Series, Addison Wesley), on p. 105 under "Item 21: Replace enum constructs with classes" he does recommend what you are doing, and gives an example (he calls it the "typesafe enum pattern". Posted By: Gautam_Marwaha Posted On: Tuesday, July 16, 2002 01:44 AM Consider the following:1. private non-static member variables2. public accessor methods (getter methods) to return members3. public constructor with config passed into it, sets values for the member variables using the config4. Usage: make object passing the config, use getters to get the member variable values
http://www.jguru.com/forums/view.jsp?EID=950856
CC-MAIN-2014-42
refinedweb
334
53.51
- Join Date - Mar 2007 - Location - The Netherlands - 24,246 - Vote Rating - 107 I haven't heard anything, but I'll ask them again at the conference. FitToParent is using the size before resizing Hi all, I'm using se follow version of FitToParent : Code: Ext.namespace('Ext.ux'); /** * @class Ext.ux.FitToParent * @extends Object * <p>Plugin for {@link Ext.BoxComponent BoxComponent} and descendants that adjusts the size of the component to fit inside a parent element</p> * <p>The following example will adjust the size of the panel to fit inside the element with id="some-el":<pre><code> var panel = new Ext.Panel({ title: 'Test', renderTo: 'some-el', plugins: ['fittoparent'] });</code></pre></p> * <p>It is also possible to specify additional parameters:<pre><code> var panel = new Ext.Panel({ title: 'Test', renderTo: 'other-el', autoHeight: true, plugins: [ new Ext.ux.FitToParent({ parent: 'parent-el', fitHeight: false, offsets: [10, 0] }) ] });</code></pre></p> * <p>The element the component is rendered to needs to have <tt>style="overflow:hidden"</tt>, otherwise the component will only grow to fit the parent element, but it will never shrink.</p> * <p>Note: This plugin should not be used when the parent element is the document body. In this case you should use a {@link Ext.Viewport Viewport} container.</p> */ Ext.ux.FitToParent = Ext.extend(Object, { /** * @cfg {HTMLElement/Ext.Element/String} parent The element to fit the component size to (defaults to the element the component is rendered to). */ /** * @cfg {Boolean} fitWidth If the plugin should fit the width of the component to the parent element (default <tt>true</tt>). */ fitWidth: true, /** * @cfg {Boolean} fitHeight If the plugin should fit the height of the component to the parent element (default <tt>true</tt>). */ fitHeight: true, /** * @cfg {Boolean} offsets Decreases the final size with [width, height] (default <tt>[0, 0]</tt>). */ offsets: [0, 0], /** * @constructor * @param {HTMLElement/Ext.Element/String/Object} config The parent element or configuration options. * @ptype fittoparent */ constructor: function(config) { config = config || {}; if(config.tagName || config.dom || Ext.isString(config)){ config = {parent: config}; } Ext.apply(this, config); }, init: function(c) { this.component = c; c.on('render', function(c) { this.parent = Ext.get(this.parent || c.getPositionEl().dom.parentNode); if(c.doLayout){ c.monitorResize = true; c.doLayout = c.doLayout.createInterceptor(this.fitSize, this); } else { this.fitSize(); Ext.EventManager.onWindowResize(this.fitSize, this); } }, this, {single: true}); }, fitSize: function() { var pos = this.component.getPosition(true), size = this.parent.getViewSize(); this.component.setSize( this.fitWidth ? size.width - pos[0] - this.offsets[0] : undefined, this.fitHeight ? size.height - pos[1] - this.offsets[1] : undefined); // this.component.doLayout(); } }); Ext.preg('fittoparent', Ext.ux.FitToParent); Code: var painelITENS = new Ext.Panel({ id : 'xpainelITENS', plugins: [ new Ext.ux.FitToParent({ parent: 'FormTabPanel', fitHeight: true, fitWidht: true, offsets: [ 20 , 215 + 10 ] }) ], autoScroll : false, frame : true, renderTo : 'ListadeItens' }); ..... var gridITENS = new Ext.grid.GridPanel({ id: 'xgridITENS', store: WEBui.storeITENS, loadMask: true, stripeRows: true, sm: new Ext.grid.RowSelectionModel({singleSelect:true}), height: 200, width : 400, plugins: [ new Ext.ux.FitToParent({ parent: 'xpainelITENS', fitHeight: true, fitWidht: true, offsets: [ 10 , 10 ] }) ], stateful: true, stateId: 'grid', columns: [ ...... .... Ext.getCmp('xpainelITENS').add(gridITENS); Ext.getCmp('xpainelITENS').doLayout(); .... NOTE : When i change the tab and back, i'm calling doLayout() and the size is corrected. It makes me think that i really need some "wait resource" or "before resizing". Code: var FormTabPanel = new Ext.TabPanel({ activeTab : 0, deferredRender : false, // need this if you are going to include a form in your tab panel id: 'FormTabPanel', frame: true, border: false, anchor: '100%', defaults : { autoScroll : true, bodyStyle : 'padding:10px' }, items: [{ id: 'NFresumo', height: altura[3]-127, title : 'Informações Gerais', contentEl : 'FormTab1' },{ id: 'NFdetalhe', title : 'Itens da Nota Fiscal', contentEl : 'FormTab2' }], listeners: { tabchange: function(tabp,tab){ if (tab.id=='NFdetalhe'){ Ext.getCmp('xpainelITENS').doLayout();}; } } }); ftp-1.png ftp-2.png ftp-3.png So, the Box Component is built of divs, right? Aren't divs 100% wide by default anyway? Why do we need all this code to set inline style widths, when "width: auto" would work as well? Is this for reverse compatibility with IE 6 or something?Bruce Bell-Myers Principal Software Engineer, PTC - Join Date - Mar 2007 - Location - The Netherlands - 24,246 - Vote Rating - 107 But all this is inside a tabpanel! You shouldn't be using FitToParent inside an Ext.Container. Instead you should add the elements as boxcomponent items to the container. To get the proper width you could use layout:'anchor' on the container and anchor:'100%' on the boxcomponents. I'm using Ext.ND and my TabPanel is inside of Ext.Nd.UIDocument. I don't know why, but when i render de grid directly on the DIV, the behavior in IE is different of the Firefox. Sometimes the scroll bar appears in browser windows and sometimes the scroll bar in the grid doesn't (i need to click on the grid to show scroll bar). My interesting is in height size because the panel above the grid will changed deppending of situation. Using the FitToParent was the only way to have the same behavior in all browsers (IE, FF, Opera and Safari), including correct showing of scroll bar. But using FitToParent, the function runs before the resize occurs, changing the size to the "last size". So, how i put a "wait resource" to runs FitToParent only after resize occurrs ? (sorry about my poor english) Thanks in advance. - Join Date - Mar 2007 - Location - The Netherlands - 24,246 - Vote Rating - 107 I repeat: If this is in an Ext container (e.g. TabPanel) then you should not be rendering anything. You should add() the grid to the tabpanel and call doLayout(). Condor, I'm not rendering (my mistake writing) I'm using add() and doLayout() to show the grid. But I'm doing it inside the ext.panel not directly on the TabPanel. I'll try to add directly on TabPanel how you said. - Join Date - Mar 2007 - Location - The Netherlands - 24,246 - Vote Rating - 107 You can do it in a normal panel too, but you have to check 2 things: 1. Do you really need the panel (don't forget that a gridpanel is also a panel!). 2. Do all containers have a layout (e.g. is the panel layout:'fit')? Condor, first, thank you. I'm only a week or so into Ext JS and this plugin has given me great insight into Ext JS internals. I'm working on a project that requires a border layout within a parent div. There's an additional UI that surrounds my app, and I'd like to keep that intact and not have to resort to Ext.Viewport and a separate pop-up. I've got your plugin working somewhat - width resizing great, height, well not so much. I've taken the complex layout sample code from the Sencha demo and placed it in a panel with your plugin defined. But the only way I can get anything more than a thin blue bar to appear is to give the panel a height: After your Ext.ux is defined and inside an Ext.onReady I start with... Code: var panel = new Ext.Panel({ autoHeight: true, // height: 600, layout: 'border', renderTo: 'content', plugins: ['fittoparent'], items: [ ... Code: <div style="border: 1px solid #f00;"> <div id="content" style="width: 100%; height: 100%; padding: 0; margin:0 overflow: hidden; min-height:500px;"> {pg->output_section f='layout'} </div> </div> screen-shot-1.png The red border is to show me where the inner container "content" is showing as it shapes itself to the inner div. The thin blue line just below the top red line is the complex layout. If I change the panel config (as follows) removing autoHeight and setting a height I get what's seen in screen-shot-2.png Code: var panel = new Ext.Panel({ // autoHeight: true, height: 600, layout: 'border', renderTo: 'content', plugins: ['fittoparent'], items: [ So, I have the UI rendering now, and changing the width tracks nicely. However, when I reduce the height of the browser I would expect to see a browser based scrollbar - but I get none... see screen-shot-3.png - sigh... screen-shot-3.png What I'm shooting for is a minimum height presentation, a container div that expands to fit the contents of the border-layout, and browser scroll bars if the window is smaller than the rendered content. What am I missing?)
https://www.sencha.com/forum/showthread.php?28318-Fit-to-parent/page10
CC-MAIN-2015-32
refinedweb
1,394
58.38
Running Samba 3.0.4 (installed from RPM) on SuSE 8.2 on Intel x86. The following log entries repeat many times (15+). The user under Windows 2000 gets the error message, while trying to create a shortcut to a file in the same folder (located on Samba share): "The shortcut cannot be created. Your disk may be full or you do not have permission to access the folder". However, *copying* files to the same share works fine. Restarting SMB daemon with "/etc/init.d/smb restart" does not rectify the problem. After full system reboot it works again until the next panic. ---------------------------------------------------------------------------- [2004/06/18 19:16:02, 1] smbd/service.c:make_connection_snum(619) tsxxxx (172.19.72.138) connect to service thd2 initially as user smbuser1 (uid=509, gid=501) (pid 15062) [2004/06/18 19:16:02, 0] lib/fault.c:fault_report(36) =============================================================== [2004/06/18 19:16:02, 0] lib/fault.c:fault_report(37) INTERNAL ERROR: Signal 11 in pid 15062 (3.0.4-SerNet-SuSE) Please read the appendix Bugs of the Samba HOWTO collection [2004/06/18 19:16:02, 0] lib/fault.c:fault_report(39) =============================================================== [2004/06/18 19:16:02, 0] lib/util.c:smb_panic2(1398) PANIC: internal error [2004/06/18 19:16:02, 0] lib/util.c:smb_panic2(1406) BACKTRACE: 22 stack frames: #0 /usr/sbin/smbd(smb_panic2+0x18c) [0x819fb35] #1 /usr/sbin/smbd(smb_panic+0x10) [0x819f9a7] #2 /usr/sbin/smbd [0x818f96e] #3 /usr/sbin/smbd [0x818f9ccbed1] #8 /usr/sbin/smbd(sys_get_quota+0x8e) [0x80cc6ed] #9 /usr/sbin/smbd(disk_quotas+0x30) [0x80cf304] #10 /usr/sbin/smbd [0x8082963] #11 /usr/sbin/smbd(sys_disk_free+0x1a) [0x8082b6c] #12 /usr/sbin/smbd(vfswrap_disk_free+0x1a) [0x80be6d1] #13 /usr/sbin/smbd [0x80adc3a] #14 /usr/sbin/smbd(reply_trans2+0xb5e) [0x80b466f] #15 /usr/sbin/smbd [0x80c80e0] #16 /usr/sbin/smbd [0x80c8172] #17 /usr/sbin/smbd(process_smb+0x1d6) [0x80c8491] #18 /usr/sbin/smbd(smbd_process+0x158) [0x80c9004] #19 /usr/sbin/smbd(main+0x769) [0x81f812e] #20 /lib/libc.so.6(__libc_start_main+0xce) [0x402068ae] #21 /usr/sbin/smbd(ldap_msgfree+0x71) [0x8076d61] [2004/06/18 19:16:02, 0] lib/util_sock.c:get_peer_addr(978) getpeername failed. Error was Transport endpoint is not connected [2004/06/18 19:16:02, 0] lib/util_sock.c:read_socket_data(367) read_socket_data: recv failure for 4. Error = Connection reset by peer [2004/06/18 19:16:02, 1] smbd/service.c:make_connection_snum(619) --------------------------------------------------------------------------- This didn't seem to happen with Samba 3.0.2a although the system is used more extensively with the 3.0.4 version. [2004/06/28 10:07:30, 1] smbd/service.c:make_connection_snum(619) herbert (192.168.0.1) connect to service georg initially as user georg (uid=1000, gid=100) (pid 4366) [2004/06/28 10:07:35, 0] lib/fault.c:fault_report(36) =============================================================== [2004/06/28 10:07:35, 0] lib/fault.c:fault_report(37) INTERNAL ERROR: Signal 11 in pid 4366 (3.0.2a-SUSE) Please read the appendix Bugs of the Samba HOWTO collection [2004/06/28 10:07:35, 0] lib/fault.c:fault_report(39) =============================================================== [2004/06/28 10:07:35, 0] lib/util.c:smb_panic2(1398) PANIC: internal error [2004/06/28 10:07:35, 0] lib/util.c:smb_panic2(1406) BACKTRACE: 21 stack frames: #0 /usr/sbin/smbd(smb_panic2+0x1ec) [0x81ee9e5] #1 /usr/sbin/smbd(smb_panic+0x25) [0x81ee7f3] #2 /usr/sbin/smbd [0x81da9b7] #3 /usr/sbin/smbd [0x81daa2d] #4 [0xffffe440] #5 /lib/tls/libc.so.6(getmntent+0x54) [0x4034f7c4] #6 /usr/sbin/smbd [0x80decc3] #7 /usr/sbin/smbd(sys_get_quota+0xad) [0x80df685] #8 /usr/sbin/smbd(disk_quotas+0x51) [0x80e31b1] #9 /usr/sbin/smbd [0x8087628] #10 /usr/sbin/smbd(sys_disk_free+0x2d) [0x80878e3] #11 /usr/sbin/smbd(vfswrap_disk_free+0x39) [0x80ce868] #12 /usr/sbin/smbd [0x80bb23d] #13 /usr/sbin/smbd(reply_trans2+0xca0) [0x80c2cbf] #14 /usr/sbin/smbd [0x80da408] #15 /usr/sbin/smbd [0x80da4cd] #16 /usr/sbin/smbd(process_smb+0x241) [0x80da8b0] #17 /usr/sbin/smbd(smbd_process+0x199) [0x80db525] #18 /usr/sbin/smbd(main+0x8d9) [0x8265bc5] #19 /lib/tls/libc.so.6(__libc_start_main+0xe0) [0x402b24b0] #20 /usr/sbin/smbd [0x8078241] [2004/07/01 13:41:30, 0] lib/fault.c:fault_report(36) =============================================================== [2004/07/01 13:41:30, 0] lib/fault.c:fault_report(37) INTERNAL ERROR: Signal 11 in pid 28798 (3.0.4-SUSE) Please read the appendix Bugs of the Samba HOWTO collection [2004/07/01 13:41:30, 0] lib/fault.c:fault_report(39) =============================================================== [2004/07/01 13:41:30, 0] lib/util.c:smb_panic2(1398) PANIC: internal error [2004/07/01 13:41:30, 0] lib/util.c:smb_panic2(1406) BACKTRACE: 17 stack frames: #0 /usr/sbin/smbd(smb_panic2+0x120) [0x8202790] #1 /usr/sbin/smbd(smb_panic+0x26) [0x8202956] #2 /usr/sbin/smbd [0x81edf20] #3 [0xffffe440] #4 /lib/tls/libc.so.6(getmntent+0x54) [0x403467c4] #5 /usr/sbin/smbd [0x80e6488] #6 /usr/sbin/smbd(sys_get_quota+0xed) [0x80e6ead] #7 /usr/sbin/smbd(disk_quotas+0x5a) [0x80eaf9a] #8 /usr/sbin/smbd(sys_disk_free+0xcb) [0x8087afb] #9 /usr/sbin/smbd(vfswrap_disk_free+0x39) [0x80d83b9] #10 /usr/sbin/smbd(reply_dskattr+0x81) [0x80a4921] #11 /usr/sbin/smbd [0x80e2287] #12 /usr/sbin/smbd(process_smb+0x1aa) [0x80e282a] #13 /usr/sbin/smbd(smbd_process+0x16b) [0x80e2c9b] #14 /usr/sbin/smbd(main+0x526) [0x827cff6] #15 /lib/tls/libc.so.6(__libc_start_main+0xe0) [0x402a94b0] #16 /usr/sbin/smbd [0x8078ba1] georg@elfi:~> smbclient //localhost/documents Domain=[ELFI] OS=[Unix] Server=[Samba 3.0.4-SUSE] smb: \> dir . D 0 Mon Jun 28 15:59:13 2004 .. D 0 Mon Jun 28 15:31:07 2004 Jaan D 0 Mon Jun 28 16:00:16 2004 Hergo D 0 Mon Jun 28 15:52:04 2004 Jaak D 0 Mon Jun 28 15:52:10 2004 Andres D 0 Mon Jun 28 15:52:26 2004 Uno D 0 Mon Jun 28 15:52:37 2004 Liina D 0 Mon Jun 28 15:52:54 2004 Tiina D 0 Mon Jun 28 15:53:07 2004 Janne D 0 Mon Jun 28 15:53:15 2004 K-Büroo D 0 Mon Jun 28 15:59:10 2004 Peeter D 0 Mon Jun 28 15:53:20 2004 Ivo D 0 Mon Jun 28 15:55:03 2004 Siim D 0 Mon Jun 28 15:55:21 2004 Error in dskattr: Call returned zero bytes (EOF) smb: \> cd Jaan cd \Jaan\: Call returned zero bytes (EOF) smb: \> dir Call returned zero bytes (EOF) listing * Error in dskattr: Call returned zero bytes (EOF) smb: \> Perhaps I should add more details to my first report: SuSE Linux 8.2 is running with an SMP Kernel, 2xCPUs in Dell PowerEdge 2650 server, Intel platform. Kernel version is 2.40.20. The bug occures even with only one user connected (problem is not caused by high system load). It is not possible to say what causes the problem. Sometimes the system is fine for hours, and sometimes it takes only a few file operations to cause Signal 11 panic. The easiest way to describe the problem is the following: the MSOfficeXP document located on Samba share is opened by Windows client for editing. It reads fine, but as soon as the user tries to save the document back to the share, the Panic on Signal 11 occures and the file on the Samba share becomes zero in length. Windows client gets file access error and saving is impossible. The strange thing is that the file can be saved locally on Windows client machine and then *copied* to Samba share without any problem! (Even overwriting that zero-length file *by copying over* works fine). This problem was present in Samba 2.8.0a, where it occured immediately every time the document was saved. Now with 3.0.4 it happens sporadically, and makes it impossible to run Samba in a production environment. I saw several reports like that with suse 8.2 installations. is it possible for you to set up the same environment on a suse 9.0/9.1 ? Might be a non samba issue. Björn, Thank you for addressing this issue. I would love to go SuSE 9.1, but unfortunately the newer kernel versions supplied with 9.x do not have PERC RAID controller support compiled in. I tried to ask at the official Dell Linux forum if there's a way to run SuSE on these Dell PowerEdge machines, but I got no reply. See? board.id=pes_linux&message.id=1951 I have gone back to Samba 3.0.2 as it was the only older RPM I could find for SuSE 8.2 distribution. It has been running for 15 hours now in the production environment and so far there hasn't been a Signal 11 panic. (I keep my fingers crossed). I have SuSE 9.1 and it appears that I got rid of the problem after setting up disk quotas on my system. Thanks for Jeremy Allison (news://news.gmane.org:119/20040701230543.GB24250@legion.cup.hp.com) Well, it does not seem to be an OS issue. At least not directly. I am now running Samba 3.0.2 on the same machine that had problems with 3.0.4. The older version (3.0.2) has been running in a productive environment for 4 working days now, with ~25 users connecting simultaneously and no problems whatsover. I cannot say with a 100% reliability that the the problem is gone for good with an older version, but with 3.0.4 usually I would get it at least 2 times a day (sometimes much more). Now with 3.0.2 I haven't had it for 4 days. I'll keep watching... This is just to confirm that 3.0.2 has been running without any problems for 10 days now, meaning there is definitely an issue with 3.0.4. I have noticed theres a 3.0.5RC1 available for download, with some bugs fixed that sound similar to the problems I've been having. Going to give it a try... Tried 3.0.5RC1 on the above mentioned machine running SuSE 8.2. Unusable at all... :( Browsing folders is ok, but as soon as *any* operation is attempted on a file, the Windows explorer freezes, and after a couple of minutes it says that share is unavailable. The message in samba log looks like this: [2004/07/19 21:23:49, 0] lib/util_sock.c:read_socket_data(384) read_socket_data: recv failure for 4. Error = Connection reset by peer So back to 3.0.2... :(( can you give 3.0.6 a quick test and see if the bug is fixed there. The diff is fairly large. Thanks. I had the same problem on a SuSE 9.1 machine with the security update from Samba 3.0.2 to 3.0.4 provided by Suse. Today I've installed the 3.0.6 Suse rpm from and it works fine now. fixed in 3.0.6. Thanks for testing. Tried 3.0.6 today. Got a bit further with it than with 3.0.5RC, which did not work at all, but still far from perfect. :( The testing is done on SuSE 8.2 Intel machine, 2xCPUs, SMP Kernel 2.4.20 I have only installed 2 RPM packages from the official Samba binaries: samba3-3.0.6-1.rpm samba3-client-3.0.6-1.rpm The config file is very basic, as I only need filesharing features: =========================================== [global] server string = Samba map to guest = Bad User guest account = nobody syslog = 3 unix charset = ISO8859-1 display charset = ISO8859-1 [htdocs] path = /srv/www valid users = +FERM write list = +FERM force user = wwwrun force group = www create mask = 0660 directory mask = 0770 read only = No browseable = No ====================================================== Once I start the smb daemon I can successfully map and browse the "htdocs" share. Then I try to create a text file from Windows explorer. I open this text file with the Notepad or Wordpad and edit it. When I try to save I get the error message saying that the access to file is denied. After that I close Notepad/Wordpad and do the smbstatus command. Here is the result: Samba version 3.0.6-SUSE PID Username Group Machine ------------------------------------------------------------------- 6106 nazaand FERM pc39789 (172.17.28.101) Service pid machine Connected at ------------------------------------------------------- htdocs 6106 pc39789 Mon Sep 6 20:22:56 2004 Locked files: Pid DenyMode Access R/W Oplock Name -------------------------------------------------------------- 6105 DENY_ALL 0x2019f RDWR EXCLUSIVE+BATCH /srv/www/htdocs/New Text Document.txt Mon Sep 6 20:22:50 2004 The oplock remains long after the file editing application was closed. And here is the contents of the log file. Signal 11 panic is still there. Starting Samba SMB daemon done cts2:/etc/samba # cat /var/log/samba/log.smbd [2004/09/06 20:27:30, 0] smbd/server.c:main(760) smbd version 3.0.6-SUSE started. [2004/09/06 20:27:32, 1] smbd/service.c:make_connection_snum(648) pc39789 (172.17.28.101) connect to service htdocs initially as user wwwrun (uid=30, gid=8) (pid 6136) [2004/09/06 20:27:37, 0] lib/fault.c:fault_report(36) =============================================================== [2004/09/06 20:27:37, 0] lib/fault.c:fault_report(37) INTERNAL ERROR: Signal 11 in pid 6136 (3.0.6-SUSE) Please read the appendix Bugs of the Samba HOWTO collection [2004/09/06 20:27:37, 0] lib/fault.c:fault_report(39) =============================================================== [2004/09/06 20:27:37, 0] lib/util.c:smb_panic2(1385) PANIC: internal error [2004/09/06 20:27:37, 0] lib/util.c:smb_panic2(1393) BACKTRACE: 22 stack frames: #0 /usr/sbin/smbd(smb_panic2+0x1b6) [0x81d6e1f] #1 /usr/sbin/smbd(smb_panic+0x19) [0x81d6c67] #2 /usr/sbin/smbd [0x81c50ed] #3 /usr/sbin/smbd [0x81c5162] 53a] #8 /usr/sbin/smbd(sys_get_quota+0xa0) [0x80dde5c] #9 /usr/sbin/smbd(disk_quotas+0x46) [0x80e1392] #10 /usr/sbin/smbd [0x808e60b] #11 /usr/sbin/smbd(sys_disk_free+0x2d) [0x808e861] #12 /usr/sbin/smbd(vfswrap_disk_free+0x2d) [0x80cf244] #13 /usr/sbin/smbd [0x80bbbd5] #14 /usr/sbin/smbd(reply_trans2+0x8e4) [0x80c3ca4] #15 /usr/sbin/smbd [0x80d9079] #16 /usr/sbin/smbd [0x80d9129] #17 /usr/sbin/smbd(process_smb+0x1eb) [0x80d946e] #18 /usr/sbin/smbd(smbd_process+0x170) [0x80da043] #19 /usr/sbin/smbd(main+0x7d6) [0x8248464] #20 /lib/libc.so.6(__libc_start_main+0xce) [0x402068ae] #21 /usr/sbin/smbd(ldap_msgfree+0x71) [0x80814f1] Could this be perhaps related to 2xCPU kernel/oplocks? Version 3.0.2 still works fine, by the way. All versions after that exhibit this problem. Tried Samba 3.0.7 today. It is getting better, but not there yet... Maybe this bug should be closed and another open instead, because the Signal 11 Panic is gone, however Samba is still unusable. Same setup and config as in the comment #15. Somewhat different results (similar to #11, but not exactly the same): What is OK: - No more Signal 11 Panic - Browsing the share is OK - Creating, copying, moving, renaming, deleting files/folders is OK What is not OK: the following sequence of actions gives a strange new error: 1) create new text file (0 bytes length) - OK 2) open this file with the Windows Notepad - OK 3) type text ("test test test") and save the file - OK (the byte length changes correctly) 4) try to open this file AGAIN with Notepad - NOT OK: after a long waiting time (~1 minute) the file is opened, but instead of the original contents there is a string: " IÿSMB. ˆ" (without quotes) If other string is used, the first letter changes, but "ÿSMB." is always present. During the opening time the "smbstatus" command gives the following output: Samba version 3.0.7-SUSE PID Username Group Machine ------------------------------------------------------------------- 1329 nazaand FERM pc35632 (172.17.27.106) Service pid machine Connected at ------------------------------------------------------- IPC$ 1329 pc35632 Mon Sep 27 15:00:31 2004 htdocs 1329 pc35632 Mon Sep 27 14:54:58 2004 Locked files: Pid DenyMode Access R/W Oplock Name -------------------------------------------------------------- 1329 DENY_NONE 0x20089 RDONLY EXCLUSIVE+BATCH /srv/www/test.txt Mon Sep 27 15:00:47 2004 After the file is "open" (in a corrupt manner), the "IPC$" share is not present and there are no "Locked files". 5) If one tries to save this wrongly opened file, Windows gives an error "Delayed Write Failed", after 1 minute waiting time and the "smbstatus" shows the same file being in "1355 DENY_WRITE 0x2019f RDWR EXCLUSIVE+BATCH" lock mode. 6) Every time the ~1 minute wait is experienced, the log file has the following entries: [2004/09/27 15:02:05, 1] smbd/service.c:make_connection_snum(648) pc35632 (172.17.27.106) connect to service htdocs initially as user wwwrun (uid=30, gid=8) (pid 1355) [2004/09/27 15:07:33, 0] lib/util_sock.c:read_socket_data(384) read_socket_data: recv failure for 4. Error = Connection reset by peer [2004/09/27 15:07:33, 1] smbd/service.c:close_cnum(837) pc35632 (172.17.27.106) closed connection to service htdocs? Here is the log file.... [2004/12/01 22:07:58, 1] smbd/service.c:make_connection_snum(648) abx-eurex2 (172.17.27.106) connect to service htdocs initially as user wwwrun (uid=30, gid=8) (pid 5695) [2004/12/01 22:08:38, 0] lib/fault.c:fault_report(36) =============================================================== [2004/12/01 22:08:38, 0] lib/fault.c:fault_report(37) INTERNAL ERROR: Signal 11 in pid 5695 (3.0.9-SUSE) Please read the appendix Bugs of the Samba HOWTO collection [2004/12/01 22:08:38, 0] lib/fault.c:fault_report(39) =============================================================== [2004/12/01 22:08:38, 0] lib/util.c:smb_panic2(1403) PANIC: internal error [2004/12/01 22:08:38, 0] lib/util.c:smb_panic2(1411) BACKTRACE: 22 stack frames: #0 /usr/sbin/smbd(smb_panic2+0x1b6) [0x81d7e0c] #1 /usr/sbin/smbd(smb_panic+0x19) [0x81d7c54] #2 /usr/sbin/smbd [0x81c5c85] #3 /usr/sbin/smbd [0x81c5cf512] #8 /usr/sbin/smbd(sys_get_quota+0xa0) [0x80dde34] #9 /usr/sbin/smbd(disk_quotas+0x46) [0x80e136a] #10 /usr/sbin/smbd [0x808e6d3] #11 /usr/sbin/smbd(sys_disk_free+0x2d) [0x808e929] #12 /usr/sbin/smbd(vfswrap_disk_free+0x2d) [0x80cf1cc] #13 /usr/sbin/smbd [0x80bbbeb] #14 /usr/sbin/smbd(reply_trans2+0x907) [0x80c3ca4] #15 /usr/sbin/smbd [0x80d9015] #16 /usr/sbin/smbd [0x80d90c5] #17 /usr/sbin/smbd(process_smb+0x1eb) [0x80d940a] #18 /usr/sbin/smbd(smbd_process+0x170) [0x80d9fed] #19 /usr/sbin/smbd(main+0x7e8) [0x824b057] #20 /lib/libc.so.6(__libc_start_main+0xce) [0x402068ae] #21 /usr/sbin/smbd(ldap_msgfree+0x75) [0x8081561] [2004/12/01 22:08:39, 1] smbd/service.c:make_connection_snum(648) abx-eurex2 (172.17.27.106) connect to service htdocs initially as user wwwrun (uid=30, gid=8) (pid 5696) Created attachment 815 [details] Here is level 10 log Here is level 10 log in which: * started smbd on Linux machine * opened existing file "New Text Document.txt" with Notepad from the XP client * typed a test string and tried to save the file * immediately experienced Signal 11 panic * stopped smbd (In reply to comment #17) >? > You are not the only one. After having installed Samba 3.0.7 and 3.0.10, we are having almost the same problems. As you can see in the log printout, it's the same error message. Our problem is that if we follow your wordpad example, we can not save the file. We can create it, delete it, rename it but we can't save it with a new content. In some of the other bug rapports, I have seen a user solving, what looks like a related problem, by implementing quotas on the harddrive. Have you tried it. We have also other problems with Samba, like disconnecting shares, unability to save more the mayde 1GB data to a share before it disconnets etc. We tried to moved our old Samba 2.2.8a installation to a new system (SuSE9.2), but we have had to go back to the old system, while we try to figure out, what's wrong. It doesn't look to promising though. Our system is a single CPU machine with SuSE9.2 and W2K SP4 clients. ################ # Log printout ################ [2004/12/21 15:54:59, 0] lib/fault.c:fault_report(36) =============================================================== [2004/12/21 15:54:59, 0] lib/fault.c:fault_report(37) INTERNAL ERROR: Signal 11 in pid 5733 ba8fb] ] [2004/12/21 15:54:59, 1] smbd/service.c:make_connection_snum(647) bopc2 (192.168.7.43) connect to service diverse initially as user bo (uid=1000, gid=100) (pid 5751) [2004/12/21 15:54:59, 0] lib/fault.c:fault_report(36) =============================================================== [2004/12/21 15:54:59, 0] lib/fault.c:fault_report(37) INTERNAL ERROR: Signal 11 in pid 5751 b9d15] ] What I need to fix this is a good stack backtrace including symbols. I need a smbd compiled with -g, and an smb.conf panic action set to : panic action = /bin/sleep 90000 When it crashes, attach to the parent process of the sleep with gdb and type "bt" at the gdb prompt. Then paste that into the bug report. The problem is I don't know where in the quota code it's crashing, and I won't until I get this. Thanks, Jeremy. please retest against 3.0.11 and reopen if necessary. Also reset the version if you reopen the bug report. Thanks. Thank you for trying to fix the issue! 3.0.11 is a lot better but still produces same problem from time to time. Previous releases 3.0.4-3.0.10 were much worse, crashing almost immediately, as soon as any write operation was performed on Samba share. 3.0.11 works for some time, and then gives the following error: [2005/02/10 17:50:25, 0] lib/fault.c:fault_report(36) =============================================================== [2005/02/10 17:50:25, 0] lib/fault.c:fault_report(37) INTERNAL ERROR: Signal 11 in pid 4738 (3.0.11) Please read the appendix Bugs of the Samba HOWTO collection [2005/02/10 17:50:25, 0] lib/fault.c:fault_report(39) =============================================================== [2005/02/10 17:50:25, 0] lib/util.c:smb_panic2(1495) PANIC: internal error [2005/02/10 17:50:25, 0] lib/util.c:smb_panic2(1503) BACKTRACE: 22 stack frames: #0 /usr/sbin/smbd(smb_panic2+0x1b6) [0x81de7c5] #1 /usr/sbin/smbd(smb_panic+0x19) [0x81de60d] #2 /usr/sbin/smbd [0x81cc311] #3 /usr/sbin/smbd [0x81cc386] e00ca] #8 /usr/sbin/smbd(sys_get_quota+0xa0) [0x80e09ec] #9 /usr/sbin/smbd(disk_quotas+0x46) [0x80e3f22] #10 /usr/sbin/smbd [0x808f0bb] #11 /usr/sbin/smbd(sys_disk_free+0x2d) [0x808f311] #12 /usr/sbin/smbd(vfswrap_disk_free+0x2d) [0x80d0a5c] #13 /usr/sbin/smbd [0x80bd04e] #14 /usr/sbin/smbd(reply_trans2+0x907) [0x80c54f9] #15 /usr/sbin/smbd [0x80dbca1] #16 /usr/sbin/smbd [0x80dbd51] #17 /usr/sbin/smbd(process_smb+0x1eb) [0x80dc096] #18 /usr/sbin/smbd(smbd_process+0x170) [0x80dcc83] #19 /usr/sbin/smbd(main+0x7f1) [0x8253cde] #20 /lib/libc.so.6(__libc_start_main+0xce) [0x402068ae] #21 /usr/sbin/smbd(ldap_msgfree+0x71) [0x8081f01] As before, when panic 11 appears just once it "spoils everything": restarting smbd does not help -- even after "/etc/init.d/smb restart" Samba keeps genereating panic 11 every time the write operation is performed or even when I simply highlight Samba share name on a Windows client! (In "My Computer" view). The machine needs to be rebooted completely for it to work properly again. It is really quite hard to say what exactly causes this panic. I was not able to reproduce the problem manually as it happens spontaneously. If you still want me to test Samba using the directions in Comment #20, please tell me so. And just in case here is my smb.conf again: [global] server string = Samba map to guest = Bad User guest account = nobody syslog = 3 interfaces = eth0, lo bind interfaces only = Yes socket options = IPTOS_LOWDELAY TCP_NODELAY write cache size = 262144 unix charset = ISO8859-1 display charset = ISO8859-1 [rsm] path = /srv/www/htdocs/rsm valid users = elkanis, nazaand write list = elkanis, nazaand force group = RSM create mask = 0660 directory mask = 0770 browseable = No [ferm] path = /srv/www/htdocs/ferm valid users = nazaand write list = nazaand force group = FERM create mask = 0660 directory mask = 0770 browseable = No And here is what I hope to be a good stack backtrace, obtained as described in Comment #20: GNU gdb 5.3 This GDB was configured as "i586-suse-linux". Attaching to process 1224 Reading symbols from /usr/local/samba/sbin/smbd...done. Reading symbols from /lib/libcrypt.so.1...done. Loaded symbols for /lib/libcrypt.so.1 Reading symbols from /lib/libresolv.so.2...done. Loaded symbols for /lib/libresolv.so.2 Reading symbols from /lib/libnsl.so.1...done. Loaded symbols for /lib/libnsl.so.1 /usr/lib/gconv/UTF-16.so...done. Loaded symbols for /usr/lib/gconv/UTF-16.so Reading symbols from /usr/lib/gconv/ISO8859-1.so...done. Loaded symbols for /usr/lib/gconv/ISO8859-1.so Reading symbols from /usr/lib/gconv/IBM850.so...done. Loaded symbols for /usr/lib/gconv/IBM850.so Reading symbols from /lib/libnss_compat.so.2...done. Loaded symbols for /lib/libnss_compat.so.2 Reading symbols from /lib/libnss_files.so.2...done. Loaded symbols for /lib/libnss_files.so.2 0x40127c17 in waitpid () from /lib/libc.so.6 (gdb) bt #0 0x40127c17 in waitpid () from /lib/libc.so.6 #1 0x400ba0f4 in do_system () from /lib/libc.so.6 #2 0x081eac14 in smb_panic2 (why=0x82ae2be "internal error", decrement_pid_count=1) at lib/util.c:1486 #3 0x081eab20 in smb_panic (why=0x82ae2be "internal error") at lib/util.c:1445 #4 0x081d59c2 in fault_report (sig=11) at lib/fault.c:41 #5 0x081d5a29 in sig_fault (sig=11) at lib/fault.c:64 #6 <signal handler called> #7 0x400e5b16 in fgets_unlocked () from /lib/libc.so.6 #8 0x00000807 in ?? () #9 0x401531e6 in getmntent_r () from /lib/libc.so.6 #10 0x4015307d in getmntent () from /lib/libc.so.6 #11 0x080e24ae in sys_path_to_bdev (path=0x82622e3 ".", mntpath=0xbfffe908, bdev=0xbfffe904, fs=0xbfffe900) at lib/sysquotas.c:71 #12 0x080e2f8a in sys_get_quota (path=0x82622e3 ".", qtype=SMB_USER_QUOTA_TYPE, id={uid = 500, gid = 500}, dp=0xbfffe980) at lib/sysquotas.c:385 #13 0x080e6a77 in disk_quotas (path=0x82622e3 ".", bsize=0xbfffee48, dfree=0xbfffee50, dsize=0xbfffee40) at smbd/quotas.c:1411 #14 0x0808628c in disk_free (path=0x82622e3 ".", small_query=0, bsize=0xbffff0e0, dfree=0xbffff0d0, dsize=0xbffff0d8) at smbd/dfree.c:123 #15 0x0808662a in sys_disk_free (path=0x82622e3 ".", small_query=0, bsize=0xbffff0e0, dfree=0xbffff0d0, dsize=0xbffff0d8) at smbd/dfree.c:163 #16 0x080d047c in vfswrap_disk_free (handle=0x0, conn=0x83ac0b8, path=0x82622e3 ".", small_query=0, bsize=0xbffff0e0, dfree=0xbffff0d0, dsize=0xbffff0d8) at smbd/vfs-wrap.c:49 #17 0x080b9df8 in call_trans2qfsinfo (conn=0x83ac0b8, inbuf=0x40428008 "", outbuf=0x40449008 "", length=74, bufsize=131072, pparams=0xbffff21c, total_params=2, ppdata=0xbffff218, total_data=0, max_data_bytes=560) at smbd/trans2.c:1942 #18 0x080c3444 in reply_trans2 (conn=0x83ac0b8, inbuf=0x40428008 "", outbuf=0x40449008 "", length=74, bufsize=131072) at smbd/trans2.c:4482 #19 0x080dd7a6 in switch_message (type=50, inbuf=0x40428008 "", outbuf=0x40449008 "", size=74, bufsize=131072) at smbd/process.c:968 #20 0x080dd865 in construct_reply (inbuf=0x40428008 "", outbuf=0x40449008 "", size=74, bufsize=131072) at smbd/process.c:998 #21 0x080ddbdd in process_smb (inbuf=0x40428008 "", outbuf=0x40449008 "") at smbd/process.c:1098 #22 0x080dea30 in smbd_process () at smbd/process.c:1558 #23 0x082505b4 in main (argc=3, argv=0xbffff4e4) at smbd/server.c:951 #24 0x4008c8ae in __libc_start_main () from /lib/libc.so.6 Just to add, that I have tried to implement user and group quota on my Linux machine as someone suggested above. The quota is on and working but at the moment all users have no limits set. Unfortunately the problem is not gone even with the quota management enabled. I will try to set some quota limits and see if it changes anything in Samba behaviour. First I only added quota limits to groups, and left user quotas unlimited. I got first segfault after several hours of work, where I was editing and saving some documents. After that I added quota limits to users, and Samba behaved fine for 2 or 3 days, but eventually gave a segfault again. To sum it up: implementing quotas somewhat improved the situation but it is still far from perfect as the segfaults still occur. Not sure if that is connected but when I try to right click on a mounted network share (in WindowsXP) and select 'Properties' to see the total/occupied disk space on a share, I get the following error in Samba's log file. [2005/03/21 17:30:52, 1] smbd/fake_file.c:open_fake_file_shared1(45) access_denied to service[htdocs] file[$Extend/$Quota:$Q:$INDEX_ALLOCATION] user[wwwrun] Still occures in 3.0.14a Problem still occurs on SuSE9.3 with smbd version 3.0.20b-3.1-SUSE *and* file permission paranoid (which means /etc/fstab and /etc/mtab mode 0600). Otherwise it seems to work fine. My guess from a quick look at the smbd sources: The following code ------------------------------------ #include <stdio.h> #include <mntent.h> int main () { FILE *fp; struct mntent *mnt; fp = setmntent(MOUNTED,"r"); while ((mnt = getmntent(fp))) { /*do something */ } endmntent(fp) ; return 0; } -------------------------------------- segfaults on my machine, if run as ordinary user. And source/smbd/quotas.c uses a similar code in function "disk_quotas" and maybe others. Maybe there should be a check for fp==null before calling getmntnet(fp)?? Thanks. I think Jeremy just fixed this. Can you retest against 3.0.21rc1? And reopen if the bug is not fixed. Oh my! I've almost given up on this after 1.5 years! I just want to confirm that my SuSE installation actually has been running with the paranoid permissions on the above mentioned files. If it is really the root of this problem Torsten - you are a STAR! And Geremy and Gerald, thanks for fixing it too! *** Bug 3279 has been marked as a duplicate of this bug. *** Andrei, Could you please post the part of your smb.conf that you believe allow you to run without crashing Samba (was it limits on disk quota's)? This might help understand. Also if you would not mind posting your results from 3.0.21rc1 I will follow this thread instead of the one I created. Yesterday I noticed a new (related?) issue with samba 3.0.20b-2.1: the smbd process will terminate without any information in the log. Since I can't find any log entries, Samba or system (/var/log/warn or /var/log/messages) I can't add more information at this time. If someone can enlighten me where else I can look, then maybe I can help with this problem. I am not a SuSE or Linux expert, just a volunteer trying to keep my schools system running. Thanks. Mark, There is nothing in my smb.conf related to disk quotas. My previous posts about implementing quotas were about installing the 'quota' RPM package and configuring it accordingly with the appropriate utilities from the 'quota'. I do not have possibility to test the new RC1 version at the moment. Is it only fstab and mtab that need their permissions changed to 0644? I just ran SuSEconfig and it changes the permission of fstab back to 0600 on my SuSE 9.2 system. Does 3.0.21rc2 eliminate the need of manually changing permissions on fstab? (SuSEconfig does not seem to change mtab on my system.). > Is it only fstab and mtab that need their permissions changed to 0644? I do not know for sure but those were the files suggested by Torsten. > I just ran SuSEconfig and it changes the permission of fstab > back to 0600 on my SuSE 9.2 system. SuSEconfig sets permissions according to the files /etc/permissions.easy /etc/permissions.secure or /etc/permissions.paranoid. Which file is chosen depends on what you have configured in YaST as you security setting. If you want to override the standard files, you can create/edit file called /etc/permissions.local You should put the entries there in the same format as in the rest of the /etc/permissions.* files. > Does 3.0.21rc2 eliminate the need of manually changing permissions > on fstab? This is a very good question which I also would like to get an answer to. Yes. should be fixed now. See comment #29 I just looked at the samba-3.0.21rc2 sources. The linux version of disk_quotas still doesn't test for (fp = setmntent(MOUNTED,"r")) == NULL) [line 219 of source/smbd/quotas.c]. Since the test is done for Solaris and Cray version of "disk_quotas", I think it be there, too. Torsten: You're right. I've added your suggestion with svn rev 12076 and will provide SuSE packages including this later today. (In reply to comment #38) > Torsten: You're right. I've added your suggestion with svn rev 12076 and will > provide SuSE packages including this later today. I have noticed new dates, but the same revision number on the SuSE 9.2. Also the files are not the same size. samba-client-3.0.21rc2-0.1.i586.rpm is almost 2.5mb smaller. So now I am confused. Is 12/04 or 12/05 the correct update? Shouldn't we change the sub-version if there is any change? Thanks. We don't use the code path changed with svn rev 12076 if we have sys quotas. In this case we use the last function of smbd/quotas.c Therefore newer packages don't change anything. The package release numbering is done by our build system with the focus to allow updates from one product to the next. As we don't provide any build available via Samba.org or as an official update the release numbers don't chnage with any rebuild. I'll see if there is a way to solve this in the future. And just for the record: We fixed the problem with the RPM release of the packages for all SuSE Linux products. From now on every rebuild will result in an increased release number and users can always update the packages with a simple rpm -Fvh *.rpm instead of any evil --force and other options. Thank you for taking care about this.
https://bugzilla.samba.org/show_bug.cgi?id=1465
CC-MAIN-2020-40
refinedweb
5,644
67.65
Krzysztof Cwalina, Brad Abrams Mentioned 56. What are the differences in implementing interfaces implicitly and explicitly in C#? When should you use implicit and when should you use explicit? Are there any pros and/or cons to one or the other? Microsoft's official guidelines (from first edition Framework Design Guidelines) states that using explicit implementations are not recommended, since it gives the code unexpected behaviour. I think this guideline is very valid in a pre-IoC-time, when you don't pass things around as interfaces. Could anyone touch on that aspect as well? Implicit is when you define your interface via a member on your class. Explicit is when you define methods within your class on the interface. I know that sounds confusing but here is what I mean: IList.CopyTo would be implicitly implemented as: public void CopyTo(Array array, int index) { throw new NotImplementedException(); } and explicitly as: void ICollection.CopyTo(Array array, int index) { throw new NotImplementedException(); } The difference being that implicitly is accessible through your class you created when it is cast as that class as well as when its cast as the interface. Explicit implementation allows it to only be accessible when cast as the interface itself. MyClass myClass = new MyClass(); // Declared as concrete class myclass.CopyTo //invalid with explicit ((IList)myClass).CopyTo //valid with explicit. I use explicit primarily to keep the implementation clean, or when I need two implementations. But regardless I rarely use it. I am sure there are more reasons to use it/not use it that others will post. See the next post in this thread for excellent reasoning behind each.: To throw exceptions, I usually use built-in exception classes, e.g. ArgumentNullException and NotSupportedException. However, sometimes I need to use a custom exception and in that case I write: class SlippedOnABananaException : Exception { } class ChokedOnAnAppleException : Exception { } and so on. Then I throw and catch these in my code. But today I came across the ApplicationException class - should I be using that instead? What's it for? It does seem inefficient to have lots of effectively identical Exception classes with different names (I don't usually need any individual functionality). But I dislike the idea of catching a generic ApplicationException and having to use extra code to determine what the error was. Where should ApplicationException fit in with my code? The short answer is: nowhere. It is a relic of the past, where Microsoft intended developers to inherit all their custom exceptions from ApplicationException. Shortly after, they changed their mind and advised that custom exceptions should derive from the base Exception class. See Best Practices for Handling Exceptions on MSDN. One of the more widely circulated reasons for this comes from an exerpt from Jeffery Richter in Framework Design Guidelines:. So there you have it. The executive summary is that ApplicationException is not harmful, just useless. Having read the threads Is SqlCommand.Dispose enough? and Closing and Disposing a WCF Service I am wondering for classes such as SqlConnection or one of the several classes inheriting from the Stream class does it matter if I close Dispose rather than Close? According to Microsoft guidelines, it's a good practice to provide Close method where suitable. Here is a citation from Framework design guidelines Consider providing method Close(), in addition to the Dispose(), if close is standard terminology in the area. When doing so, it is important that you make the Close implementation identical to Dispose ... In most of cases Close and Dispose methods are equivalent. The main difference between Close and Dispose in the case of SqlConnectionObject is: An application can call Close more than one time. No exception is generated. If you called Dispose method SqlConnection object state will be reset. If you try to call any method on disposed SqlConnection object, you will receive exception. That said: It's common to see a _var variable name in a class field. What does the underscore mean? Is there a reference for all these special naming conventions? With C#, Microsoft Framework Design Guidelines suggest not using the underscore character for public members. For private members, underscores are OK to use. In fact, Jeffrey Richter (often cited in the guidelines) uses an m_ for instance and a "s_" for private static memberss. Personally, I use just _ to mark my private members. "m_" and "s_" verge on Hungarian notation which is not only frowned upon in .NET, but can be quite verbose and I find classes with many members difficult to do a quick eye scan alphabetically (imagine 10 variables all starting with m_). What is best practice when creating your exception classes in a .NET solution: To derive from System.Exception or from System.ApplicationException? The authors of the framework themselves consider ApplicationException worthless: with a nice follow-up here: When in doubt, I follow their book Framework Design Guidelines. The topic of the blog post is further discussed there. rp I? Thanks in advance! The Framework Design Guidelines has the best rules for using nested classes that I have found to date. Here's a brief summary list: - Do use nested types when the relationship between type and nested type is such the member-accessibility semantics are desired. - Do NOT use public nested types as a logical group construct - Avoid using publicly exposed nested types. - Do NOT use nested types if the type is likely to be referenced outside of the containing type. - Do NOT use nested types if they need to be instantiated by client code. - Do NOT define a nested type as a member of an interface. I'm working on an MVVM project, so I have folders in my project like Models, ViewModels, Windows, etc. Whenever I create a new class, Visual Studio automatically adds the folder name to the namespace designation instead of just keeping the project-level namespace. So, adding a new class to the ViewModels folder would result in the namespace, MyProject.ViewModels instead of just MyProject. When I first encountered this, it annoyed me. My class names are pretty clear, sometimes even containing the name of the folder in them (e.g., ContactViewModel). I quickly found myself manually removing the folder name on the namespaces. I even tried at one point to create a custom class template (see this question), but I couldn't get that to work, so continued doing it manually. I've begun to wonder, though, if this convention exists for a good reason that I'm just not seeing. I could see it being useful if you for some reason had lots of sets of identical class names organized into folders, but that doesn't seem like a particularly common scenario. Questions: If you want some solid advice I'd recommend buying Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries which gives you all you need to know from the actual framework design team. ...the goal when naming namespaces is creating sufficient clarity for the programmer using the framework to immediately know what the content of the namespace is likely to be... <Company>.(<Product>|<Technology>)[.<Feature>][.<Subnamespace>] And importantly Do not use the same name for a namespace and a type in that namespace Fragmenting every 1/2 types into namespaces would not meet the first requirement as you would have a swamp of namespaces that would have to be qualified or used, if you followed the Visual Studio way. For example Core - Domain - Users - Permissions - Accounts Would you create or just For Visual Studio's way it would be the former. Also if you use lowercase file/folder naming you're looking at renaming the class each time, as well as making one big namespace tangle. Most of it is common sense and really down to how you would expect to see the namespaces organised if you were a consumer of your own API or framework. I am taking a class at my university called "Software Constraints". In the first lectures we were learning how to build good APIs. A good example we got of a really bad API function is the socket public static void Select(IList checkRead, IList checkWrite, IList checkError, int microseconds); in C#. The function receives 3 lists of sockets, and destroys them making the user have to clone all the sockets before feeding them into the Select(). It also has a timeout (in microseconds) which is an int, that sets the maximum time the server can wait for a socket. The limits of this is +/-35 minutes (because it is an int). Many coding standards and longer documents and even books (Framework Design Guidelines) have been written on this topic, but much of this only helps at a fairly low level. There is also a matter of taste. APIs may obey every rule in whatever rulebook, and still suck, due to slavish adherence to various in-vogue ideologies. A recent culprit is pattern-orientation, wherein Singleton Patterns (little more than initialized global variables) and Factory Patterns (a way of parameterizing construction, but often implemented when not needed) are overused. Lately, it's more likely that Inversion of Control (IoC) and associated explosion in the number of tiny interface types that adds redundant conceptual complexity to designs. The best tutors for taste are imitation (reading lots of code and APIs, finding out what works and doesn't work), experience (making mistakes and learning from it) and thinking (don't just do what's fashionable for its own sake, think before you act). There are several other good answers on this already, so I thought I'd just throw in some links I didn't see mentioned. Articles Books: It's been discussed before on Stack Overflow that we should prefer attributes to marker interfaces (interfaces without any members). Interface Design article on MSDN asserts this recommendation too:. There's even an FxCop rule to enforce this recommendation: Avoid empty interfaces Interfaces define members that provide a behavior or usage contract. The functionality described by the interface can be adopted by any type, regardless of where the type appears in the inheritance hierarchy. A type implements an interface by providing implementations for the interface's members. An empty interface does not define any members, and as such, does not define a contract that can be implemented. If your design includes empty interfaces that types are expected to implement, you are probably using an interface as a marker, or a way of identifying a group of types. If this identification will occur at runtime, the correct way to accomplish this is to use a custom attribute. Use the presence or absence of the attribute, or the attribute's properties, to identify the target types. If the identification must occur at compile time, then using an empty interface is acceptable. The article states only one reason that you might ignore the warning: when you need compile time identification for types. (This is consistent with the Interface Design article). It is safe to exclude a warning from this rule if the interface is used to identify a set of types at compile-time. Here comes the actual question: Microsoft didn't conform to their own recommendation in the design of the Framework Class Library (at least in a couple cases): IRequiresSessionState interface and IReadOnlySessionState interface. These interfaces are used by the ASP.NET framework to check whether it should enable session state for a specific handler or not. Obviously, it's not used for compile time identification of types. Why they didn't do that? I can think of two potential reasons: Micro-optimization: Checking whether an object implements an interface ( obj is IReadOnlySessionState) is faster than using reflection to check for an attribute ( type.IsDefined(typeof(SessionStateAttribute), true)). The difference is negligible most of the time but it might actually matter for a performance-critical code path in the ASP.NET runtime. However, there are workarounds they could have used like caching the result for each handler type. The interesting thing is that ASMX Web services (which are subject to similar performance characteristics) actually use the EnableSession property of the WebMethod attribute for this purpose. Implementing interfaces are potentially more likely to be supported than decorating types with attributes by third-party .NET languages. Since ASP.NET is designed to be language agnostic, and ASP.NET generates code for types (possibly in a third-party language with the help of CodeDom) that implement the said interfaces based on the EnableSessionState attribute of the <%@ Page %> directive, it might make more sense to use interfaces instead of attributes. What are the persuasive reasons to use marker interfaces instead of attributes? Is this simply a (premature?) optimization or a tiny mistake in the framework design? (Do they think reflection is a "big monster with red eyes"?) Thoughts? Microsoft didn't strictly follow the guidelines when they made .NET 1.0, because the guidelines evolved together with the framework, and some of the rules they didn't learn until it was too late to change the API. IIRC, the examples you mention belong to BCL 1.0, so that would explain it. This is explained in Framework Design Guidelines. That said, the book also remarks that "[A]ttribute testing is a lot more costly than type checking" (in a sidebar by Rico Mariani). It goes on to say that sometimes you need the marker interface for compile time checking, which isn't possible with an attribute. However, I find the example given in the book (p. 88) unconvincing, so I will not repeat it here. Do you use singular or plural for enumerations? I think it makes best sense with plural in the declaration enum Weekdays { Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday } ... but I think it makes more sense with singular when using the type, e.g. Weekday firstDayOfWeek = Weekday.Monday; I read a recommendation somewhere to use singular whith regular enums and plural with flags, but I would like to hear some more pros and cons. One recommendation comes from the .NET Framework Design Guidelines, page 59-60: Do use a singular type name for an enumeration, unless its values are bit fields. public enum ConsoleColor { Black, Blue, Cyan, ... Do use a plural type name for an enumeration with bit fields as values, also called a flags enum. [Flags] public enum ConsoleModifiers { Alt, Control, Shift } This is probably a matter of personal preference, but when do you use properties instead of functions in your code For instance to get an error log I could say string GetErrorLog() { return m_ErrorLog; } or I could string ErrorLog { get { return m_ErrorLog; } } How do you decide which one to use? I seem to be inconsistent in my usage and I'm looking for a good general rule of thumb. Thanks. If there is more than something trivial happening in a property, then it should be a method. For example, if your ErrorLog getter property was actually going and reading files, then it should be a method. Accessing a property should be fast, and if it is doing much processing, it should be a method. If there are side affects of accessing a property that the user of the class might not expect, then it should probably be a method. There is .NET Framework Design Guidelines book that covers this kind of stuff in great detail. While certain guidelines state that you should use an interface when you want to define a contract for a class where inheritance is not clear ( IDomesticated) and inheritance when the class is an extension of another ( Cat : Mammal, Snake : Reptile), there are cases when (in my opinion) these guidelines enter a gray area. For example, say my implementation was Cat : Pet. Pet is an abstract class. Should that be expanded to Cat : Mammal, IDomesticated where Mammal is an abstract class and IDomesticated is an interface? Or am I in conflict with the KISS/YAGNI principles (even though I'm not sure whether there will be a Wolf class in the future, which would not be able to inherit from Pet)? Moving away from the metaphorical Cats and Pets, let's say I have some classes that represent sources for incoming data. They all need to implement the same base somehow. I could implement some generic code in an abstract Source class and inherit from it. I could also just make an ISource interface (which feels more "right" to me) and re-implement the generic code in each class (which is less intuitive). Finally, I could "have the cake and eat it" by making both the abstract class and the interface. What's best? These two cases bring up points for using only an abstract class, only an interface and using both an abstract class and an interface. Are these all valid choices, or are there "rules" for when one should be used over another? I'd like to clarify that by "using both an abstract class and an interface" that includes the case when they essentially represent the same thing ( Source and ISource both have the same members), but the class adds generic functionality while the interface specifies the contract. Also worth noting is that this question is mostly for languages that do not support multiple inheritance (such as .NET and Java). As a first rule of thumb, I prefer abstract classes over interfaces, based on the .NET Design Guidelines. The reasoning applies much wider than .NET, but is better explained in the book Framework Design Guidelines. The main reasoning behind the preference for abstract base classes is versioning, because you can always add a new virtual member to an abstract base class without breaking existing clients. That's not possible with interfaces. There are scenarios where an interface is still the correct choice (particularly when you don't care about versioning), but being aware of the advantages and disadvantages enables you to make the correct decision. So as a partial answer before I continue: Having both an interface and a base class only makes sense if you decide to code against an interface in the first place. If you allow an interface, you must code against that interface only, since otherwise you would be violating the Liskov Substitution Principle. In other words, even if you provide a base class that implements the interface, you cannot let your code consume that base class. If you decide to code against a base class, having an interface makes no sense. If you decide to code against an interface, having a base class that provides default functionality is optional. It is not necessary, but may speed up things for implementers, so you can provide one as a courtesy. An example that springs to mind is in ASP.NET MVC. The request pipeline works on IController, but there's a Controller base class that you typically use to implement behavior. Final answer: If using an abstract base class, use only that. If using an interface, a base class is an optional courtesy to implementers. Update: I no longer prefer abstract classes over interfaces, and I haven't for a long time; instead, I favour composition over inheritance, using SOLID as a guideline. (While I could edit the above text directly, it would radically change the nature of the post, and since a few people have found it valuable enough to up-vote it, I'd rather let the original text stand, and instead add this note. The latter part of the post is still meaningful, so it would be a shame to delete it, too.) Whenever i override a method of a base class, other than my implementation of this method, i seem to have 3 choices. 1) Call base.Method(), and then provide my implementation. 2) Provide my implementation and then call base.Method() 3) Just provide my implementation. Recently while using a library i have realized few bugs that were introduced because of not implementing the method as expected by the library. I am not sure if that is bad on part of library, or something wrong in my understanding. I will take one example. public class ViewManager { public virtual void Customize(){ PrepareBaseView(); } } public class PostViewManager { public override void Customize(){ base.Customize(); PreparePostView(); } } public class PreViewManager { public override void Customize(){ PreparePreView(); base.Customize(); } } public class CustomViewManager { public override void Customize(){ PrepareCustomView(); } } My question here is that how could a child class know (without taking a look at base class implementation) which order (or option) is being expected by the parent class? Is there a way in which parent class could enforce one of the three alternates to all the deriving classes? This is why I feel virtual methods are dangerous when you ship them in a library. The truth is you never really know without looking at the base class, sometimes you have to fire up reflektor, read documentation or approach it with trial and error. When writing code myself I've always tired to follow the rule that says: Derived classes that override the protected virtual method are not required to call the base class implementation. The base class must continue to work correctly even if its implementation is not called. This is taken from, however this is for Event design though I believe I read this in the Framework Design Guidelines book (). However, this is obviously not true, ASP.NET web forms for example require a base call on Page_Load. So, long and short, it varies and unfortunately there is no instant way of knowing. If I'm in doubt I will omit the call initially. I'm reading data in from a file and creating objects based on this data. The data format is not under my control and is occasionally corrupt. What is the most appropriate way of handling these errors when constructing the objects in C#? In other programming languages I have returned a null, but that does not appear to be an option with C#. I've managed to figure out the following options, but I would appreciate advice from more experienced C# programmers: Option 1. Read the file inside the constructor and throw an exception when the source data is corrupt: try { obj = Constructor(sourceFile); ... process object ... } catch (IOException ex) { ... } Option 2. Create the object, then use a method to read data from the source file: obj = Constructor(); obj.ReadData(sourceFile); if (obj.IsValid) { ... process object ... } or possibly throw exceptions on error: obj = Constructor(); try { obj.Read(sourceFile); ... process object ... } catch { ... } Option 3. Create the object using a static TryParse method: if (ObjClass.TryParse(sourceFile, out obj)) { ... process object ... } and if so, should I implement option 3 internally using option 1? public static bool TryParse(FileStream sourceFile, out ObjClass obj) { try { obj = Constructor(sourceFile); return true; } catch (IOException ex) return false; } From Microsoft Constructor Design Guidelines (MSDN), Do throw exceptions from instance constructors if appropriate. Constructors should throw and handle exceptions like any method. Specifically, a constructor should not catch and hide any exceptions that it cannot handle. Factory Method is not the right way to approach this problem. See Constructors vs Factory Methods From Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries 5.3 Constructor Design Consider using a static factory method instead of a constructor if the semantics of the desired operation do not map directly to the construction of a new instance, or if following the constructor design guidelines feels unnatural. Do throw exceptions from instance constructors if appropriate. .NET BCL implementations do throw exceptions from constructors For example, the List Constructor (Int32), throws an ArgumentOutOfRangeException when the capacity argument of the list is negative. var myList = new List<int>(-1); // throws ArgumentOutOfRangeException Similarly, your constructor should throw an appropriate type of exception when it reads the file. For example, it could throw FileNotFoundException if the file does not exist at the specified location, etc. More Information Is it possible to somehow mark a System.Array as immutable. When put behind a public-get/private-set they can't be added to, since it requires re-allocation and re-assignment, but a consumer can still set any subscript they wish: public class Immy { public string[] { get; private set; } } I thought the readonly keyword might do the trick, but no such luck. The Framework Design Guidelines suggest returning a copy of the Array. That way, consumers can't change items from the array. // bad code // could still do Path.InvalidPathChars[0] = 'A'; public sealed class Path { public static readonly char[] InvalidPathChars = { '\"', '<', '>', '|' }; } these are better: public static ReadOnlyCollection<char> GetInvalidPathChars(){ return Array.AsReadOnly(badChars); } public static char[] GetInvalidPathChars(){ return (char[])badChars.Clone(); } The examples are straight from the book. In the IDisposable.Dispose method is there a way to figure out if an exception is being thrown? using (MyWrapper wrapper = new MyWrapper()) { throw new Exception("Bad error."); } If an exception is thrown in the using statement I want to know about it when the IDisposable object is disposed. James, All wrapper can do is log it's own exceptions. You can't force the consumer of wrapper to log their own exceptions. That's not what IDisposable is for. IDisposable is meant for semi-deterministic release of resources for an object. Writing correct IDisposable code is not trivial. In fact, the consumer of the class isn't even required to call your classes dispose method, nor are they required to use a using block, so it all rather breaks down. If you look at it from the point of view of the wrapper class, why should it care that it was present inside a using block and there was an exception? What knowledge does that bring? Is it a security risk to have 3rd party code privy to exception details and stack trace? What can wrapper do if there is a divide-by-zero in a calculation? The only way to log exceptions, irrespective of IDisposable, is try-catch and then to re-throw in the catch. try { // code that may cause exceptions. } catch( Exception ex ) { LogExceptionSomewhere(ex); throw; } finally { // CLR always tries to execute finally blocks } You mention you're creating an external API. You would have to wrap every call at your API's public boundary with try-catch in order to log that the exception came from your code. If you're writing a public API then you really ought to read Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries (Microsoft .NET Development Series) - 2nd Edition .. 1st Edition. While I don't advocate them, I have seen IDisposable used for other interesting patterns: * These patterns can be achieved with another layer of indirection and anonymous delegates easily and without having to overload IDisposable semantics. The important note is that your IDisposable wrapper is useless if you or a team member forget to use it properly. Every time I start in deep in a C# project, I end up with lots of events that really just need to pass a single item. I stick with the EventHandler/ EventArgs practice, but what I like to do is have something like: public delegate void EventHandler<T>(object src, EventArgs<T> args); public class EventArgs<T>: EventArgs { private T item; public EventArgs(T item) { this.item = item; } public T Item { get { return item; } } } Later, I can have my public event EventHandler<Foo> FooChanged; public event EventHandler<Bar> BarChanged; However, it seems that the standard for .NET is to create a new delegate and EventArgs subclass for each type of event. Is there something wrong with my generic approach? EventHandler<TEventArgs>, so you don't need to create the generic delegate, but you still need the generic EventArgs<T>class, because TEventArgs: EventArgs. public event EventHandler<EventArgs<Foo>> FooChanged; vs. public event EventHandler<Foo> FooChanged; It can be a pain for clients to register for your events though, because the System namespace is imported by default, so they have to manually seek out your namespace, even with a fancy tool like Resharper... Anyone have any ideas pertaining to that? No, I don't think this is the wrong approach. I think it's even recommended in the [fantastic] book Framework Design Guidelines. I do the same thing. I see little functional difference between using a property public readonly property foo as string get return bar end get end property or a function public function foo() as string return bar end function Why would I want to use one form over the other? Thanks! If you are basing yourself upon the Framework Design Guidelines, you must be using a method only when you are actually performing an action or accessing resouces that could be expensive to use(database, network). The property give the user the impression that the values are stored in memory and that reading a property is fast while calling a method might have further implication than just "get the value". Brad Abrams actually wrote an article about it and is even posted on MSDN here. I would highly suggest that you buy the book Framework Design Guidelines. It's a must read for every developer. The Close method on an ICommunicationObject can throw two types of exceptions as MSDN outlines here. I understand why the Close method can throw those exceptions, but what I don't understand is why the Dispose method on a service proxy calls the Close method without a try around it. Isn't your Dispose method the one place where you want make sure you don't throw any exceptions? It seems to be a common design pattern in .NET code. Here is a citation from Framework design guidelines Consider providing method Close(), in addition to the Dispose(), if close is standard terminology in the area. When doing so, it is important that you make the Close implementation identical to Dispose ... Here is a blog post in which you can find workaround for this System.ServiceModel.ClientBase design problem Is there any relevance of a 'public' constructor in an abstract class? I can not think of any possible way to use it, in that case shouldn't it be treated as error by compiler (C#, not sure if other languages allow that). Sample Code: internal abstract class Vehicle { public Vehicle() { } } The C# compiler allows this code to compile, while there is no way i can call this contructor from the outside world. It can be called from derived classes only. So shouldn't it allow 'protected' and 'private' modifiers only. Please comment. Dupe: there is another question on SO just like this: Abstract class constructor access modifier The answers on that question come down to the same thing in the end: it does not really matter if you declare it protected or public. Also there seems to be some discussion about it in literature (e.g. in Framework Design Guidelines). This is referenced in this blogpost: Good design or bad design of abstract class? I am writing an .NET wrapper API for the Netflix API. At this point I can choose to represent URLs as either strings or URI objects. Seems to me there is a good case for both. So if you were using an API, which would you prefer? The below quote is from: Framework Design Guildelines I highly recommend this book to anyone developing frameworks on .Net Do use System.Uri to represent URI / URL data. (For Parameters, properties, and return values) System.Uri is a much safer and richer way of representing URIs. Extensive manipulation of URI-related data using plain strings has been shown to cause many security and correctness problems.. EDIT (per comments): The book specifically states: "Extensive manipulation of URI-related data using plain strings has been shown to cause many security and correctness problems." I am not sure what additional justification you want for using System.Uri / UriBuilder. Additionally, why wouldn't you want to take advantage of the framework to read/manipulate a URI? When designing an API that will be used by others it is important to make them approachable, as well as reliable. For this reason the book does mention, you should provide "nice" overloads for common functionality. However, to ensure correctness, you should always implement the underlying code with URIs. Can you please clarify your wants, or reasons to use only strings? faced with writing a framework to simplify working with a large and complex object library (ArcObjects). What guidelines would you suggest for creating a framework of this kind? Are static methods preferred? How do you handle things like logging? How do you future proof your framework code from changes that a vendor might introduce? I think of all of the various wrappers and helpers I've seen for NHibernate, log4net, and code I've read from projects like NLog and NetTopologySuite and I see so many good approaches, but honestly I'm at a loss where to start. BTW - I'm working in C# 3.5 but it's more about recommended approach rather than language. Brad Abrams' Framework Design Guidelines book is all about this. Might be worth a look. My background is primarily as a Java Developer, but lately I have been doing some work in .NET. So I have been trying to do some simple projects at home to get better at working with .NET. I have been able to transfer much of my Java experience into working with .NET (specifically C#), but the only thing that has really perplexed me is namespaces. I know namespaces are similar to Java packages, but as from what I can tell the main difference is that with Java packages they use actual file folders to show the seperation, while in .NET it does not and all the files reside in a single folder and the namespace is simply declared in each class. I find this odd, because I always saw packages as a way to organize and group related code, making it easier to navigate and comprehend. Since in .NET it does not work this work this way, overtime, the project appears more overcrowded and not as easy to navigate. Am I missing something here? I have to be. Should I be breaking things into separate projects within the solution? Or is there a better way to keep the classes and files organized within a project? Edit: As Blair pointed out this is pretty much the same question asked here. Yep, in .NET namespace doesn't depend on file system or anything else. It's a great advantage in my opinion. For example you can split your code across different assemblies which allows flexible distribution. When working in Visual Studio, IDE tends to introduce new namespace when you add new folder to project tree. Here is a useful link from MSDN: Namespace Naming Guidelines The general rule for naming namespaces is to use the company name followed by the technology name and optionally the feature and design as follows. CompanyName.TechnologyName[.Feature][.Design] Of course you can use namespaces in the way you find more suitable. However if you going to share your code, I would recommend to go along with accepted standards. EDIT: I highly recommend to any .net developer to get a copy of Framework design guidelines This book will help you to understand how and why .NET is designed. I'm writing a c# application which uses automation to control another program. Naturally that program must be running for my program to work. When my program looks for the application and can't find it I'd like to throw an exception (for now later of course I could try opening the application, or telling the user to open it, or ...). Should I implement a custom exception - or use the existing NotSupportedException (or one of the other .NET exceptions). If a custom exception, what would you suggest? I was thinking of implementing a custom exception I'd call it MyAppNameException and then just use the message to declare what the problem was? Are there any general rules to throwing exceptions in a way that makes your program more readable and user friendly, or am I just giving this too much thought :)? Thanks! The Framework Guidelines book that I use indicates that you should only create a custom exception when the error condition can be programmatically handled in a different way than any existing exceptions. In your case, if you wanted to create a custom exception in order to launch a back-end installation program, that is unique and I think a custom exception would be okay. Otherwise, something from the System.Runtime.InteropServices.ExternalException heirarchy may be appropriate. Just thought I'd see if somebody could explain why Anders decided that this is valid... if(...) //single statement else ///single statement but this is not... try //single statement catch //single statement To quote from Framework Design Guidelines in the section about "General Style Conventions" this is said about braces: AVOID omitting braces, even if the language allows it. Braces should not be considered optional. Even for single statement blocks, you should use braces. This increase code readability and maintainability. There are very limited cases when omitting braces might be acceptable, such as when adding a new statement after an existing singöe-line statement is either impossible or extremely rare. For example, it is meaningless to add a statement after a throwstatement: if(someExpression) throw new ArgumentOutOfRangeExcetion(...); Another exception to the rule is braces in case statements. These braces can be omitted as the caseand breakstatements indicate the begining and the start of the block. What Anders thinks is subjective and argumentative, this is the recommendation. You might also want to look at the section about bracing in the coding convention over at msdn. As a beginning programmer, I'm trying to settle on a standard naming convention for myself. I realize that it's personal preference, but I was trying to get some ideas from some of you (well a LOT of you) who are much smarter than myself. I'm not talking about camel notation but rather how do you name your variables, etc. IMHO, var_Quantity is much more descriptive than Q or varQ. However, how do you keep the variable from becoming too long. I've tried to be more descriptive with naming my controls, but I've ended up with some like "rtxtboxAddrLine1" for a RadTextBox that holds address line 1. Too me,that is unmanageable, although it's pretty clear what that control is. I'm just curious if you have some guides that you follow or am I left up to my own devices? For .NET API design (and some general C# guidelines) check Krzysztof Cwalina and Brad Abrams' Framework Design Guidelines Regards, tamberg. I have created a Windows Forms application in .NET 2 using C# that runs continuously. For most accounts I am pleased with it, but it has been reported to me that it has failed occasionally. I am able to monitor its performance for 50% of the time and I have never noticed a failure. At this point I am concerned that perhaps the program is using too many resources and is not disposing of resources when they are no longer required. What are the best practices for properly disposing created objects that have created timers and graphical objects like graphics paths, SQL connections, etc. or can I rely on the dispose method to take care of all garbage collection? Also: Is there a way that I could monitor resources used by the application? There are a few ways of ensuring this. The main help I find is by utilising the "using" keyword. This is applied as such: using(SqlConnection connection = new SqlConnection(myConnectionString)) { /* utilise the connection here */ } This basically translates into: SqlConnection connection = null; try { connection = new SqlConnection(myConnectionString); } finally { if(connection != null) connection.Dispose(); } As such it only works with types that implement IDisposable. This keyword is massively useful when dealing with GDI objects such as pens and brushes. However there are scenarios where you will want to hold onto resources for a longer period of time than just the current scope of a method. As a rule it's best to avoid this if possible but for example when dealing with SqlCe it's more performant to keep one connection to the db continuously open. Therefore one can't escape this need. In this scenario you can't use the "using" but you still want to be able to easily reclaim the resources held by the connection. There are two mechanisms that you can use to get these resources back. One is via a finaliser. All managed objects that are out of scope are eventually collected by the garbage collector. If you have defined a finaliser then the GC will call this when collecting the object. public class MyClassThatHoldsResources { private Brush myBrush; // this is a finaliser ~MyClassThatHoldsResources() { if(myBrush != null) myBrush.Dispose(); } } However the above code is unfortunately crap. The reason is because at finalizing time you cannot guarantee which managed objects have been collected already and which have not. Ergo the "myBrush" in the above example may have already been discarded by the garbage collector. Therefore it isn't best to use a finaliser to collect managed objects, its use is to tidy up unmanaged resources. Another issue with the finaliser is that it is not deterministic. Lets say for example I have a class that communicates via a serial port. Only one connection to a serial port can be open at one time. Therefore if I have the following class: class MySerialPortAccessor { private SerialPort m_Port; public MySerialPortAccessor(string port) { m_Port = new SerialPort(port); m_Port.Open(); } ~MySerialPortAccessor() { if(m_Port != null) m_Port.Dispose(); } } Then if I used the object like this: public static void Main() { Test1(); Test2(); } private static void Test1() { MySerialPortAccessor port = new MySerialPortAccessor("COM1:"); // do stuff } private static void Test2() { MySerialPortAccessor port = new MySerialPortAccessor("COM1:"); // do stuff } I would have a problem. The issue is that the finaliser is not deterministic. That is to say I cannot guarantee when it will run and therefore get round to disposing my serial port object. Therefore when I run test 2 I might find that the port is still open. While I could call GC.Collect() between Test1() and Test2() which would solve this problem it isn't recommended. If you want to get the best performance out of the collector then let it do its own thing. Therefore what I really want to do is this: class MySerialPortAccessor : IDispable { private SerialPort m_Port; public MySerialPortAccessor(string port) { m_Port = new SerialPort(port); m_Port.Open(); } public void Dispose() { if(m_Port != null) m_Port.Dispose(); } } And i'll rewrite my test like this: public static void Main() { Test1(); Test2(); } private static void Test1() { using( MySerialPortAccessor port = new MySerialPortAccessor("COM1:")) { // do stuff } } private static void Test2() { using( MySerialPortAccessor port = new MySerialPortAccessor("COM1:")) { // do stuff } } This will now work. So what of the finaliser? Why use it? Unmanaged resources and possible implementations that don't call Dispose. As the writer of a component library that others use; their code may forget to dispose of the object. It's possible that something else might kill the process and hence the .Dispose() would not occur. Due to these scenarios a finaliser should be implemented to clean any unmanaged resource as a "worst case" scenario but Dispose should also tidy these resources so you have your "deterministic clean up" routine. So in closing, the pattern recommended in the .NET Framework Guidelines book is to implement both as follows: public void SomeResourceHoggingClass, IDisposable { ~SomeResourceHoggingClass() { Dispose(false); } public void Dispose() { Dispose(true); } // virtual so a sub class can override it and add its own stuff // protected virtual void Dispose(bool deterministicDispose) { // we can tidy managed objects if(deterministicDispose) { someManagedObject.Parent.Dispose(); someManagedObject.Dispose(); } DisposeUnmanagedResources(); // if we've been disposed by .Dispose() // then we can tell the GC that it doesn't // need to finalise this object (which saves it some time) // GC.SuppressFinalize(this); } } Basically, the question is: Do the Exceptions in C# affect the performance a lot? Is it better to avoid Exceptions rethrow? If i generate an exception in my code, does it affect a performance? Sorry for the sillines of the question itself Microsoft's Design Guidelines for Developing Class Libraries is a very valuable resource. Here is a relevant article: Exceptions and Performance I would also recommend the Framework Design Guidelines book from Microsoft Press. It has a lot of the information from the Design Guidelines link, but it is annotated by people with MS, and Anders Hejlsberg, himself. It gives a lot of insight into the "why" and "how" of the way things are. I guess I need to create an Assembly but how do I do this concretely when I have multiple classes ? There seems to be many steps involved and searching on the net I cannot find any article on this subject. Can anyone point me to one if it exists ? I would suggest reading Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries Enjoy! Sometimes I've made a namespace in C# (I don't know if the problem is the same in VB.NET) containing 'System' and when I include it from a different DLL it goes crazy and conflicts with everything containing 'System'. This leads to crazy errors such as the following : The type or namespace name 'ServiceModel' does not exist in the namespace 'RR.System' The type or namespace name 'Runtime' does not exist in the namespace 'RR.System' The type or namespace name 'SerializableAttribute' does not exist in the namespace 'RR.System' If you don't know what I'm talking about then good for you :) I'm sure many have seen this issue. I'm not completely sure why it does this. It will occur even in files, such as generated code for web services that doesn't contain any reference to RR.System. This all occurs just because I'm including RR.System the DLL in a different project. How can I avoid this happening? Or fix it? If your project contains references to both System and your custom library (RR.System), the compiler will have an ambiguous reference to sort out. It's not sure which one you want. You can always use aliasing to ensure that your code is explicitly referencing the correct code from your project. BTW, there's a huge amount of best practice information to follow from Brad Abrams in Framework Design Guidelines. Why wouldn't I choose abstract? What are the limitations to declaring a class member virtual? Can only methods be declared virtual? You question is more related to style than technicalities. I think that this book has great discussion around your question and lots of others. I have this class: class DoSomething { private int timesDone; ... } Which is the right way to named variable 'timesDone'? Sometimes I see named as m_timesDone. Is this correct? Where I can find information about naming guidelines? Thank you! According to MS standards your code is OK. Having prefixes as m_ is not really necessary when you have advanced IDE. However short prefix like _ can be used to take advantage of auto complete feature to quickly sort out class members. I would recommend you to get a copy of "Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries" book to learn more on MS standards In order to to get the number of subitems in dotnet sometimes i have to ask a property Lenght sometimes i have to ask a property Count. Is there any reason for the distinction? example: int[] a; if (a.Length == 0) .... IList<int> b; if (b.Count == 0) .... Note Difference between IEnumerable Count() and Length sounds similar but does not answer the semantic between Length and Count I can remember the Framework Design Guidelines contains an annotation about this difference (I will at a qoute of it tomorrow). What I recall is that the designers think this is a quirk in the design, because it doesn't make sense for a lot of developers. Remember that in the beginning there were no design guidelines for .NET and much of the .NET API was copied from Java, including the quirks. Sure, there's a type of project in Visual Studio that outputs a DLL that people can use. I know that. I'm just wondering what are some of the standards that devs will expect when using my DLL file. I'll be providing a class that searches for movies in IMDB and returns results for a dev to consume. Not a webservice, but a local DLL file. (I'm aware that IMDB frowns upon web scraping, but I'm also aware that they give permission to people if asked. My permission is already sent.) How should I approach this? If you, or anybody, is serious about creating a good framework for others to use, check out. Which of the following three options would you choose for a property name in C#, and why? It depends. If you are making a library that will see external use, the .NET Framework Design Guidelines say that #1 is preferred. If it's an internal application/library, then I recommend using the format that is consistent with your teams development standard. Have read through the MSDN naming guidelines and could not find a clear answer, other than that you should try to avoid underscores in general. Let's say I have the following: public class Employee { private string m_name; //to store property value called Name public string Name { get { return m_name; } set { m_name = value; } } public void ConvertNameToUpper() { //by convention should you use this return m_name.ToUpper(); //or this return Name.ToUpper(); } } What is the proper naming convention for m_name in the above? For example, in code I inherit I commonly see: Which one (or another) is most commonly accepted? As a follow-up, in the methods of the class, do you refer to the internal (private) identifier or to the public property accessor? The framework Design guidelines book says that you shouldn't prefix your variables with _ - you should just use a lower case for the name of the variable, and Code Complete 2nd edition I believe says you shouldn't prefix your variables with m_. Here's the setup I have in a vs2008 solution: Data layer in a project named MyProject.Data Web application in a project named MyProject.Web MyProject.Web has a reference to MyProject.Data In MyProject.Web I have a class I use called "MySite.Utils" I want to be able to use MySite.Utils in MyProject.Data but I can't because it would cause a circular reference. One solution which is NOT possible is creating a third project and moving "MySite.Utils" in there because MySite.Utils actually uses MyProject.Data (thus it needs to reference it and another circular reference would be created) What's the best/easiest way to fix this? Sounds like you could benefit (and enjoy!) from reading this... Most of the time during coding, i found that whether i can write code in less number of line. I don't know when we write some login in less number of line in c# then whether we achive good performance or not? whether dotnet compiler compile code faster or not? Is there any source of tutorial/ book/ guideline so that we will make checklist before writing code. You asked for a book or article. One of the best books for best practices in .NET is The book is written by members of the .NET development team themselves. "less number of line"? This is not so relevant. Computing n'th Fibonacci number can be implemented recursively with exponential complexity or by using matrix multiplication in logarithmic time, with more lines of code. Kolmogorov complexity is not always to be minimized. Look a rule: avoid unnecessary boxing, as in: int x=3; Console.WriteLine("x ={0}", x); //wrong; but: Console.WriteLine("x ={0}", x.ToString()); I always regarded the book Effective C#: 50 Specific Ways to Improve Your C# as a good book showing you how to write better code in C#; for example, it explains why you should use foreach instead of for when iterating over a collection. public struct Cache { public int babyGangters { get; set; } public int punks { get; set; } public int ogs { get; set; } public int mercs { get; set; } public int hsPushers { get; set; } public int collegeDealers { get; set; } public int drugLords { get; set; } public int streetHoes { get; set; } public int webcamGrls { get; set; } public int escort { get; set; } public int turns { get; set; } public int cash { get; set; } public int bank { get; set; } public int drugs { get; set; } public int totalValue { get; set; } public int attackIns { get; set; } public int attackOuts { get; set; } public int status { get; set; } public int location { get; set; } } The rule of thumb is that a struct should not be bigger than 16 bytes (according to the Framework Design Guidelines). Your struct is 76 bytes (= 19 * 4), so it is pretty big. However, you will have to measure the performance. Big structs can be beneficial for some applications. The Framework Design Guidelines state: Avoid defining a struct unless the type [...] an instance size under 16 bytes. One of the annotations from Jeffrey Richterto this guidlines state: Value types can be more than 16 bytes if you don't intend to pass them to other methods or copy them to and from a collection class (like an array). Silly question really but just was wondering about other peoples naming conventions for DAL and BLL if there were any better names for them than those ones. I guess you mean the projects you are creating. If you follow ".NET Framework design guidelines" I usually see something like this: CompanyName.Product.Data CompanyName.Product.Logic However one might argue where to put the logic or even if you should name it Logic or BLL at all. It depends on the system in wide, if you are creating a Banking system you might not want to put all logic in a Logic namespace but you might want to split it up into more spaces like: BankName.Web.Authentication BankName.Web.Transactions Where these layers have their own set of logic layers. What are the most common naming conventions in C# for classes, namespaces and methods? Is it common to have getter/setter style methods as in Java? No it is not common to use getter /setter style names in C#. Properties should be used for almost all places you would use a getter / setter in Java. IMHO, the defacto standard for naming conventions comes from the Framework Design Guidelines. It's enforced by several tools (FxCop) and is the dominant style of many libraries including the BCL. According to Martin Fowler "Something can be public but that does not mean you have published it." Does this mean something like this: public interface IRollsRoyceEngine { void Start(); void Stop(); String GenerateEngineReport(); } public class RollsRoyceEngine : IRollsRoyceEngine { public bool EngineHasStarted { get; internal set; } public bool EngineIsServiceable { get; internal set; } #region Implementation of IRollsRoyceEngine public void Start() { if (EngineCanBeStarted()) EngineHasStarted = true; else throw new InvalidOperationException("Engine can not be started at this time!"); } public void Stop() { if (EngineCanBeStopped()) EngineHasStarted = false; else throw new InvalidOperationException("Engine can not be started at this time!"); } public string GenerateEngineReport() { CheckEngineStatus(); return EngineIsServiceable ? "Engine is fine for now" : "Hmm...there may be some problem with the engine"; } #endregion #region Non published methods public bool EngineCanBeStarted() { return EngineIsServiceable ? true : false; } public bool EngineCanBeStopped() { return EngineIsServiceable ? true : false; } public void CheckEngineStatus() { EngineIsServiceable = true; //_EngineStatus = false; } #endregion } Can it be said that published interface of this is IRollsRoyceEngine not whatever is in RollsRoyceEngine? If so what is the real difference between public and published methods? In my opinion mentioned white paper talks about target audience of the API rather than the distinction between interface and its implementation. You can find analogy in Framework Design Guidelines which says that once your API shipped you have a contract with consumers. For example, if you shipped in v1 of your framework IService interface you cannot change it in v2 because it will introduce breaking changes to end developers. Instead you must create new interface IService2 inhereted from IService and ship it with v2. So basically public API becomes published once you "sign a contract" with end developers. Returning back to your code - it will be published when you ship it to development community for example. Hope this explanation will help. Socket.Dispose() is an inaccessible member. However, we can bypass this by doing the following: ((IDisposible)Socket).Dispose() Two questions: Whenever a class implements a method such as Close() which accomplishes the same work as Dispose(), then it is recommended to explicitly implement the IDisposable interface, so that a developer will typically only see the Close() method, yet the Dispose method is still accessible through the IDisposable interface for use by the framework where a Dispose method is expected. Sometimes it makes sense to essentially expose Dispose under a different name, such as Close, where it makes for more readable code. You see these throughout the .NET Framework with things that can be "Closed" such as file handles and connections. Edit: See I had a little discussion with a friend about the usage of collections in return/input values of a method. He told me that we have to use - the most derived type for return values. - the least derived type for input parameters. So, it means that, for example, a method has to get a ReadOnlyCollection as parameter, and as return a List. Moreover, he said that we must not use List or Dictionary in publics API, and that we have to use, instead Collection, ReadOnlyCollection, ... So, in the case where a method is public, its parameters and its return values must be Collection, ReadOnlyCollection, ... Is it right ? Maybe your friend read: Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries It is a great book, and it covers questions like this in detail. I have a simple solution with the following projects as follows (base namespaces match the project name)... MyCompany.MyProduct.BusinessLayer MyCompany.MyProduct.Web.Service MyCompany.MyProduct.Web.Site I'm just trying to find a better name for BusinessLayer, I just don't really like it for some reason. So my question is what do you call your BusinessLayer projects/namespaces. article on namespace guidelines I've seen BusinessLogic, but it as noted by the rest of the answers here a lot of time you will see... Another good reference for this type of stuff is the Framework Design Guidelines book. I'm writing unit tests for classes which have properties that have setters but no getters. I want to be able to test these setters to make sure they are setting the data correctly. I find my options are: testAllSetters()which test them all at once But both solutions are undesirable since it adds unneeded functionality to the class just for the sake of testing it. What is the best way to unit test setters on classes that do not have paired getters? The problem here is that your don't want to change your API for your unit tests. Start looking at your unit tests as another user/consumer of your API. Just like developers using that library, unit tests have their own set of requirements. When you see your unit tests as consumer of your API, there will be a user that uses those getters, and it will justify them. When this is not possible to change your API (for instance if you're developing an reusable framework), make the unit testing API internal and use the InternalsVisibleToAttribute to allow your testing library to access internal methods of your code. Leaving unit tests aside, you still might want to consider having getters on those properties, because having properties without getters is very unintuitive for developers. The Framework Design Guidelines even have a rule against this: DO NOT provide set-only properties or properties with the setter having broader accessibility than the getter. You might also want to take that into consideration. Good luck. Please bare with my I'm very new to programming. I'm working through a tutorial of C# at the moment. And so far all variables and helper methods that I've created are capitalizedLikeThis. I've been doing this all in the Progam class. Today I've created my first new class and within it a method, and the way the method was written WasLikeThis with the first letter also capitalized, there was no explanation to this. Is it just a common convention? And if so, is there a specific reason for it? Thanks, Yes there is a reason for it. It is a Pascal Case naming convention for method names that is a standard in the .NET framework. All the types in the framework class library (FCL) follow the mentioned naming convention, and when you create some custom types or add methods to existing types you are augmenting the capabilites of the framework for your specific application needs. Your method capitalizedLikeThis is part of the Program class, which has an API that follows the .NET naming conventions. For example it contains a ToString() instance method so you could do: var program = new Program(); Console.WriteLine(program.ToString()); So the real question is do you want to add a method to the Program class that breaks the naming convention of the existing API? Consistency is a good thing and that is why you should follow the convention. If you want more information on this topic you can check out the only relevant book for .NET design guidelines containing naming conventions, and many other details related to the .NET framework design decisions: Or read the reduced MSDN article based on the book: Sometimes I dont know witch type of exception should I throw. So I usually throw Exception(). Is there some nice article about this? The problem with throwing a generic exception is that it restricts the ability of your code further up the stack to handle it properly (i.e. best practice is to trap the most specific exception possible before falling back, and to only capture what you can handle). Jeffrey Richter has an excellent section on exception handling (including info on the System.Exception namespace) in CLR via C# Which one should you use when you want to set the state of a Windows Forms control: Setting a Windows Forms state using a public property? Setting a Windows Forms state using an overloaded constructor that accepts a parameter? They're exactly the same. Or at least they should be, according to the Framework Design Guidelines. So you can expect that any of the standard classes exposed by the .NET Framework behave this way. Any constructor method that accepts a parameter corresponding to a property should do nothing more than set that property to the specified value. Quoting from Framework Design Guidelines by Cwalina and Abrams: Do use constructor parameters as shortcuts for setting main properties. There should be no difference in semantics between using the empty constructor followed by some property sets, and using a constructor with multiple arguments. The following three code samples are equivalent: //1 EventLog applicationLog = new EventLog(); applicationLog.MachineName = "BillingServer"; applicationLog.Log = "Application"; //2 EventLog applicationLog = new EventLog("Application"); applicationLog.MachineName = "BillingServer"; //3 EventLog applicationLog = new EventLog("Application", "BillingServer"); Similar guidelines concerning constructors are also available online from MSDN here. I'm a bit bewildered on this subject, as I relate variable prefixes to being a thing of the past now, but with Visual Studio 2010 onwards (I'm currently using 2012), do people still do this and why? I only ask because, these days, you can hover over any variable and it'll tell you the variable type and scope. There's literally no requirement for pre-fixing being there for readability. By this I mean: string strHello int intHello etc. And I'm being language/tool biased here - as Visual Studio takes a lot of the legwork out for you in terms of seeing exactly what type the variable is, including after conversions in the code. This is not a "general programming" question. The significant point is that the variable name "should not" represent its type. Instead, It should indicate the "business semantic" of the variable; The type of the variable is subject to change during the code maintenance, but the semantics of that variable is rarely changed. Incorporating "StyleCop" into your development lifecycle can enforce consistent code style amongst team members. UPDATE: This excerpt from the Chapter 3 of "Framework Design Guidelines" which is dedicated to "Naming Guidelines" helps to clarify the issue: Identifier names should clearly state what each member does and what each type and parameter represents. To this end, it is more important that name be clear than that it be short. Names should correspond to scenarios , logical or physical parts of the system, and well-known concepts rather than to technologies or architecture. DO choose easily readable identifier names.[...] DO favor readability over brevity.[...] DO NOT use underscores, hyphens, or any other non-alphanumeric characters.[...] DO NOT use Hungrian Notation. [...] I sometimes use if (this._currentToolForeColor.HasValue) return this._currentToolForeColor.Value; else throw new InvalidOperationException(); other times I use if (this._currentToolForeColor.HasValue) return this._currentToolForeColor.Value; throw new InvalidOperationException(); The two are equivalent, I know, but I am not sure which is the best and why. This goes even further as you can use other execution-control statements such as brake or continue : while(something) { if(condition) { DoThis(); continue; } else break; } versus while(something) { if(condition) { DoThis(); continue; } break; } EDIT 1 : Yes the loop example(s) suck because they are synthetic (i.e.: made up for this question) unlike the first which is practical. For a better and cleaner code, I think it is better to choose always this form: if(condition) { //do stuff } else { //do stuff } ..and so on. It's just a matter of "taste", but even if you have more rows of code, this is clearly readable, because you don't have to interpret nor understand, you can follow the code flow with your finger easily. So, for me, always the else even if a return prevents fallback into following code, always brackets event if you have a one-row if/else and so on. As far as I can remember, this is also suggested in Framework Design Guidelines, very easy book with lot of "do this" and "don't do this" guidelines. .net.net-2.0abstract-classaccessorapiapi-designarraysarticleasp.netattributesc#c++class-designcoding-styleconstructorcustom-exceptionsdesigndesign-patternsenumerationeventsexceptionexception-handlingframeworksgenericsgetter-setterhungarian-notationidisposableif-statementimmutabilityinheritanceinterfacejavalibrariesmarker-interfacesmiddlewarenamespacesnamingnaming-conventionsnested-classoopoverrideparsingpreferencesprogramming-languagespropertiespublic-methodscopeunit-testingvb.netvisual-studiovisual-studio-2008wcfweb-serviceswinforms
http://www.dev-books.com/book/book?isbn=0321246756&name=Framework-Design-Guidelines
CC-MAIN-2019-09
refinedweb
11,130
55.13
how to set class path how to set class path how to set class path in java set a class path - Development process set a class path thanks u sir i got a .jar file,but while... to set a classpath for that please help me sir thanks in advance...  ... to ToolBar. Go to Project>>Properties>>Java Build Path>>Add path - Java Beginners meaning of path and classpath what is the meaning of path and classpath. How it is set in environment variable. Path and ClassPath in in JAVAJava ClassPath Resources:- path - Java Beginners path how to set the path in environment variables to run java programs in my pc? Hi friend, Read for more information. poi & class path - Java Beginners . Also after downloading how to set class path. Hi Friend... of your jdk version.After this set the path in 'PATH' variable of Environment No action instance for path language="java" import="java.util.*" pageEncoding="ISO-8859-1"%> <% String path = request.getContextPath(); String basePath = request.getScheme()+"://"+request.getServerName()+":"+request.getServerPort()+path+"/"; %> < to obtain image path to obtain image path i have made a web application in which you can... or BROWSE button . and i am expecting to obtain the complete path of the image from..."); // create a file object for image by specifying full path of image Java Construct File Path Java Construct File Path In this section we will discuss about how to construct file path in Java. A file path can be constructed manually or by a Java program. It is best practice to construct a file path in Java by a Java Program Class path \utility\myapp, you would set the class path so that it contains C:\java... as classes in C:\java\OtherClasses, you would set the class path to: C:> java.... Setting the class path is mandatory to run a java application. Class path in java determinant of n*n matrix using java code determinant of n*n matrix using java code Here is my code... { double A[][]; double m[][]; int N; public input() { Scanner s=new Scanner(System.in); System.out.println("enter dimension of matrix"); N path How to connect to dao n bean classes with jsp How to connect to dao n bean classes with jsp I have made this edao...()); System.out.println("Bean set"); stmt.executeUpdate... page** <%@ page contentType="text/html; charset=utf-8" language="java Constructing a File Name path it is possible to set dynamic path, which is helpful for mapping local file name with the actual path of the file. Java API has provided us many packages...: C:\java>java ConstructingFileNamePath The path of the file Constructing a File Name path in Java ; In Java, it is possible to set dynamic path, which is helpful for mapping local file name with the actual path of the file using...; Download this Program Another program set the dynamic path using pls provide common path to set image in flex - XML pls provide common path to set image in flex hi, pls provide...\mannai1\src\pictures\useful_links_logo.gif .it works nicely.but when i set path... the coding in mxml to set common path of image in flex path setting - JSP-Servlet path setting Hi, friends How to set the oracle 10g path on browser to servlet program drag n drop - JSP-Servlet using drag and drop mode.when user drag n drop file then display the complete path...drag n drop I want to implement drag n drop functionality for simple HTML/JSP without using applet,flash or any heavy components.using browse button diff betn show n visible diff betn show n visible what is difference between show() & visible method in java How to round a number to n decimal places in Java How to round a number to n decimal places in Java How to round a number to n decimal places in Java retrieve data from database in java swing form using prev n next buttons retrieve data from database in java swing form using prev n next buttons i have a database having columns id(int),path(text),width(int),height(int... buttons.also first record should be visible as soon as the window opens n previous button Java IO Path Java IO Path In this section we will discuss about the Java IO Path. Storage... without complete information. To work with Path in Java, the Path class... is called a Path of that file or folder using which they can be accessed easily interview path pdf interview path pdf Plz send me the paths of java core questions and answers pdfs or interview questions pdfs... the interview for any company for <1 year experience thanks for all of u in advance Please visit Setting of java1.4.2 path Setting of java1.4.2 path Hello I have uploaded java1.4.2 into my laptop and i have done path settings for System variabales as Path... code and executed the java program , ie When i compile as javacHelllo.java(in which implements runnable n extends thread - Java Beginners implements runnable n extends thread what is the difference between implements runnable n extends thread? public class...(); class...; private int num; StringThreadImplement(String s, int n){ str = new String(s No SDK with the name or path or path....version xyz. Base SDK Missing. What is this error and how can i set... Missing or No SDK with the name or path for XYZ version follow the steps below File Path compare in java File Path Comparison :compareTo File Path Comparison :compareTo... of their path. For this purpose, we use compareTo() method please tell anybody how can i set a value in hiperlink for edit n delete link please tell anybody how can i set a value in hiperlink for edit n delete link <logic:present <logic:notEmpty <logic:iterate id="user" name Modifying the Path Example. , you set the output of the path with conditional statement using isset(); command. As we have set the path if (isset($_POST['posted'])). Define...Modifying the Path In this example, you will learn how to modify the corrupted Set interface is the example of Set Interface in Java. import java.util.*; public class...) { System.out.println("Set Example in Java!"); // Create HashSet Object Set set...Set interface hello,, What is the Set interface? hii Java ClassPath for.We set the PATH variables like this i.e path C:\Java\jdk1.6.0_03\bin (i)on command prompt C:\>set path=%path;C:\Java\jdk1.6.0_03\bin... command prompt Java class path can be set using either the -classpath option Jsp Absolute Path Jsp Absolute Path  ... the absolute path in jsp. The absolute path is the full path that contains the root directory instead of a few directories contained within the absolute path Java file absolute path Java file absolute path In this section, you will learn how to get the absolute path of the file. Description of code: If you want to locate a file without requiring further information, you can use absolute path. It always contain how to get java path name Getting a absolute path Getting a absolute path  ... the path of the file or directory so that you can access it. If you know the path... in front you where you don't know the path of the file, then what will you do Full path of image to save it in a folder that the part where I set the path for storage has to be either one of local context...Full path of image to save it in a folder Sir ,I am trying to upload... to find that image path &upload it as well. I am just a beginner in jsp Java get Absolute Path Java get Absolute Path In this section, you will study how to obtain the absolute path... file.getAbsolutePath() returns the absolute path of the given file.   Java example to get the execution path Java example to get the execution path  ... path of the system in java by using the system property. For getting execution... java installation directory java.class.path java class path Here Java code for set...!!! Java code for set...!!! Create 2 classes in same package Product.java productId, name, price ProductImpl.java create a set in this and try... not allow the duplicate products to be added into the set(equal productId). Hint jsf n html integration - Java Server Faces Questions jsf n html integration how to intefgrate html design made by using dreamweaver and do coding in it with help of jsf framework using netbeans Java program to get the desktop Path Java program to get the desktop Path  ... the desktop path of the system. In the java environment we can get the desktop path also with the system's property. For getting the desktop path we have to add setting path problem for org.jfree files - Java Beginners setting path problem for org.jfree files Hi deepak, As u said, i... Java Get Class path Java Get Class path In this section, you will learn how to get the class path. The method System.getProperties() determines the current system properties.  Path of source code Description: This example demonstrate how to get the path of your java program file. The URI represents the abstract pathname and URL construct from URI. Code: import java.io. Result=Set - Java Beginners result set, first move the pointer from first record to last record and get Set Interface Set Interface The Set interface extends the Collection interface.... It permits a single element to be null. The Set interface contains only methods which path we have to give in order to upload a file from any where in the system which path we have to give in order to upload a file from any where... to specifying a path in the code from that path only it is taking image . what i have... quickly i need it 1)page.jsp: <%@ page language="java" %> < can pass list of n values in session and get in jsp can pass list of n values in session and get in jsp In dao: am geting username,companyname,usertype and set to userBean and add to arraylist In servlet: list=userBean.selectUserBo(); HttpSession session = request.getSession image is display from path of mysql database image is display from path of mysql database <%@ page import...); saveFile = saveFile.substring(0, saveFile.indexOf("\n")); saveFile... = file.indexOf("\n", pos) + 1; pos = file.indexOf("\n", pos) + 1; pos = file.indexOf("\n Java Set Java Set Collections are objects that hold other objects which are maintained under some set of rules. A set is a public interface that extends the collection interface and comes Platform dependent values like line separator, path separator - Java Beginners Platform dependent values like line separator, path separator Hi, How will you get the platform dependent values like line separator, path...("path.separator") for path separator Thanks Convert the path in to properties Convert the path in to properties This example illustrates how to convert path... that common classpath and path can be shared among targets. Many tasks have javascript set scroll height javascript set scroll height How to set scroll height in JavaScript? CSS Scroll Method div { width:150px; height:150px; overflow... the Value of Scroll Bar in Java Swing Java set example Java set example In this section you will learn about set interface in java. In java set is a collection that cannot contain duplicate element. The set... collection. Example of java set interface. import java.util.Iterator; import Java Set iterator with example Java Set Interface keeps the data without duplicate value. Its one subtype... sorted data. It uses iterator() method to traverse the data Example of Java Set Iterator import java.util.*; public class setiterator { public static How to set memory used by JVM in Ant How to set memory used by JVM in Ant  ... of JVM (java virtual machine), when ANT (another neat tool) is used outside of java virtual machine. In this example, <property name="sourcedir" Java Set Iterator data. It uses iterator() method to traverse the data Java Set Iterator... Set Interface keeps the data without duplicate value. Its one subtype...(String[] args) { Set s = new HashSet(); s.add("car we can create our own header file in java?n how to create? we can create our own header file in java?n how to create? we can create our own header file in java?n how to create how to set image - EJB how to set image public ActionForward execute(ActionMapping mapping...:3306/erpvimaljava", "java", "java"); System.out.println("connection==>" + connection...; } } hello sir, this is my java coding, i connect this java Display set names Display set names If i enter the First letter of a name it will display the list of names starting with that letter in command prompt using java import java.util.*; class DisplaySetOfNames{ public static void Executing Set of SQL statements in Java Executing Set of SQL statements in Java Hi, I am trying to execute a procedure which is stored in MS SQL server's database. I have configured the driver with ther server name, database name and uid/pwd using a callable N - Java Glossary N - Java Glossary Java Number Format In programming languages, a pattern of special characters is used to specify the format of the number. In java this is achieved Write a java program that prints the decimal representation in reverse. (For example n=173,the program should print 371.)c Write a java program that prints the decimal representation in reverse. (For example n=173,the program should print 371.)c class rose { int n; int i; rose(int n,int i) { n=173; i=371; } void out { System.out.println Navigable Set Example Navigable Set Example  ...;{ System.out.println("Navigable set Example!\n"...;iterator over the elements in navigable set,   Tree Set Example C:\vinod\collection>java TreeSetExample Tree Set Example... Tree Set Example  ... is empty, it displays the message "Tree set is empty." otherwise Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
http://www.roseindia.net/tutorialhelp/comment/24353
CC-MAIN-2015-18
refinedweb
2,362
64.81
This is the description of the MATLAB/Octave API bindings for the Load Cell Bricklet 2.0. General information and technical specifications for the Load Cell Bricklet 2.0. In MATLAB: import com.tinkerforge.BrickletLoadCellV2; loadCellV2 = BrickletLoadCellV2('YOUR_DEVICE_UID', ipcon); In Octave: loadCellV2 = java_new("com.tinkerforge.BrickletLoadCellV2", "YOUR_DEVICE_UID", ipcon); This object can then be used after the IP Connection is connected.() The following constants are available for this function: For config: Sets the currently measured weight as tare weight. Sets the length of a moving averaging for the weight value. Setting the length to 1 will turn the averaging off. With less averaging, there is more noise on the data. Returns the length moving average as set by setMovingAverage(). To calibrate your Load Cell Bricklet 2.0 you have to The calibration is saved in the flash of the Bricklet and only needs to be done once. We recommend to use the Brick Viewer for calibration, you don't need to call this function in your source code. The measurement rate and gain are configurable. The rate can be either 10Hz or 80Hz. A faster rate will produce more noise. It is additionally possible to add a moving average (see setMovingAverage()) to the measurements. The gain can be 128x, 64x or 32x. It represents a measurement range of ±20mV, ±40mV and ±80mV respectively. The Load Cell Bricklet uses an excitation voltage of 5V and most load cells use an output of 2mV/V. That means the voltage range is ±15mV for most load cells (i.e. gain of 128x is best). If you don't know what all of this means you should keep it at 128x, it will most likely be correct. The following constants are available for this function: For rate: For gain: Returns the configuration as set by setConfiguration(). The following constants are available for this function: For rate: For:WeightCallbackConfiguration(). The parameter is the same as getWeight(). In MATLAB the set() function can be used to register a callback function to this callback. In Octave a callback function can be added to this callback using the addWeightCallback() function. An added callback function can be removed with the removeWeightCallback() Load Cell Bricklet 2.0. The getIdentity() function and the IPConnection.EnumerateCallback callback of the IP Connection have a deviceIdentifier parameter to specify the Brick's or Bricklet's type. This constant represents the human readable name of a Load Cell Bricklet 2.0.
https://www.tinkerforge.com/en/doc/Software/Bricklets/LoadCellV2_Bricklet_MATLAB.html
CC-MAIN-2020-24
refinedweb
405
59.3
Converting A Single CSS Element Selector To XPath Using ColdFusion I wanted to explore the idea of converting natural CSS to XPath selectors for use in merging CSS with XHTML content. I don't think the concept is very complicated, but it does seem to have a bunch of little, moving parts. So, rather than solve all the problems at one time, I thought it would be good to break it down and start with the smallest problem first and then build on that to eventually convert natural CSS to XPath. So, the smallest problem that I can see is converting a single element selector into XPath. For example, converting: div ... into: div Or, converting: div.form ... into: div[ contains( @class, 'form' ) ] This doesn't get into how these are grouped when it comes to contextual selecting (ex. P within a Div tag) - that's the next level up. Let's tackle the lowest level problem first, then we can worry about how to join these together to perform more complex searches. When it comes to viable CSS selectors, I think we are dealing with about 6 different possibilties: - element#id - element.class - #id - .class - *.class - * Thanks to the finite nature of this set, we can easily match these patterns against a simple CFIF / CFELSE statement with regular expressions. And so, I have taken that idea and wrapped it up in a ColdFusion user defined function, CSSElementSelectorToXPath(): - <cffunction - name="CSSElementSelectorToXPath" - access="public" - returntype="string" - output="false" - - <!--- Define arguments. ---> - <cfargument - name="Selector" - type="string" - required="true" - hint="I am the single element selector." - /> - <!--- Define the local scope. ---> - <cfset var LOCAL = {} /> - <!--- - Trim the selector and remove any pseudo selector - information. While some of these can be applied - (ex. :event), for our purposes, we are not going to - apply them at this time. - ---> - <cfset LOCAL.Selector = Trim( - REReplace( - ARGUMENTS.Selector, - ":[^\s]*", - "", - "one" - ) - ) /> - <!--- - Check to see what kind of pattern we have here. As far - as CSS is concerned, we really only have six different - selectors to care about: - element#id - element.class - #id - .class - *.class - * - ---> - <cfif REFind( "^\w+##.+$", LOCAL.Selector )> - <!--- Return ID selector. ---> - <cfreturn ( - ListFirst( LOCAL.Selector, "##" ) & - "[ @id = """ & - ListLast( LOCAL.Selector, "##" ) & - """ ]" - ) /> - <cfelseif REFind( "^\w+\..+$", LOCAL.Selector )> - <!--- Return class selector. ---> - <cfreturn ( - ListFirst( LOCAL.Selector, "." ) & - "[ contains( @class, """ & - ListLast( LOCAL.Selector, "." ) & - """ ) ]" - ) /> - <cfelseif REFind( "^##.+$", LOCAL.Selector )> - <!--- Return ANY ID selector. ---> - <cfreturn ( - "*[ @id = """ & - ListLast( LOCAL.Selector, "##" ) & - """ ) ]" - ) /> - <cfelseif REFind( "^\..+$", LOCAL.Selector )> - <!--- Return ANY class selector. ---> - <cfreturn ( - "*[ contains( @class, """ & - ListLast( LOCAL.Selector, "." ) & - """ ) ]" - ) /> - <cfelseif REFind( "^\*\..+$", LOCAL.Selector )> - <!--- Return ANY class selector. ---> - <cfreturn ( - "*[ contains( @class, """ & - ListLast( LOCAL.Selector, "." ) & - """ ) ]" - ) /> - <cfelseif REFind( "^\w+$", LOCAL.Selector )> - <!--- Return element selector. ---> - <cfreturn LOCAL.Selector /> - <cfelse> - <!--- Not valid - return ANY selector. ---> - <cfreturn "*" /> - </cfif> - </cffunction> As you can see, I am stripping out any pseudo selectors before the conversion. It is true that some pseudo selectors can be used (ex. :even, :odd, :first), but for now, we'll strip them out. Really, it wouldn't be that hard to put them in; just as with the element selectors, each one would have to be mapped to a specific XPath predicate: :first ==> [ position() = 1 ] More on that late. Now, to test to see if the above UDF works, I am gonna run some various CSS selectors through it: - <cfoutput> - div ==> - #CSSElementSelectorToXPath( "div" )#<br /> - div##data-form ==> - #CSSElementSelectorToXPath( "div##data-form" )#<br /> - div.data-form ==> - #CSSElementSelectorToXPath( "div.data-form" )#<br /> - ##data-form ==> - #CSSElementSelectorToXPath( "##data-form" )#<br /> - .data-form ==> - #CSSElementSelectorToXPath( ".data-form" )#<br /> - *.data-form ==> - #CSSElementSelectorToXPath( "*.data-form" )#<br /> - </cfoutput> When the above code, we get the following output: div ==> div div#data-form ==> div[ @id = "data-form" ] div.data-form ==> div[ contains( @class, "data-form" ) ] #data-form ==> *[ @id = "data-form" ) ] .data-form ==> *[ contains( @class, "data-form" ) ] *.data-form ==> *[ contains( @class, "data-form" ) ] Looks like it's working nice. Next step will be to combine the above and create contextual selectors. Looking For A New Job? - Mobile Application Developer at Xorbia Tickets - 7 Year + Lead ColdFusion Developer at Atprime Media Services - Full Time ColdFusion Developer Needed at InterCoastal Net Designs - Developer-focused job platform at Honeypot Reader Comments
http://www.bennadel.com/blog/1526-converting-a-single-css-element-selector-to-xpath-using-coldfusion.htm?_rewrite
CC-MAIN-2016-40
refinedweb
666
51.65
Before: class A: def methodA(self): xyz = 1 self.methodB(xyz) <- do ctrl+1 here After: class A: def methodB(self, xyz): pass Attached offending code, undefined method at line number 46. Fabio Zadrozny 2012-04-11 Given the 'simple' instructions it doesn't happen for me... and in the attached file, line 46 is empty, so, I couldn't reproduce it there either. So, it's actually possible that this is already fixed... which PyDev version are you using? Can you attach a 'simple' example with instructions on how to reproduce it (if you're already in the latest released version, it could be that something is different in your file, such as indentation or some strange character. which I'm not putting when trying to reproduce it). Fabio Zadrozny 2012-04-11 I have uploaded a simplefied example that should reproduce this issue (at least is does for me). You should be able to just load both file up from Eclipse and do Ctrl+1 on the last line in the test.py file (ln# 43) to reproduce. Fabio Zadrozny 2012-04-24 I tested it here with the example and it worked (I must say that initially I though the error happened, but the ctrl+1 location was still there, just scrolled down). So, can you check with the latest nightly to see if it's working for you there? Fabio Zadrozny 2012-04-24
http://sourceforge.net/p/pydev/bugs/1496/
CC-MAIN-2013-48
refinedweb
238
72.76
I? Listing 1 demonstrates this: Listing 1. Multiline strings in Java String example1 = "This is a multiline " + "string which is going to " + "cover a few lines then " + "end with a period."; String example2 = "To preserve the line-breaks, \n" + "you must explicitly include them \n" + "with a backslash \"n\"."; String example3 = "You also must \"escape\" internal quotes."; Writing a multiline String in Groovy is much easier. Groovy supports the notion of here-docs, as shown in Listing 2. A here-doc is a convenient mechanism for creating formatted Strings, such as HTML and XML. To create a here-doc, simply surround your String with Python-like triple quotes. Listing 2. Here-docs in Groovy String itext ="""This is another multiline String that takes up a few lines. Heredocs are different than Java Strings in a few important ways: 1. They preserve newlines. 2. You don't have to "escape" internal quotes. """. Listing 3. GStrings in Groovy def. Listing 4. GString autocalling def lang = "Groovy" println "I dig any language with ${lang.size()}. Listing 5. A Template for creating GroovyTestCases import groovy.util.GroovyTestCase class <%=test_suite %> extends GroovyTestCase { <% test_cases.each{ tc ->. Listing 6. GStrings in action . For example, if a simple template had a variable named favlang, I'd have to define a map with a key value of favlang. The key's value would be whatever I chose as my favorite scripting language (in this case, Groovy, of course). In Listing 7, I've defined this simple template, and in Listing 8, I'll show you the corresponding mapping code. Listing 7. Simple template to demonstrate mapping My favorite dynamic language is ${favlang} Listing 8 shows a simple class that does five things, two of which are important. Can you tell what they are? Listing 8. Mapping values for a simple template import groovy.text.Template import groovy.text.SimpleTemplateEngine class SimpleTemplate{ static void main(String[] args) { def fle = new File("simple-txt.tmpl") def binding = [favlang: "Groovy"] def engine = new SimpleTemplateEngine() def template = engine.createTemplate(fle).make(binding) println template.toString() } } Mapping the values for the simple template in Listing 8 was surprisingly easy. First, I created a File instance pointing to the template,. Listing 9. A Person class in Groovy class Person{ int age String fname String lname } In Listing 10, you can see the mapping code that maps an instance of the above-defined Person class. Listing 10. Mapping a Person class with a template import groovy.text.Template import groovy.text.SimpleTemplateEngine class TemplatePerson{ static void main(String[] args) { def pers1 = new Person(age:12, fname:"Sam", lname:"Covery") def fle = new File("person_report.tmpl") def binding = [p:pers1] def engine = new SimpleTemplateEngine() def." When the code in Listing 10 is run, the output will be XML defining the person element, as shown in Listing 11. Listing 11. Person template output . Listing 12. Mapping a list of test cases import groovy.text.Template import groovy.text.SimpleTemplateEngine def fle = new File("unit_test.tmpl") def coll = ["testBinding", "testToString", "testAdd"] def binding = [test_suite:"TemplateTest", test_cases:coll] def engine = new SimpleTemplateEngine() def. Listing 13. Smelly code nfile.withPrintWriter{ pwriter -> pwriter.println("<md5report>") scanner.each{ f ->. Listing 14. Refactoring old code into a template <md5report> <% clazzes.each{ clzz->. The model then becomes the ChecksumClass defined in Listing 15. Listing 15. CheckSumClass defined in Groovy class CheckSumClass{ String name String value } Class definitions are fairly easy in Groovy, no? Creating a collection Next, I need to refactor the section of code that previously wrote to a file -- this time with logic to populate a list with the new ChecksumClass, as shown in Listing 16. Listing 16. Refactored code creating a collection of ChecksumClasses def classez = [] scanner.each{ f -> f.eachLine{ line -> def iname = formatClassName(bsedir, f.path) classe. Adding the template mapping The last thing I need to do is add the template engine-specific code. This code will perform the run-time mapping and write the corresponding formatted template to the original file, as shown in Listing 17. Listing 17. Refactoring with template mapping def fle = new File("report.tmpl") def binding = ["clazzes": classez] def engine = new SimpleTemplateEngine() def: Listing 18. Look, Ma! Less smelly code! void buildReport(String[] dirs, String todir){ def ant = new AntBuilder() dirs.each{bsedir -> def scanner = ant.fileScanner { fileset(dir:bsedir) { include(name:"**/*.class.md5.txt") } } def rdir = todir + File.separator + bsedir + File.separator + "xml" + File.separator def file = new File(rdir) if(!file.exists()){ ant.mkdir(dir:rdir) } def nfile = new File(rdir + File.separator + "checksum.xml") //newly refactored code using templates def classez = [] scanner.each{ f -> f.eachLine{ line -> def iname = formatClassName(bsedir, f.path) classez << new CheckSumClass(name:iname, value:line) } } def fle = new File("report.tmpl") def binding = ["clazzes": classez] def engine = new SimpleTemplateEngine() def template = engine.createTemplate(fle).make(binding) nfile.withPrintWriter{ pwriter -> pwriter.println template.toString() } } } String formatClassName(String dirName, String className){ def paths = dirName.split("\\/") return paths.join(".") + "." + className } - Don't miss the complete set of Practically Groovy articles, including JDBC programming with Groovy (developerWorks, January 2005), which featured the checksum reporting application example, and Ant Scripting with Groovy (developerWorks, December 2004), which showed how Groovy's built-in build reporting tool can facilitate more expressive Ant builds. - Malcolm Davis offers a nice overview of the MVC design pattern in Struts, an open source MVC implementation (developerWorks, February 2001). - FreeMarker is another rather slick Java template engine. - The next time you're playing with Python, take a look at Cheetah, an extremely effective Python-powered template engine. - See Python-Powered Templates with Cheetah (OnLamp.com, January 2005) to learn more about Cheetah. -.
http://www.ibm.com/developerworks/java/library/j-pg02155/index.html
CC-MAIN-2014-52
refinedweb
934
60.51
The Ins and Outs of Dependency Properties and Routed Events in WPF If you follow these rules, you will create a dependency property that the WPF runtime will understand. Suppose that the text you intended to show in various TextBlocks was so large that your window needed to scroll. You could use a ScrollViewer to scroll the various TextBlocks, kind of like this: <Window ... <Window.Resources> <Style TargetType="{x:Type TextBlock}"> <Setter Property="Margin" Value="10"/> </Style> </Window.Resources> <ScrollViewer Name="myScroll"> <StackPanel Name="myStackPanel"> <TextBlock TextWrapping="Wrap" Name="myTextBlock1"> Sample Text outside the ItemsControl block. Some more text. Even more Text. A lot more text. </TextBlock> <ItemsControl Foreground="Red"> <TextBlock TextWrapping="Wrap" Name="myTextBlock2"> Sample Text without foreground. More Red text, even more red text. A lot more red text. </TextBlock> <TextBlock Foreground="Blue" TextWrapping="Wrap" Name="myTextBlock3"> Sample Text with foreground. A terrific amount of blue text. A lot more amount of blue text. We need to make sure that this scrolls. </TextBlock> </ItemsControl> </StackPanel> </ScrollViewer> </Window> This XAML snippet ends up producing a window that looks like Figure 4. Figure 4. Using a ScrollViewer to scroll TextBlocks Click the title bar where it says MyWPFApp to make sure that the window frame has the right focus. Next, press the "Down" arrow key on your keyboard. What do you notice? You probably found that the text doesn't scroll downwards. Note: Due to a bug in .NET 3.0 extensions CTP for Visual Studio 2005, you may need to try this scrolling by directly running the EXE. Doing so in debug mode may result in an inexplicable exception. Now, click on the thumb of the scroll bar, and then try pressing the down arrow key on your keyboard. The text still won't scroll. Not until you explicitly click on the text itself and then hit the down arrow key will the text finally scroll. This problem is not unusual in, say, a NotePad.exe written with the Win32 API. However, it is accentuated in WPF. In the Win32 API, you could reasonably assume that the window you see representing NotePad.exe contains a bunch of sub-windows, and the huge multi-line textbox you type in is just another window. Each window receives messages, so when you create a WM_KEYDOWN message using the keyboard and press "A," the textbox responds by displaying "A" within its client area. However, because the window has focus, what happens when you press CTRL_O to open a file? The notepad window needs to somehow intercept that message and show the File Open dialog. I don't have access to the source code for notepad.exe, but I assume that its top level is using a Win32 API method such as PeekMessage to hear messages before the child textbox does and, if appropriate, to act upon them before the TextBox does. Routed Events Explained This problem is to some extent accentuated in WPF as well, because the control tree tends to get a little bit more complex when the framework is almost infinitely flexible and lets you do crazy things such as throw a TextBox on a button as the button's content. Thus, the solution to this problem in WPF is routed eventsevents that traverse up or down the control hierarchy. Thus, if you press the down arrow key, every relevant control in the control hierarchy somehow is informed that a key was pressedunless, of course, one of the links in this chain decides to break the communication. In the example application, a rather incomplete control hierarchy would look a bit like Figure 5 (incomplete because you don't see some controls). Figure 5. Incomplete Control Hierarchy To try and see who gets which events, modify the code of your Window as follows: public Window1() { InitializeComponent(); myWindow.KeyDown += new KeyEventHandler(GenericKeyDownHandler); myScroll.KeyDown += new KeyEventHandler(GenericKeyDownHandler); myStackPanel.KeyDown += new KeyEventHandler(GenericKeyDownHandler); myWindow.PreviewKeyDown += new KeyEventHandler(GenericKeyDownHandler); myScroll.PreviewKeyDown += new KeyEventHandler(GenericKeyDownHandler); myStackPanel.PreviewKeyDown += new KeyEventHandler(GenericKeyDownHandler); } void GenericKeyDownHandler(object sender, KeyEventArgs e) { myTextBlock1.Text += "\nSender: " + (sender as Control).Name + "\t RoutedEvent:" + e.RoutedEvent.Name; } Now run the application, click the title bar to give the window focus, and press the down arrow key. You will see the following event sequence: Sender: myWindow RoutedEvent:PreviewKeyDown Sender: myWindow RoutedEvent:KeyDown So, the myWindow gets the KeyDown message first and then eats the message so the underlying controls never get that message. Well, that certainly explains the mystery of the text not scrolling. Now, click on the TextBlock itself, and press the down arrow key once again. You would see the following event sequence: Sender: myWindow RoutedEvent:PreviewKeyDown Sender: myScroll RoutedEvent:PreviewKeyDown In this case, the window does get the PreviewKeyDown message first, but it sends it along to myScroll, which then dutifully acts on the message by scrolling the text. Thus, the event is being routed from top to bottom in the control hierarchy. In certain instances, you may want to bubble the event up the chain instead of tunneling it down the chain, or simply send the event directly. You can specify this behavior on a custom element when you register your event. The following is a very simple implementation of a custom Pop event on a MyCustomElement: public class MyCustomElement : UIElement { public static readonly RoutedEvent PopEvent ; public event RoutedEventHandler Pop { add {AddHandler(PopEvent, value);} remove{RemoveHandler(PopEvent, value);} } public static MyCustomElement() { PopEvent = EventManager.RegisterRoutedEvent( "Pop", RoutingStrategy.Bubble, typeof(RoutedEventHandler), typeof(MyCustomElement)) ; } } The next question obviously is how can you fix your code so the text will indeed scroll with the window in focus and the down arrow key being pressed? This example involves four major visual elements: - The window - The TextBlock - The Scrollviewer - The Stack Panel If you observe the class hierarchy of these, it looks like Figure 6. Figure 6. The Class Hierarchy of the Four Major Visual Elements As you can see, all of these end up inheriting from UIElement. If you run reflector and decompile the code for UIElement.OnKeyDown, you will find that it is implemented as a protected virtual void method that accepts a single parameter of type KeyEventArgs. Thus, controls further down in the hierarchy can choose to give an implementation to this method. As it turns out, this event is simply ignored all the way down to the window. After all, why should a generic window with no scroll bars need to bother about the key down event? However, if you look into the implementation of OnKeyDown for ScrollViewer, you will note that ScrollViewer verifies whether the event has already been handled by checking the KeyEventArgs.Handled property. If the event isn't handled yet and the appropriate cursor key is pressed, it responds to the event by scrolling in the appropriate direction. For the scrolling to work with the window in focus, the myWindow element will need to handle the OnKeyDown event and simply pass that event to the myScroll element. To ensure this happens, add the following code to Window1's code: protected override void OnKeyDown(KeyEventArgs e) { base.OnKeyDown(e); if (e.Key == Key.Down) { myScroll.RaiseEvent(e); } } The text will now scroll with just the window in focus. Be very careful of connecting events in this manner, though. For instance, if I forgot to check for Key.Down, ALT_F4 would also be routed to the scroll viewer. Thus, pressing ALT_F4 on the window would not close the window properly. As a rule, I always try to call the base class's implementation for key handling, just to be safe. also has been awarded the Microsoft MVP award.<<
https://www.developer.com/net/net/article.php/11087_3666151_2/The-Ins-and-Outs-of-Dependency-Properties-and-Routed-Events-in-WPF.htm
CC-MAIN-2018-51
refinedweb
1,266
56.05
This section illustrates you how to generate a temporary file. Description of code: A temporary file stores information temporarily to free the memory for other purposes. You can use this temporary file to store the content or various information which may be required again during the execution of the application. It is beneficial and secure to prevent loss of data and increases the efficiency of the application. In the given example we have used createTempFile() method to create an empty temporary file. We have used BufferedWriter class to write some text into it. Then we have used the method getAbsolutePath to find the location of temporary file. Here is the code: import java.io.*; public class CreateTempFile { public static void main(String[] argv) throws Exception { File temp = File.createTempFile("file", ".tmp"); BufferedWriter out = new BufferedWriter(new FileWriter(temp)); out.write("Hello World"); out.close(); System.out.println(temp.getAbsolutePath()); } } Through the use of createTempFile() method, you can create an empty temporary file. Output: Advertisements Posted on: August 1,
http://www.roseindia.net/tutorial/java/core/files/createTemporaryFile.html
CC-MAIN-2017-17
refinedweb
168
50.94
6 lines, 2 for loops. Fully explained. solution in Clear category for The Hidden Word by bavili8766 import itertools # we need this for itertools.zip_longest to arrange the lines into columns def checkio(text, word): for a, b in enumerate(text.lower().replace(' ', '').split('\n')): # horizontal check : makes the text lowercase, removes whitespace, then splits the text into a list of the lines if word in b: # if the word is found in a row : return [a+1, b.index(word[0])+1, a+1, b.index(word[0])+len(word)] # returns the coordinates for a, b in enumerate(list(itertools.zip_longest(*[i for i in text.lower().replace(' ', '').split('\n')], fillvalue=''))): # vertical check : uses zip_longest to take the lines we used earlier and organise them into columns if word in ''.join(b): # if the word is found in a column : return [b.index(word[0])+1, a+1, b.index(word[0])+len(word), a+1] # returns the coordinates !") Aug. 6, 2021 Forum Price For Teachers Global Activity ClassRoom Manager Leaderboard Jobs Coding games Python programming for beginners
https://py.checkio.org/mission/hidden-word/publications/bavili8766/python-3/6-lines-2-for-loops-fully-explained/share/95f642668ba7e051a60d2d5458719efc/
CC-MAIN-2022-27
refinedweb
179
59.4
4.14-stable review patch. If anyone has any objections, please let me know.------------------From: Helge Deller <deller@gmx.de>[ Upstream commit b845f66f78bf42a4ce98e5cfe0e94fab41dd0742 ]Carlo Pisani noticed that his C3600 workstation behaved unstable during heavyI/O on the PCI bus with a VIA VT6421 IDE/SATA PCI card.To avoid such instability, this patch switches the LBA PCI bus from Hard Failmode into Soft Fail mode. In this mode the bus will return -1UL for timed outMMIO transactions, which is exactly how the x86 (and most other architectures)PCI busses behave.This patch is based on a proposal by Grant Grundler and Kyle McMartin 10years ago::>Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>--- drivers/parisc/lba_pci.c | 20 +++++++++++++++++++- 1 file changed, 19 insertions(+), 1 deletion(-)--- a/drivers/parisc/lba_pci.c+++ b/drivers/parisc/lba_pci.c@@ -1403,9 +1403,27 @@ lba_hw_init(struct lba_device *d) WRITE_REG32(stat, d->hba.base_addr + LBA_ERROR_CONFIG); } - /* Set HF mode as the default (vs. -1 mode). */++ /*+ * Hard Fail vs. Soft Fail on PCI "Master Abort".+ *+ * "Master Abort" means the MMIO transaction timed out - usually due to+ * the device not responding to an MMIO read. We would like HF to be+ * enabled to find driver problems, though it means the system will+ * crash with a HPMC.+ *+ * In SoftFail mode "~0L" is returned as a result of a timeout on the+ * pci bus. This is like how PCI busses on x86 and most other+ * architectures behave. In order to increase compatibility with+ * existing (x86) PCI hardware and existing Linux drivers we enable+ * Soft Faul mode on PA-RISC now too.+ */ stat = READ_REG32(d->hba.base_addr + LBA_STAT_CTL);+#if defined(ENABLE_HARDFAIL) WRITE_REG32(stat | HF_ENABLE, d->hba.base_addr + LBA_STAT_CTL);+#else+ WRITE_REG32(stat & ~HF_ENABLE, d->hba.base_addr + LBA_STAT_CTL);+#endif /* ** Writing a zero to STAT_CTL.rf (bit 0) will clear reset signal
https://lkml.org/lkml/2018/5/28/953
CC-MAIN-2019-43
refinedweb
307
68.26
A Domain Object Model (DOM) is a set of classes that models concepts from your problem domain. We really cannot do justice to the concept of a DOM in just a small portion of this chapter, so we recommend that you read Patterns of Enterprise Application Architecture by Martin Fowler (Addison-Wesley, 2002) or Domain Driven Design: Tackling Complexity in the Heart of Software by Eric Evans (Addison-Wesley, 2003) for a more complete description of the DOM pattern. Although we do not go into great detail on this pattern, we do show you why we chose to create a DOM for the SpringBlog application and how we built our DOM. Given that this is a book on Spring, you might find it strange that we dedicate considerable page space to a topic that is not directly related to Spring in any way. Of the applications that we have built using Spring, the only objects that are consistently not managed by Spring are Domain Objects. The reason for this is that, really, Spring does not need to be involved with Domain Objects. Generally, you create many instances of your Domain Objects using the new() operator, and although you can have Spring create new instances for you as you need them, it seems like overkill to have to call BeanFactory.getBean() every time you need a new Domain Object instance. This is especially true when you consider that, typically, Domain Objects do not take advantage of Dependency Injection, because they generally have few dependencies outside of the DOM itself, and they don't require much configuration. You might well be wondering, then, why so much attention to the DOM? The answer is simple. The DOM affects so many other parts of the application, parts that are managed by Spring, that getting it right is very important to getting your whole application right. The important thing to understand about the DOM pattern is that it is not the same as the Value Object (often called Data Transfer Object) pattern. The Value Object pattern was created to overcome a shortcoming in the original EJB specification that meant that all calls to an EJB were remote. Configuring the state of an EJB typically means many calls, all of which are remote. Using a Value Object, object state is transferred in bulk using a single remote call, thus reducing the performance hit of making many remote calls. A DOM is an object-based representation of the application problem domain, intended to allow the programmer to code in terms of objects that exist in the problem domain, not objects that exist inside the computer. Where a Value Object purely encompasses state, it is perfectly acceptable for a Domain Object to encompass both state and behavior (although you may choose not to encapsulate behavior inside Domain Objects). Another key difference between Domain Objects and Value Objects is that a Value Object's structure is driven by the need to transfer data remotely, whereas a Domain Object is modeled to represent a real-world concept and is not driven by some need of the application infrastructure. As we discuss later, we believe there are no hard-and-fast rules for modeling Domain Objects; you have to choose a level of granularity that matches your application and the functions it will perform. It is possible for an application to have both Domain Objects and Value Objects. In this approach, Value Objects are used by the Business Tier to communicate with the Data Access Tier. These Value Objects are then converted as appropriate into Domain Objects and passed into the Presentation Tier for rendering. In our opinion, this approach is really not worth the hassle. With Spring, the data access framework is so powerful that it is simple to map data directly to Domain Objects. However, this approach is sometimes problematic when you have a DOM that is quite far removed from the model in the underlying data store. This issue is discussed in greater detail in the later section entitled "Modeling Domain Objects." Creating a DOM requires some up-front effort in order to identify Domain Objects and then create an in-code representation of these Objects. However, in all but the most trivial of applications, this up-front effort is far outweighed by the time you will save and the bugs you will avoid when it comes to implementing business logic to do something with your Domain Objects. We find that using a good DOM makes creating the code to solve business problems much easier, since you are able to code in terms of the problem rather than in terms of the machine. A good DOM makes it easier for developers to transform application requirements into application features. There are a great many different methodologies and approaches to Domain Object modeling. Some practices advocate letting the underlying data store drive your object model, whereas some practices say, "Let the business domain drive the object model." In practice, we have found that a happy medium between these two approaches results in a DOM that is both easy to work with and well performing. For small applications with only five or six database tables, it is often easier just to create one Domain Object that corresponds to each database table. Although these objects are not strictly Domain Objects—in that their creation is not driven by the problem domain, but rather the data structure—they are close enough for the purposes of such a simple application. Indeed, in many small applications, the result of an extensive domain modeling process is an object model that matches the database structure entirely. For larger applications, a little more thought has to be put into the real-world problem domain and the underlying data store. When we are building a DOM for an application, we usually focus on three main points: How the problem domain is structured How the Domain Objects will be used How the underlying data store is constructed What we are looking for is a DOM that is as close to the ideal model as possible without affecting the performance of the data store too much and without having too great an impact on code that has to use the Domain Objects. Typically, a DOM is quite granular, and you might end up with more than one class for a single logical concept. For instance, consider the concept of an order in a purchasing system. Typically an order is modeled as a single Order object with one or more OrderLine objects that represent each line item of the order. Trying to model an order using a single object leads to an object model that is unnecessarily coarse and unwieldy, not to mention difficult to implement. You should always look for opportunities to increase the granularity of your Domain Objects when it makes working with the DOM easier. You will also find that your DOM contains objects that do not exist in your data store. For instance, a typical purchasing system has some notion of a shopping cart, perhaps represented by Cart and CartItem objects. Unless you are required to persist contents across user sessions, chances are these Domain Objects do not have corresponding tables for data storage. Remember, you are not simply building an object-oriented representation of your database, you are modeling the business domain. This point cannot be stressed enough. We have seen plenty of projects that created a pseudo-DOM derived directly from the data store, and inevitably these projects suffered from the lack of abstraction that can be gained from a well-defined DOM. We have found that a solid DOM comes from taking the time to look at your problem domain, identifying the objects in the domain, and then looking at how the natural granularity of these objects fits into the requirements of your application. Although we take both the utilization of the Domain Objects and the underlying data store into consideration, we don't like to let these have undue influence on our DOM. It is important to remember that the goal of building a DOM is to create a set of classes that help you and other developers build the application at a level of abstraction that is closer to the application's problem domain. In general, we consider all other concerns secondary when building a DOM. If you find that performance is suffering due to the design of your DOM, feel free to tweak away, but we don't recommend that you do this on a hunch. Make absolutely sure that your DOM is to blame. You don't want to reduce the benefits of your DOM out of the mistaken belief that it is performing badly. Although database modeling and Domain Object modeling are quite similar, the results you get from each are rarely the same, and indeed, you rarely want them to be. When modeling a database, you are looking for the structure that allows you to store and retrieve data in the most efficient and consistent manner. When you are building a DOM, performance is obviously important, but so is building an API that is easy to work with and makes assembling your business logic simple. In general, we have found that it is best to model the database in the way that is best for the database, and model the DOM, initially at least, in the way that is best for the DOM. You can make any changes later on, if and when you identify performance bottlenecks. The most common mistake we see in a DOM, especially when the DOM is driven by the design of the database, is that Domain Objects are created to represent relationships between other Domain Objects. This comes from the fact that a many-to-many relationship between two tables in a database must have a third table to construct the relationship. Relationships in a DOM should be modeled in a much more OOP-style way, with Domain Objects maintaining references to other Domain Objects or lists of Domain Objects. A common mistake when populating Domain Object data from a database, such as what would be done in the Data Access Tier of an application, is to assume that all related Domain Objects must be loaded from the database as well—this is not so. See the later section entitled "Domain Object Relationships" for a more detailed discussion of this problem. You are not forced to have your Domain Objects encapsulate any behavior at all; indeed, you can choose to have your Domain Objects represent just the state of your problem domain. In most cases, we have found that it is better to factor out much of the business logic into a set of service objects that work with Domain Objects rather than encapsulate this logic inside the Domain Objects. Typically, we place all logic that interacts with components outside of the DOM into the service objects. In this way, we are reducing the coupling between the DOM and components involved in application logic. This allows the DOM to be used in a wider variety of scenarios, and often, you will find that the DOM can be reused in other applications that solve problems in the same domain. Where we like to encapsulate behavior in the DOM is in situations where the logic is implemented purely in interactions between Domain Objects. The jPetStore sample application included with Spring provides a great example of this that can be mapped to our purchasing system example. In this scenario, a user has a shopping cart, represented by a Cart object, and a list of CartItem objects. When the user is ready to purchase the items in her cart and create an order, the application has to create an Order object along with a list of OrderLine objects that corresponds to the data modeled by the Cart and CartItem objects. This is a perfect example of when behavior should be encapsulated inside the DOM. The conversion from Cart to Order is coded purely in terms of Domain Objects with no dependencies on other components in your application. In jPetStore, the Order class has an initOrder() method that accepts two arguments, Account and Cart. All the logic required to create an Order based on the Cart object for the user represented by the Account object is represented in this method. As with most things related to modeling, there are no hard-and-fast rules about when to put logic inside a Domain Object and when to factor it out into a service object. You should avoid placing logic inside your Domain Objects when it causes your Domain Objects to depend on other application components outside of the DOM. In this way, you are ensuring that your DOM is as reusable as possible. On the flipside of this, logic that involves only Domain Objects is ideally placed in the DOM, which allows it to be used wherever the DOM is used. Because the SpringBlog application is actually quite simple, the DOM is also quite simple. Figure 11-1 shows the SpringBlog DOM. Figure 11-1: The DOM in SpringBlog Although this is quite a simple DOM, it does highlight some of the points we have been talking about. These are discussed in the next three sections. Central to the SpringBlog application is the concept of a posting. Postings come in two types: entries, which are top-level postings to the blog; and comments, which are comments about a particular blog entry. Although the SpringBlog application contains no security, the intention is that only the blog owner can create entries whereas any anonymous user can create comments. We decided that we would define common postings characteristics in an interface, BlogPosting, shown in Listing 11-4, and have both Entry and Comment implement this interface. Listing 11-4: The BlogPosting Interface package com.apress.prospring.domain; import java.util.Date; import java.util.List; public interface BlogPosting { public List getAttachments(); public void setAttachments(List attachments); public String getBody(); public void setBody(String body); public Date getPostDate(); public void setPostDate(Date postDate); public String getSubject(); public void setSubject(String subject); } However, this results in undue code duplication, with both Entry and Comment having their own implementations of BlogPosting. To get around this, we introduce the AbstractBlogPosting class and have Entry and Comment extend this class. AbstractBlogPosting is shown in Listing 11-5. Listing 11-5: The AbstractBlogPosting Class package com.apress.prospring.domain; import java.util.Date; import java.util.List; public abstract class AbstractBlogPosting implements BlogPosting { protected String subject; protected String body; protected Date postDate; protected List attachments; public String getBody() { return body; } public void setBody(String body) { this.body = body; } public Date getPostDate() { return postDate; } public void setPostDate(Date postDate) { this.postDate = postDate; } public String getSubject() { return subject; } public void setSubject(String subject) { this.subject = subject; } public List getAttachments() { return attachments; } public void setAttachments(List attachments) { this.attachments = attachments; } } By extending this base class, we move all the BlogPosting implementation details out of Entry and Comment, reducing code duplication. As an example of this, Listing 11-6 shows the code for the Entry class. Listing 11-6: The Entry Class package com.apress.prospring.domain; public class Entry extends AbstractBlogPosting { private static final int MAX_BODY_LENGTH = 80; private static final String THREE_DOTS = "..."; private int entryId; public String getShortBody() { if (body.length() <= MAX_BODY_LENGTH) return body; StringBuffer result = new StringBuffer(MAX_BODY_LENGTH + 3); result.append(body.substring(0, MAX_BODY_LENGTH)); result.append(THREE_DOTS); return result.toString(); } public String toString() { StringBuffer result = new StringBuffer(50); result.append("Entry { , subject="); result.append(subject); result.append(" }"); return result.toString(); } public int getEntryId() { return entryId; } public void setEntryId(int entryId) { this.entryId = entryId; } } This is a pattern that is used extensively in Spring and throughout the SpringBlog application. Common functionality is defined in interfaces rather than abstract classes, but we provide a default implementation of the interface as an abstract class. The reason for this is that, where possible, we can take advantage of the abstract base class as with Entry and Comment, thus removing the need for each class to implement the BlogPosting interface directly. However, should a requirement arise for the Entry class to extend the Foo class, then we can simply mplement the BlogPosting interface directly in Entry. The main point to remember here is that you do not define common functionality in terms of abstract classes because doing so restricts you to a set inheritance hierarchy. Instead, define common functionality in terms of interfaces, along with default implementations of these interfaces as abstract base classes. This way you can take advantage of the inherited implementation wherever possible, but you are not artificially constraining your inheritance hierarchy. A point of note here is that we did not reflect this inheritance tree in the database. That is to say, we didn't create a BlogPosting table to store the shared data, and then two tables, Entry and Comment, to store the entity-specific data. The main reason for this is that we didn't think that an application of the size of SpringBlog warranted the complexity of that structure, plus this example highlights our point about having a DOM that is different in structure than the database. The main reason for defining this inheritance hierarchy, besides that it is a good design, is to allow the SpringBlog application to work with the common data in the Entry and Comment objects, without having to differentiate between the two. A good example of this is the obscenity filter that we covered in Chapter 7. Although the SpringBlog domain model is simplistic, we still need to encapsulate some logic in the domain model. Because the body of a blog posting could potentially be very long, we wanted a mechanism to get a snippet of the body to use when it displays a list of blog postings. For this reason, we create the Entry.getShortBody() method shown in Listing 11-7. Listing 11-7: Behavior in the Entry Class package com.apress.prospring.domain; public class Entry extends AbstractBlogPosting { private static final int MAX_BODY_LENGTH = 80; private static final String THREE_DOTS = "..."; public String getShortBody() { if (body.length() <= MAX_BODY_LENGTH) return body; StringBuffer result = new StringBuffer(MAX_BODY_LENGTH + 3); result.append(body.substring(0, MAX_BODY_LENGTH)); result.append(THREE_DOTS); return result.toString(); } /* omitted for clarity */ } Here you can see that to build the short body, we take the first 80 characters of the body and simply append three dots to the end. This is a simplistic implementation, but it does highlight a typical scenario for encapsulating logic in the DOM. In the DOM in Figure 11-1, notice that we defined an association between Entry and Attachment, and Comment and Attachment. As part of the SpringBlog requirements, we want to be able to upload and store files with both types of posting. In the database, we have a table to store the attachments called, strangely enough, attachments. Then to associate attachments with an entry or a comment, we have two other tables: entryattachments and commentattachments. A common mistake we see is that people create Domain Objects to model these relationships, rather than using standard Java features to relate the objects together. When you have a oneto-one relationship in your database, you can model this in the DOM by having one object maintain a reference to the other. For one-to-many or many-to-many relationships, using Java Collections makes it simple to represent these complex relationships in a familiar manner that is simple to work with in code. Listing 11-8, a snippet from the AbstractBlogPosting class, shows how we use a List to store the Attachment objects for each posting. Listing 11-8: Using List for Domain Object Relationships package com.apress.prospring.domain; import java.util.List; public abstract class AbstractBlogPosting implements BlogPosting { protected List attachments; public List getAttachments() { return attachments; } public void setAttachments(List attachments) { this.attachments = attachments; } } Rather than using additional objects to model relationships, we use a simple List to model the one-to-many relationship. Aside from reducing the amount of code we need to type, this method prevents the DOM from becoming polluted with needless classes, and allows familiar Java concepts such as Iterators to be used when navigating relationships. A consideration when modeling objects in your domain is the amount of memory taken up by these objects. Typically you have many instances of the same class in your application at the same time. Often you have the same logical entity represented by many different instances of a Domain Object at the same time. In many cases, you can't avoid this, but for some scenarios, you can avoid this by preventing multiple instances of a Domain Object from being created to represent the same logical entity—this technique is called canonicalization. Before we look at this technique, we should first discuss scenarios where applying it is valid. Consider again the purchasing system. One of the Domain Objects for the purchasing system is Product. Now it is possible for more than one user to be looking at the same product at the same time. Typically, this results in multiple instances of Product being created to represent the same physical product. Our fictional purchasing system sells over 10,000 different product lines, and it is this number that makes canonicalization impractical, as you will see. Another Domain Object in the purchasing system is ShippingCompany, which represents one of the companies that ships orders to the user. Our system only offers three choices of shipping company, yet there may be many more instances of shipping companies around in the JVM at any one time. This low number of fixed data sets makes the ShippingCompany ideal for canonicalization. Basic canonicalization works by making the constructor of the class private and then defining all possible instances of the class as public static final members. Listing 11-9 shows an example of this for the ShippingCompany class. Listing 11-9: Canonicalization for Domain Objects package com.apress.prospring.ch11.domain; public class ShippingCompany { public static final ShippingCompany UPS = new ShippingCompany(1, "UPS"); public static final ShippingCompany DHL = new ShippingCompany(2, "DHL"); public static final ShippingCompany FEDEX = new ShippingCompany(3, "FEDEX"); private final int id; private final String name; private ShippingCompany(int id, String name) { this.id = id; this.name = name; } public int getId() { return this.id; } public String getName() { return this.name; } public static ShippingCompany fromInt(int id) { if (id == UPS.id) { return UPS; } else if (id == DHL.id) { return DHL; } else if (id == FEDEX.id) { return FEDEX; } else { return null; } } } Here you can see that the three instances of the ShippingCompany class are created as public static final members. It is not possible for external classes to create more instances of ShippingCompany because the constructor is declared private. The fromInt() method isn't necessary and is in fact something that we inherited from Hibernate. The fromInt() method is useful when loading canonicalized objects from the data store. We discuss canonicalization from a data access point of view later in the chapter. When you have a large number of objects to canonicalize—say you have an application that performs lots of processing on Country objects—you may find that caching is a better solution than canonicalization. We are not going to discuss caching in this chapter. For a more detailed discussion, read Expert One-on-One J2EE Development without EJB by Rod Johnson and Juergen Hoeller (Wrox, 2004). In this section, we looked at the DOM for the SpringBlog application and we spent some time discussing the basics of Domain Object modeling and implementation. There is no doubt that this topic is much greater than what we have covered here. Indeed, a whole range of books is available that discusses the topic in detail. We only scratched the surface here, and we focused on why you want to build a DOM, what the focus is when building one, and some general topics related to the SpringBlog application. Although it is certainly possible to build applications without defining and building a DOM, it is our experience that taking the time to do so pays off in reduced complexity, lower maintenance costs, and fewer bugs.
https://flylib.com/books/en/1.144.1.98/1/
CC-MAIN-2018-39
refinedweb
4,030
50.77
Working with dates and times can be tricky, especially when dealing with timezone conversions. This guide will provide an overview of Python’s datetime module with an emphasis on timezone related functions. 1 What is a datetime object? First, if you’ve seen datetime used in Python, you’ll notice that it’s both the name of a module and one of many classes within the module. So the datetime module can be imported like this: import datetime # datetime.datetime # datetime.timedelta # datetime.timezone (python 3.2+) Or you can simply import the datetime classes you care about: from datetime import datetime, timedelta A datetime object is an instance of the datetime.datetime class that represents a single point in time. If you are familiar with object oriented programming, a datetime object is created by instantiating the datetime class with a date. An easy way to get a datetime object is to use datetime.now. import datetime datetime.datetime.now() > datetime.datetime(2016, 11, 15, 9, 59, 25, 608206) As you can see, the now method returned a datetime object that represents the point in time when now was called. You can also create a datetime object by specifying which date you want to represent. At a minimum, instantiating datetime requires at least 3 arguments – year, month, and day. Let’s instantiate my birthday. import datetime datetime.datetime(1985, 10, 20) > datetime.datetime(1985, 10, 20, 0, 0) From here, we’ll talk about manipulating, formatting and doing timezone conversions on datetime objects. 2 Formatting datetime objects According to the documentation, the “focus of the implementation [of the datetime library] is on efficient attribute extraction for output formatting and manipulation”. So we will discuss extracting attributes and formatting dates. For this example, we’ll choose a random date. import datetime d = datetime.datetime(1984, 1, 10, 23, 30) There are many occasions where we’ll want to format a datetime object in a specific way. For this, the strftime method comes in very handy. This method allows you to print a string formatted using a series of formatting directives. This is best understood with examples. d.strftime("%B %d, %Y") > 'January 10, 1984' d.strftime("%Y/%m/%d") > '1984/01/10' d.strftime("%d %b %y") > '10 Jan 84' d.strftime("%Y-%m-%d %H:%M:%S") > '1984-01-10 23:30:00' As you can hopefully tell, the same datetime object is used to generate each date format. The format is specified using various formatting directives. For example, %Y corresponds to the full four digit year, while %m corresponds to the two digit decimal number representing the month. See the documentation for a full list of formatting directives. It’s also possible to access various attributes of the datetime object directly. d.year > 1984 d.month > 1 d.day > 10 When discussing formatting, it’s valuable to be familiar with ISO 8601, which is an international standard for the representation of dates and times. Python has a method for quickly generating an ISO 8601 formatted date/time: d.isoformat() > '1984-01-10T23:30:00' Now we’ll discuss the opposite of strftime, which is strptime. This is where you create a datetime object from a string. But since the string can be formatted in any way, it’s necessary to tell datetime what format to expect. Using the same set of formatting directives, we can pass in a string and the expected format to create a datetime object. import datetime datetime.datetime.strptime("December 25, 2010", "%B %d, %Y") > datetime.datetime(2010, 12, 25, 0, 0) Notice how the pattern matches the string exactly. If you use a formatting directives or date doesn’t make sense it will raise an exception. 3 Enter timezones So far, instantiating and formatting datetime objects is fairly easy. However, timezones add a little bit of complexity to the equation. naive vs aware So far we’ve been dealing only with naive datetime objects. That means the object is naive to any sort of timezone. So a datetime object can be either offset naive or offset aware. A timezone’s offset refers to how many hours the timezone is from Coordinated Universal Time (UTC). A naive datetime object contains no timezone information. The easiest way to tell if a datetime object is naive is by checking tzinfo. tzinfo will be set to None of the object is naive. import datetime naive = datetime.datetime.now() naive.tzinfo > None To make a datetime object offset aware, you can use the pytz library. First, you have to instantiate a timezone object, and then use that timezone object to “localize” a datetime object. Localizing simply gives the object timezone information. import datetime import pytz d = datetime.datetime.now() timezone = pytz.timezone("America/Los_Angeles") d_aware = timezone.localize(d) d_aware.tzinfo > <DstTzInfo 'America/Los_Angeles' PST-1 day, 16:00:00 STD> A naive datetime object is limited in that it cannot locate itself in relation to offset aware datetime objects. For instance: import datetime import pytz d_naive = datetime.datetime.now() timezone = pytz.timezone("America/Los_Angeles") d_aware = timezone.localize(d_naive) d_naive < d_aware > TypeError: can't compare offset-naive and offset-aware datetimes When dealing with datetime objects, I’ve come across two pieces of advice with which I generally agree. First, always use “aware” datetime objects. And second, always work in UTC and do timezone conversion as a last step. More specifically, as pointed out by user jarshwah on reddit, you should store datetimes in UTC and convert on display. Once you’re familiar with aware datetime objects, timezone conversions are relatively easy. Let’s create a datetime object with a UTC timezone, and convert it to Pacific Standard. import datetime import pytz utc_now = pytz.utc.localize(datetime.datetime.utcnow()) pst_now = utc_now.astimezone(pytz.timezone("America/Los_Angeles")) pst_now == utc_now > True So pst_now and utc_now are different datetime objects with different timezones, yet they are equal. To be certain, we can print the time of each: utc_now.isoformat() > '2016-11-16T22:31:18.130822+00:00' pst_now.isoformat() > '2016-11-16T14:31:18.130822-08:00' 4 Measuring duration with timedelta Often we’ll be working with multiple datetime objects, and we’ll want to compare them. The timedelta class is useful for finding the difference between two dates or times. While datetime objects represent a point in time, timedelta objects represents a duration, like 5 days or 10 seconds. Suppose I want to know exactly how much older I am than my brother. I’ll create datetime object for each of us representing the day and time of our birth. import datetime import pytz my_birthday = datetime.datetime(1985, 10, 20, 17, 55) brothers_birthday = datetime.datetime(1992, 6, 25, 18, 30) Since we like to work with offset aware objects, we’ll add timezone information. indy = pytz.timezone("America/Indianapolis") my_birthday = indy.localize(my_birthday) brothers_birthday = indy.localize(brothers_birthday) To see how much older I am than my brother, we can simply subtract the two datetime objects. And to see the answer in a human readable way, we can simple print the difference. diff = brothers_birthday - my_birthday print(diff) > 2440 days, 0:35:00 The diff variable is actually a timedelta object that looks like this datetime.timedelta(2440, 2100). Subtracting a datetime object from another yields a timedelta object, so as you might suspect, subtracting a timedelta object from a datetime object yields a datetime object. datetime - datetime = timedelta # and datetime - timedelta = datetime Of course the same is true for addition. This is useful for answering questions like “what was the date 3 weeks ago from yesterday?” or “what day of the week is 90 days from today?”. To answer the second question, we need to have two things – first, a datetime object representing today and second, a timedelta object representing 90 days. import datetime today = datetime.datetime.now() ninety_days = datetime.timedelta(days=90) Then we can simply do the calculation. target_date = today + ninety_days And since we want to know the day of the week, we can use strftime. target_date.strftime("%A") > 'Wednesday' 5 Conclusion Dates and times can be tricky, but Python’s datetime class should make things a little bit easier. Hopefully you found this guide to be useful. If you think there are any other essentially examples or topics related to datetime objects and timezones please comment below, and I will try to add them to the guide. Source:
https://learningactors.com/working-with-datetime-objects-and-timezones-in-python/
CC-MAIN-2021-31
refinedweb
1,399
58.89
Results 1 to 2 of 2 Thread: Cannot capture values in array - Join Date - Jun 2014 - 4 - Thanks - 1 - Thanked 0 Times in 0 Posts Cannot capture values in arrayCode: import javax.swing.JOptionPane; public class TestB { public static void main(String args[]){ int[] num =new int[5]; int count=0; int sum=0; String showval; for (int i = 0; i < 5; i++){ //while (count<5){ showval=JOptionPane.showInputDialog("Enter the number: " + (count+1)); sum=sum+num[count]; count++; } int average = sum / num.length; JOptionPane.showMessageDialog(null, "Your number is: " + (num[]) + "\n" + "Your average is: " + average ); } } //} in the future please actually pose a question- the problem though is you never assign values to the int's in the array num[]... so when you try to look at num[whatevernumberitdoesntmatter] it will be 0 (or null, idk what Java defaults this to). So also with that, the line sum = sum + num[count] will always be 0 = 0 + num[#] but all the numbers in num are 0 so 0 = 0 + 0 equaling 0.
http://www.codingforums.com/java-and-jsp/326662-cannot-capture-values-array.html
CC-MAIN-2017-13
refinedweb
171
54.36
On Tue, May 31, 2016 at 12:41 PM, Olaf Hering <olaf@xxxxxxxxx> wrote: > On Tue, May 31, George Dunlap wrote: > >> Sorry, can you expand on this a bit? Are you saying that on SuSE, if >> you specify "vdev=xvda" in your config file, that you'll get PV >> devices named "/dev/xvda", but that if you specify "vdev=hda", that >> you'll get PV devices but named "/dev/hda"? > > Yes, thats exactly what the xenlinux block frontend does. > pvops forces xvda, independent of the name 'vdev' in domU.cfg. > Up to xen-4.2 'vdev=hd*' was required to tell qemu to create an emulated > disk to boot from. Starting with xen-4.3 qemu also recognized > 'vdev=xvd*' for the emulated disk. And starting with xen-4.7 qemu > requires 'xvda=hd*' again. > > I think if some domU.cfg for xen-4.3+ has 'vdev=xvd*' and the domU uses > for some reason kernel names in config files and the domU uses a > xenlinux kernel, then changing domU.cfg to 'hd*' will allow the guest to > boot again. But its userland will miss the /dev/xvd* device nodes. > That probably remained unnoticed during testing the referenced commit if > a pvops based kernel was used. Or if -- as is the case for most of my own test systems -- filesystem UUIDs are used rather than device names. (This means things work the same on PV with PV disks, HVM with PV disks, and HVM with emulated disks -- for instance, if you're using nested virtualization and your L1 dom0 can't access L0 xenbus.) Do you have a concrete proposal? Anthony, does the OVMF-with-pv-only-drivers actually still work at the moment? Really 'vdev' string in the the guest config file is only meant to tell libxl how it should behave -- it should ideally not have any effect on what devices you see in the backend. And furthermore, it seems to me that when Linux upstream rejected the idea of the pv drivers stealing the "hd*" namespace, that SuSE's xenlinux should have followed suit and had the pv drivers only create devices named xvd*. But I recognize if you're selling an enterprise kernel, those sorts of "you should have done this X years ago" doesn't really help you keep your promises to your users.
https://lists.xenproject.org/archives/html/xen-devel/2016-05/msg02987.html
CC-MAIN-2021-43
refinedweb
389
72.66
For whom is the Java scripting API prepared? Some useful features of scripting languages are: - Convenient: most scripting languages are dynamically typed. You can usually create new variables without declaring variable types, and you can reuse variables to store different types of objects. In addition, scripting languages often perform many types of transformations automatically, such as converting the number 10 to “10” if necessary. - Develop a rapid prototype: you can avoid editing the compilation run cycle and only use “Edit run”! - Application extension / Customization: you can “materialize” some applications, such as some configuration scripts, business logic / rules and mathematical expressions in financial applications. - Add a command line mode for the application for debugging, runtime configuration / deployment time. Most applications now have a web-based GUI configuration tool. But system administrators / Deployers often like command-line tools. A “standard” scripting language can be used for this purpose, rather than inventing a special scripting language. The java script API is a framework independent scripting language that uses a scripting engine from Java code. Through the Java scripting API, you can use the Java language to write custom / extensible applications and leave the custom scripting language selection to the end user. Java application developers do not need to choose an extension language during the development process. If you use the jsr-223 API to write applications, your users can use any jsr-223 compatible scripting language. Script package The JavaScript function is in the javax. Script package. This is a relatively small, simple API. The starting point of the script is the scriptenginemanager class. A scriptenginemanager object can discover the script engine through the service discovery mechanism of the jar file. It can also instantiate a script engine to interpret scripts written in a specific scripting language. The simplest way to use the scripting interface is as follows: 1. Create a scriptenginemanager object 2. Get scriptengine object from scriptenginemanager 3. Use the eval method of scriptengine to execute the script Now, it’s time to look at some sample code. Knowing some JavaScript helps to read these examples, but it’s not mandatory. Example “Hello,World” From the scriptenginemanager instance, we get a JavaScript engine instance through the getenginebyname method. Execute the given JavaScript code through the eval method of the script engine. For simplicity, we do not handle exceptions in this and subsequent examples. The javax.script API has check and run-time exceptions, which you must handle properly. import javax.script.*; public class EvalScript { public static void main(String[] args) throws Exception { // create a script engine manager ScriptEngineManager factory = new ScriptEngineManager(); // create a JavaScript engine ScriptEngine engine = factory.getEngineByName("JavaScript"); // evaluate JavaScript code from String engine.eval("print('Hello, World')"); } } Execute a script file In this example, we call the eval method to receive Java. Io. Reader as the input source. The script read in is executed. In this way, scripts can be executed as files, and URLs and resources can be read by related input stream])); } } Suppose we have a file called “test. JS”. The contents are as follows: println("This is hello from test.js"); We can run the script as follows java EvalFile test.js Script variable When your Java application embeds script engines and scripts, you may want to expose your application objects as global variables in the script. This example demonstrates how to expose your application object as a global variable in a script. We create a java.io.file object as a global variable in the application, with the name of file. The script can access variables, for example, it can call its public methods. Note that the syntax for accessing Java objects, domains, and methods depends on the scripting language. JavaScript supports the most “natural” Java like syntax. public class ScriptVars { public static void main(String[] args) throws Exception { ScriptEngineManager manager = new ScriptEngineManager(); ScriptEngine engine = manager.getEngineByName("JavaScript"); File f = new File("test.txt"); // expose File object as variable to script engine.put("file", f); // evaluate a script string. The script accesses "file" // variable and calls method on it engine.eval("print(file.getAbsolutePath())"); } } Call script functions and methods Sometimes, you may need to call a specific script function multiple times, for example, your application menu function may be implemented by script. In the action event handler in the menu, you may need to call a specific script function. The following example demonstrates calling a specific script in Java code. import javax.script.*; public class InvokeScriptFunction { public static void main(String[] args) throws Exception { ScriptEngineManager manager = new ScriptEngineManager(); ScriptEngine engine = manager.getEngineByName("JavaScript"); // JavaScript code in a String String script = "function hello(name) { print('Hello, ' + name); }"; // evaluate script engine.eval(script); // javax.script.Invocable is an optional interface. // Check whether your script engine implements or not! // Note that the JavaScript engine implements Invocable interface. Invocable inv = (Invocable) engine; // invoke the global function named "hello" inv.invokeFunction("hello", "Scripting!!" ); } } If your scripting language is object-based (such as JavaScript) or object-oriented, you can call script methods on script objects. import javax.script.*; public class InvokeScriptMethod { public static void main(String[] args) throws Exception { ScriptEngineManager manager = new ScriptEngineManager(); ScriptEngine engine = manager.getEngineByName("JavaScript"); // JavaScript code in a String. This code defines a script object 'obj' // with one method called 'hello'. String script = "var obj = new Object(); obj.hello = function(name) { print('Hello, ' + name); }"; // evaluate script engine.eval(script); // javax.script.Invocable is an optional interface. // Check whether your script engine implements or not! // Note that the JavaScript engine implements Invocable interface. Invocable inv = (Invocable) engine; // get script object on which we want to call the method Object obj = engine.get("obj"); // invoke the method named "hello" on the script object "obj" inv.invokeMethod(obj, "hello", "Script Method !!" ); } } Implementing java interface through script Sometimes it is easy to implement java interface by script function or method instead of calling in Java. At the same time, through the interface, we can avoid using the javax. Script API interface in many places. We can get an interface implementer object and pass it to different Java APIs. The following example demonstrates the implementation of the java.lang.runnable interface through scripts. import javax.script.*; public class RunnableImpl { public static void main(String[] args) throws Exception { ScriptEngineManager manager = new ScriptEngineManager(); ScriptEngine engine = manager.getEngineByName("JavaScript"); // JavaScript code in a String String script = "function run() { println('run called'); }"; // evaluate script engine.eval(script); Invocable inv = (Invocable) engine; // get Runnable interface object from engine. This interface methods // are implemented by script functions with the matching name. Runnable r = inv.getInterface(Runnable.class); // start a new thread that runs the script implemented // runnable interface Thread th = new Thread(r); th.start(); } } If your scripting language is object-based or object-oriented, you can implement the java interface through scripting methods of scripting objects. This avoids having to call interface methods of script global functions. Script objects can store interface implementation state. import javax.script.*; public class RunnableImplObject { public static void main(String[] args) throws Exception { ScriptEngineManager manager = new ScriptEngineManager(); ScriptEngine engine = manager.getEngineByName("JavaScript"); // JavaScript code in a String String script = "var obj = new Object(); obj.run = function() { println('run method called'); }"; // evaluate script engine.eval(script); // get script object on which we want to implement the interface with Object obj = engine.get("obj"); Invocable inv = (Invocable) engine; // get Runnable interface object from engine. This interface methods // are implemented by script methods of object 'obj' Runnable r = inv.getInterface(obj, Runnable.class); // start a new thread that runs the script implemented // runnable interface Thread th = new Thread(r); th.start(); } } Multi scope of script In the script variables example, we see how to expose the application object as a global variable of the script. It can be exposed to multiple global scopes. The single scope is in the instance of javax.script.bindings. This excuse derives from Java. Util. Map < string, Object >. The collection of scope key value pairs, where the key is a non empty, non empty string. Multiple scopes are supported by the javax.script.scriptcontext interface. Supports binding of one or more script contexts to related domains. By default, each script engine has a default script context. The default script context has at least one domain called “engine? Scope”. Script context support for different domains can be obtained through the getscopes method. import javax.script.*; public class MultiScopes { public static void main(String[] args) throws Exception { ScriptEngineManager manager = new ScriptEngineManager(); ScriptEngine engine = manager.getEngineByName("JavaScript"); engine.put("x", "hello"); // print global variable "x" engine.eval("println(x);"); // the above line prints "hello" // Now, pass a different script context ScriptContext newContext = new SimpleScriptContext(); Bindings engineScope = newContext.getBindings(ScriptContext.ENGINE_SCOPE); // add new variable "x" to the new engineScope engineScope.put("x", "world"); // execute the same script - but this time pass a different script context engine.eval("println(x);", newContext); // the above line prints "world" } } JavaScript script engine Sun’s JDK 6 includes a Mozilla rhino JavaScript script engine. The engine is based on version 1.6r2 of Mozilla rhino. Most rhino implementations are included. A few components were excluded for size and security reasons: 1. JavaScript bytecode compilation (also known as “optimizer”). This feature relies on a class build library. Removing this feature means that JavaScript is interpreted execution and does not affect script execution because the optimizer is transparent. 2. Rhino’s Java adapter has also been removed. Javaadapter is a JavaScript extensible Java class and JavaScript can implement java interface functions. This feature also requires a class generation library. We replace rhino’s javaadapter with sun’s. In sun’s implementation, only the JavaScript object can realize the Java single interface function. For example, the following code will execute correctly. var v = new java.lang.Runnable() { run: function() { print('hello'); } } v.run(); In most cases, the Java adapter uses anonymous class syntax to implement a single interface. It is not common to use javaadapter to extend Java classes or implement multiple interfaces. 3. E4X (ECMAScript for XML – ECMA standard 357) is removed. Using XML JavaScript code will cause a syntax error. Please note that E4X supports ECMAScript standard is optional – the implementation of omitting E4X is supported and compatible with ECMAScript. 4. Rhino’s command-line tools (rhino shell, debugger, etc.) are not included. But you can use jrunscript instead. Communication between JavaScript and Java In most cases, accessing Java classes, objects, and methods is simple. Accessing properties and methods from JavaScript is the same as in Java. Here, we highlight the important aspects of JavaScript Java access. For more details, please read. Here are some snippets of JavaScript accessing Java. This section requires some JavaScript knowledge. If you plan to use the non JavaScript scripting language in jsr-223, you can skip this section. Introducing Java packages, classes The built-in functions importpackage and importclass can be used to introduce Java packages and classes. // Import Java packages and classes // like import package.*; in Java importPackage(java.awt); // like import java.awt.Frame in Java importClass(java.awt.Frame); // Create Java Objects by "new ClassName" var frame = new java.awt.Frame("hello"); // Call Java public methods from script frame.setVisible(true); // Access "JavaBean" properties like "fields" print(frame.title); The global variable packages can also be used to access Java packages. For example: packages.java.util.vector, packages.javax.swing.jframe. Please note that “Java” is a quick reference to “packages. Java”. There are also some equivalent quick reference prefixes: javax, org, edu, com, net, so almost all classes on the JDK platform can be accessed without using the “packages” prefix. Please note that java.lang is not introduced by default (different from Java), because it conflicts with JavaScript’s built-in object, Boolean, math, etc. The importpackage and importclass functions “pollute” global variables in JavaScript. To avoid this, you can use the Java importer. // create JavaImporter with specific packages and classes to import var SwingGui = new JavaImporter(javax.swing, javax.swing.event, javax.swing.border, java.awt.event); with (SwingGui) { // within this 'with' statement, we can access Swing and AWT // classes by unqualified (simple) names. var mybutton = new JButton("test"); var myframe = new JFrame("test"); } C creating and using java arrays In JavaScript, creating an object is the same as in Java, while creating a Java array requires explicit use of java reflection. But once created, accessing the elements or getting the size is the same as in Java. In addition, you can also use script arrays as Java arrays in Java methods (because they can be converted automatically). So in most cases we don’t need to explicitly create Java arrays. // create Java String array of 5 elements var a = java.lang.reflect.Array.newInstance(java.lang.String, 5); // Accessing elements and length access is by usual Java syntax a[0] = "scripting is great!"; print(a.length); Implement java interface In JavaScript, you can use Java anonymous class syntax to implement the interface in Java: var r = new java.lang.Runnable() { run: function() { print("running...\n"); } }; // "r" can be passed to Java methods that expect java.lang.Runnable var th = new java.lang.Thread(r); th.start(); When there is only one method in the interface that needs to be implemented, you can pass in the function of the script by yourself (because it can be converted automatically). function func() { print("I am func!"); } // pass script function for java.lang.Runnable argument var th = new java.lang.Thread(func); th.start(); heavy load Java methods are overloaded with parameter types. In Java, overloading occurs during the compilation phase (execution of javac). When the Java method is invoked in the script, the translator or compiler of the script needs to select the appropriate method. For the JavaScript engine, you don’t need to do anything special — the correct Java method overload variant is chosen based on the parameter type. But sometimes, you may want (or have) to explicitly select a specific overload variant. var out = java.lang.System.out; // select a particular println function out["println(java.lang.Object)"]("hello"); Custom script engine We will not cover the implementation details of jsr-223 compatible script engine. At least, you need to implement the javax.script.scriptengine and javax.script.scriptenginefactory interfaces. The abstract class javax.script.abstractscriptengine provides some methods defined in the scriptengine interface. Before you start implementing the jsr-223 engine, you may need to download the project. This project maintains the jsr-223 implementation of some popular open source scripting languages. The above is the whole content of this article. I hope it will help you in your study, and I hope you can support developepaer more.
https://developpaper.com/in-depth-understanding-of-java-scripting-api-programming/
CC-MAIN-2020-16
refinedweb
2,431
50.84
Exposing Services in Kubernetes Will Boyd DevOps Team Lead in Content With deployments, you can create a dynamically-managed set of replica pods. But this introduces a need for an equally dynamic way to access them. Services provide a layer of abstraction that provides access to pods and other entities, allowing dynamic, high-availability access to the necessary components of your applications. In this lab, you will have the opportunity to work with services by creating a service on top of an existing deployment. Exposing Services in Kubernetes The scenario Our company has just deployed two components of a web application to a Kubernetes cluster, using deployments with multiple replicas. We need a way to provide dynamic network access to these replicas so that there will be uninterrupted access to the components whenever replicas are created, removed, or replaced. One deployment is called auth-deployment, an authentication provider that needs to be accessible from outside the cluster. The other is called data-deployment, and it is a component designed to be accessed only by other pods within the cluster. The team wants us to create two services to expose these two components. We'll examine the two deployments, and create two services that meet the following criteria: auth-svc - The service name is auth-svc. - The service exposes the pod replicas managed by the deployment named auth-deployment. - The service listens on port 8080and its targetPort matches the port exposed by the pods. - The service type is NodePort. data-svc - The service name is data-svc. - The service exposes the pod replicas managed by the deployment named data-deployment. - The service listens on port 8080and its targetPort matches the port exposed by the pods. - The service type is ClusterIP. Note: All work should be done in the default namespace. Get logged in Use the credentials and server IP in the hands-on lab overview page to log in with SSH. Survey the landscape Just to see what's already been deployed, and what we're dealing with, run this: [user@host]$ kubectl get deploy We should see auth-deployment and data-deployment listed, with some details about each. Create the auth-svc service Examine the auth-deployment. Take note of the labels specified in the pod template ( app), as well as the containerPort exposed by the containers: [user@host]$ kubectl get deployment auth-deployment -o yaml Create a service descriptor file (using whichever text editor you like) called auth-svc.yml: apiVersion: v1 kind: Service metadata: name: auth-svc spec: type: NodePort selector: app: auth ports: - protocol: TCP port: 8080 targetPort: 80 Our selector and port should match what was in the yaml output (the app label and containerPort) from the last command. Create the service in the cluster: [user@host]$ kubectl apply -f auth-svc.yml Create the data-svc service Like we did with the auth-deployment, let's examine some of the data-deployment details. Again, note the labels specified in the pod template, as well as the containerPort exposed by the containers: [user@host]$ kubectl get deployment data-deployment -o yaml Create a service descriptor file (again, using whichever text editor you like) called data-svc.yml: apiVersion: v1 kind: Service metadata: name: data-svc spec: type: ClusterIP selector: app: data ports: - protocol: TCP port: 8080 targetPort: 80 The only real differences between this and the auth-svc.yml file we created are name:, type:, and app:. Once that file is good to go, create the service in the cluster: [user@host]$ kubectl apply -f data-svc.yml Now, to check on things, we can run this: [user@host]$ kubectl get svc We should see both auth-svc and data-svc running. But to make sure the service is mapping to the pods correctly, we'll run: [user@host]$ kubectl get ep auth-svc [user@host]$ kubectl get pods The first command will show us that auth-svc has two endpoints, and the second will show us that auth-svc is in fact running two pods. We can run those two command again, but for data-svc and see similar results, except that there are three pods involved, not two. Conclusion We needed to set things up so that our web app allowed uninterrupted access when pods get created, removed, or replaced. Everything is up and running. Congratulations!
https://linuxacademy.com/hands-on-lab/2cd43f15-7e3d-4ab8-8e06-f144a7b50718/
CC-MAIN-2020-05
refinedweb
722
59.94
One of the coolest features of Ruby is that you can extend it with an application programming interface (API) defined in C/C++. Ruby provides the C header ruby.h, which comes with a whole host of functions for creating Ruby classes, modules, and more. In addition to the Ruby-supplied header, several other high-level abstractions are available to extend Ruby that are built on top of the native ruby.h—one that this article investigates is the Ruby Interface for C++ Extensions, or Rice.. Creating a Ruby extension Before you jump into any of Ruby's C API or Rice extensions, I want to clearly describe the standard process of creating the extension: - You have one or multiple C/C++sources out of which you make a shared library. - If you create an extension using Rice, you need to link the code to both libruby.a and librice.a. - Copy the shared library to some folder, and have that folder as part of the RUBYLIB environment variable. - Use the usual require-based loading in the Interactive Ruby (irb) prompt/ruby script. If the shared library is called rubytest.so, just typing require 'rubytest'loads the shared library. Suppose the header ruby.h resides in /usr/lib/ruby/1.8/include, the Rice headers reside in /usr/local/include/rice/include, and the extension code is in the file rubytest.cpp. Listing 1 shows how you would compile and load the code. Listing 1. Compiling and loading a Ruby extension bash# g++ -c rubytest.cpp –g –Wall -I/usr/lib/ruby/1.8/include \ -I/usr/local/include/rice/include bash# g++ -shared –o rubytest.so rubytest.o -L/usr/lib/ruby/1.8/lib \ -L/usr/local/lib/rice/lib -lruby –lrice –ldl -lpthread bash# cp rubytest.so /opt/test bash# export RUBYLIB=$RUBYLIB:/opt/test bash# irb irb> require 'rubytest' => true The Hello World program Now, you're ready to create your first Hello World program using Rice. You create a class using the Rice API called Test with a method hello that displays the string "Hello, World!" When the Ruby interpreter loads the extension, it calls the function Init_<shared library name>. For the rubytest extension from Listing 1, this call implies that rubytest.cpp has a function Init_rubytest defined. Rice lets you create your own class using the API define_class. Listing 2 shows the code. Listing 2. Creating a class using the Rice API #include "rice/Class.hpp" extern "C" void Init_rubytest( ) { Class tmp_ = define_class("Test"); } When you compile and load the code in Listing 2 in irb, you should get the output in Listing 3. Listing 3. Testing the class created using Rice irb> require ‘rubytest’ => true irb> a = Test.new => #<Test:0x1084a3928> irb> a.methods => ["inspect", "tap", "clone", "public_methods", "__send__", "instance_variable_defined?", "equal?", "freeze", …] Note that several predefined class methods such as inspect are available. That happens because the Test class you defined is derived implicitly from the Object class (every Ruby class is derived from Object; in fact, everything in Ruby, including numbers, is an object that has Object as the base class). Now, add a method to the Test class. Listing 4 shows the code. Listing 4. Adding a method to the Test class void hello() { std::cout << "Hello World!"; } extern "C" void Init_rubytest() { Class test_ = define_class("Test") .define_method("hello", &hello); } Listing 4 uses the define_method API to add a method to the Test class. Note that define_class is a function returning an object of type Class; define_method is a member function of the class Module_Impl, which is the base class of Class. Here's the Ruby test verifying that everything is indeed fine: irb> require ‘rubytest’ => true irb> Test.new.hello Hello, World! => nil Passing arguments from Ruby to C/C++ code Now that your Hello World program is up, try to pass an argument from Ruby to the hello function and have the function display the same to standard output (sdtout). The simplest way to do so is just to add a string argument to the hello function: void hello(std::string args) { std::cout << args << std::endl; } extern "C" void Init_rubytest() { Class test_ = define_class("Test") .define_method("hello", &hello); } In the Ruby world, this is how you would invoke the hello function: irb> a = Test.new <Test:0x0145e42112> irb> a.hello "Hello World in Ruby" Hello World in Ruby => nil The best thing about using Rice is that you don't need to do anything specific to convert a Ruby string to std::string. Now, try to use an array of strings in the hello function, and then check how you would pass information from Ruby to the C++ code. The simplest way to do so is to use the Array data type that Rice provides. Defined in the header rice/Array.hpp, using Rice::Array is similar to using a Standard Template Library (STL) container. The usual STL style iterators and so on are defined as part of the Array interface. Listing 5 shows the count routine, which takes a Rice Array as an argument. Listing 5. Displaying a Ruby array #include "rice/Array.hpp" void Array_Print (Array a) { Array::iterator aI = a.begin(); Array::iterator aE = a.end(); while (aI != aE) { std::cout << "Array has " << *aI << std::endl; ++aI; } } Now here's the beauty of this solution: Suppose you have an std::vector<std::string> as the Array_Print argument. Here's the error that Ruby throws: >> t = Test.new => #<Test:0x100494688> >> t.Array_Print ["g", "ggh1", "hh1"] ArgumentError: Unable to convert Array to std::vector<std::string, std::allocator<std::string> > from (irb):3:in `hello' from (irb):3 However, with the Array_Print routine shown here, Rice takes care of the conversion from a Ruby array to the C++ Array type. Here's a sample run: >> t = Test.new => #<Test:0x100494688> >> t.Array_Print ["hello", "world", "ruby"] Array has hello Array has world Array has ruby => nil Now try it the other way round now, passing an array from C++ to the Ruby world. Note that in Ruby, array elements might not necessarily be of the same type. Listing 6 shows the code. Listing 6. Passing an array from C++ to Ruby #include "rice/String.hpp" #include "rice/Array.hpp" using namespace rice; Array return_array (Array a) { Array tmp_; tmp_.push(1); tmp_.push(2.3); tmp_.push(String("hello")); return tmp_; } Listing 6 clearly shows that you can create a Ruby array with different types right inside C++. Here's the test code in Ruby: >> x = t.return_array => [1, 2.3, "hello"] >> x[0].class => Fixnum >> x[1].class => Float >> x[2].class => String What if I don't have the flexibility to change a C++ argument list? More common than not, you'll find that the Ruby interface is meant to translate the data to C++ functions whose signature you can't change. For example, consider a case where you need to pass an array of strings from Ruby to C++. The C++ function signature looks like: void print_array(std::vector<std::string> args) In effect, what you're looking for here is some sort of from_ruby function that takes in a Ruby array and converts it to std::vector<std::string>. That's exactly what Rice provides—a from_ruby function with the following signature: template <typename T> T from_ruby(Object ); For every Ruby data type you need to convert to a C++ type, you need to template-specialize the from_ruby routine. For example, if you pass the Ruby array to the process function shown above, Listing 7 shows how you should define the from_ruby function. Listing 7. Converting ruby array to std::vector<std::string> template<> std::vector<std::string> from_ruby< std::vector<std::string> > (Object o) { Array a(o); std::vector<std::string> v; for(Array::iterator aI = a.begin(); aI != a.end(); ++aI) v.push_back(((String)*aI).str()); return v; } Note that the from_ruby function doesn't need to be explicitly invoked. When an array of string is passed as the function argument from the Ruby world, from_ruby converts it into std::vector<std::string>. The code in Listing 7 is not perfect, though, and you have already seen that arrays in Ruby can have different types. In contrast, you have made a call to ((String)*aI).str() to get a std::string from Rice::String. ( str is a method of Rice::String: Check String.hpp for more details.) If you were to handle the most generic case, Listing 8 shows how the code would look. Listing 8. Converting a ruby array to std::vector<std::string> (generic case) template<> std::vector<std::string> from_ruby< std::vector<std::string> > (Object o) { Array a(o); std::vector<std::string> v; for(Array::iterator aI = a.begin(); aI != a.end(); ++aI) v.push_back(from_ruby<std::string> (*aI)); return v; } Because each element of the Ruby array is also a Ruby object of type String and you are betting on Rice having a from_ruby method defined for converting that type into std::string, nothing else need be done. If such is not the case, you need to provide a from_ruby method for the conversion. Here's the from_ruby method from to_from_ruby.ipp in the Rice sources: template<> inline std::string from_ruby<std::string>(Rice::Object x) { return Rice::String(x).str(); } Test this code from the Ruby world. Begin by passing an array of all strings, as in Listing 9. Listing 9. Validating from_ruby functionality >> t = Test.new => #<Test:0x10e71c5c8> >> t.print_array ["aa", "bb"] aa bb => nil >> t.print_array ["aa", "bb", 111] TypeError: wrong argument type Fixnum (expected String) from (irb):4:in `print_array' from (irb):4 As expected, the first invocation of print_array went fine. Because there's no from_ruby method to convert Fixnum to std::string, the second invocation results in the Ruby interpreter throwing up TypeError. There are several ways to fix this error—for example, during the Ruby invocation, pass only strings as part of the array (such as t.print_array["aa", "bb", 111.to_s]), or inside the C++ code, make a call to Object.to_s. The to_s method is a part of the Rice::Object interface and returns a Rice::String, which has a predefined str method returning an std::string. Listing 10 uses the C++ approach. Listing 10. Using Object.to_s to populate the vector of strings template<> std::vector<std::string> from_ruby< std::vector<std::string> > (Object o) { Array a(o); std::vector<std::string> v; for(Array::iterator aI = a.begin(); aI != a.end(); ++aI) v.push_back(aI->to_s().str()); return v; } In general, the code in Listing 10 will be more involved, because you need to handle custom string representations for user-defined classes. Creating a complete class with variables using C++ You have already seen how to create a Ruby class and associated functions inside C++ code. For a more generic class, you need a way to define instance variables as well as provide for an initialize method. To set and get values of a Ruby object's instance variables, you use the Rice::Object::iv_set and Rice::Object::iv_get methods, respectively. Listing 11 shows the code. Listing 11. Defining an initialize method in C++ void init(Object self) { self.iv_set("@intvar", 121); self.iv_set("@stringvar", String("testing")); } Class cTest = define_class("Test"). define_method("initialize", &init); When a C++ function is declared as a Ruby class method using the define_method API, you have the option of declaring the first argument of the C++ function as Object, and Ruby fills in this Object with a reference to the calling instance. You then invoke iv_set on the Object to set the instance variables. Here's how the interface looks in the Ruby world: >> require 'rubytest' => true >> t = Test.new => #<Test:0x1010fe400 @stringvar="testing", @intvar=121> Likewise, to return an instance variable, the returning function needs to take an Object that refers to the object in Ruby and invoke iv_get on it. Listing 12 shows a snippet. Listing 12. Retrieving values from a Ruby object void init(Object self) { self.iv_set("@intvar", 121); self.iv_set("@stringvar", String("testing")); } int getvalue(Object self) { return self.iv_get("@intvar"); } Class cTest = define_class("Test"). define_method("initialize", &init). define_method("getint", &getvalue); Morphing a C++ class into a Ruby type So far, you have wrapped free functions (that is, non-class methods) as Ruby class methods. You have passed references to the Ruby object by declaring the C functions with the first argument Object. This approach works, but isn't good enough to wrap a C++ class as a Ruby object. To wrap a C++ class, you still use the define_class method, except that you now "templatize" it with the C++ class type. The code in Listing 13 wraps a C++ class as a Ruby type. Listing 13. Wrapping a C++ class as a Ruby type class cppType { public: void print(String args) { std::cout << args.str() << endl; } }; Class rb_cTest = define_class<cppType>("Test") .define_method("print", &cppType::print); Note that define_class is templatized, as discussed. Not all is well with this class, though. Here's the log from the Ruby interpreter when you try to instantiate an object of type Test: >> t = Test.new TypeError: allocator undefined for Test from (irb):3:in `new' from (irb):3 What just happened? Well, you need to explicitly bind the constructor to a Ruby type. (It's one of those Rice quirks.) Rice provides you with a define_constructor method to associate a constructor for the C++ type. You also need to include the header Constructor.hpp. Note that you must do so even when you don't have an explicit constructor in your code. Listing 14 provides the sample code. Listing 14. Associating a C++ constructor with a Ruby type #include "rice/Constructor.hpp" #include "rice/String.hpp" class cppType { public: void print(String args) { std::cout << args.str() << endl; } }; Class rb_cTest = define_class<cppType>("Test") .define_constructor(Constructor<cppType>()) .define_method("print", &cppType::print); It is also possible to associate a constructor with an argument list using the define_constructor method. The Rice way to do so is to add the argument types to the template list. For example, if cppType has a constructor that accepts an integer, then you must call define_constructor as define_constructor(Constructor<cppType, int>()). One caveat here: Ruby types don't have multiple constructors. So, if you have a C++ type with multiple constructors and you associate them all using define_constructor, then from the Ruby world, you can instantiate the type with arguments (or not) as defined by the last define_constructor in the source code. Listing 15 explains everything just discussed. Listing 15. Associating constructors with arguments class cppType { public: cppType(int m) { std::cout << m << std::endl; } cppType(Array a) { std::cout << a.size() << std::endl; } void print(String args) { std::cout << args.str() << endl; } }; Class rb_cTest = define_class<cppType>("Test") .define_constructor(Constructor<cppType, int>()) .define_constructor(Constructor<cppType, Array>()) .define_method("print", &cppType::print); Here's the log from the Ruby world. Note that the constructor associated last is the one Ruby understands: >> t = Test.new 2 TypeError: wrong argument type Fixnum (expected Array) from (irb):2:in `initialize' from (irb):2:in `new' from (irb):2 >> t = Test.new [1, 2] 2 => #<Test:0x10d52cf48> Defining a new Ruby type as part of a module Defining a new Ruby module from C++ boils down to making a call to define_module. To define a class that's available only as part of said module, you use define_class_under instead of the usual define_class method. The first argument to define_class_under is the module object. From Listing 14, if you were to define cppType as part of a Ruby module named types, Listing 16 shows how you would do it. Listing 16. Declaring a type as part of a module #include "rice/Constructor.hpp" #include "rice/String.hpp" class cppType { public: void print(String args) { std::cout << args.str() << endl; } }; Module rb_cModule = define_module("Types"); Class rb_cTest = define_class_under<cppType>(rb_cModule, "Test") .define_constructor(Constructor<cppType>()) .define_method("print", &cppType::print); And here is how you use the same in Ruby: >> include Types => Object >> y = Types::Test.new [1, 1, 1] 3 => #<Types::Test:0x1058efbd8> Note that module names and class names must begin with an uppercase letter in Ruby. Rice doesn't error out if, say, you name the module types instead of Types. Creating a Ruby struct using C++ code You use the struct construct in Ruby to quickly create a boilerplate Ruby class. Listing 17 shows the typical Ruby way of creating a new class of type NewClass with three variables named a, ab, and aab. Listing 17. Using a Ruby Struct to create a new class >> NewClass = Struct.new(:a, :ab, :aab) => NewClass >> NewClass.class => Class >> a = NewClass.new => #<struct NewClass a=nil, ab=nil, aab=nil> >> a.a = 1 => 1 >> a.> a.aab = 2.33 => 2.33 >> a => #<struct NewClass a=1, ab="test", aab=2.33> >> a.a.class => Fixnum >> a.ab.class => String >> a.aab.class => Float To code the equivalent of Listing 17 in C++, you need to use the define_struct( ) API declared in the header rice/Struct.hpp. This API returns a Rice::Struct. You associate the Ruby class that this struct creates and the module of which the class will be part. That's what the initialize method is for. The individual class members are defined using the define_member function call. Note that you have created a new Ruby type, except that you have not associated any C++ type or functions with it. Here's the code to create a class called NewClass: #include "rice/Struct.hpp" … Module rb1 = define_module("Types"); define_struct(). define_member("a"). define_member("ab"). define_member("aab"). initialize(rb1, "NewClass"); Conclusion This article covered a fair bit of ground—creating Ruby objects in C++ code, associating C-style functions as Ruby object methods, converting data types between Ruby and C++, creating instance variables, and wrapping a C++ class as a Ruby type. It is possible to achieve all of this work using the ruby.h header and libruby, but you would have to do a lot of boilerplate coding to make all ends work. Rice makes all that work easier. Here's wishing you a lot of fun writing new extensions in C++ for the Ruby world! Resources Learn - rice.rubyforge.org is your home for Rice and comes with excellent doxygen-style documentation. - Popularly known as the Pickaxe book, Programming Ruby: The Pragmatic Programmers' Guide (Dave Thomas, Chad Fowler, and Andy Hunt; 2nd edition) is a Ruby must-read. - Another invaluable Ruby resource is The Ruby Programming Language by Yukihiro "Matz" Matsumoto (Ruby's creator) and David Flanagan (O'Reilly, 2008). - To Ruby From C and C++ is a great site for C/C++programmers who want to learn Ruby. -.
https://www.ibm.com/developerworks/library/os-extendruby/index.html
CC-MAIN-2014-23
refinedweb
3,122
66.13
I have a JSON file like following: { "count": 60, "value": [{ "changesetId": 60, "url": "http://...", "author": { "id": "...", "displayName": "*...", "uniqueName": "...", "url": "http://...* "imageUrl": "http://..." }, "checkedInBy": { "id": "...", "displayName": "...", "uniqueName": "...", "url": "http://...", "imageUrl": "http://..." }, "createdDate": "2016-11-08T22:05:11.17Z", "comment": "..." }, public class Changesets{ int count; *TODO* // model for the JSON above. } public class Changesets { int count; int changeset; String url; Changeset.Author author; Changeset.CheckedInBy checkedInBy; String createdDate; String comment; } If you really need to model the respective Java classes, you will need to reverse engineering the JSON structure. In your case it will be something like this: public class Changesets{ int count; List<Change> value; } and I will let you complete the work. However, if you only need an ad hoc Java object to deal with a complex JSON object in which you are only interested in a very specific property value, you can use the solution I suggested in this answer: Dynamic JSON structure to Java structure
https://codedump.io/share/weNTMtVhXpW6/1/gson-model-for-array
CC-MAIN-2017-09
refinedweb
157
58.32
Need help with this homework problem. How do I write a function, nearest_larger(arr, i) which takes an array and an index. The function should return another index. The conditions are below. Thanks. This should satisfy: (a) `arr[i] < arr[j]`, AND (b) there is no `j2` closer to `i` than `j` where `arr[i] < arr[j]`. In case of ties (see example beow), choose the earliest (left-most) of the two indices. If no number in arris largr than arr[i], return nil. example: nearest_larger([2,3,4,8], 2).should == 3 end My code is: def nearest_larger(arr, idx) greater_nums = [] arr.each {|element| greater_nums << element if element>idx} sorted_greater_nums= greater_nums.sort nearest_larger = sorted_greater_nums[0] arr.index(nearest_larger) end THANKS a lot guys. See post below for solution I see at least two mistakes here. First, your code seems to assume the array is sorted. (Otherwise why would taking the least of greater_nums give you the closest index?) But from your requirements (choose the left-most index in case of a tie), that is clearly not guaranteed. More importantly, in your each loop you're comparing element to idx (the index passed in) rather than arr[idx]. I think what you really want to do is something like this: def nearest_larger(arr, idx) value = arr[idx] # Ensure idx is actually valid. return nil if idx < 0 || idx >= arr.length left, right = [idx - 1, idx + 1] while (left >= 0 || right < arr.length) # Always check left first, per the requirement. return left if left >= 0 && arr[left] > value return right if right < arr.length && arr[right] > value # Incrementally move farther and farther left/right from the specified index # looking for a larger value. left, right = [left - 1, right + 1] end # This will return nil if no values were larger than arr[idx]. end
http://www.dlxedu.com/askdetail/3/250395ca3f4e31e17aab55421c26eaac.html
CC-MAIN-2018-47
refinedweb
302
68.47
OData is an emerging set of extensions for the ATOM protocol that makes it easier to share data over the web. To show off OData in RIA Services, let’s continue our series. We think it is very interesting to expose OData from a DomainService to facilitate data sharing. For example I might want users to be able to access my data in a rich way in Excel as well as my custom Silverlight client. I’d like to be able to enable that without writing multiple services or duplicating any business or data access logic. This is very easy to enable with RIA Services. In fact it is just a check box away! When you create your DomainService simply check the “Expose OData endpoint” and that will expose your DomainService as an OData feed. If you have already created a DomainService it is easy to enable OData on it as well by doing the two things this wizard does. First, it adds an endpoint to the domainServices section of the web.config. > Second, on each paramaterless query methods you wish to expose via OData, mark the query as being default. Meaning any time there is an ask for “Plate” it is this query method that is used. [Query(IsDefault = true)]public IQueryable<Plate> GetPlates(){ once these are done, you an hit the service and see an Atom feed. The format of the URL is the namespace+typename for the domainservice with dots replaced by dashs followed by “.svc/Odata/”. So if the DomainService class is MyApp.Web.DishViewDomainService then the URL would be And then drill in with this URL: That is really cool that the data is in an open ATOM based format… but what is even better is there is a budding ecosystem of clients that can consume this feed. One of the more interesting ones is the Excel addin called PowerPivot. Once you have it installed with Excel 2010. select the PowerPivot window Then you can use the full power of excel so if i want to sort by Number of Updates, with a rating of 4 or higher, with calorie count between 3000 and 4000 then go graph that in some interesting way, you can do the easily.. All with the live data without any custom application code. What we showed in this walk though is how to expose OData from your DomainService and consume that in Excel. This is just a down payment on the OData support coming in the future in RIA Services. why display endpoint not found error? So add OData support is just additional feature? We will not use it in SL app. When we will add support OData in our app then SL still will use WCF RIA and we can use from other app this services with OData support, right? I mean that OData is not a feature for Silverlight? Jolly nice, Brad. I tried using OData protocol features like $count, filters but I get either the "The webpage cannot be found" error or the filter is not applied and all the results are returned. Am I missing something?
https://blogs.msdn.microsoft.com/brada/2010/03/16/silverlight-4-ria-services-ready-for-business-exposing-odata-services/
CC-MAIN-2019-22
refinedweb
520
71.14
» Certification » Developer Certification (SCJD/OCMJD) Author Locking Schemes: Tactical View 01 Javini Javono Ranch Hand Joined: Dec 03, 2003 Posts: 286 posted Feb 08, 2004 16:52:00 0 Hi, The following is a review of locking schemes in general for comparison purposes. This is a tactical review. Please feel free to correct any mis-notions I may have. The following assumptions are made: * the database is stored as one, random access file, * the database is accessed through the class Data. * low level file read, update, write, and create new record operations are carried out within only one instantiated class called MicroData where each method is synchronized. There is no overhead processing associated with any of these methods in that each method directly reads, updates, writes, or creates a new record in the file. * although not explicitly drawn, there exists one instantiation of the singleton class called Guard, which is involved with logical record locking (has lock() and unlock() methods and is multi-threaded). Exactly how it is integrated and used by Data or MyServer or by some other object is not shown. * client accessible remote objects are coded on the server side to behave correctly when multi-threaded. * client accessible remote objects are coded on the server side to behave correctly when more than one instance of this remote object is instantiated on the server. Overview: 1. Client ---over RMI uses---> multi-threaded Data(1..N) ---> MicroData(1) --> DbRaf 2. Client ---over RMI uses---> multi-threaded MyServer(1..N) ---> multi-threaded Data(1..N) ---> MicroData(1) --> DbRaf We will disregard MyServer as it is just a middleman, and introduce Guard: Client ---over RMI uses---> multi-threaded Guard(1) --> multi-threaded Data(1..N) ---> MicroData(1) --> DbRaf Class Guard is used to get access to each individual shared file record of the shared, single resource: Data (which uses MicroData (which directly manipulates the DbRaf, i.e., the database random access file)). Class Guard is used for logical record locking. As long as all the software obeys the rules, then only one thread can ever gain access to a record. This allows concurrency in that ThreadA can lock and use record 1 while ThreadB can lock and use record 2. The overall time it takes, from the client's perspective, is thus minimized for all clients equally. //Class Guard is a server-side "singleton" (or, to say it another way, //for each DbRaf file (and we only have one), there can at most exist //one instance of Guard). Before continuing, let's consider are strategic decisions: any client accessible remote abject can be instantiated more than once, and each instantiation can be multi-threaded. So, let's look at an example: guard.lock(1); process record 1 guard.unlock(1); In this short code block, record 1 is being locked, processed, and then unlocked. In our multithreaded environment, that means that two threads could be working this same three lines of code at the same time, which could result in a programmer logic error thus: ThreadA locks record 1. ThreadB attemps to lock record 1, but waits. ThreadA processes record 1. ThreadA unlocks record 1. ThreadB locks record 1. Whew! We are fine. While it is true that more complicated variations of these locking and unlocking schemes can accept a unique client ID, this is not a fundamental part of understanding the concepts, and we will not deal with the unique client ID at this time. Before continuing, let's consider a simple mutex class, which I have grossly simplified from the Java class given at Doug Lea's web site; you would never use this as production code, I am using it as an algorithmic example only: //I have grossly simplified this class for algorithm display //purposes; this gross over-simplification is not the //exact, correct code found within Doug Lea's concurrent //package at <a href="" target="_blank" rel="nofollow"></a> public class Mutex { private boolean held = false; public synchronized void lock() throws InterruptedException { while (held) wait(); held = true; } public synchronized void unlock() { held = false; notify(); } } The Guard can be set up to use any of the following collections, as examples (as this list is not complete): 1. array[] 2. Vector 3. ArrayList 4. HashMap 5. WeakHashMap And, the collection can be used in one of two ways: Strategy 1: The collection always holds the same number of elements as there are records in the file. I. For every record in the file, which is a shared resource, there must exist some type of locking algorithm; for some examples, a simple MUTEX will be used and will be called class Mutex. II. Since there are N records in the file, each record must have a Mutex associated with it; thus, we will have a collection of mutexes. Strategy 2: The collection only holds those elements which represent records which are currently locked by some thread. Regardless, we do not want to sequentially search through the collection, and that is why linked list-like collections are not being considered. But, random access and keyed lookup collections should offer us sufficient speed. For some reason, no one ever seems to mention that the collection could be an array. So, I've added it to the above list as an option. Basically, I think that an array would be fundamentally similar to a unsynchronized ArrayList (whose size would not change). We will consider these very similar, and thus only consider the ArrayList. When evaluating any given scenario, consider the memory footprint (will all the file record mutex-like objects always reside in memory, or just those records that are currently locked), and how often and for how long the complete collection, if ever, will be completely locked down (i.e., synchronized on itself for a code block). Finally, consider how easily the representation can handle the file growing in size. ---------------------------------------------------------- Strategy 1: Collection Size Equals Number of File Records ---------------------------------------------------------- For simplicity, we'll use a random access-like collection for this example (array[], Vector, ArrayList): public class Guard extends Object { RandomAccessLikeCollection nLock; public Guard(int totalRecordsInFile) { nLock = new RandomAccessLikeCollection(); for (i = 0; i < totalRecordsInFile; i++) { nLock.add(new Mutex()) } } public void lock(int recordNumber) { ((Mutex) nLock.get(recordNumber)).lock(); } public void unlock(int recordNumber) { ((Mutex) nLock.get(recordNumber)).unlock(); } } The above is elegant in its simplicity. It is readily readable, concise, and very clear. We can also use a look-up collection, such as a HashMap wherein the code becomes only a little bit more messy: public class Guard extends Object { HashMap nLock; public Guard(int totalRecordsInFile) { nLock = new HashMap(); for (i = 0; i < totalRecordsInFile; i++) { String recordNumber = "" + i; nLock.put(recordNumber, new Mutex()) } } public void lock(String recordNumber) { ((Mutex) nLock.get(recordNumber)).lock(); } public void unlock(String recordNumber) { ((Mutex) nLock.get(recordNumber)).unlock(); } } Strategy 1 algorithms, as given above, are particularly appealing because they are easy to understand; and, admittedly, I initially fell in love with them because they are 100% accurate, and you gain a lot of confidence because they are so easily comprehensible (although it is true I spent a thread or two trying to understand the details of multi-threading using the Mutex class in previous threads here). Nevertheless, I no longer consider Strategy 1 solutions viable for the following reasons: 1. As the file continues to increase in size: 1a. The memory footprint of the Guard increases. 1b. To grow the Guard object is difficult and could be quite potentially time consuming, not to mention involving complex coding. 2. I may want to have the option of letting the client directly manipulate Data as a remote object, in which case a WeakHashMap might be desired (note: although I haven't though too much about it until just now, one could explore whether the above Strategy 1 could be used with a WeakHashMap so that dead, unresponsive clients could be dealt with). So, while I love their clarity, Strategy 1 solutions are more what I consider to be learning tools; that is, I've never implemented, coded, and tested a Strategy 1 solution, but I understand it completely because it is so intuitively straightforward. Once you see how simple it is, then it is easier to go to the next, more realistic solution phase: Strategy 2 solutions. ----------------- End of Strategy 1 ----------------- ------------------------------------------------------------------------------ Strategy 2: Current Collection Size Equals Number of Currently Locked Records ------------------------------------------------------------------------------ There are two sub-categories: 2a. where locking is against a collection and 2b. locking is against individual locks held within the collection. The difference is in the use of notify() and notifyAll(). The memory footprints are essentially the same. You can decide whether one algorithm may have more contention over shared resources than another. --------------------------------------------------------------------------------------------- Strategy 2a: Current Collection Size Equals Number of Currently Locked Records: notifyAll() --------------------------------------------------------------------------------------------- public class Guard extends Object { ElasticMutex elasticMutex; public Guard() { elasticMutex = new ElasticMutex(); } public void lock(String recordNumber) { elasticMutex.lock(recordNumber); } public void unlock(String recordNumber) { elasticMutex.unlock(recordNumber); } } What kind of algorithm is this I have given above? Well, basically I realize that we no longer have a one to one relationship between each record in the file and a simple, elementary mutex. Instead, we have, what I have named an ElasticMutex: an object whose size changes dynamically, and this is fundamentally different from Strategy 1 that was given above. So, it makes sense to consider the ElasticMutex in isolation, particularly as this is my first study on this issue (we can always condense the algorithm later). For the ElasticMutex, it makes little sense to use a random access-like collection because we will not be using an integer index; instead, we will be keying off the existance of objects, so a hash table is about the only reasonable choice. Since the very existence of a (key, value) pair means that a particular record is locked, value will simply be set to the empty string , everyValue (we can later, consider extending this to so that (key, value) == (client ID, record number)). To create the ElasticMutex, and to keep the algorithm as clear as possible, because production code will be adding more complexity, you will see, if you jot down the Mutex example code given above, that I have formulated ElasticMutex so that its structure is exactly the same as the Mutex class: that is, where the mutex class operated on the simple boolean "held", I substitute an object, HMutex, with a comparable operation: 1. while(held) becomes while(hMutex.isHeld(recordNumber)) 2. held = true becomes hMutex.takeHold(recordNumber) 3. held = false becomes hMutex.letGo(recordNumber) public class ElasticMutex extends Object { HMutex hMutex; public ElasticMutex() { hMutex = new HMutex(); } public synchronized void lock(String recordNumber) { while(hMutex.isHeld(recordNumber)) { wait(); } hMutex.takeHold(recordNumber); } public synchronized void unlock(String recordNumber) { hMutex.letGo(recordNumber); notifyAll(); } } public class HMutex extends Object { HashMap nLock; String everyValue; public Status() { nLock = new HashMap(); everyValue = ""; } public boolean isHeld(String recordNumber) { return nLock.containsKey(recordNumber); } public void takeHold(String recordNumber) { nLock.put(recordNumber, everyValue); } public void letGo(String recordNumber) { nLock.remove(recordNumber); } } --------------------------------------------------------------------------------------------- Strategy 2b: Current Collection Size Equals Number of Currently Locked Records: notify() --------------------------------------------------------------------------------------------- Phil posted this idea just recently: While I work through it now, I am not looking at Phil's posting; so, I may make an error or come up with a less efficient first draft than that presented by Phil. This algorithm is a refinement of the algorithm just given above. The main difference is this: 1. In 2a, all locks were stored in one object called NLock which, let's say, is a HashMap. Thus, to be certain that a waiting thread was able to leave the wait state, the unlock() method had to call the notifyAll() method (otherwise, if you called notify() only, you may awaken a thread who wants currently locked record M, while you have just unlocked record Q). 2. In 2b, all locks are still stored within one object call NLock which, let's say, is a HashMap; however, each value of the hash table represents a linked list of lock objects all relating to one, unique record in the file. This algorithm 2b is particular interesting in that the while clause and wait() call looks like this symbolically: if (myThreadWantsACurrentlyLockedRecord) { synchronized(syncObjectInLinkedList) { wait(); } } That is, the "syncObjectInLinkedList" refers to a particular element in the linked list. But which particular element it refers is not specified in the middle three lines of code. Thus, what "syncObjectInLinkedList" means depends on which thread you are! I thought that was an interesting concept. We now have N syncObjectInLinkedList waiting to use a locked record. Each of these syncObjectInLinkedList has one thread which is waiting on it. The unlocking code takes the first syncObjectInLinkedList from the linked list, and calls notify(): synchronized(syncObjectInLinkedList) { notify(); } In otherwords, in the unlock() method, the thread exiting dictates exactly which syncObjectInLinkedList it will choose, synchronizes to it, and calls notify, so that the one thread waiting on this particular syncObjectInLinkedList, can then proceed to lock the record. Phil liked this algorithm for at least two reasons: 1. He uses notify() instead of notifyAll(), 2. The first thread requesting the lock, will be assured of being the first thread to obtain the lock. In a sense, this algorithm combines two features from algorithms 1 and 2a: 1. From algorithm 1, we use notify(). 2. From algorithm 2a, our storage only accounts for locked records, and thus does not keep an ArrayList for every physical record simply existing within the file. Now at this stage I will depart from Phil's presentation (if I haven't already done so). For I see a line of inquiry that I want to follow. We note that Phil's linked list is really a mutex, but a special kind, a kind where the programmer dictates specifically which one thread within the waiting threads on the mutex will be awakened. I will simplify the algorithm by saying that we can have a Mutex, or a LinkedListMutex; and, we will start with a Mutex only, and later, since Mutex is a separate class, we can change its implementation later if we want to make it into a LinkedListMutex. The goal is this 1. To have an increasing and decreasing collection of mutex's representing the number of locked records in the file (but not to have a collection of mutex's representing every records which currently exists within the file). 2. To be able to use notify() instead of notifyAll(). We will delay the goal of dictating exactly which thread will be awakened to take control of any given Mutex. As long as our HashMap holds a Mutex for a specific record, this should be possible. We start with the dumbed-down, algorithmic version of Mutex with its use of notify() which is a component we desire. public class Mutex { private int numberOfWaitingThreads = 0; private boolean isHashMapMember = false; private boolean held = false; public synchronized void lock() throws InterruptedException { while (held) { numberOfWaitingThreads++; wait(); } held = true; } public synchronized void unlock() { held = false; numberOfWaitingThreads--; notify(); } public synchronized void hasWaitingThreads(HashMap mutexStore, String recordNumber) { if (numberOfWaitingThreads == 0) { synchronize (mutexStore) { mutexStore.delete(recordNumber); } } } public synchronized setIsHashMapMember(boolean value) { isHashMapMember = value; } } Guard will use an ElasticMutex, and ElasticMutex will be responsible for storing one Mutex object for every record. ElasticMutex will not store Mutex objects when the record is not locked. We obtain the unique Mutex we want simply by specifying an argument for recordNumber. public class Guard extends Object { ElasticMutex elasticMutex; public Guard() { elasticMutex = new ElasticMutex(); } public void lock(String recordNumber) { Mutex mutex = null; mutex = elasticMutex.createOrObtainMutexFor(recordNumber); //One potential problem is that right here, this thread //is stopped by the thread scheduler, and then another //thread exercises checkForRemoval() and removes this //mutex from the HashMap! So, need additional logic //so that the mutex knows if it is still a member of //the hashmap. It may turn out that we may have to //synchronize on the HashMap more than I had originally //thought. mutex.lock(); } public void unlock(String recordNumber) { Mutex mutex = null; mutex = elasticMutex.obtainMutexFor(recordNumber); mutex.unlock(); elasticMutex.checkForRemoval(recordNumber); } } public class ElasticMutex extends Object { HashMap mutexStore; public ElasticMutex() { mutexStore = new HashMap(); } public void createOrObtainMutexFor(String recordNumber) { if (recordNumber is key within mutexStore) { return associated Mutex object; } else { create new mutex object; synchronize (mutexStore) { mutexStore.put (key, value) as (recordNumber, Mutex); mutex.setIsHashMapMember(true); } return newly created Mutex object; } } public void checkForRemoval(String recordNumber) { Mutex mutex = mutexStore.getValue(recordNumber); mutex.hasWaitingThreads(mutexStore, recordNumber); } } So, assuming the above algorithm is logically correct, we now have accomplished three goals thanks to Phil's posting: 1. We can use notify() instead of notifyAll(). 2. We can store only those Mutex objects which relate to locked records, instead of storing a Mutex for every physical record in the file. 3. And, though I'd have to check Phil's posting again, I think we have reduced the contention for the HashMap, in that in the above algorithm, the HashMap is only locked down to either do a put or a delete. In short, if the above assertions turn out to be true, this most likely will be the algorithm I can be happy with. And, again, I thank Phil for posting and motivating my thinking along these lines. However, I've noticed that there may be problems in that while one thread is using the mutex but hasn't yet locked it, another thread can be removing the mutex from the HashMap. So, the above algorithm still needs some thinking. More than likely, I'll end up having to synchronize on the HashMap more than I had originally thought. And, the above algorithm, as of now, is an idea, the algorithm is not yet completed due to this problem. New Notes: if it turns out that the only solution is to lock down the HashMap more often, then I lose some excitement over this algorithm, for then a real contention problem can start to arise. What I might do in this case, is just let the HashMap grow until, say 1,000, 10,000 or however many records, then periodically a background thread will start up, lock the HashMap once , go through it, and delete all Mutex's that have no waiting threads. So, the outline would be: 1. Lock the database, i.e., make sure no threads are allowed to attempt to lock any records. 2. Lock the HashMap and remove Mutex's which have no waiting threads, which is the same as deleting all the records in it, actually. ----------------- End of Strategy 2 ----------------- Keep in mind that the above "coding" examples are algorithms, not production-ready code by any means (again, never before implemented, coded, and tested). What I have shown, albeit quite accidentally during my studies, is how we can come up with a set of classes--Guard, ElasticMutex, and HMutex--based upon the simple, straightforward Mutex class written by Doug Lea. Thus, our construction remains about as crystal clear as it can be. Then, due to Phil's posting, I expanded on my interests regarding Strategy 2b which looks like it might be my mutex solution of choice. Thanks, Javini Javono [ February 09, 2004: Message edited by: Javini Javono ] [ February 09, 2004: Message edited by: Javini Javono ] Javini Javono Ranch Hand Joined: Dec 03, 2003 Posts: 286 posted Feb 09, 2004 19:43:00 0 Hi, I added a new Strategy 2b (near the end of the above posting) which is a variant based upon Phil's recent posting: Thanks, Javini Javono I agree. Here's the link: subject: Locking Schemes: Tactical View 01 Similar Threads lock/unlock Implementing Lock/Unlock Threads 002 Synchronization of public methods in DVDDatabase Lock question! All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/185096/java-developer-SCJD/certification/Locking-Schemes-Tactical-View
CC-MAIN-2014-49
refinedweb
3,290
50.46
I am trying to use the sorted() in python and trying to find the interpretation but I get this error. ValueError: too many values to unpack (expected 2) Here’s my code: from treeinterpreter import treeinterpreter as ti X = processed_data[0] y = prediction rf = pickle.load(open("original/PermutationModelNew.sav", "rb")) prediction, bias, contributions = ti.predict(rf, X) print("Bias (trainset mean)", bias[0]) c, feature = sorted(zip(contributions[0],X.columns)) X is the test data and it looks like this: Age DailyRate DistanceFromHome ... BusinessTravel_ OverTime_ Over18_ 0 39 903 2 ... 2 1 1 [1 rows x 28 columns] and y looks like this: [0] Can someone please help me to fix this? I am using this Example Source: Python Questions
https://askpythonquestions.com/2020/09/21/how-to-fix-valueerror-too-many-values-to-unpack-expected-2/
CC-MAIN-2021-31
refinedweb
122
57.98
Creating Custom RPC Classes I had a bit of trouble getting started with RPC classes, so I'm including my first experiment with custom RPC classes as an example for others who want to start using RPC. The basic concept of RPC is to call be able to access the variables and functions on your microcontroller through an external device, via Serial, Ethernet, or whatever other communication types you folk favor. Unfortunately, when compiling a program all of the things like comments and variable names are removed, so it becomes difficult to locate the variable you want to access. This problem is addressed in RPC by storing a string containing the variable name on the microcontroller, and linking that variable name to the address of the variable/function in memory. This works well, the only catch is you'll have to set up each function or variable you want to be able to access over RPC individually. To try a quick demo of RPC using built-in mbed classes, try: For trying out RPC on simple functions and variables, check out the documentation at: You may need to modify some configurations to get RPC working in your terminal. Each command seems to expect a carriage return + line feed at the end of the line. In Tera Term, I had to enable CR+LF in the settings, and in PUTTY the best I could figure out is using CTRL-J instead of ENTER. I also find turning on local echo is pretty handy, since I type the wrong command more often than the right one. RPC commands are of the format: /<Object name>/<Method name> <Arguments separated by spaces> This demonstration focuses on how to use RPC with your own custom classes and member functions. As a simple example, we will create an LED class attached to a DigitalOut pin with member functions blink and toggle. RPCTest Our custom class will need to have inheritance from the class "Base". Remember to register the name of your object by using in the initialization list the command Base(name). We will also need to include two member functions which help to link the class to the strings identifying it. #ifdef MBED_RPC //this code will not compile unless we have included the rpc.h file. So this class can also be used without RPC. /**Defines the methods available over RPC*/ virtual const struct rpc_method *get_rpc_methods(); /**Defines the RPC class*/ static struct rpc_class *get_rpc_class(); #endif // MBED_RPC The body of these functions is still a little bit of a mystery to me. I copied the basic format and only changed the arguments to match my functions. I think most of the clues to a deeper understanding lie in , but for now I'm just glad it works. Note: I'm having difficulty getting this code to display properly in my notebook. Since some of the code displayed here may have errors, I suggest you look at the code in the actual program instead: RPCTest Now that the class has been defined as RPC-able, we have completed the class definition. The only step that remains is to specify it within the body of your program as an RPC class that you will be using. I chose to include both this and the built-in Timer class as options in my code. Base::add_rpc_class<Timer> (); //a class included in the core mbed RPC library Base::add_rpc_class<LED> (); //my own custom LED class Typing "/" into the terminal will give a list of available RPC commands. This should include our new class "LED" the built-in "Timer" class, and "Base" which has RPC options to view or delete all objects. Please log in to post a comment.
http://mbed.org/users/JimmyTheHack/notebook/creating-custom-rpc-classes/
CC-MAIN-2013-20
refinedweb
618
64.95
You Used Perl to Write WHAT?! 307 Esther Schindler writes "Developers spend a lot of time telling managers, 'Let me use the tool that's appropriate for the job' (cue the '...everything looks like a nail' meme here). But rarely do we enumerate when a language is the right one for a particular job, and when it's a very, very wrong choice. James Turner, writing for CIO.com, identifies five tasks for which perl is ideally suited, and four that... well, really, shouldn't you choose something else? This is the first article in a series that will examine what each language is good at, and for which tasks it's just plain dumb. Another article is coming RSN about JavaScript, and yet another for PHP... with more promised, should these first articles do well." Both sides... (Score:5, Interesting) Having the right tools is great for current productivity, but it's hell on expenses and new recruits. If you use a different tool for every job, you need to maintain all those tools and a task force that's able to use all of them. Sometimes the 'right tool' is one that fits the company as well as the job. An Intelligent FrI$T Psot. (Score:2) Re:An Intelligent FrI$T Psot. (Score:4, Insightful) Re: (Score:2) I didn't use it on a site I worked. This site relied on user uploads. However, Django's FileField doesn't allow you to put anything but static strings and date formatting characters in the upload directory, or even offer a method to move files around. The files on the aforementioned site were supposed to be organized by user, so that other scripts could easily zip entire directories / directory trees. Re: (Score:3, Insightful) OK, so not all languages are well matched to solving all problems, but keeping it down to a managable number also serves to avoid some major grief in future. Re:is your company weak? (Score:5, Insightful) Re:is your company weak? (Score:4, Insightful) But the problem isn't 'picking up a language', it's picking up 3. If we hire a new recruit, to expect him to learn 3 new languages immediately is ridiculous. So we don't -have- a ton of different languages in use, we have a choice few that cover everything reasonably well. In fact, since I started, we have dropped 1 and almost dropped another. (They're waiting on me to have time to rewrite that last program in another language.) In addition to not having to have new recruits learn those 2 languages, we also don't have to maintain the software needed for those 2 languages. That saves employee time and computing power both. And in truth, I tried to suggest adding a new language a few months ago... And after discussion, we decided the benefits didn't outweigh the costs. I was the only one who already knew the language at all, and it wasn't -that- much better than what we had. If we were a huge company with thousands of employees, it might make sense to have specialists in each of the languages and also use 'the right tool for the job' Re:is your company weak? (Score:5, Insightful) Re: (Score:3, Insightful) Re:is your company weak? (Score:5, Interesting) HOWEVER, I do remember quite well what threads are, what a semaphore is, what a binary tree is, the difference between a bubble/quick/radix sort, the concept of object oriented design, etc. I wish I could say I remembered UML modeling but honestly, I hated that darned part of CS and never paid attention there anyways People keep saying this... (Score:5, Insightful) The problem is not learning the syntax and basic idioms. Agreed, that's pretty quick, particularly if you have a good reference. The problem (and the time sink) is the *ugly* side of every language. The parts of the standard libraries that sucked, and were reimplemented elsewhere (but you gotta know that...). The functionality where everyone who "lives with" the language grabs X open source library to implement -- not Y! it's a POS! -- but you don't know that yet. The language features that have secret, illogical gotchas for special cases. The bugs in the compiler or interpreter that are easy to avoid -- once you've been burned once. The code that will break cross-platform compatibility for obscure reasons. The code that will make it almost impossible to internationalize later, because you didn't learn how that support worked yet. Granted, the cost of these things with any reasonable mature language should not be enormous (though it depends how long you go down a wrong path...), and you can allow for it, but it's always a significant risk *especially* if you don't have someone on the team (perhaps the new team who has to maintain your old code) who's already more-or-less expert level. But either way, you have to allow *something* for that cost, and sometimes it's not worth it just to use the absolute best tool for the job when you have a pretty close fit available. Re: (Score:3, Insightful) This is worthy of a mod +5 - Wish I Said It Myself. It bears repeating. Syntax can learned in a few hours, figuring out the quirks can last you a lifetime. Re: (Score:3, Interesting) Sadly it is the brain dead of the HR departments and headhunters who do the hiring/selecting... usually. To them, what is on paper trumps experience. Seems that ever more often these wastes of brain pans either submit for interviews or will outright hire a newly graduated Masters student (based entirely on their piece of paper) with fuck all experience and make them managers or 'senior' developers Re: (Score:2) Re:is your company weak? (Score:4, Funny) Besides, complaining about having too much work while browsing Slashdot really is foolish. Re: (Score:2) The real issue isn't first personnel, but time. The need to get stuff done NOW, NOW, NOW doesn't afford the FNG any time to sink into the language and its paradigms. Lack of time leads to heat and pressure, which produces nasty coal fires more often than diamonds. Re: (Score:3, Insightful) If it is a one time throw away script to fix a one time problem then yes the programmer can use what ever he wants. But if it is a tool then you may need to have other people maintain and work on it. You can write any program in any language. Yes some are better than others but how well you know the language is also important. Also having multiple vendors for a language is also really useful. If only one vendor supports the language then they have a lot of control over your company. Take Foxpro a Not following... (Score:3, Informative) I recently had a throwdown regarding this because one of my coworkers was working on a project, and I flat refused to help him in his chosen language. That language? VB6. Now I used to program in that...thing...and Re:Both sides... (Score:5, Insightful) That's why the "right tool for the job" is sometimes the tool that meets the greatest cross-section of a company's needs rather than a jumble of tools that are ideal at a lot of little tasks. e.g. While it's fashionable to hate Java these days, you have to admit that it does have a rather massive cross-section of needs it can meet. Thus one of the reasons why it's so popular in large companies. Yet a smaller company might find more value in using Ruby toolkits to do all their work. Ruby may not be ideal for some of the less glamorous back-end tasks, but tools like Rails gain so much on the front end that Ruby meets a greater cross-section of needs than Java would. Re: (Score:2) Re: (Score:3, Interesting) Seriously, you're picking at an example where I say that some small company somewhere might benefit from the faster development time of Ruby over the advantages of Java? Especially when said company probably doesn't need the same level of scalability you're worried about? Geez. Simmer down, will ya? Re: (Score:3, Insightful) Oh, perhaps the kind that provide exclusive or small-reach services? e.g. If I ran a local salon, how many people would reasonably be hitting my site simultaneously? I might not be able to afford to have someone build me a scalable J2EE online-appointment system, but I could probably afford a small Ruby on Rails site. Scalability for my site would be handled by throwing hardware at the problem, as it's a LOT cheaper in this case th Re:Both sides... (Score:5, Insightful) I hate the "right tool for the job" cliche. Not because it's necessarily wrong, but because it tends to be used by people who automatically assume that their tool is the right one and wish to stop any serious discussion about other possibilities. language vs library (Score:5, Insightful) I wonder if this whole discussion is off the mark. Languages are for the most part trivial. And universal. "It's the libraries, stupid" is sometimes how I feel. If it was easy to link in or call any library function from any language, then half of this discussion would immediately be seen to be irrelevant. So Perl is "the right tool for the job" because it has the ability to apply regular expressions to strings? But, you know, C can do that too thanks to this PCRE library. Hashes? C can do that too via another library. In case anyone has forgotten, Perl itself is written in C. I read that Perl 6 has vastly improved the interface to other languages, especially C libraries. These day, whenever I write a new program it often feels as if I'm creating yet another language. A simple, superficial, limited language, but nonetheless, a language. Program needs a configuration file? Whip up a suitable format (language) for that. Needs to save data? Barf out this big data structure into a YML file. Want some way to run the interface from a batch process, or otherwise automatically? Start turning the user interface into a language. Want to connect Perl 5 and C? Get acquainted with XS, a "language" the Perl folks felt it necessary to create for that purpose, because Perl 5 wasn't good enough alone. Want to compile a large project written in C? Get familiar with the language of Make, because while C certainly could do it, C isn't so good for that. Is ANT a "language" for building Java projects? Where's the line between language and library? I suppose where things lead to a new language is when someone wants to implement a new concept and the established ways aren't good enough. Or has a way to eliminate a bad programming practice, but some elements of an existing language must be dropped to do it. For instance, wouldn't be nice to have variable length parameter lists in C, as C's own printf function does? Too bad it's such a pain to do that in C. How about lazy evaluation and currying so we can have infinitely long parameter lists? Oops, guess the C call stack can do recursion, but isn't too well suited to expressing that sort of thing, time to make another language, Haskell. Do we want to pass along a pointer to a structure, or a copy of a structure? Java defaulted to pointers where C did not, but then said Java didn't use pointers. Nice not to have to type in ampersands and asterisks all the time, but still, I find the thinking misleading. Then there's garbage collection. The consensus is that garbage collection is overall a good thing, but that a good programmer can do better than the automatic garbage collector. And so on. Re:language vs library (Score:4, Interesting) Languages are for the most part trivial. And universal. How about lazy evaluation and currying... ...time to make another language, Haskell.... Do we want... a pointer? Please do forgive me (hee hee hee: Please ==> forgive $ me) if I haven't quite gotten your point, but I cannot square your first and last paragraphs, specifically the parts I've quoted. Perhaps if you'd written "Imperative languages are for the most part trivial and universal? Perhaps then could easily equate C and Fortran and Perl and sed, and leave Haskell and Lisp out of the mix. Or perhaps if you'd written "OO languages are for the most part trivial and universal? Perhaps then I could equate (more or less easily, don't think it's quite NP) Objective C and Java and C++. (But see below....) But the bold unqualified "they're all languages, get over it" sort of assertion doesn't parse. My intro was Fortran, then F77, then Pascal, then C (OMG! Pointers! The Bomb!), and it was evolution, a little more cool, a little more flexibility all along. Then I learned C++ at work by day and Java for fun at night, and my head hurt, 'cause I liked the imperative style and OO was weirdly different but everyone was swilling kool aid so I stuck it out... ...and every night I'd discover something in Java that was THE BOMB that would solve that day's problem and every next day I'd find that C++ didn't have that feature (does today, AFAIK, but STL was busted then, so everyone rolled their own)... ...but I moved out of real programming before I got my head around OO. And now I'm learning Haskell, just 'cause I've learned that making my head hurt from time to time is a great way of stretching myself, of getting better at everything.... And I don't understand - at least, I haven't wrapped my head around Monads yet, though I get what they're for. And you know what? No pain. None. I can feel the approach of enlightenment. SYB and reflect, baby, introspection and lazy evaluation and side-effect free (or reliably and provably constrained side-effect management) via implicit state passing. Whoa. I feel like Neo after the roof but just before the hallway - I'm starting to believe. I've read the "SYB in C++" paper. I get what they're doing. But they admit the gap: Scrap++ is a great exercise, but how do you get to SYB without those? You don't. There's a guy out there that /.ers love or hate, no middle ground, so I won't reference him directly, but he's right: Different programming models change how you think of problems, and the right model opens so many doors you didn't even know existed. Doors you couldn't even have described until you knew they were there, but you were unable to find the hallway until you squinted, looked sideways at the world, and watched it shift... ...and were freed from the imperative.... "Hello World" is a cool teaching aid in Fortran and C and even perl (do it 15 different ways, without string literals or character types, preferably with a program one column wide :->) But Haskell? No. When you learn Haskell, think big. 'Cause its programming model is so way different you have no idea. I'm at the point I almost consider Monads harmful... ...but I'll get the other side of the koan soon. And when I have a simple repetitive task that I need to automate, I'll stick to bash, 'cause it's clean and readable, and sufficiently fast and sufficiently limited that I have to force myself to be literate, which makes it so much easier 6 months later when I need to tweak $ remember script. I won't use Haskell for that. No way. But that replacement for scrabble/scribble I've been thinking of? That tool for edit Re: (Score:3, Interesting) Turing Machines vs. Lambda Calculus (Score:3, Interesting) What you're really talking about here is the difference between the Turing machine model of computation and the Lambda Calculus model, and you're absolutely right. Even though the two are provably equivalent (try expressing one of your Haskell programs as a while loop with a stack; it works but it sucks having to write it!), the very mentality that you use when programming in a language like Haskell is so totally radically different from how you program in C that it's useless comparing the two. In college I Re: (Score:2) Re: (Score:2) How is it hell on expenses if you focus on open source solutions? Also, if you hire people who cannot learn a new technology then you hired the wrong person. Technology will always change. You don't want people who will become dinosaurs in a few short years. It's "Hard" to learn a few languages? (Score:3, Insightful) There is a difference between keeping a well stocked and maintained tool-box that covers the basics and being a compulsive tool collector. There's also a difference between keeping a well stocked and maintained tool-box that covers the basics and using a screwdriver for everything. That's the same mentality that tries to use the tip of a hunting knife to turn a precision screw. Its simple (Score:5, Insightful) Re: (Score:3, Insightful) Glue and objects (Score:5, Interesting) The other really odd experience for me was learing object oriented programming. I had been programming in objects since I was first introduced to them when the first NeXT computer came out. I used java. And C++ and such. I thought I understood objects. Then one day I learned to program object oriented in Perl. An I learned that while I was fluent in object oriented usage, I really had a pathetic understanding of how they worked and what was actually possible with objects. Perl objects are sort of like owning a copy of grey's anatomy or "the visible" man. You son't just see that arms connect to torso's from outside but you see all the sinews and bones and blood. It's actually amazing how so many things we think of as different concepts in object oriented programming and data bases are actually different reflections of the same trick. And that's the trick perl use to make objects. in perl, an object is any variable that has an attribute that can store a list of package names. Let's see what you can do with that. Hmmm.... well that list can be your inheritance heirarchy so each package is what you search for methods. But notice that since it's a mutable list a perl object can do something else that most object oriented languages cannot. A variable can change it's "inheritance" list after the fact. it can change it's own class. Okay Now this is just a single variable so where to we get attributes of the object? Well, if that variable is say a hash (dictionary) then we can just use the key's as the attribute names. so if were to write self.foo in C++, you would write self->{foo} in perl. More fun: let's say you call a method() or ask for an attribute on a variable that does not exist. Well, a perl object can just add more packages to it's inheritnace list. Or it cold write the method on the spot and add it too it's own inheritnace. "I'm my own grandpa". I've used this trick many times to create tables. I don't write any of the "get" or "set" methods. instead I just intercept the call to the method "setfoo()" which never existed cause I never wrote it, then I have perl create an attribute called foo: Self->{foo} = "something". then I have perl write a subroutine called "setfoo" and add that subroutine into a package namespace and put that in it's inhereticnace list.. ("like adding methods to a C++ package outside the declaration". (programming tip: obviously this is could lead to problems with typos, so I also provide the variabel with a list of all allowed attribute names--- but of course I can always add to that list later). Now something more exotic. The hottest thing in Data base programming is the realization that sometimes column centric data bases are better than traditional row-centric data bases structures. In perl an object can change which it is, transparently. For example, if I'm a traditional object with a row organization then all my attributes are stored as self->{foo1}, self->{foo2}, self->{foo3}. and so on, just as you might right self.foo3 in python. But I did not have to do it that way. What if instead of making the self variable a hash (dictionary) I had made the self-variable a simple scalar, say an integer. Well at fist this seems stupid, where did all the instance variables go? Well, I just store them in the class. I make the scalar self-variable's integer just an index. The class keeps the instance variables in arrays--that is column based storage--.. SO for example if self = 4, then the attibute foo for this instance now becomes self->class->foo[4]. The beauty of this is that si ob (Score:4, Insightful) Re: (Score:3, Insightful) Perl6 will be written in Perl (Score:2) 1 Page Version (Score:5, Informative) Re: (Score:2) Re: (Score:2) idiots (Score:5, Funny) Re:idiots (Score:5, Funny) Re:idiots (Score:5, Funny) Re:idiots (Score:5, Funny) Ask not for whom the whoosh whooshes - it whooshes for thee. Re: (Score:3, Funny) Re: (Score:3, Funny) It's not that rare. Re:idiots (Score:4, Insightful) Now, perl might not be the best language on the entire planet for web scripting and such, but to suggest that it is actually on the negative side of the graph in being web appropriate is just dumb. And I don't need to be a CIO to understand that. Re:idiots (Score:5, Funny) Oh, I always write it in C. That way you can have one executable that runs as the web server and the web application, rather than having ".pl" and ".shtml" and other generated files everywhere. This is why strcat() was invented folks! It's easy. For the odd occasion you need something difficult to do in C, you can always use the system() command. For example, from my website: That way I can just put links to "/internal/specialfn?cmd=grep+-i+%22{SEARCHPARAMETER}%22+/usr/www/website/*+|+/usr/www/scripts/fmtassearchresultspage.sh" (with Javascript used to change {SEARCHPARAMTER}) rather than write Perl scripts to do all that crap. I don't understand why everyone doesn't code like this! Re:idiots (Score:4, Funny) Re: (Score:3, Funny) Re: (Score:3, Insightful) Re: (Score:2) Ray Tracing (Score:5, Insightful) But the most profound part of the whole article, and I admonish everyone coding Perl to remember this: This applies to any language. If you can do it multiple ways, pick the readable one. Re:Ray Tracing (Score:4, Interesting) I was a long time perl programmer before I made the switch to python. All my headaches with perl went away, and no new headaches of similar magnitude have surfaced. So for me it has been an net improvement. KISS, DRY, and various other good engineering/development paradigms are embodied in python's development model. Perl made it easy to shoot yourself in the foot. Python makes it hard to shoot yourself in the foot -- but you can if you want to. That probably best sums up their differences. Re: (Score:2, Funny) But whatever, I have a perl script that converts tabs to spaces, so it all works out. Re: (Score:3, Informative) Larry Wall doesn't exactly follow his own advice, there... But I digress. I'm not really sure a raytracer was such a horrible idea. If you can isolate those tight loops, there's a good chance you can do just that part in C. High-performance should be possible. It's the real-time response that would be difficult. I wouldn't have a problem writ Re: (Score:2) You made me chuckle. Re:Ray Tracing (Score:5, Insightful) In a college computer architecture class, we had to write an emulator for a system designed "by the professor". Basically all tight loops performing really basic operations, and a lot of synchronization. We were given sample microcode and programs to test with, and when we turned it in he ran it with different microcode and programs to guarantee accuracy. Accuracy was required to pass, but your grade was based on performance and clarity. They only perfect score went to an emulator written in Perl. The built-in hash tables, and some smart programming combined with the ease of parsing the microcode and program data created not only the fastest (some classmates used C, C++, lisp, or Java to write their emulators) emulator, but also the easiest to read of the group. It's the programmer that creates slow, unreadable code, not the language. Re: (Score:2) Re: (Score:3, Interesting) Yes, that reminds me a bit about a class I used to teach for a former employer. I was teaching old-time Why two different languages? (Score:2) If you can isolate those tight loops, there's a good chance you can do just that part in C. Unless your interpreter can detect inner loops and compile them to native code for you. This is one advantage of using a language that targets the JVM or CLR: the widespread interpreters for these targets are designed to do just that. Why should inner loops and outer loops be written in different languages? And PHP has been notorious for SQL injections, if undeservedly. In my opinion, a language is "deservedly" notorious for security holes if many of the code examples in the language's official documentation are subject to these security holes. For example, if a langua Re:Ray Tracing (Score:4, Insightful) To the contrary, I think everyone should write a ray-tracer in Perl. Or, more generally, every programmer should take his or her favorite language and use it for something it's spectacularly bad at. Like ray-tracing in Perl. Part of the reason is to show that yes, you can use just about any language for just about any task. But that doesn't mean you should. Using a language unsuited to a project gets you familiar with the bounds of the language, so you have a pretty good idea before you start whether or not the language is a good fit for a given task. And it can often teach you a lot about the language, because you'll have to explore the little nooks and crannies to figure out how to get it to do what you want. The other part of the reason is that everyone needs a little humbling. This is especially true for anyone who says, "I used to use {language_x} until I discovered {language_y} and realized that {x} is TEH SUCK!" That usually just means that {y} is more suited to what you're doing now. Go code something non-trivial that {y} is unsuited for, and see if you don't end up cursing your new favorite just as much as you curse your old favorite. I wrote a BASIC interpreter (Score:4, Interesting) Why? I dunno, but I did learn a whole lot about Perl. I think that's the best way to learn things... make up a fake project for yourself (say, a database, or a simple flight simulator)...then implement it. Then revise it. Re:I wrote a BASIC interpreter (Score:4, Funny) refund (Score:5, Insightful) According to the article.. (Score:5, Funny) Wait...there are ways to use perl in a non-obfuscated fashion!? Re:According to the article.. (Score:4, Insightful) Whoosh! (Score:2, Insightful) My favorite example (Score:5, Interesting) It's fun to watch people's reaction when they realize that "You wrote a perl script that reads the manual and generates the code?" I just respond something like "Uh, yeah; you got a problem with that?" Especially fun has been the couple of discussions in which I expressed a great deal of skepticism of various "AI" claims. Then someone brings up the fact that I write perl programs that read English-language docs and generate code from them. They're obviously puzzled by the fact that I do this while looking skeptically at "AI" proposals. It's like they expect me to just shrug and write other impossible things in perl. Re: (Score:2, Insightful) Just sounds like text processing to me, which Perl (and most scripting/shell languages) are designed for. Inline C in Perl (Score:3, Interesting) Re: (Score:2) Quick repair tool (Score:3, Interesting) I'm currently writing a server based application written in c# (mono). The email class of c# was good...but enough flexible for the multipart graphically enriched email I had to send (a report not a spam...Mind you). I couldn't properly configured the MIME Parts (especially "inline"). If I had just c# the only available would have been a commercial library. So I end up with Perl. perl -MCPAN -e shell . install MIME::Light (if I remind well) a couple lines after I had a tool ready to send emails (based html pages written by my c# application). The script is fired up by my c# application with several parameters. It works. There should be a pool of accepted tools (Score:2) Bollocks (Score:5, Interesting) Perl was, and is (IMHO) the first and foremost thing you grab when you write web-stuff. CPAN is nothing if not infinite, the web is a text-based thing the perl was designed for, and its speed makes ruby blush. So why ? Why try to write off perl all the time. Is it because they can't seem to Re: (Score:3, Insightful) Probably because the CIO author is highly ignorant of the existence of CPAN and some of the high quality modules it archives and distributes. One particularly high quality module is Template Toolkit [cpan.org]. It is an incredibly powerful templating system. Some would even say it is too powerful (you could write enter programs in the template language). But what I have found is that power was purposely put there because there are instances where you need to do something fancy in the template or view rather than the Re:Bollocks (Score:4, Informative) I can think of a combination of three factors to support this assertion: For all the things PHP does wrong, these are things that it has done right. When to use Perl? (Score:5, Funny) Re:When to use Perl? (Score:5, Interesting) Re: (Score:2, Funny) Not quite Perl... (Score:2) Originally it was just a small project to get through a week of stagnated work. It's actually pretty hacked together but is separated into a client/server setup for use of a single backend and multiple frontends. Eventually I plan to port it to C/C++... but for now it seems to be working fine. Another useless article (Score:3, Insightful) What people tend to forget is how extensible a language can be, especially Perl. Blanket statements like "Perl should not be used for the web" is misinformed at best. No one wrote web scripts in Ruby before Rails -- it's all about the framework. Go give the Catalyst framework a try, and tell me again not to use Perl for the web. As for high performance computing, remember that the perl interpreter does a few things very well, very fast. We ended up rewriting our web crawling infrastructure at $WORK from Nutch and Lucene in Java to a custom distributed Perl architecture against Xapian. Not only is it much more 'pluggable' than the original solution, we ended up getting a huge increase in speed out of the deal, even putting it up against 64-bit Java. It's anecdotal, and mileage will vary, but there are times that Perl is just better at crunching text than anything else. Too many people write off Perl as a relic of the past. What people fail to see is the new Perl renaissance that is quickly approaching. It's a good time to be a Perl developer, judging by the job market. 4 Signs You're An IT Tool (Score:3, Insightful) I was expecting the standard litany of anti-perl 'wrong tool for the job' comments in this article, but the 'four things' you're not supposed to do made me laugh: Check. No discussion necessary, but did it even need to be pointed out? Really, if you're even thinking about doing real-time apps in any interpreted language, you need to have your head examined. The example provided points out that using a simplistic perl script that calls 'system' to move files around generates a lot of needless sub shells and processes. OK - good point. However, in the example he provides, he replaces the inefficient perl script with an efficient perl script. How does that help make your point? Unless the point is 'try to write good code' - which isn't language specific. This is just short-sighted and stupid, and the author suggests we use PHP or Ruby on Rails. OK - there are a lot of choices here, and all of them have advantages and disadvantages. But after reading that I should be using PHP, this quote made me spit coffe on my keyboard: "You should especially avoid using perl for traditional CGI-style form processing; this code tends to be hard to read and maintain because the HTML ends up inlined inside the perl code." Clean, elegant and properly designed code can be written in any language. Some languages encourage this, some make it difficult. Ruby encourages, but I'd stake my reputation on the claim that PHP makes it very hard. Perl is neutral on that spectrum. Check. No discussion necessary, but did it even need to be pointed out? Oh, I used that one already. Re: (Score:3, Insightful) Check. No discussion necessary, but did it even need to be pointed out? Really, if you're even thinking about doing real-time apps in any interpreted language, you need to have your head examined. Yes. It needed to be pointed out. Look at where the article is published: a magazine targeted at _IT managers_. Many of these people don't really understand the basics of what the languages the programmers they employ are. Articles like Re: (Score:2) The lack of namespaces and the dizzying array of functions included in the core libraries makes it difficult to write clean code. I didn't say it couldn't be done. In fact, I started the comment by saying that you could write good code in any languag Re: (Score:2) No [cpan.org] it [cpan.org] is [cpan.org] not [cpan.org]. Perl has far more tools to support making clean MVC web separation stuff than ruby. The issue is that what represents "good" or "clean" code is domain specific. All the things are showed you are things that are good for the specific problem of document templating for the web. I'd be willing to say that Perl's flexibility lets you code in pretty much any domain cleanly...if you set up your environment to facilitate that. Re: (Score:2) Re: (Score:3, Insightful) The silliest statement ever (Score:2) Re: (Score:2) Perl can be written in a way that's fairly C-style whereas that's not the case for bash. Bash is great for simple tasks but for more complicated ones it can be Perl - Swiss Army Knife (Score:2) That's the 10,000' view, anyway. Data visualization in Perl/Tk... (Score:2) It ran quite well on the hardware of the day, and had the advantage of actually behaving on just about everything in a very mixed-platform campus (Linux, Windows, Solaris, AIX, etc). Most unusual thing I've used Perl for (Score:2) Sheesh... (Score:3, Insightful) First, and more than sufficiently, of all: who the $curse is going to be taking coding language advice from CIO Magazine? If it's a real practicing software developer, they need to turn in their geek card and coder license immediately. And if it's a CIO or other PHB-level entity, for the love of $DIETY, don't let him start dictating software tool choices on the basis of stuff like TFA! Second, the author of the article sounds like he has only ever dabbled with Perl, sysadmin-tool-like. He betrays a disturbing unawareness of the recent development in frameworks and methodologies in the Perl universe that track most of the major software development trends and tools available in other communities. His advice, positive and negative, seems stuck in basic out-of-the-box Perl 5.6 or something. Most of the time, that's plenty good enough for the ol' sysadmin "Swiss Army Scripting Language" approach, but certainly missed out on a lot of good work. (The reader comments after the article call him out on this pretty well, so I won't rehash.) Third, a lot of the advice is universal, not Perl-specific. I mean, stuff like "Don't use Perl in an obfuscated fashion" is like "Don't drive a 1973 Dodge Ram pickup truck while drunk." Very true, very sage advice, but the problem is not Perl (or the truck), it's the obfuscation (or the drunkenness). Code readability is a timeless, domainless, endless problem. The only reason Perl gets picked on for readability is basically bad PR. Frankly, a lot of TFA just sounds like an excuse to fill up a few column-inches the editor needed filled in. They forgot #6 (Score:3, Funny) one of my proudest moments... (Score:3, Interesting) Real World Counter Example (Score:3, Informative) Specifically, the Zappos site, built with Perl, was rated the fastest retail website in the world [internetretailer.com] for broadband customers for much of 2006. It beat out Amazon, Dell, Best Buy, etc, etc, you name it. It was also the most consistent speed and the fewest errors. Search Internet Retailer for the more numbers. It always places in the top 5. Also, the claim that one might mix HTML in scripts is a sign that this guy hasn't actually used Perl in the past decade. Everyone switched to powerful templating systems sometime in 98. There are several very nice web development frameworks for Perl these days. Just like almost any other language. The rest of his criticisms are more valid. I wouldn't try writing graphic intensive applications, or anything with heavy math processing in Perl. And the most common complaint, that it doesn't prevent you from writing messy code, is certainly true. Of course, just because your code looks neat doesn't mean it's good code either Cheers. Re: (Score:2, Interesting) I actually read the f**king article and came away feeling dumber for actually having done so. Funny enough, I did not see one point that you could not use Ruby or Python to make the EXACT SAME CASE. Data manipulation in place? Cripes, if I'm in an Excel file perhaps I could write VBA to do the same thing and a lot easier at that. This kind of thing is just inane. Re: (Score:2) Perl scripts may not be portable because they use modules that are not installed everywhere. Shell scripts may not be portable because they use executables that are not installed everywhere. I don't see how one improves upon the other, and it's really pretty easy to install things from CPAN. Re:PHP WTF?! (Score:4, Interesting) Re: (Score:2) For reference, I use PHP like a lot of people use Perl. I'm a hardcore assembly/C developer by day, but realistically, we all need to massage data and such, as well as provide web tools to perform certain tasks. I've written moderately-sized web apps in PHP (about 70 different dynamic pages, with the whole system getting about 100k page loads per business day), and find it quite nice as long as you force yourse
http://tech.slashdot.org/story/08/01/25/1438247/you-used-perl-to-write-what
CC-MAIN-2013-20
refinedweb
6,826
72.05
The RIA Services T4 Code Generation feature was checked in about 3 hours before we published our MSI for today’s release. Talk about publishing an experimental feature! So tonight, I decided to take the app that I’ve been working on for my RIA Services Validation series and flip it over to the T4 generator to see what the experience is like. Here is my uncensored experience. In order to use our T4 code generator, you first need to add an assembly reference: Then you need to edit your Silverlight project file and add the following into the project properties: <RiaClientCodeGeneratorName> Microsoft.ServiceModel.DomainServices.Tools.TextTemplate.CSharpGenerators.CSharpClientCodeGenerator, Microsoft.ServiceModel.DomainServices.Tools.TextTemplate, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 </RiaClientCodeGeneratorName> We would like your project to flip over to T4 just by adding the reference; we’ve filed a bug for that. Alternatively, it would be nice to only specify the class name here instead of the fully-qualified name. As you’ll see below, when you add your own code generator, it actually gets simpler though. At this point, your project is switched over to use the T4 code generator. Compile and see how we did. If you don’t have any compile errors, then you should be in good shape to use the T4 generator. Some of you might experience build errors though, and if you do these are bugs and we need to hear about them. As a matter of fact, with my RudeValidation solution, after I flipped the switch, I got build errors. I have found a bug already. Hey, I told you these were experimental bits! Here’s what I saw after my first T4 compile. As you can see, my custom ValidationAttributes accept parameters of enum types, and the enum values are not being qualified properly. Oh well, I guess this T4 stuff won’t work for me, huh? Wait! I have control over the generated code… maybe I can fix this! I want to introduce my own code generator to take control and try to fix this issue. I need to create a new class and derive from CSharpClientCodeGenerator, and decorate my class with a DomainServiceClientCodeGenerator attribute. Let’s start with that. using Microsoft.ServiceModel.DomainServices.Tools; { } } You’ll notice that I needed to add a reference to Microsoft.ServiceModel.DomainServices.Tools for this to work. That assembly is in our framework (not the Toolkit) and it’s where the DomainServiceClientCodeGeneratorAttribute class is defined. Also, in order for this to compile, I needed to add a reference to System.ComponentModel.Composition (MEF) because that attribute class actually derives from ExportAttribute. To switch from the default T4 code generator to our custom one, we’ll simply go back into the Silverlight project file and remove the <RiaClientCodeGeneratorName> tag completely. Alternatively, we could specify the class name to the desired generator (RudeValidation.Web.T4.RudeValidationClientCodeGenerator), but since we only have 1 generator defined in our project it will be picked up by default. After removing that tag and reloading the Silverlight project, our build is now using the custom code generator. We still have the build error though, since we haven’t done anything to address the issue. At this point, we need to find the right hook in the code generation to override behavior. We can explore the virtual methods in CSharpClientCodeGenerator by typing “override” and letting IntelliSense lead us. In doing so, I found that there’s a virtual property for EntityGenerator—that sounds promising, so let’s override that. We can then derive from CSharpEntityGenerator and provide our own. This is where we are now: { } } } We’re still not customizing behavior, but we’re getting close. We now have our own EntityGenerator and our custom DomainServiceClientCodeGenerator is set up to use it. Let’s let IntelliSense guide us again. From inside the derived EntityGenerator class, I typed “override” and discovered GenerateAttributes is a virtual method that sounds promising. This method accepts a list of Attribute instances that have been instantiated based on what the server declares for every entity, and its job is to write out the client code to represent those attribute declarations. Using T4, this would be done in a TT file; in a C# class file, this can be done using this.Write/this.WriteLine. To be honest, I’m kind of stuck here though. The logic required to transform server-side attribute instances into client-side attribute declarations is nontrivial at best. For those seeking to take full control over the attribute declarations, we have given them exactly what they need, full control. But those those like me who just need to tweak the current behavior ever so slightly, the barrier to entry is pretty high from here. This is precisely the kind of feedback our team needs to hear. When you are trying to take over on code gen, where are you getting stuck? Technically, all of the hooks are in place for you to completely own the generated code, but what helpers can we provide to allow you to generate the code you desire? What, did you think I was going to give up? No way! I want my project to build using the T4 generator. Even if I don’t have the right tools to tweak this through a nifty API, I still own the code that is getting generated, so let’s fix it. In the end, each generator has a TransformText() method that returns the generated code. We can override this method and massage the output as much as we want. While the following code is by no means elegant, it gets the job done. { public override string TransformText() { return base.TransformText() .Replace("(GreaterThan", "(RudeValidation.Web.Validators.CompareOperator.GreaterThan") .Replace("(LessThan", "(RudeValidation.Web.Validators.CompareOperator.LessThan") .Replace("(Future", "(RudeValidation.Web.Validators.DateValidatorType.Future"); } } } } Yes, I just did that. I used 3 string replacements to fix a bug. And you know what, I love it. Why? Because there was a bug in the RIA Services T4 code generation and I was able to fix it with very little code in my own project. With this in place, my project now builds and works just as it did with the default CodeDom code generator. I call that a Win! I’m sure you’ll need to know, so I wanted to go ahead and call this out. We no longer have anything specified in our Silverlight project to indicate which code generator to use, but our custom generator is being picked up by convention. There are a couple of ways of reverting back to the default CodeDom generator: Notice that for this generator, only the class name is required and not the fully-qualified name. You’ll only need the fully-qualified name for our T4 CSharpClientCodeGenerator. However, there is another bug here: I tried having a line breaks between the tags and the generator name, and it failed to strip those out. So be sure to specify this on one line. If you are at all interested in controlling or modifying your generated code, please grab the new SP1 Beta and Toolkit release and flip the switch over to T4. Let us know what bugs you find and let us know what APIs you want to have for modifying the generated code. Any scenarios you can share with us for why and how you are seeking to modify the generated code will be greatly appreciated! You can report issues and open discussions on our forums: Thursday, October 28, 2010 2:44 AM This is an alternative to changing the project file.using Microsoft.ServiceModel.DomainServices.Tools;using System.Collections.Generic;namespace MyCodeGenerator { [DomainServiceClientCodeGenerator(typeof(MyT4CodeGenerator), "C#")] public partial class MyT4CodeGenerator : Microsoft.ServiceModel.DomainServices.Tools.TextTemplate.CSharpGenerators.CSharpClientCodeGenerator { }} Colin's idea is good here...As it turns out, it's easier to add an empty custom code generator (that derives from the CSharpClientCodeGenerator) than it is to change the project file to reference the CSharpClientCodeGenerator.You'll notice that if you have two custom code generators defined in your project, you'll be forced to edit the project file to specify which one should be used. A reason I'm looking to modify the default RIA service generator is because I want default parameters sent to the service. For example, you can specifypublic void foo (string bar = null)and based on whether you pass a parameter or not, you can filter on the server side before returning entities. However, when you look in the client generated code, the default parameter is lost. It would be nice if there was an attribute on the DomainService (or a change to EnableClientAccessAttribute) that would allow us to supply the name of the generator. I can see wanting to have different generators for different DomainServices. I agree with Colin. Especially since we are thinking about adding an interface to our services so we can unit test. One of the big reasons why there isn't an attribute on the DomainService is because the code generator is specific to the client consuming the DomainService.Imagine a couple of different projects that link to the same DomainService, but need their code generated in different ways. My only request is a EF POCO generator style .tt file that literally creates the entire experience as it should be. For instance, when I switched to use a custom generator it mangled what I expected the output to be versus what is there when you don't use it. It also doesn't have any weird project hooks to enable it, you just turn off generation in the EDMX and use the .tt.I found that I had to use the RiaClientCodeGeneratorName property no matter what I did so it may not be a bad idea to include some kind of sample that uses both techniques. Then again it could just be too early for me to really comprehend what I'm trying to do.Worst case even just a .tt template in the SDK folder to use as a good starting point would be enough. Hello Jeff,I think I found a bug with the new RiaServices December 2010 Toolkit. If I give the Silverlight project the same name as the Solution and unload it for edition, I can't reload it again.Vs2010 will give me the following error: "error : A project with that name is already opened in the solution." Hi Jeff.Is there any support for VB code generation with T4? As I see it now, the only way to do this is by creating your own vb code generation classes which inherit the base abstract classes in the 'Microsoft.ServiceModel.DomainServices.Tools.TextTemplate' namespace.Thank you! Hello Jeff.Could you please help with some indications as how to create a T4 Generator for VB.So far I've tested your approach on a C# custome code generator and it worked fine.As for VB, I've done some very simple steps and got stuck in an error.Basically what I have done is:1. Created an assembly which holds all code generation classes.2. Created Custom VBClientCodeGenerator class which inherits from ClientCodeGenerator and overriden function GenerateCode to return empty string and overriden all properties to return Custom VB Classes 3. For all properties I've implemented custom classes which inherit from the base classes and ovveride all functions to return empty string: for example the EntityGenerator property of my CustomVBClientCodeGenerator returns a CustomVBEntityGenerator which inherits from base EntityGenerator.4. I've decorated the CustomVBClientCodeGenerator class with the DomainServiceClientCodeGenerator attribute and set the type to the custom class type and language to "VB".5. I've referenced the above assembly containing custom classed from the server project.6. I've modified the silverlight project file with the RiaClientCodeGeneratorName tags you indicated providing the complete name, assembly, version, etc for the assembly containing the custom code generation class.When I try to build the silverlight project, an error occurs: "Microsoft.ServiceModel.DomainServices.Tools.TextTemplate.CSharpGenerators.CSharpClientCodeGenerator supports only C# code generation. Please use the default code generator for VB code generation."I don't know why this occurs as I do not reference the CSharpGenerators assembly nowhere in the solution and I do not use those classes.This is the same approach I've used in my CS example which worked perfectly.Please help, I am at a loss...Thanks. Can I control the generated class' namespace?I want the namespace of the generated User entity (thru the AuthenticationService @ Server) to be MyDemoApp.Services (just like in the server). I downloaded the latest Ria Services May toolkit (I think - I have no idea from the product what version thank you). Where is CSharpClientCodeGenerator?Did ya drop it? You guys really need to work at documenting changes, or anything for that matter. I've wasted a whole day on this. I got past the toolkit versioning issues. Still banging my head.I finally found DomainServiceClientCodeGeneratorAttribute and tried to use that. No luck!You said:Then you need to edit your Silverlight project file and add the following into the project properties:Project Properties? Ist there a ProjectProperties section? I dont' see it.Hello! Where! Where! Where! Where? I have created a T4 template to inject my own base class for all Entity classes. I have ran this on a test silverlight solution, and it worked like a charm. However, when I move the same process to my production solution, it no longer works. Now there is a difference between the two solutions. My wcf ria services link links to a normal class library that houses my domain services. The .web application still exists, but it just references this class library as well. This allows us to move the actual real work around different solutions so that we can have different install bases that are pertinent to our clients. Is there some entry in the .web project that allows my code generator to be mef'ed that my normal class library will not? Both my .web and service project references my T4 project. I am utilizing Silverlight 4, with the April 2011 toolkit. jeff - how do we debug our custom CSharpClientCodeGenerator?I was assuming that I could just attach to the devenv.exe doing the compile with a 2nd instance.Thanks ignore last comment. breakpoint was in the wrong visual studio instance, and I assumed <RiaClientCodeGeneratorName> was still active in the properties of the Silverlight project file. Hello Jeff, do you have a generator that generates methods that return Task<> ? @PauloNo, sorry. so i used this to have all my genereted proxy entities derive from my custom base calls which it self derives from the entity class. Everything complies but at runtime i get errors because the EntityCollection<TEntity> will not load for some reason. I don't want to go as far as creating my own EntityCollection<MyCustomBase>. plus i am not even sure if that will work at all. by main reason for considering this was to have all my entities derive from my custom base class so i can put all my custom logic in there as opposed to repeating those properties in every single entity class. Can any of you smart folks think of an easier way to accomplish
http://jeffhandley.com/archive/2010/10/28/RiaServicesT4WalkUp.aspx
CC-MAIN-2017-13
refinedweb
2,553
55.44
Yeti recently launched an app on a Friday night, in conjunction with Chelsea Handler’s new Netflix documentary, Chelsea Does: Silicon Valley. The team awoke to a crisis Saturday morning, due to an immediate spike in traffic on the app’s SMS and phone call functionality (i.e., Twilio, Nexmo, Plivo). In this talk Rudy will discuss the cause of the problem, how his team fixed it, and what they could have done differently, sharing some Swift code and the lessons learned along the way. Introduction and Backstory (0:20) My name’s Rudy, co-founder and head of tech at Yeti, a local product design and development firm, in San Francisco. I’m also the organizer of the San Francisco Django Meetup group. So I’ve done a lot of Python and Django work, and I’ve had a lot of experience building native iPhone apps, Android apps, and back-end work. I’ll also go into how when I woke up at seven in the morning and things in our production app were broken, what we tried to do to fix what was wrong, why that didn’t work, and multiple attempts we made to get it back up and running that weekend. About a year ago, someone reached out to us and asked us to be part of Chelsea Handler’s documentary series where she works with an actual development agency to build an app. We taped it a year ago, and it’s on Netflix now. The process was half faked, half real. The idea behind the app is that you’re in an uncomfortable situation such as a meeting you don’t want to be in, or a bad date. With the app, you can schedule the app to send you a real phone call or text message at some point in time in the future, you will get the phone call or text messages, and it’ll be an excuse to get out of the situation. It’s similar to when you ask your friend to text you messages to get you out of situations. The app has a list of excuses, you pick an emoji to represent your excuse. You set up a contact for it to come from, and the messages or calls will look like it’s coming from someone with a real picture for that contact. You schedule how long it should come in the future, and then you can kind of like craft a little bit of a story, so you can add a phone call, you can add text messages Jan 23: Launch Morning (6:14) I woke up that morning, and I realized that things were not going well with our app that was launched. All the text messages were being undelivered. The onboarding process involved verification, and only a handful of people had verified accounts. Get more development news like this For our telephony service, we used Plivo. In the onboarding process, the iPhone app sends to the server the number that you put in, the server then pings our telephony service, and it creates a unique code that expires after a certain amount of time. Plivo then sends the code in a text message to that user’s number. Viper (9:40) Gotta Go (the name of the app) is built using like the VIPER architecture, it’s an acronym that stands for getting as much code out of the view controller. So the idea is View, Interactor, Presenter, Entity, and Router. The Interactor only handles API requests. The presenter is about passing and sanitizing data to the view controller to show. And then we have a list of our view controllers. Gotta Go’s a fairly small app, there’s maybe only really three of these modules and that’s kind of how we organize our code. If we make a verification request object, we set what the code is, the user input it in their phone, RestKit will handle sending that to the API for us and turning it into JSON. Adding Contacts (12:06) We built Gotta Go before iOS came out with a new contacts framework. The contact code we had to write is a bit rough. We wanted the app to actually save the phone numbers that we’re sending the text messages and phone calls from. We also want to allow you to edit the contact’s name and the photo all from here. Suppose, we’ve already made the contact. Finding the contact can also be a real problem. As a solution we just deleted the existing contact and then made a new one. It has the same effect, and the user would not know. Another interesting thing is we also launched the support for the US and Canada, so depending on the user and where they’re from, basically using some combination of their phone number and area code to figure out where they are. Canada has the same format numbers as US, luckily that made it easier to implement. So we have to make sure we’re saving the right numbers, that’s just part of a little bit of a complication. Store numbers appropriate to user’s country let country = AppModel.sharedModel.getCountry() let americanNumbers: ContactNumbers = ContactNumbers(voiceNumber: "(310) 269-5471", textNumber: "792-273") let canadianNumbers: ContactNumbers = ContactNumbers(voiceNumber: "(604) 425-1155", textNumber: "(604) 425-1155") Find contact with a matching number static func findContact(number: String) -> ABRecordRef? { if Contact.hasAddressBookAccess() { var error: Unmananged<CFError>? // create address book instance let addressBook: ABAddressBookRef = ABAddressBookCreateWithOptions(nil, &error).takeUnretainedValue() // get all contacts let contacts = ABAddressBookCopyArrayOfAllPeople(addressBook).takeRetainedValue() as Array for record in contacts { let currentContact: ABRecordRef = record let numbers: ABMultiValueRef = ABRecordCopyValue(currentContact, kABPersonPhoneProperty).takeUnretainedValue() // loop through contact's numbers for (var j = 0; j < ABMultiValueGetCount(numbers); j++) { let phoneNumber = ABMultiValueCopyValueAtIndex(numbers, j).takeUnretainedValue() as! String // if the phone number matches the first contact number, we've got a contact match. if phoneNumber == number { return currentContact } } } } return nil } The idea is we need to check to see if this contact already exists in your address book before we go and create a new one. You get the address book, we look through all the records in there, and we try to find the contact that has one of the phone numbers that we’re saving. This could easily break, if for some reason the user went to their contact book and manually edited the contact we had put in there. That’s kind of just something we had to accept. But basically once you find the record, you loop through all of their phone numbers and try to find one of the phone numbers that we’re trying to save. Set name, image, and numbers on new contact func createContact() -> Bool { let newContact: ABRecordRef! = ABPersonCreate().takeRetainedValue() var error: Unmanaged<CFErrorRef>? let firstNameSuccess = ABRecordSetValue(newContact, kABPersonFirstNameProperty, self.firstName, & error) let lastNameSuccess = ABRecordSetValue(newContact, kABPersonLastNameProperty, self.lastName, &error) if let image = image { let pngImage = UIImagePNGRepresentation(image)! let cfDataRef = CFDataCreate(nil, UnsafePointer(pngImage.bytes), pngImage.length) ABPersonSetImageData(newContact, cfDataRef, &error) } let multiStringProperty = ABPropertyType(kABMultiStringPropertyType) let phoneNumbers: ABMutableMultiValue = ABMultiValueCreateMutable(multiStringProperty).takeUnretainedValue() ABMultiValueAddValueAndLabel(phoneNumbers, mainPhoneNumber, kABPersonPhoneMainLabel, nil) let mainNumberSuccess = ABRecordSetValue(newContact, kABPersonPhoneProperty, phoneNumbers, &error) ABMultiValueAddValueAndLabel(phoneNumbers, mobilePhoneNumber, kABPersonPhoneMobileLabel, nil) let mobileNumberSuccess = ABRecordSetValue(newContact, kABPersonPhoneProperty, phoneNumbers, &error) return saveContactToPhone(newContact) } To create a new contact, you have to create a new person record, input their first name, their last name, the image the user uploaded, and then add these ABMultiValue, add value and label. For each number. So we save like one number as their main number, and we save another number as their cell number. And then save contact to phone, this gets that address book object again and then just saves the record to the address book. Saving & Creating Excuses (15:27) We allow users to add one phone call and up to seven text messages. We store all the excuses locally in Core Data. func getExcuses() -> [Excuse] { let fetchRequest = NSFetchRequest(entityName: "Excuse") let createdSortDescriptor = NSSortDescriptor(key: "createdAt", ascending: false) fetchRequest.sortDescriptors = [createdSortDescriptor] return (try! managedObjectContext!.executeFetchRequest(fetchReqeuest)) as! [Excuse] } We an NSFetch request to basically ask Core Data, “Do you have any excuses stored in there?” We grab them, and we have that list, and we just show that list on that initial scroll view that shows all the emojis and the different colors. newExcuse = Excuse.createInManagedObjectContext(managedObjectContext!, emoji: " ", color: Constants.excuseCellColors[5], delayTimeMinutes: 0) let lockedOutMessages: [String?] = [ "I'm locked out.", "Think I may have triggered the silent alert. Can you get here right away?", "Last time I did this the cops showed up. Please Hurry!" ] addMessagesToExcuse(lockedOutMessages, excuse: newExcuse) When you first open the app for the first time, if we realize you don’t have any excuses or like this is the first time, you’ve installed and opened the app, we set up three default excuses. Fixing the Fires Part 1 (21:24) The reason why the text messages weren’t going through was because they were flagged as spam. My first idea was to buy more numbers, I quickly wrote some Python code that when it sends the text message out, it would just round-robin between the numbers. The problem with this now is that the text message is coming from a number that we didn’t save in the user’s contact. So it is working, and the text messages are getting sent, it’s okay for the verification codes because it doesn’t matter like what number the verification codes come from. Fixing the Fires Part 2 (23:24) We got in touch with someone, and concluded what we needed was to buy a shortcode - one that’s five or six digits. Shortcodes are made for applications that send a high traffic of text messages. Throughout the implementation, no one once suggested that we needed a shortcode for the application to work. We had implemented a new shortcode, and we started seeing users being verified, which was awesome. But then all of their text, their excuses were now coming from the shortcode and not the number saved in their contacts. Fixing the Fires Part 3 (24:34) Our iOS app has a hardcoded list of numbers, which not a great solution. You can submit builds to Apple and you request for them to expedite your review process, which is what we did. What Did We Do Wrong? (25:10) The contact numbers probably should of been dynamic. What other people normally do is they do round-robin, with hundreds or thousands of numbers, that gets round-robined. There’s a threshold if you start sending hundreds of text messages within like a half an hour, you will get banned. And so it’s not that Plivo banned us, it’s that the carriers banned us - Verizon, AT&T, T-Mobile, and they’re the ones that actually marked our text messages as spam. What Did We Do Right? (26:14) What did we do right? We had all the monitor tools set up, so that I knew that things were breaking. We used the following: - Sentry, an error monitoring tool. - New Relic, a server monitoring tool. - Fabric, analytics and crash reporting. Q&A (27:58) **Q: Why did you use Core Data? ** We have an internal tool which automatically generates the Core Data models. So the idea is we actually auto generate all of our API code via a Python script we wrote, so it looks at our spec, our API spec, it actually auto generates all the RestKit, and all the API code, and it like auto generates the Core Data models and everything. **Q: Can you respond to a text message? ** So yeah, users totally do it. We never instruct them to. **Q: What was the rate limit for these phone numbers? ** I don’t think there’s any hard and fast limit, and different carriers would ban them at different times is what we’ve experienced. About the content This content has been published here with the express permission of the author.
https://academy.realm.io/posts/slug-rudy-mutter-putting-out-swift-fires/
CC-MAIN-2018-22
refinedweb
2,011
53.71
Talk. To build a messaging application we need to create a queue first, have one application send messages to it and have another application receive those messages from it. So, in this article I shall focus mainly on these operations. Before doing that I would like to mention about the basic types of MSMQ queues: Private and Public. Public queues are those that are published in the active directory. So, applications running on different servers throughout the network can find and use public queues through the active directory. Private queues on the other hand are available locally on a particular machine and are not published in the active directory. The System.Messaging namespace provides a set of classes which can be used to work with MSMQ. In this article we will be mainly focusing on Message and MessageQueue classes. The MessageQueue class provides all the necessary functionality to work with and manipulate MSMQ queues. It is like a wrapper around message queuing. The Message class provides everything required to define and use an MSMQ message. System.Messaging MessageQueue MessageQueue Queues can be created either programmatically or through the computer management snap-in. I presume MSMQ is installed on your system. You can use the Create shared method of the MessageQueue class to create a queue. In the code snippet given below I am creating a private queue named MyNewQueue. Create Try Dim queueAsMessageQueue queue = MessageQueue.Create(".\Private$\MyNewQueue") ' If there is an error creating a queue you get a MessageQueueException exception Catch ex As MessageQueueException End Try Note the parameter I have passed to the the Create method. It is called the path of the queue. The single dot (.) in the path indicates the queue is created on the local machine. Since we are creating a private queue here, we need to include a Private$ in the path. If we are creating or accessing a public queue we can just use the MachineName\QueueName format in the path. We use the Send method of the MessageQueue object to post a message to the queue. Send MyQueue.Send("<<Message>>", "<<Message Label>>") One thing I would like to mention here is that the Send method takes an object as the first parameter to denote the message body. This can be a Message object or any managed object. If the object is not of type Message, then the object is serialized and stored in the message body. There are two types of operations with respect to reading a message fom the Queue: Peeking and Receiving. When we receive a message from the queue, the message is removed from the queue. Peeking is similar to Receiving but here the message is not removed from the queue. Dim msg As Message msg = MyQueue.Receive() MessageBox.Show(msg.Body) There are many overloads of Receive and the one used above indefinitely blocks the caller till a message is available on the queue. On the other hand if you do not want to get blocked indefinitely then you can use the overload of Receive which takes a TimeSpan argument and this throws an exception if a message is not received within that time span. Here is a call to receive which times out after 1000 ticks (1 second). TimeSpan msg = MyQueue.Receive(New TimeSpan(1000)) We have seen that Peek and Receive operations are synchronous in nature. However there exists a method where we can do these operations asynchronously. For this, we can use the BeginPeek and EndPeek or BeginReceive or EndReceive respectively. BeginPeek EndPeek BeginReceive EndReceive We can use the Delete shared method of the MessageQueue object to delete a queue from the system. MyQueue.Delete(".\Private$\MyNewQueue") You can also delete a queue from the computer management console. Right click on the queue you wish to delete and select Delete from the context menu. Enlisted below are some other simple operations on a queue which you might be interested in. You might have noticed that we have used a lot of the MessageQueue objects in the samples. VS.NET makes development easier by providing objects in the toolbox which we can drag and drop to the designer surface. We can use the property sheet to initialize properties and then use the object straight away in our code. Included with this article is a VB.NET windows application project which has all the simple operations mentioned above implemented. Here's how it looks. Towards the left of the screen in a treeview control enlisting all the private queues in the system. Messages are also listed under the respective queue nodes in the tree. The operations possible with this app are: Please excuse me if you find bugs in the project In this article and in the VB.NET project that I have submitted, I have used only Private Queues. Also, I have not used transactional messaging. Please note the working with Public queues is almost identical but for some small changes in Path semantics. So, everything I have explained with respect to private queues will definitely hold good for public queues as well. I will be doing an injustice to MSMQ if I finish off everything in a single article. In this article we saw the very basics of MSMQ. In the forthcoming article we shall see some more advanced concepts like Transactional messaging, serializing and deserializing messages, asynchronous operations.
http://www.codeproject.com/Articles/3944/Programming-MSMQ-in-NET-Part
CC-MAIN-2016-30
refinedweb
894
64.41
Created on 2012-05-25 00:27 by eric.smith, last changed 2015-08-05 16:07 by eric.snow. If a zip file contains "pkg/foo.py" but no "pkg/" entry, it will not be possible for "pkg" to be a namespace package portion. For a (very) brief discussion on the strategy to implement this, see: See also test_namespace_pkgs.py ZipWithMissingDirectory.test_missing_directory which is currently marked as expectedFailure. Here is a patch that synthesises the directory names at the point where file names are read in. The unit test now passes, and has had the expected failure removed. Patch collaboration with Diarmuid Bourke <diarmuidbourke@gmail.com> at the europython sprint. Please see attached new patch, based on review comments. This can significant slowdown zipimport. I think we shouldn't support such broken zip files in zipimport. How common are such broken zip files? Like Serhiy, I'm concerned about the possible negative impact on the interpreter startup time as we try to second guess the contents of the zip file manifest. It seems better to be explicit that we consider such zipfiles broken and they need to be regenerated with full manifests (perhaps providing a script in Tools that fixes them). OTOH, the scan time should be short relative to the time needed to read the manifest in the first place - an appropriate microbenchmark may also be adequate to address my concerns. I don't think such files are common: I've never seen such a file "in the wild". I created one, by accident, while testing PEP 420. OTOH, it was surprisingly easy to create the malformed file with zipfile. Why are zipfiles without entries for directories broken? When you don't care about directory permissions (such as when the zipfile won't be extracted at all) the entries for directories are not necessary. Also, AFAIK the zipfile specification <> does not require adding directory entries to the zipfile. FWIW: the zipfiles created by py2app do no contain entries for directories at the moment. I'll probably add entries for directories in the next update to work around this issue. Just a note: the zip files produced by the distutils and friends (sdist, bdist_dumb, eggs) do not include entries for plain directories. I would guess that this is also true for wheels at the moment, unless something was specifically done to work around this property of distutils-generated zip files. So ISTM the right thing to do is to synthesize the entries at directory read time, when they're being looped over anyway. Reviewing the patch, there is a performance optimization possible by making a slight change to the algorithm. Currently the patch loops from the start of the string to the end, looking for path prefixes. This means that the total overall performance is determined by the length of the strings and especially the average directory depth. However, there is a significant shortcut possible: looping from the *end* of each string to the beginning, it's possible to break out of the loop if the prefix has already been seen -- thus saving (depth-1) dictionary lookups in the average case, and only looking at the characters in the base filename, unless a new directory is encountered... for a typical overhead of one unicode substring, dictionary lookup, and strrchr per zipfile directory entry. (Which is very small compared to what else is going on at that point in the process.) To elaborate, if you have paths of the form: x/y/a x/y/b x/y/c/d Then when processing 'x/y/a', you would first process x/y -- it's not in the dict, add it. Then x -- not in the dict, add it. Then you go to x/y/b, your first parent is x/y again -- but since it's in the dict you skip it, and don't even bother with the x. Next you see x/y/c, which is not in the dict, so you add it, then x/y, which is, so you break out of the loop for that item. Basically, about all that would change would be the for() loop starting at the end of the string and going to the beginning, with the loop position still representing the end of the prefix to be extracted. And the PyDict_Contains check would result in a break rather than a continue. So, if the only concern keeping the patch from being accepted is that it adds to startup time, this approach would cut down quite a bit on the overhead for generating the path information, in cases of repeated prefixes. (And in the common cases for zipfile use on sys.path, one would expect to see a lot of common prefixes, if only for package names.) The problem appears to be more general. zipimport fails for deeper hierarchies, even with directory entries. With the supplied patch (zipimport-issue14905-2.patch) I see the following: $ unzip -l foo.zip Archive: foo.zip Length Date Time Name --------- ---------- ----- ---- 0 2013-04-03 17:28 a/b/c/foo.py 0 2013-04-03 17:34 a/ 0 2013-04-03 17:34 a/b/ 0 2013-04-03 17:34 a/b/c/ --------- ------- 0 4 files $ ls foo.zip $ PYTHONPATH=foo.zip ~/dev/cpython/python Python 3.4.0a0 (default:3b1dbe7a2aa0+, Apr 3 2013, 17:31:54) [GCC 4.8.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import a >>> import a.b Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named 'a.b' >>> I've raised issue17633 to track the issue in my last message.
http://bugs.python.org/issue14905
CC-MAIN-2016-07
refinedweb
943
64.41
From Documentation Introduction The ability to print out a web page is a common business requirement especially if you are in the finance or banking industry where you may need to document some important data in papers. The most common approach is to design two views: one for online browsing and one for printing. However, it is a time consuming job to design two layouts to suit both purposes. This smalltalk will introduce an easy way to print out selected area of a ZK page, without having to design two views. Print a ZK page Thanks to the advancements of modern browsers, if you wish to print out the current view of a webpage, you can simply press "Ctrl + P" and the browser will print the current page for you. Alternatively you can use the Clients.print() API provided by ZK. Sample It is easy to use theClients.print() API, here is the simplest sample: <zk> <window title="Print Whole Page" border="normal"> <button label="Print" onClick="Clients.print()" /> <grid> <columns> <column label="Column 1" /> <column label="Column 2" /> </columns> <rows> <row forEach="1,2,3,4,5"> <label value="First Name" /> <label value="Last Name" /> </row> </rows> </grid> </window> </zk> Print Selected Area However, in many cases you do not care about the headers, footers or the sidebar and you wish to only print out the data you are interested in. Here under we will introduce how we could achieve this easily in ZK. Concept We first create a snapshot of the html fragment of the component you want to print. Then we post and insert this fragment into a template zul file, rendered in a hidden iframe. One key point here is to use a hidden iframe -- it is common to use javascript function window.open() to open another browser tab or window and then pass the html content to the newly opened window to print. However, it is not a very user-friendly design because some browsers may block the pop-up window. Thus, here we implement the print utility by creating a hidden iframe to avoid opening another browser window. Implementation Steps First, we create a template.zul page with Html component to store the content we want to print. <zk> <style src="${param.printStyle}" media="print" /> <html content="${param.printContent}" /> </zk> - Line 2: Used to load the print style. - Line 3: Html component to print. Second, we need to create a utility class PrintUtil.java to call javascript print function. public class PrintUtil { public static void print(Component comp) { print(comp, "template.zul", null); } public static void print(Component comp, String cssuri) { print(comp, "template.zul", cssuri); } public static void print(Component comp, String uri, String cssuri) { String script = "zk.print('" + comp.getUuid() + "', '" + uri + "'"; if (cssuri != null) { script += ", '" + cssuri + "');"; } else { script += ");"; } Clients.evalJavaScript(script); } } - Line 11: The javascript print function to call. Third, create print.js file to define zk.print function and create a hidden iframe to avoid opening a new browser window. zk.print = function(uuid, uri, cssuri) { if (uuid && uri) { var wgt = zk.Widget.$(uuid), body = document.body, ifr = jq('#zk_printframe'); if (!ifr[0]) { jq(body).append('<iframe id="zk_printframe" name="zk_printframe"' + ' style="width:0;height:0;border:0;position:fixed;"'+ '></iframe>'); ifr = jq('#zk_printframe'); } // wait form submit response then call print function // reference: ifr.unbind('load.ajaxsubmit').bind('load.ajaxsubmit', function() { var iw = ifr[0].contentWindow || ifr[0]; iw.document.body.focus(); iw.print(); }); jq(body).append('<form id="zk_printform" action="' + uri + '" method="post" target="zk_printframe"></form>'); var form = jq('#zk_printform'), content = '<div style="width: ' + wgt.$n().offsetWidth + 'px">' + jq(wgt.$n())[0].outerHTML + '</div>'; form.append(jq('<input/>').attr({name: 'printContent', value: content})); if (cssuri) { form.append(jq('<input/>').attr({name: 'printStyle', value: cssuri})); } form.submit().remove(); } else { window.print(); } } - Line 6: Create hidden iframe. - Line 14: Execute window.print() function once iframe finishes loading. - Line 20: Create a hidden form and set target to the hidden iframe. - Line 27: Submit the form then remove. Finally, we can modify the sample in previous section to print only the grid component we care. <zk> <window title="Print Whole Page" border="normal"> <button label="Print" onClick="org.zkoss.addon.print.PrintUtil.print(grid)" /> <grid id="grid"> <columns> <column label="Column 1" /> <column label="Column 2" /> </columns> <rows> <row forEach="1,2,3,4,5"> <label value="First Name" /> <label value="Last Name" /> </row> </rows> </grid> </window> </zk> - Line 3, 4: Use printing utility to print grid component. Advanced Usage After going through the steps above we can easily implement an utility to print out only the desired components/data of a ZK page. Now, to go a step further, you may wish to add some extra elements to the desired data. For example you may wish to add your company letter head or company logo, or a signature field so that the printed page can be submitted for approval. Or, you may wish to tweak the font size or color to make the printed copy more readable. Here under we will use two examples to demonstrate how these could be done. Modify the Template Page Take the following web page as the first example: We wish to print only the center part highlighted in red, with extra information -- report header and footer. In the previous section, the template.zul page is as simple as possible to demonstrate the concept and usage. By customizing this zul file we can easily add desired report header and footer or any elements we want to include in the printed view. For example, below is the new template file content named newTemplate.zul where we have placed the logo, title and other desired fields. <zk xmlns: <style src="${param.printStyle}" media="print" /> <div sclass="printHeader"> <n:div Company Logo </n:div> <n:div Ratio Analysis </n:div> <n:div Report Date: 2014/12/31 </n:div> </div> <html content="${param.printContent}" /> <div sclass="printFooter"> <n:div Signature: </n:div> </div> </zk> - Line 3: Add custom report header with company logo, report title and report date. - Line 15: Add custom report footer with signature. Then use the print utility class as follows: PrintUtil.print(comp, "print/newTemplate.zul", null); The resulting layout is demonstrated in section 4.3. Modify the Report Look and Feel It is also possible to modify the look and feel for the targeting content to make it more readable when being printed. Take the following web page as an example : It looks quite nice and clear in the browser but when printed out, especially using a monochrome printer, some of the white and gray colored text might be hard to read. To make the text stand out better and make it more readable on the papers we may wish to enlarge the text and specify a proper text and background color. This can be done by providing a custom print style with the following steps. First, we find out the CSS styles we use for the web page. For ease of demonstration I am only extracting some styles here: .mortgage-category { font-family: Arial,Sans-serif; font-size: 14px; color: #FFFFFF; padding: 4px 5px 3px; line-height: 24px; background-color: #F39C12; border-bottom: 1px solid #F39C12; } .mortgage-item-cell { font-family: Arial,Sans-serif; font-size: 12px; color: #636363; padding: 4px 5px 3px; line-height: 24px; overflow: hidden; border-bottom: 1px solid #F39C12; } - Line 3, 4, 12, 13: the font size and color for viewing on website. - Line 7, 8, 17: the used color for viewing on website. Then, we copy the styles above to a new file called print.css as a basis, and modify as needed for printing. .mortgage-category { font-family: Arial,Sans-serif; font-size: 18px; color: #000000; padding: 4px 5px 3px; line-height: 24px; background-color: #DDDDDD; border-bottom: 1px solid #DDDDDD; } .mortgage-item-cell { font-family: Arial,Sans-serif; font-size: 16px; color: #000000; padding: 4px 5px 3px; line-height: 24px; overflow: hidden; border-bottom: 1px solid #DDDDDD; } - Line 3, 4, 12, 13: the font size and color for printing. - Line 7, 8, 17: the used color for printing. Finally, we use the print utility as follows: //use absolute path if zul page and css file are in different folders PrintUtil.print(comp, "print/newTemplate.zul", "/css/print.css"); The resulting style is demonstrated in the 2nd half of the Demo video in section 4.3 below. Result Demo The video below demonstrates the results of the two advanced usages described above. For ease of demonstration here we use a PDF printer so the resulting screen is a PDF file, but you can definitely specify a real printer to print out the desired results on papers. Summary With the printing utility explained in the article, you can print the desired sections in a ZK page with only little effort -- you can even include custom headers & footers or change the style easily for better readability. For your convenience we have wrapped this utility as a ready-to-use jar file. Refer to download section to download the jar file and put it in your project's WEB-INF/lib folder. Download
https://www.zkoss.org/wiki/Small_Talks/2014/December/Printing_In_ZK
CC-MAIN-2017-26
refinedweb
1,522
56.76
Starting out JAVA in MAX – send message directly to object I have just installed eclipse and got it running with MAX:) I have used Javascript in the past but need something that performs better so this is why I am starting to use JAVA. I used to access objects directly from javascript using the following syntax: objline1 = this.patcher.getnamed("sn_line1"); objline1.message(0,10); Is it possible to send a message directly to an object using JAVA? Hi. No, this isn’t possible from Java, you need to send out of an outlet and route appropriately… Ok. Thanks for the advise. Do you happen to know of any good tutorials apart from the "Writing Max Externals in Java" pdf? Also when sending to an outlet can I please ask you to advise how to format the message. I am sending the string below to a play object but it did not understand it. import com.cycling74.max.MaxObject; public class PlayFragment extends MaxObject { public void bang() { String str ="start 100 2000 1900"; post(str); outlet(0,str); } } Yo KMLL, You need to parse the values you want to output as "Atoms" ( the generic datatype that Max uses ), build an Atom[] array from those values, then output the array. This is how I go about it: String someString = "allTheThings"; float someFloat = 0.4f; Atom[] outputAtom = new Atom[] { Atom.newAtom ( someString ), Atom.newAtom ( someFloat ) }; outlet ( 0, outputAtom ); Chris. yup, that. As for additional tutorials, don’t know of any of hand, but didn’t look that much. found the pdf to be informative enough. i’m guessing if you hunt around for emamples of other people’s code that may help you. Thank you both for the input:) I studied the API in more detail yesterday and it seems that it is actually possible to send a message direcly to an object. Here is an example: MaxPatcher p = this.getParentPatcher(); MaxBox mb = p.getNamedBox("sn_play1");//The script name needs to be set in the inspector Atom[] outputAtom = new Atom[] { Atom.newAtom (200), Atom.newAtom (500),Atom.newAtom (300)}; mb.send("start", outputAtom); btw, if you look in the java/classes directory under your MAX install, there are a number of java files in there that do basic things KMLL, you wonderful person, you. I’ve seen this question a few times and the response was always "it can’t be done". This will save me from having to route all over the place. Well done and many thanks! Chris. Yes, it’s possible to send messages between Java objects without using patch cords. Doing so will make it well-nigh impossible to debug your patches unless you document the holy shit out of what you’re doing. Which you won’t. Sending messages through patch cords means you have to draw the patch cords, but then you can see the connections in your patch. With invisible connections: you make one change, nothing works, and you can spend many long and tedious hours trying to figure out what’s gone wrong. If you want your code to be reusable, if you’re working on a project that’s supposed to last and develop over more than a week, then stay away from this trick. But if your time is worth nothing at all, go ahead and be it on your head.-) Thanks for the input on this thread you all. Glad I could help Chris:) Guess Peter has a good point but it is a matter of coding style I guess. Also I have to warn any readers that I found the API to be inadequate when interacting with certain objects e.g the line object but I have created a seperate thread about this issue. – Thanks FYI Also I have to warn any readers that I found the API to be inadequate when interacting with certain objects e.g the line object there’s nothing wrong with the API in this regard. you just have to know that a list, e.g. 20 200, is a message starting with the (mostly hidden) message header list. i.e. 20 200 and list 20 200 is basically the same. so in java you can use the send(java.lang.String message, Atom[] args) method to send a list to a max object, like this: int args[] = {20, 2000}; MaxPatcher p = this.getParentPatcher(); MaxBox mb = p.getNamedBox("sn_line1"); mb.send("list", Atom.newAtom(args)); hope that clears it up. As Volker said, you can use send(java.lang.String message, Atom[] args). There is also MaxSystem.sendMessageToBoundObject(java.lang.String sym_name, java.lang.String msg, Atom[] args) As mentioned in your other thread about sending messages to line, it may be easier to use the overloaded methods for java primitive types than using the atom versions–performance is better too. Yes, it’s possible to send messages between Java objects without using patch cords. Doing so will make it well-nigh impossible to debug your patches unless you document the holy shit out of what you’re doing. Which you won’t. I totally agree with Peter, but if you do document everything well, you can do many quite powerful things messaging between Java objects. Using the two methods mentioned above maybe have some performance and threading issues vs some of the methods available for using your MXJ’s outlets. You can even build stuff that doesn’t use the Max API but passes around references to Java objects if you really want to get into crazy town. int args[] = {20, 2000}; MaxPatcher p = this.getParentPatcher(); MaxBox mb = p.getNamedBox("sn_line1"); mb.send("list", Atom.newAtom(args)); I get the following error from the code above…. "cannot find symbol symbol : method send(java.lang.String,com.cycling74.max.Atom) location: class com.cycling74.max.MaxBox mb.send("list", Atom.newAtom("test"));" This error is a bit weird as it seemed to come and go, now it is always there. I was also wondering if it’s possible to name the MaxPatcher something other than p? Like so… MaxPatcher mypatch = this.getParentPatcher(); MaxBox pattr = mypatch.getNamedBox("pattr_object"); pattr.send("dump");
http://cycling74.com/forums/topic/starting-out-java-in-max-send-message-directly-to-object/
CC-MAIN-2014-42
refinedweb
1,025
74.19
Aug 02, 2010 11:35 AM|zoggling|LINK I have an upload control on my ASP page, to allow users to upload photos to a folder in the web site directory. After uploading, I would like the site to automatically generate thumbnails from these images (so that the thumbnails can be displayed in a gallery page, rather than the high-quality images themselves). Can anyone provide some code/references that will do this? My requirements are: - Needs to be a part of my web application (no Windows.Forms solutions please!) - Generate one thumbnail per uploaded image - Thumbnail must have same ratio as original - Source files will be .bmp, .jpg and possibly .tif and .png - Potentially a means to adjust resolution/size of generated thumbnails - Potentially run in the background, so that the user can continue to navigate through the website and/or upload further images. My page has VB.NET code-behind. Many thanks in advance! generate thumbnail files uploaded images Star 8798 Points Aug 03, 2010 12:38 AM|stanly|LINK Member 375 Points Aug 03, 2010 01:38 AM|dotNetViper|LINK Also see the link below. Participant 1241 Points Aug 03, 2010 01:47 AM|sunilyadav165|LINK Hi, I have created something similar look at the link below. hope it helps. Aug 03, 2010 02:41 AM|Vipindas|LINK using System.Drawing; using System.Drawing.Drawing2D; public(outputPath, image.RawFormat); thumbnailGraph.Dispose(); thumbnailBitmap.Dispose(); image.Dispose(); } then call resize image function as ResizeImage(400, File1.FileContent, path); 5 replies Last post Aug 03, 2010 02:58 PM by zoggling
https://forums.asp.net/t/1585508.aspx
CC-MAIN-2019-39
refinedweb
261
58.58
(I'm from C background and new in C++ and its STLs) I'm writing a C++ array of vectors that will be passed (as a reference of an array of vectors) through a function and will be processed in it. In this case [in C] I would have passed a pointer to my custom data type (call by value under the hood.) My code that's giving errors in compile time while trying to do so: #include <cstdio> #include <vector> using namespace std; /* the problem is I can't get the syntax. vector<type> &var is a reference to a single dimension array of vectors. */ void pass_arrayOf_vect(vector<int> &array, int lmt); int main() { int lmt = 10; vector<int> lst[lmt]; pass_arrayOf_vect(lst, lmt); return 0; } /* and the traditional ambiguity of whether using "." or "->" for accessing or modifying indexes and their members. */ void pass_arrayOf_vect(vector<int> &lst, int lmt) { for (int i = 0; i < lmt; i++) { lst[i].push_back(i*i); } for (int i = 0; i < lmt; i++) { printf("array[%d]: ", i); for (int j = 0; j < lst[i].size(); j++) { printf("%d ",lst[i][j]); } printf("\n"); } printf("\n"); return; } In the main function the lst variable is an array of vectors. When you pass this to the pass_arrayOf_vect function you pass a pointer to the first element. I.e. when you do pass_arrayOf_vect(lst, lmt); it's actually the same as doing pass_arrayOf_vect(&lst[0], lmt); So the function you call needs to accept a pointer to a vector as its first argument (not a reference): void pass_arrayOf_vect(vector<int> *array, int lmt); // ^ // Note use of asterisk instead of ampersand An even better solution would be to use an std::array of vectors instead. Or if you're on an older compiler without support for std::array, or need the amount to be run-time configurable (in which case you can't use plain C-style arrays anyway), use a vector of vectors.
https://codedump.io/share/HJlw8w9W6Ofj/1/passing-and-modifying-array-of-vectors-through-functions-in-c
CC-MAIN-2017-47
refinedweb
326
64.04
This version of the Rebel Pack, 0.2.2, works with Bifrost 2.0.2.0 and later versions. New in 0.2.2 What is the Rebel Pack? The product team of Bifrost often make compounds for our own use, and these sometimes make it into the compounds Bifrost ships with. Quite a few of the compounds that ship with Bifrost today started here. These compounds are distributed together in a "pack", where some compounds use and rely on other compounds in a pack. The compounds included include math utilities, property access nodes, and nodes for manipulating strands, meshes, as well as some simulation caching utilities. In the graph these are all marked with a orange "R" icon as their names and namespaces are integrated into the existing ones. Ones with an orange test tube are especially experimental - we expect them to change. These are not officially supported like the compounds that ship with Bifrost, but we have been maintaining these and intend to continue to do so. We will also specify which versions of Bifrost these nodes are intended to work with. This is also a way for us, the product team of Bifrost, to act as if we're TDs at a studio maintaining our own set of compounds, which makes our testing a bit more like the real world. For information on how to install compounds and graphs, please see the documentation:
https://area.autodesk.com/downloads/rebel-pack-022/
CC-MAIN-2019-43
refinedweb
236
72.97
Hi, It was possible on N95/N95 8Gb using Ext plugin for 3rd edition FP1 ver 2.5. AFAIK it doesn't work on later releases. Check mplayerremotecontrol.h from plugin. You should be able to... Hi, It was possible on N95/N95 8Gb using Ext plugin for 3rd edition FP1 ver 2.5. AFAIK it doesn't work on later releases. Check mplayerremotecontrol.h from plugin. You should be able to... Yes, it works. Download latest Eclipse for Java developers, currently Indigo. Then install Mobile Tools for Java (MTJ), cuurently version 1.1.2. In Eclipse open Help -> Install new software,... Thank you wizard_hu_ for reply. I tried test GUI and console autostarted apps, still nothing so far. Setting iEikonEnv->SetSystem(ETrue); for GUI application didn't help either. Apps... Hi, I discovered a strange behaviour on Nokia 700 Belle device. Applications using startup list management API don't start, if secure code enabled and you don't enter the code quick enough. The... Hi Lucian, Thank you for reply. Fortunately we have no issues with UIDs, I requested them only from my account which is now fine and verified. Today the issue with test house list got fixed... Hi, A number of e-mails have been sent within last week. Still nothing. Anybody can help to merge our accounts? Best regards Ok, thank you. I'll try that. It would be better to have dedicated option for that on the site. Hi, We have four accounts in our company with pending submissions associated with a particular account. One account is now verified with company's Publisher id. How to merge them in one to get... Yes, the link has been fixed in SDK v.0.9. Thank you. I managed to run the example on emulator v. 0.8 and v.0.9 as well as on real device. Missing libOpenVGU.lib and couple of header files may be... It is not the Coverflow example. GraphicsShell is quite old. I asked for this on FNP but with no luck. Yes, thank you. I reported the broken link. br Viktor Hello, I can't find the CoverFlow example mentioned in Nokia_Symbian3_Developers_Library_v0_8_en. This example application illustrates how the graphics architecture called ScreenPlay creates... Hi, The following code gives an allocation error on completion. What is wrong here? #include <QApplication> int main(int argc, char *argv[]) Hi, Include the line in your mmp file SYSTEMINCLUDE \epoc32\include\mw br Yes, I understand it. It does not prove anything. But I have a big doubt here. I almost sure that such basic example build with SDK for 5th edition would work on 3rd edition phones. (I didn't try it... Thank you all for replies. I interested in any compatibility issues caused moving to a new platform. I tried helloworldbasic example build for ARMV5 (RVCT 2.2) with PDK 3.0.g. It does not start... Hi, Is there any information about new SDK for coming S^3 platform, when it will be available? Should it be enough just to use existing N97 SDK or probably developers should move to PDK from... Hi, This is a brilliant solution! Thank you for this. In one of my old projects I interested in a location of the sis file, drive only, not the full name. And here it is. Viktor Hi, If you have an issue with TRAPD on emulator, see the thread It may help you. br Hi, You can check I hope it helps. Thank you, but I don't want the drive where the application was installed to. I need the drive from the sis(x) file was launched. -- Viktor Hi, Any idea how to discover the drive from the application was installed? My app reads some data from the drive after installation. Old wrong data file can be located on another drive, so I... Proto device reflashed and now I have "Unable to install" issue all the time. Any ideas? Viktor Thank you, It works. I installed the S60_5th_sdk_v0_9 SDK on anither PC just to get xml files S60_5th_Edition_SDK_v0.9_v2.xml S60_5th_Edition_SDK_v0.9_v3.xml I copied these files to... I am also interested in how to enable S60 5th SDK v1.0 in Carbide VS. I am using VS 2005. Can somebody please give descriptive recomendations. Thank you
http://developer.nokia.com/community/discussion/search.php?s=15ab22d4e1bec76ebca54127d72d20d2&searchid=3282372
CC-MAIN-2014-35
refinedweb
718
78.96
Dealing with Operators At some point the examples might not serve your purposes any more. Then you need to modify or to create an entire new operator for your problem. If this happens the following section will help you to understand what happens by calling the different methods of the operator and how the final operator which serves as the RHS of the equation will be assembled. __init__ Here all internal used variables of the class should be defined. By calling the specific operator class they will be provided. Read more about `__init__`. bind Putting everything together. The bind method returns the method rhs as an operator which can be used on the discretized domain. It serves as an assembly method bringing all parts of the operator together. The different parts are: - op_template - flux Stepwise through the method bind: 1. Going into the op_template method to create an operator template which only needs to be called with the specific input data (Source, BC, etc.). compiled_op_template = discr.compile(self.op_template()) 2. Defining the vector-field for the state and the boundary conditions. 3. Building the flux operator with the flux method. flux_op = get_flux_operator(self.flux()) 4. Building all other requested operators (Nabla, Mass-Matrix, Stiffness-Matrix, Inverse Mass-Matrix, etc.). 5. Putting all separate operators together. 6. Returning the compiled operator template. 7. Building a rhs method by feeding the compiled operator with source functions and boundary values. def rhs(t, w): ... return rhs The entire building process works with place holders or empty vector-fields and not with the actual vector-fields containing the data of the entire mesh. Step 1. - 6. will be described in more detail in section op_template. Step 7 will be described in the next section. rhs - Building the Right Hand Side After the operator template has been assembled the link between the place holders and the actual functions behind them needs to be provided. This happens in the part where the method rhs is defined: def rhs(t, w): from hedge.tools import join_fields, ptwise_dot rhs = compiled_op_template(w=w, dir_bc_u=self.dirichlet_bc_f(t)) if self.source_f is not None: rhs[0] += self.source_f(t) return rhs Here the keywords w and dir_bc_u which served as place holders in the operator template got linked to the input variables w and the boundary function dirichlet_bc_f(t) which has been passed to the operator during initialisation. As a special feature the boundary function uses the time t as input. Depending on how the function looks like it also could have been w or parts of it - w[0] = u (state) oder w[1:] = v (velocity). However this is the crucial part of the implementation of an operator. All new functions used in the operator have to be linked with the field-vectors at this point. op_template The operator template will not use any vector-fields containing data of the entire mesh but place holders which have to be linked with the actual vector-field. As an example the StrongWaveOperator will serve. First the structure of the state needs to be defined: w = make_vector_field("w", dimensions+1) u = w[0] v = w[1:] The structure of the Operator is a vector-field. Later u and v will be linked to the actual vector-fields containing the value for each node. At this stage the vector-field appears as [w[0] w[1] w[2]] when printing it to the screen. In the next step the vector-fields containing the information about the boundary conditions will be defined. Again only the structure gets defined here and later the actual values get linked to it. The vector-field for the Dirichlet BC's is defined as: dir_bc = join_fields(Field("dir_bc_u"), v) In the case of the StrongWaveOperator only the state u will be defined as a BC place holder but not the velocity v. Later dir_bc_u will be linked to an external function calculating the state u on the boundary node. The BC-vectro-field appears as: [dir_bc_u w[1] w[2]] at this stage. As the StrongWaveOperator has different BC's included another two possibilities can be found: neu_bc = join_fields(u, -v) rad_bc = join_fields( 0.5*(u - self.sign*numpy.dot(normal, v)), 0.5*normal*(numpy.dot(normal, v) - self.sign*u) ) neu_bc describes the Neumann-BC's. SUMMARY: The vector-field for the operator and the vector-field for the BC's have been defined. Both have the same structure [u, v] and [u_bc, v_bc]. Now that we have the framework for the state vector-fields we need to define the flux operator. flux_op = get_flux_operator(self.flux()) How the flux operator will be built is described in the section flux The next step is to build the DG specific operators (Mass-Matrix, Stiffness-Matrix, etc.). In the case of the StrongWaveOperator we only need the Nabla-Operator and the inverse Massmatrix nabla = make_nabla(d) InverseMassOperator() As both the nabla and the InverseMassOperator are external features from the optemplate module they have to be imported from this module at first: from hedge.optemplate import make_nabla, InverseMassOperator SUMMARY: The operator template now has all important parts - vector-fields, flux-operator, DG-operators (Mass-Matrix, Stiffness-Matrix, etc.) to assemble the final expression. The last step is the assembly of the operator template which gets passed back to the bind method. In case of the StrongWaveOperator the returned expression is a sum of vector fields for the field and the flux. from hedge.tools import join_fields return ( - join_fields( -self.c*numpy.dot(nabla, v), -self.c*(nabla*u) ) + InverseMassOperator() * ( flux_op*w + flux_op * pair_with_boundary(w, dir_bc, self.dirichlet_tag) + flux_op * pair_with_boundary(w, neu_bc, self.neumann_tag) + flux_op * pair_with_boundary(w, rad_bc, self.radiation_tag) )) Important to mention is the flux formulation w.r.t. the BC's. Actually the flux is the sum of the internal flux between the elements flux_op*w and the fluxes at the boundaries + flux_op * pair_with_boundary(w, dir_bc, self.dirichlet_tag) + flux_op * pair_with_boundary(w, neu_bc, self.neumann_tag) + flux_op * pair_with_boundary(w, rad_bc, self.radiation_tag) The pair_with_boundary method hast three inputs. w describes the volume vector of the internal part of the mesh and dir_bc describes a boundary vector. The third argument is the boundary tag. If a face has a self.dirichlet_tag as BC then the flux can be calculated by using w as the internal part and dir_bc as the external part. As the StrongWaveOperator has three different BC's all three possibilities are added to the flux. If one BC is not activated this part of the flux will be zero and has no influence. flux To assemble the flux operator place holders for the different parts of the state vector will be used. The place holders will later be linked to the values of the vector-fields. w = FluxVectorPlaceholder(1+dim) u = w[0] v = w[1:] With the place holders the different types of fluxes can be described: from hedge.tools import join_fields flux_weak = join_fields( numpy.dot(v.avg, normal), u.avg * normal) if self.flux_type == "central": pass elif self.flux_type == "upwind": flux_weak -= self.sign*join_fields( 0.5*(u.int-u.ext), 0.5*(normal * numpy.dot(normal, v.int-v.ext))) else: raise ValueError, "invalid flux type '%s'" % self.flux_type Here again the join_fields method is used to assemble the different components of the flux vector. Finally the flux operator gets returned to the op_template and there it will be used to assemble the entire WaveOpterator.
http://wiki.tiker.net/Hedge/HowTo/DealingwithOperators?action=fullsearch&value=linkto%3A%22Hedge%2FHowTo%2FDealingwithOperators%22&context=180
CC-MAIN-2015-27
refinedweb
1,237
57.67