a4j commandbutton render attribute does not work when using immediate="true"
Mano Swerts, Apr 11, 2011 6:09 AM

Hi all,

I'm using RichFaces 4 and I have a form with the following button:

{code:xml}
<a:commandButton
{code}

I am using the immediate="true" attribute to reset the object behind the form without triggering validation. This works, but it does not rerender the form. Is this normal? I would expect the render attribute to keep working when setting immediate="true". Thanks in advance.

1. Re: a4j commandbutton render attribute does not work when using immediate="true"
Lukáš Fryč, Apr 6, 2011 5:04 AM (in response to Mano Swerts)

Hi Mano, I have checked, and @render works fine with @immediate=true. What version of RichFaces 4 are you using? Could you upgrade to 4.0.0.Final? Could you provide more details about the page and beans?

2. a4j commandbutton render attribute does not work when using immediate="true"
Mano Swerts, Apr 8, 2011 5:40 AM (in response to Lukáš Fryč)

Hi Lukáš, I am using RichFaces 4 Final. I will investigate a little more and get back to you. Thanks.

3. Re: a4j commandbutton render attribute does not work when using immediate="true"
Mano Swerts, Apr 11, 2011 6:22 AM (in response to Mano Swerts)

Hi Lukáš, I investigated this a little bit more. My problem is as follows: I have a page with a simple backing bean which is annotated with @ViewScoped. I have a simple form with a couple of fields which is backed by a POJO through said backing bean. One of the properties of this POJO is annotated with @Max(value=9). I use a rich:message component to show an error message next to the field when a number greater than 9 is entered.

{code:java}
@Controller
@ViewScoped
public class PojoBean {

    private Pojo newPojo = new Pojo();

    // ... getters and setters ...

    public void saveNewPojo() {
        // persist
    }

    public void resetNewPojo() {
        newPojo = new Pojo();
    }
}
{code}

{code:java}
public class Pojo {

    @Max(value = 9)
    private int property;

    // ... getters and setters ...
}
{code}

When I enter the number 20 in the text field and hit the tab key, the error message pops up next to the field. This is OK. After that I hit my cancel button. This button is bound to a reset action. It looks like this in the XHTML code:

{code:xml}
<a:outputPanel
    <h:form>
        <h:inputText
            <rich:validator />
        </h:inputText>
        <rich:message
        <a:commandButton
        <a:commandButton
    </h:form>
</outputPanel>
{code}

The problem is that when I hit the reset button, it does not rerender my form. It resets the POJO correctly, and the error message disappears like it should thanks to the immediate property, but the number 20 remains in the text field. Even when there is no validation error, the number remains in the text field. When I remove the immediate property, the value of the property is removed from the text field. So my conclusion is that the rerender doesn't occur. Am I trying to tackle this problem in the wrong way? Thanks in advance.

5. Re: a4j commandbutton render attribute does not work when using immediate="true"
Ilya Shaikovsky, Apr 18, 2011 6:19 AM (in response to Ilya Shaikovsky)

I think the easier solution would be to remove immediate and use ajaxSingle for the cancel button instead.
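A hedged note on the last suggestion: ajaxSingle is RichFaces 3 terminology; in RichFaces 4 the closest equivalent is limiting processing to the button itself with execute="@this". A sketch of what the cancel button might look like under that approach (the attribute values and the bean method name are assumptions based on the post, not code from the thread):

{code:xml}
<a4j:commandButton value="Cancel"
                   action="#{pojoBean.resetNewPojo}"
                   execute="@this"
                   render="@form" />
{code}

Because only the button itself is processed, validation of the other inputs is skipped, the model can be reset without tripping the @Max check, and render="@form" should still repaint the form afterwards.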
https://developer.jboss.org/thread/165036
I started learning about backtracking in college and was asked to solve the famous 8 queens problem. I know there are many solutions out there for this, but I would like to get there by myself. However, I am kind of stuck and in need of some guidance here. The problem is that I haven't quite understood how to backtrack properly. When it comes to the point that there are no more legal positions on the board, I remove the last queen placed, but the algorithm will place the queen again at the same place that led to no valid positions. I thought about storing the sequences that are invalid so that they are not repeated, but I am sure that's not the way to go about it. Any opinion or pointer in the right direction is greatly appreciated. Thanks.

package queens;

public class Queens {

    private int[][] board = new int[9][9];
    private int numQueens = 0;

    public void printBoard() {
        for (int i = 1; i <= 8; i++) {
            for (int j = 1; j <= 8; j++)
                System.out.printf("%2d", board[i][j]);
            System.out.print("\n");
        }
        System.out.print("\n");
    }

    public void place(int i, int j) {
        board[i][j] = 1;
        numQueens++;
    }

    public boolean accepts(int i, int j) {
        int row, col;
        row = 1;
        while (row <= 8)
            if (board[row++][j] == 1) return false;
        col = 1;
        while (col <= 8)
            if (board[i][col++] == 1) return false;
        row = i; col = j;
        while (row <= 8 && col <= 8)
            if (board[row++][col++] == 1) return false;
        row = i; col = j;
        while (row <= 8 && col >= 1)
            if (board[row++][col--] == 1) return false;
        row = i; col = j;
        while (row >= 1 && col <= 8)
            if (board[row--][col++] == 1) return false;
        row = i; col = j;
        while (row >= 1 && col >= 1)
            if (board[row--][col--] == 1) return false;
        return true;
    }

    public void remove(int i, int j) {
        board[i][j] = 0;
        numQueens--;
    }

    public boolean isComplete() {
        return numQueens == 8;
    }

    public boolean calc() {
        if (isComplete())
            return true;
        int last_i = 0, last_j = 0;
        for (int i = 1; i <= 8; i++) {
            for (int j = 1; j <= 8; j++) {
                if (accepts(i, j)) {
                    place(i, j);
                    printBoard();
                    last_i = i;
                    last_j = j;
                }
            }
        }
        remove(last_i, last_j);
        return calc();
    }
}
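For comparison, the standard backtracking scheme places exactly one queen per row and tries each column in turn; when no column in the current row is safe, the recursive call simply returns, and the caller moves on to the next column in its own row. That return is the "undo" step the code above is missing. A minimal sketch along those lines (class and method names are mine, not from the thread):

```java
public class QueensSolver {

    // cols[r] holds the column of the queen already placed in row r (for rows above 'row').
    static boolean safe(int[] cols, int row, int col) {
        for (int r = 0; r < row; r++) {
            // same column, or same diagonal (column distance equals row distance)
            if (cols[r] == col || Math.abs(cols[r] - col) == row - r) {
                return false;
            }
        }
        return true;
    }

    // Count all complete placements; backtracking happens implicitly when the
    // recursive call returns and the loop tries the next column for this row.
    static int solve(int[] cols, int row) {
        int n = cols.length;
        if (row == n) {
            return 1; // every row has a queen: one full solution
        }
        int count = 0;
        for (int col = 0; col < n; col++) {
            if (safe(cols, row, col)) {
                cols[row] = col;               // place the queen in this row
                count += solve(cols, row + 1); // recurse into the next row
            }
        }
        return count;
    }

    public static int countSolutions(int n) {
        return solve(new int[n], 0);
    }

    public static void main(String[] args) {
        System.out.println(countSolutions(8));
    }
}
```

The key difference from the code in the question: the search never scans the whole board inside one call, so it never needs to remember bad positions explicitly; the call stack remembers them for free.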
https://www.daniweb.com/programming/computer-science/threads/277194/8-queens-problem
print "Enter the name of the file",
file = open(raw_input(":"), 'w').write(raw_input("Enter the content:"))
print file.read()

This raises:

AttributeError: 'NoneType' object has no attribute 'read'

The write() method doesn't return anything, so the value of file is None. You should assign the result of the open() function to the file variable first, and then call the write method on it. Also note that if you open a file with open(path_to_file, 'w'), you cannot read its content back through that handle. And whenever you call file = open(...), you should call file.close() after you finish processing the file. Python has the with keyword, which (for file objects) calls close() automatically at the end of the code block, even if an exception occurred. So your method can be implemented like this:

def write_to_file_and_print_content():
    print("Enter the name of the file:")
    name_of_file = raw_input("")

    # Writing to the file
    with open(name_of_file, 'w') as file_to_write:
        content_of_file = raw_input("Enter the content:\n")
        file_to_write.write(content_of_file)
    # after that, file_to_write.close() is called

    with open(name_of_file, 'r') as file_to_read:
        print(file_to_read.read())
    # after that, file_to_read.close() is called
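The same write-then-read round trip can be separated from console input so it is easy to test. This is a Python 3 sketch (input() replaces raw_input there, and write() returns a character count rather than None); the function name is mine:

```python
def write_then_read(path, content):
    # The with-block closes the handle on exit, flushing the data to disk.
    with open(path, "w") as f:
        f.write(content)
    # A 'w'-mode handle cannot be read from, so reopen the file for reading.
    with open(path) as f:
        return f.read()
```

Opening the file twice, once per mode, is the whole trick: the original error came from trying to read through the value returned by write().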
https://codedump.io/share/NwgYvjsGbScM/1/why-am-i-getting-a-attributeerror-39nonetype39-object-has-no-attribute-39read39
Maven now supports attributes in pom.xml?!

In December 2005, I asked "Is it possible to make pom.xml simpler?". After seeing what the Spring developers have done to simplify Spring context files, I can't help but think the same thing is possible for Maven 2's pom.xml. Is it possible to add namespaces and make something like the following possible?

Before:

<dependency>
    <groupId>springframework</groupId>
    <artifactId>spring</artifactId>
    <version>1.2.6</version>
</dependency>

After:

<dep:artifact

Or just allow attributes to make things a bit cleaner?

<dependency groupId="org.springframework" artifactId="spring" version="1.2.6"/>

At the time, the general response was "That's how Maven works. It's a matter of taste. You'll get used to it." It's been two years and sure, I'm used to it, but I'd still rather write less XML. That's why I was particularly pleased to read Brett Porter's post, "Maven now supports condensed POMs using attributes". The issue is being tracked under MNG-3397. The result is that something like this:

<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>3.8.1</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>easymock</groupId>
    <artifactId>easymock</artifactId>
    <version>1.2_Java1.3</version>
    <scope>test</scope>
</dependency>
...

halves in length to something like this:

<dependency groupId="junit" artifactId="junit" version="3.8.1" scope="test"/>
<dependency groupId="easymock" artifactId="easymock" version="1.2_Java1.3" scope="test"/>
...

Now that wasn't so hard, was it?

Posted in Java at Feb 11 2008, 03:45:57
http://raibledesigns.com/rd/entry/maven_now_supports_attributes_in
NAME
sigwait - wait for a signal

SYNOPSIS
#include <signal.h>

int sigwait(const sigset_t *set, int *sig);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

sigwait():
    Since glibc 2.26: _POSIX_C_SOURCE >= 199506L
    Glibc 2.25 and earlier: _POSIX_C_SOURCE

DESCRIPTION
The sigwait() function suspends execution of the calling thread until one of the signals specified in the signal set set becomes pending. The function accepts the signal (removes it from the pending list of signals), and returns the signal number in sig.

RETURN VALUE
On success, sigwait() returns 0. On error, it returns a positive error number (listed in ERRORS).

ERRORS
EINVAL - set contains an invalid signal number.

ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO
POSIX.1-2001, POSIX.1-2008.

NOTES
sigwait() is implemented using sigtimedwait(2).

The glibc implementation of sigwait() silently ignores attempts to wait for the two real-time signals that are used internally by the NPTL threading implementation. See nptl(7) for details.
https://manpages.debian.org/unstable/manpages-dev/sigwait.3.en.html
#include <stdio.h>

FILE *fopen(const char *restrict filename, const char *restrict mode);

The fopen() function shall open the file whose pathname is the string pointed to by filename, and associates a stream with it.

The mode argument points to a string. If the string is one of the following, the file shall be opened in the indicated mode; otherwise, the behavior is undefined.

r or rb             Open file for reading.
w or wb             Truncate to zero length or create file for writing.
a or ab             Append; open or create file for writing at end-of-file.
r+ or rb+ or r+b    Open file for update (reading and writing).
w+ or wb+ or w+b    Truncate to zero length or create file for update.
a+ or ab+ or a+b    Append; open or create file for update, writing at end-of-file.

The character 'b' shall have no effect, but is allowed for ISO C standard conformance. Opening a file with read mode (r as the first character of the mode argument) shall fail if the file does not exist or cannot be read.

If the file is created, the fopen() function shall mark for update the st_atime, st_ctime, and st_mtime fields of the file and the st_ctime and st_mtime fields of the parent directory. The largest value that can be represented correctly in an object of type off_t shall be established as the offset maximum in the open file description.

Upon successful completion, fopen() shall return a pointer to the object controlling the stream. Otherwise, a null pointer shall be returned, and errno shall be set to indicate the error.

The fopen() function shall fail if:

[ENAMETOOLONG] The length of the filename argument exceeds {PATH_MAX} or a pathname component is longer than {NAME_MAX}.

The fopen() function may fail if:

[ENAMETOOLONG] Pathname resolution of a symbolic link produced an intermediate result whose length exceeds {PATH_MAX}.

The following sections are informative.

APPLICATION USAGE
None.

RATIONALE
None.

FUTURE DIRECTIONS
None.

SEE ALSO
fclose(), fdopen(), freopen(), the Base Definitions volume of IEEE Std 1003.1-2001, <stdio.h>
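A short usage sketch (mine, not part of the specification): write a line with mode "w", reopen with mode "r", and check fopen()'s return value each time, since a null pointer with errno set is the only failure indication.

```c
#include <stdio.h>
#include <string.h>

/* Write 'line' to 'path', then read the first line back into buf.
   Returns 0 on success, -1 if either fopen() call fails. */
int roundtrip(const char *path, const char *line, char *buf, size_t bufsize) {
    FILE *f = fopen(path, "w");  /* truncate or create for writing */
    if (f == NULL)
        return -1;               /* errno identifies the error */
    fputs(line, f);
    fclose(f);

    f = fopen(path, "r");        /* reopen for reading */
    if (f == NULL)
        return -1;
    if (fgets(buf, (int)bufsize, f) == NULL)
        buf[0] = '\0';
    fclose(f);
    return 0;
}
```

The "w" handle is closed before the "r" handle is opened; writing and reading through a single handle opened with plain "w" is not permitted by the mode table above.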
http://www.makelinux.net/man/3posix/F/fopen
Hi there. I'm trying to make a program that loads a value from a file (starting from 0) and adds a number each time. I.e. I have a file "data.log" that contains 0; after the first run I add 100, so now it contains 100; then I add 50, and it contains 150; and so on. The problem is four lines before the end:

Code:
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
    int MyScore = 0, i = 0;
    char Temp = '0', c = '0';
    fstream my_file;

    my_file.open("data.log", ios::out | ios::in);
    if (my_file)
        cout << "found" << endl;
    else
        cout << "not found" << endl;

    // LOADING FROM FILE
    my_file.get(Temp);

    cout << setw(40) << "Simple Program" << endl;
    for (i = 0; i < 80; i++)
        cout << '-';
    cout << "How much completed?:\na. 1/3\nb. 1/2\nc. 1\n ans = ";
    c = cin.get();
    switch (c)
    {
        case 'a': MyScore += 50;  break;
        case 'b': MyScore += 75;  break;
        case 'c': MyScore += 100; break;
        default:  break;
    }
    cout << endl << endl;

    MyScore += Temp - '0';
    my_file << "My Score is :" << MyScore << endl;
    my_file.put(MyScore);
    my_file.close();
    cin.get();
    return EXIT_SUCCESS;
}

This line doesn't work as it should:

Code:
my_file.put(MyScore);

Do you have any idea why?
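For what it's worth: put() writes a single raw byte, so put(MyScore) stores the character whose code is MyScore (a score of 100 becomes the letter 'd'), and get(Temp) reads back only one digit character. A hedged sketch of the usual approach (function names are mine, not from the post): store the score as text and use the formatted operators >> and << instead.

```cpp
#include <fstream>
#include <string>

// Read the saved score; treat a missing or empty file as a score of 0.
int loadScore(const std::string& path) {
    std::ifstream in(path);
    int score = 0;
    in >> score;              // formatted read: parses the digits as an int
    return in ? score : 0;
}

// Overwrite the file with the new score as text (e.g. "150").
void saveScore(const std::string& path, int score) {
    std::ofstream out(path, std::ios::trunc);
    out << score << '\n';     // formatted write, unlike put()
}
```

With this split, each run of the program is just saveScore(path, loadScore(path) + points), and there is no single-character limit on the stored value.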
https://cboard.cprogramming.com/cplusplus-programming/88575-output-file-extremely-simple-but.html
I'm trying to perform a number of functions to get some results from a set of satellite imagery (in the example case I am performing similarity functions). I first intended to iterate through all the pixels simultaneously, each containing 4 numbers, then calculate a value for each one based off these two numbers and write it to an array, e.g. scipy.spatial.distance.correlation(pixels_0, pixels_1). The issue I have is that when I run this loop, I have trouble getting it to save to a 1000x1000 array giving a value for each pixel.

array_0 = # some array with dimensions (1000, 1000, 4)
array_1 = # some array with dimensions (1000, 1000, 4)
result_array = []

for rows_0, rows_1 in itertools.izip(array_0, array_1):
    for pixels_0, pixels_1 in itertools.izip(rows_0, rows_1):
        results = some_function(pixels_0, pixels_1)
        print results
        # successfully prints desired results
        results_array.append(results)
        # unsuccessful in creating the desired array

One answer suggested:

a = np.random.rand(10, 10, 4)
b = np.random.rand(10, 10, 4)

def dotprod(T0, T1):
    return np.dot(T0, T1) / (np.linalg.norm(T0) * np.linalg.norm(T1))

results = dotprod(a.flatten(), b.flatten())
results = results.reshape(a.shape)

The best way is to use Numpy for your task. You should think in vectors, and you should write your some_function() to work in a vectorized manner. Here is an example:

array_0 = np.random.rand(1000, 1000, 4)
array_1 = np.random.rand(1000, 1000, 4)

results = some_function(array_0.flatten(), array_1.flatten())  # this will be (1000*1000*4 x 1)
results = results.reshape(array_0.shape)  # reshaping to make it the way you want it
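A caveat on the snippets above: flattening both images and taking a single np.dot collapses everything to one scalar, so the subsequent reshape cannot produce a per-pixel map. A vectorized per-pixel cosine similarity (a sketch; the function name is mine) keeps the last axis as the 4-band vector and reduces over it:

```python
import numpy as np

def cosine_map(a, b):
    # a, b: (H, W, 4) arrays; result: (H, W) cosine similarity per pixel.
    num = np.einsum('ijk,ijk->ij', a, b)                       # per-pixel dot product
    den = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1)
    return num / den
```

This replaces the double loop entirely: the einsum contracts only the band axis, so the (H, W) spatial structure survives without any reshape.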
https://codedump.io/share/ajpNpBI3PdLB/1/iterating-on-data-in-two-3d-arrays-python
It happens very often that Swift libraries are only wrappers for old C libraries (for example libxml or zlib). It's a good start, because this way you don't get headaches from the effort of using C code in Swift (I did that several times, and in every major Swift version there were breaking changes). But in the end, a native solution would be much better, especially because Swift is a safer language than C (at least that is my understanding, and it happens often that someone finds scary security vulnerabilities in those libraries). Did anyone ever think about rewriting one of these libraries in Swift? Or would that be a senseless effort?

I'm sure someone did, and I'm sure there exist pure Swift alternatives to many C libraries. That said, porting something to another language is always a significant effort that someone has to be willing to give. While wrapping C libraries can and often does cause headaches, it is still usually faster than rewriting thousands of lines of battle-tested C code. So is this a senseless effort? It might be, and it might not. It depends on the library, its complexity, its existing ecosystem (i.e. the community that already uses it) and how much porting it to Swift can improve its usability, performance, etc.

zlib is an interesting case where, while written in unsafe languages, mainstream implementations (as provided by the system on most platforms) are generally extremely robust, due to their widespread use and aggressive use of fuzzing to uncover bugs. Any novel implementation, even one in a safe language, should be considered suspect by comparison¹. I fully expect it to be rewritten in safe languages eventually (Rust already has multiple implementations, and I'm sure a few folks have written Swift ones, too), but there are (in my opinion) bigger safety gains to be had elsewhere in the short term.

¹ I would relax this claim somewhat for an implementation with a checked proof of correctness.
Just to pick your brain a bit: which libraries do you think would benefit the most from being reimplemented in a safe language? Does anything in particular come to mind?

I think Mozilla's deployment of Rust in their CSS engine is a great example of a good place to target:

- Exposed to incoming data from random websites
- Very complex, making proof-style checking much less practical
- CSS is actively changing, so you can't just write once, verify exhaustively, and be done

I expect they'll see ongoing payoffs from that effort in both a reduced rate of security regressions and reduced maintenance/change costs.

One other thought: I anticipate bigger payoffs in codebases that don't have ready access to a high-quality data structures library already. Replacing hand-rolled C implementations of strings, dynamic arrays, hash tables, and so on with standard ones is almost certainly worthwhile. As of last summer, Swift depends on fewer libraries (just ICU/libobjc/libsystem now on Darwin!), so it is much closer to being suitable for these sorts of environments.

What David said is exactly right. Especially "bigger payoffs in codebases that don't have ready access to a high quality data structures library already". Part of what makes zlib "relatively safe" to have in C¹ is that there's absolutely nothing "clever" about it. The parsing and data structures involved are nearly as simple as possible².

¹ Well, parts of it are implemented in assembly on some systems (including Darwin).
² Of course, there have still been many bugs over the years, but at least in the hardened system implementations, they're pretty rare now.

There were some interesting points written that I would like to comment on:

"Depends how many people already use it": I always asked myself why developers so often don't consider rewriting (their) old codebase.
I'm just a hobby developer, and I have often experienced that rewriting old code can save time, because maintaining old code would cost even more time (and I can imagine that the time factor plays a big role in this question).

"Widespread libraries are safe": Is that so? I never learned C, and I always just learned the syntax of a programming language. Until now I was never interested in how things work behind the code (that is something that has changed after reading different things here in the forum). Maybe that is the question behind my first question: Is C really more unsafe than Swift? Or is Swift just easier to code with? At least, I guess, there would be far fewer bugs (one main reason why I love Swift is that things like optionals produce better code).

I came up with this question as I read that the people of Redox OS are rewriting the libc (they call it relibc). They are building up an entire OS on only one programming language, which is pretty amazing (although I still prefer Swift over Rust :)). So the question is: why are they doing this?

PS: Could someone explain "bigger payoffs in codebases that don't have ready access to a high quality data structures library already" to me? Is this where the Standard Library comes in?

Imagine you're a new programmer who started learning C. You start traditionally with the hello world program:

printf("Hello, World!");

You're happy, you love this language, you would grade it highly. To celebrate, you modify your program:

printf("This is cool! 100%");

Uh oh. Now you are invoking undefined behavior, and are exposed to the wrath of nasal demons, because you used a printf format specifier wrong. Okay, let's ignore that and go to the second lesson. Computers are for computing! You ask your tutorial how to add two numbers:

int8_t a = 2;
int8_t b = 3;
int8_t c = a + b;

Great! But these are chump numbers! I can calculate that in my head!
int8_t a = 123;
int8_t b = 87;
int8_t c = a + b;

Oh no, nasal demons again. And Xcode didn't even warn me about that. What's next? If statements! I wanna write my own AI:

char *message;
bool b = true;

if (b) {
    message = "b is set to true";
} else {
    message = "b is set to false";
}
printf("%s\n", message);

But this program is too abstract. Let's write something about me personally:

char *message;
bool iLikeMushrooms = false;

if (iLikeMushrooms) {
    message = "cukr loves mushrooms a lot!";
}
printf("%s\n", message);

Oh no, nasal demons again, because of an uninitialized variable. What is the next thing new programmers learn? Loops! The tutorial teaches you how to write "Hello world" forever:

while (true) {
    printf("Hello, world!\n");
}

Nice. But I don't like welcoming the world that much. Let's make it more silent:

while (true) { }

Did you know that in C++ and earlier versions of C, infinite loops that don't do anything are undefined behavior? At this point you are angry. Why does every seemingly innocent change to your program cause undefined behavior, which theoretically can delete your hard drive? This time you don't want any of that. You will copy the tutorial and not change even a single letter of the source code:

#include <stdio.h>

int main() {
    int a, b, c;

    printf("Enter the first value:");
    scanf("%d", &a);
    printf("Enter the second value:");
    scanf("%d", &b);
    c = a + b;
    printf("%d + %d = %d\n", a, b, c);
    return 0;
}

You run it, and... no tutorial I could find checks whether scanf failed. You buy a bottle of vodka to drink with your band of nasal demons. You start thinking about rewriting your brain in Rust.

This is a misreading of what has been written here. In general, widespread libraries are not safe.
There are a few specific widespread libraries that are cornerstones of file formats commonly used on the internet, that are extremely well-tested relative to the complexity of the algorithms they use, and that, while I would not claim they are bug-free, are at least mostly without any simple bugs.

Adobe Flash has joined the chat.

Seriously, though: developers rewrite old code all the time, for many different reasons. If you're trying to reduce future maintenance costs, we tend to call that a refactor rather than a rewrite. When you write the code, you make some assumptions about how easy it will be to maintain and what things you might want to add in the future, but reality might differ. It's common to go back and revisit code if you feel the design isn't meeting your expectations in practice. Another reason to rewrite code is to adopt new technologies, again typically because you think it will be an overall maintenance win, or because some old technology you used is being deprecated, or because it will help you develop new features later.

When it comes to adopting Swift: well, C isn't being deprecated any time soon, so the two relevant reasons to port a C library to Swift are to reduce maintenance burdens (including from safety issues) and to offer new features (e.g. generics). Apple is doing it right now by building Swift into the OS. They've rebuilt their UI code in Swift. Some of the motivation may have been to reduce bugs and safety issues in the old code, but they're also making heavy use of features like generics and property wrappers, which they didn't have before Swift.

C is an ISO standard and cannot change very easily, even if inherent safety issues emerge. Swift is designed to be safe (period, not just "safer than C"). Safety is always the most important factor, even more than source stability, IIRC.

Okay, but this error could also happen in Swift, if there were a function like printf.

Wow, the result is -47! I see; I don't have any knowledge of C, and I had to read up on it.
An int8_t has just 1 byte, which means it can hold the numbers -128 to 127. The result of the calculation would be 210, so you need a uint8_t (or an int16_t?) to make your example work correctly. Interesting: Xcode warns you if you write int8_t a = 128;, but there is really no warning for your example. By the way, in Swift you just write var x = 200. How does Swift manage to make this as performant as C? And what about the memory?

What's the problem with that example? Xcode gives a warning about it. The example compiles, but just prints "(null)".

The scanf example crashes when I enter a non-digit. That's of course horrible; in Swift this is much safer, because it is checked whether the String-to-Integer conversion works.

Okay, back to the topic (sorry for those questions and comments, but since reading this forum, I want to learn more about all the background stuff of programming languages):

@scanon: Sorry, then I misunderstood you. Well-tested is for sure a good argument, but how often do people still find bugs and vulnerabilities? It's the same question as with open-source software: just because the code is open, it doesn't mean that it is safer (because no one can check every line of code, and no one can check a library in every case). Rewriting old C libraries would at least make it easier for people like me, who never learned C or C++, to contribute to those projects ;)

If you want to write a new compression algorithm, or even a new implementation of zlib, you don't need an existing version written in Swift. In general, if you want to build some feature, you can do it as a package. You don't need to make it part of the original library itself. Swift has been specifically designed to support that, even if you're extending a C library.

Collapsed because off-topic:

Swift uses the native integer size as the default, so Int is Int64 on a 64-bit machine and Int32 on a 32-bit machine. It will thus likely use more memory than the C example, though marginally.
In practice it works well most of the time, and you can specify the size should you need to:

var x: Int8 = 200

The example never sets any value for message. There is no specification of what to do here in C; in other words, this is undefined behavior. Undefined behavior is very nasty because it can be anything. It may crash at runtime, set all bits to zero, use old memory values, etc. Different compilers can choose to do different things there; even compiling or running it twice could result in different behavior. It is a nightmare for debugging, because it cranks "it works on my machine" up to the max. You also cannot reason about the code at all, because there's no specification to reason about. It could be that Xcode tries to be nice and sets message to a null pointer. It could be that you're lucky and message uses old memory that happens to be all zero (null in C). Though, given that there's a warning, it is probably the former. It's territory only the gods know.
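Tying back to the scanf complaint earlier in the thread: the unchecked-input pitfall is fixable in C; it just isn't what tutorials teach. A small sketch of the checked version (mine, using sscanf on in-memory strings so the behavior is deterministic; the function name is made up):

```c
#include <stdio.h>

/* Parse two integers and add them, reporting failure instead of
   silently using whatever garbage was left in the variables. */
int add_checked(const char *first, const char *second, int *out) {
    int a, b;
    if (sscanf(first, "%d", &a) != 1)   /* %d matched nothing: parse failed */
        return 0;
    if (sscanf(second, "%d", &b) != 1)
        return 0;
    *out = a + b;
    return 1;
}
```

The same pattern applies to the interactive version: check that scanf("%d", &a) returned 1 before touching a, and the uninitialized-read demon never gets summoned.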
https://forums.swift.org/t/rewriting-c-libraries-in-swift/34450
flytekit.types.file.FlyteFile

class flytekit.types.file.FlyteFile(*args, **kwds)

Parameters:
- path – The source path that users are expected to call open() on.
- downloader – Optional function that can be passed, used to delay downloading of the actual file until a user actually calls open().
- remote_path – If the user wants to return something and also specify where it should be uploaded to.

Methods:
- classmethod from_dict(kvs, *, infer_missing=False)
- classmethod from_json(s, *, parse_float=None, parse_int=None, parse_constant=None, infer_missing=False, **kw)
- classmethod schema(*, infer_missing=False, only=None, exclude=(), many=False, context=None, load_only=(), dump_only=(), partial=False, unknown=None) -> dataclasses_json.mm.SchemaF[dataclasses_json.mm.A]
- to_dict(encode_json=False)
- to_json(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, indent=None, separators=None, default=None, sort_keys=False, **kw)

Attributes:
- path: Union[str, os.PathLike] = None

Since there is no native Python implementation of files and directories for the Flyte Blob type (like how int exists for Flyte's Integer type), we need to create one so that users can express that their tasks take in or return a file. There is pathlib.Path, of course (which is usable in Flytekit as a return value, though not a return type), but it made more sense to create a new type, especially since we can add additional properties to it.

Files (and directories) differ from primitive types like floats and strings in that Flytekit typically uploads the contents of the files to the blob store connected with your Flyte installation. That is, the Python native literal that represents a file is typically just the path to the file on the local filesystem. However, in Flyte, an instance of a file is represented by a Blob literal, with the uri field set to the location in the Flyte blob store (AWS/GCS etc.).

Take a look at the data handling doc for a deeper discussion.

We decided not to support pathlib.Path as an input/output type because, if you want the automatic upload/download behavior, you should just use the FlyteFile type. If you do not, then a str works just as well.

The prefix for where uploads go is set by the raw output data prefix setting, which should be set at registration time in the launch plan. See the option listed under flytectl register examples --help for more information. If not set in the launch plan, your Flyte backend will specify a default. This default is itself configurable as well. Contact your Flyte platform administrators to change or ascertain the value.

In short, if a task returns "/path/to/file" and the task's signature is set to return FlyteFile, then the contents of /path/to/file are uploaded. You can also make it so that the upload does not happen.

There are different types of task/workflow signatures. Keep in mind that in the backend, in Admin, and in the blob store, there is only one type that represents files, the Blob type. Whether the uploading happens or not, the behavior of the translation between Python native values and Flyte literal values depends on a few attributes:

- The declared Python type in the signature. This can be:
  - flytekit.FlyteFile
  - os.PathLike
  (Note that os.PathLike is only a type in Python; you can't instantiate it.)
- The type of the Python native value we're returning. This can be:
  - flytekit.FlyteFile
  - pathlib.Path
  - str
- Whether the value being converted is a "remote" path or not. For instance, if a task returns a remote value as a FlyteFile, obviously it doesn't make sense for us to try to upload that to the Flyte blob store. So no remote paths are uploaded. Flytekit considers a path remote if it starts with s3://, gs://, or even file://.

Conversions happen in both directions:

- Converting from a Flyte literal value to a Python instance of FlyteFile
- Converting from a Python value (FlyteFile, str, or pathlib.Path) to a Flyte literal

Since Flyte file types have a string embedded in them as part of the type, you can add a format by specifying a string after the class, like so:

def t2() -> flytekit_typing.FlyteFile["csv"]:
    return "/tmp/local_file.csv"
https://docs.flyte.org/projects/flytekit/en/latest/generated/flytekit.types.file.FlyteFile.html
How it works

First of all, let's understand where Brail lives. Brail is a View Engine for the Castle MonoRail web development framework. MonoRail is an MVC framework for ASP.Net that allows true Separation of Concerns between your business logic and your UI code. Brail comes into play when it's time to write your UI code; the idea is that instead of using a templating framework, like NVelocity or StringTemplate, you can use a bona fide programming language, with all the benefits that this implies. The downside of this approach is that programming languages usually have very strict rules about how you can write code, and that is usually the exact opposite of what you want when you write a web page. You want a language that won't get in your way. This is where Brail comes into play. Brail is based on Boo, a programming language for the .Net Framework which has a very unorthodox view of the place of the user in the language design. Boo allows you to plug your own steps into the compiler pipeline, rip out whatever you don't like, put in things that you want, etc. This means that it packs quite a punch when you need it. The options are nearly limitless. But enough raving about Boo; it is Brail that you are interested in. What Brail does is allow you to write your web pages in Boo, in a very relaxed and free way. After you write the code, Brail takes over and transforms it to allow you to run this code. The Brail syntax and options are documented, so we assume that you are already familiar with them.

The flow

We need to understand what MonoRail does when it receives a request:

- The user's browser sends a request to the server: GET: /admin/users/list.rails
- The ASP.Net runtime passes the request to MonoRail's ProcessEngine, which loads the appropriate controller and, after the controller has finished running, calls the appropriate view.
- MonoRail's ProcessEngine calls Brail, passing the current context, the controller and a template name which will usually look like this: "admin/users/list"
- Brail processes the request and writes the results back to the user.

Processing Requests

MonoRail receives a request, calls the appropriate controller, and then calls the view engine with the current context, the controller and the view that needs to be displayed. Brail then takes over and does the following:

- Check if the controller has defined a layout and, if it has, pipe the view's output through the layout's output. (The layout is compiled the same way a view is.)
- Get the compiled version of a view script by:
  - Checking if the script is already in the cache. The cache is a hash table ["full file name of view" : compiled type of the view].
  - If the script is already in the cache but the type is null, this means that the view has changed, so we compile just this script again.
- Instantiate the type and run the instance, which will send the script output to the user.

A few things about changes in the views: Brail currently allows instantaneous replacement of views, layouts and common scripts by watching the Views directory and recompiling a file when necessary. Since this is a developer-only feature, I'm not worrying too much about efficiency / memory; I'm just invalidating the cache entry or recompiling the common scripts. Be aware that making a change to the Common Scripts will invalidate all the compiled views & layouts in memory, and they will all have to be compiled again. This is done since you can't replace an assembly reference in memory.

The interesting stuff mostly happens when Brail is compiling a script. For reference, Brail will usually try to compile all the scripts in a directory (but it does not recurse into child directories) in one go, since this is more efficient with regard to speed / memory. Occasionally it will compile a script by itself, usually when it has been changed after its directory has been compiled or if the default configuration has been changed. There isn't much difference between compiling a single file and compiling a bunch of them, so I'm just going to ignore that right now and concentrate on compiling a single script. Brail's scripts are actually Boo files that are transformed by custom steps that plug into the Boo compiler.

Compiling Scripts

Here is what happens when Brail needs to compile a script:

- Create an instance of BooCompiler, and tell it whether to compile to memory or to file (a configuration option).
- Add a reference to the following assemblies: Brail, Castle.MonoRail.Framework, the compiled Common Script assembly and any assembly that the user referenced in the configuration file.
- Add imports that were defined in the configuration.
- Run a very simple pre-processor on the file, to convert a file containing <% %> blocks (and the related output markup) into a valid Boo script.
- Remove Boo's default namespace. (This is done because common names such as list and date were introduced by the default namespace, and that meant that you couldn't use them as parameters to the view.)
- Replace any variable reference that has an unknown source with a call to GetParameter(variableName), which uses the controller's PropertyBag to get it. GetParameter() throws if it can't find a valid parameter, by the way. The reasoning is that this way you won't get null reference exceptions if you are trying to do something like date.ToString("dd/mm/yyyy") and the controller didn't pass the date. Since debugging scripts is a pain, this gives you a much clearer message.
- Then the real transformation begins. Any Brail script is turned into a subclass of the BrailBase class, which provides basic services to the script and allows the engine to output the results to the user.

What happens is that any "free" code (code that isn't in a class / method) is moved to a Run() method on the subclass. Any methods are moved to the subclass, so they are accessible from the Run() method. Anything else is simply compiled normally. When Brail receives a request for a view, it looks it up as described above (from the cache, compiled, etc.). A new instance of the view is created and its Run() method is called. All the output from the script is sent to the user (directly, or via the layout wrapping it).

BrailBase Class

The BrailBase class has several useful methods and properties:

- ChildOutput - Layouts are scripts that use their ChildOutput property to wrap their output around the child output. This works as follows: a layout is created, and its ChildOutput is set to a view's output; the view is then run. After the view runs, the layout is run and has access to the view's output.
- IsDefined(parameterName) - Check if a parameter has been passed; this allows you to bypass GetParameter() throwing if nothing has been passed.
- OutputSubView() - Output another view.
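As a rough illustration of that transformation, a view whose "free" code loops over a users parameter might conceptually become a subclass like the following. This is an invented sketch: the class name, the OutputLine helper and the exact shape of the generated code are made up here for illustration, not what Brail actually emits.

```boo
# Illustrative only -- not the actual code Brail generates.
class AdminUsersList(BrailBase):

    # "Free" code from the view is moved into Run()
    def Run():
        for user in GetParameter("users"):
            OutputLine(FormatName(user))

    # Methods defined in the view are moved onto the subclass,
    # so they are accessible from Run()
    def FormatName(user):
        return user.ToString()
```

The engine then only needs to instantiate the subclass and call Run(), exactly as described above.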
http://www.castleproject.org/monorail/documentation/trunk/viewengines/brail/howitworks.html
import "gocloud.dev/mysql"

Package mysql provides functions to open MySQL databases with OpenCensus instrumentation.

Scheme is the URL scheme this package registers its URLOpener under on DefaultMux.

ConfigFromURL creates a mysql.Config from a URL.

Open opens the database identified by the URL given. URL openers must be registered in the DefaultURLMux, which is typically done in driver packages' initialization. See the URLOpener documentation in driver subpackages for more details on supported scheme(s) and URL parameter(s).

Code:

// PRAGMA: This example is used on gocloud.dev; PRAGMA comments adjust how it is shown and can be ignored.
// PRAGMA: On gocloud.dev, hide lines until the next blank line.
ctx := context.Background()

// Replace this with your actual settings.
db, err := mysql.Open(ctx, "mysql://user:password@localhost/testdb")
if err != nil {
    log.Fatal(err)
}
defer db.Close()

// Use database in your program.
db.Exec("CREATE TABLE foo (bar INT);")

A type that implements MySQLURLOpener can open a connection based on a URL. The opener must not modify the URL argument. OpenMySQLURL must be safe to call from multiple goroutines. This interface is generally implemented by types in driver packages.

URLMux is a URL opener multiplexer. It matches the scheme of the URLs against a set of registered schemes and calls the opener that matches the URL's scheme. The zero value is a multiplexer with no registered schemes.

DefaultURLMux returns the URLMux used by OpenMySQL. Driver packages can use this to register their MySQLURLOpener on the mux.

OpenMySQL calls OpenMySQLURL with the URL parsed from urlstr. OpenMySQL is safe to call from multiple goroutines.

OpenMySQLURL dispatches the URL to the opener that is registered with the URL's scheme. OpenMySQLURL is safe to call from multiple goroutines.

func (mux *URLMux) RegisterMySQL(scheme string, opener MySQLURLOpener)

RegisterMySQL registers the opener with the given scheme. If an opener already exists for the scheme, RegisterMySQL panics.

type URLOpener struct {
    TraceOpts []ocsql.TraceOption
}

URLOpener opens URLs like "mysql://" by using the underlying MySQL driver. See for details.

OpenMySQLURL opens a new database connection wrapped with OpenCensus instrumentation.

Package mysql imports 9 packages (graph) and is imported by 3 packages. Updated 2020-09-12.
https://godoc.org/gocloud.dev/mysql
Hi, I wrote a simplified version of a program I was having a problem with, to better illustrate my questions:

1. How do I bypass this error?
2. What is the exact cause?

Code:
#include <iostream>
using namespace std;

class Stuff
{
public:
    Stuff();
    Stuff(int);
    Stuff(const Stuff&);
    int accessor();
    void mutator(int);
private:
    int mystery;
};

Stuff::Stuff(): mystery(0) {}

Stuff::Stuff(int b): mystery(b) {}

Stuff::Stuff(const Stuff& b)
{
    mystery = b.accessor();
}

int Stuff::accessor()
{
    return mystery;
}

void Stuff::mutator(int b)
{
    mystery = b;
}

int main()
{
    Stuff fool;
    cout << fool.accessor() << endl;
    return 0;
}

and the error that spits out:

Code:
bash-3.00$ !g
g++ test.c
test.c: In copy constructor `Stuff::Stuff(const Stuff&)':
test.c:20: error: passing `const Stuff' as `this' argument of `int Stuff::accessor()' discards qualifiers
bash-3.00$

Thanks dudes.
http://cboard.cprogramming.com/cplusplus-programming/101187-problem-passing-%60xx'-%60'-argument-%60int-xx-accessor-'-discards-qualifiers.html
I have a NAT box and behind it various servers. From my local machine I want to go to the NAT box, and then from the NAT box ssh to the other machines:

Local --> NAT (abcuser@publicIP with key 1) --> server1 (xyzuser@localIP with key 2)

The NAT box has a different ssh key, and each of the servers has a different ssh key. How can I accomplish this type of multi-hop ssh using Fabric? I tried using the env.roledefs feature but it doesn't seem to be working, and I am not sure how to define two ssh keys. I know we can define a list of keys with env.key_filename, but the issue is: will it check each key against each server? How can I be more specific and match a key with one server only?

I have tried using this command from my local machine:

fab deploy -g 'ec2-user@54.251.151.39' -i '/home/aman/Downloads/aws_oms.pem'

and my script is:

from __future__ import with_statement
from fabric.api import local, run, cd, env, execute

env.hosts = ['ubuntu@10.0.0.77']
env.key_filename = ['/home/ec2-user/varnish_cache.pem']

def deploy():
    run("uname -a")

In order to connect to remote hosts via an intermediate server, you can use the --gateway command-line option. Or, alternatively, set the env.gateway variable inside your fabfile.
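For what it's worth, a fabfile combining the two hops might look roughly like this. This is a configuration sketch only: it assumes Fabric 1.5+ (where env.gateway was introduced), and the host strings and key paths are the ones from the question, reused purely for illustration.

```python
# fabfile.py -- sketch only; host strings and key paths are illustrative
from fabric.api import env, run

# Hop through the NAT box first (Fabric 1.5+).
env.gateway = 'abcuser@54.251.151.39'

# The target(s) behind the NAT.
env.hosts = ['xyzuser@10.0.0.77']

# When keys differ per hop, list them all. Fabric offers each key to
# each host until one is accepted, much like passing several -i flags
# to ssh; there is no built-in per-host key mapping in this mechanism.
env.key_filename = [
    '/home/aman/Downloads/aws_oms.pem',      # key 1: NAT box
    '/home/ec2-user/varnish_cache.pem',      # key 2: server behind NAT
]

def deploy():
    run("uname -a")
```

Equivalently, from the command line: fab --gateway abcuser@54.251.151.39 deploy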
https://codedump.io/share/xalMkMo7wWQh/1/how-to-do-multihop-ssh-with-fabric
Hi Stephen,

On 07.02.2011 19:57, Stephen Dwyer wrote:
> [...] I only added jsbsim.CFLAGS += -g to the makefile (not
> sure what this does).

it tells the compiler to add debug symbols to the executable. This should not make any difference unless you actually run the simulator in a debugger.

> [...]
> (These both compile with the exception that for microjet_example.xml I
> needed to comment out the second last line ppm_arch_init(); of
> /paparazzi/sw/airborne/subsystems/radio_control/ppm.c . Still looking
> to see what causes this.)

in conf/autopilot/subsystems/shared/radio_control_ppm.makefile, ppm_arch.c is only added for non-jsbsim targets. I don't know why this is; at least for me it works to comment out the ifneq ($(ARCH),jsbsim) and endif lines.

> [...]
> JSBSim startup beginning ...
>
> Simulation delta 0.017
> Broadcasting on network 127.255.255.255, port 2010

so at least JSBSim starts up correctly.

> but once I pressed Takeoff then Launch, the Paparazzi Center output gives
> this:
> Invalid_argument("Latlong.of_utm")
> repeatedly. This looks to be an error from the GCS. Also, the display
> on the GCS shows outrageous and fluctuating values for AGL and
> velocity, and the aircraft does not move in the map.

this looks as if you're running into some NaN issues that were only fixed in the recent CVS versions of JSBSim.
You could verify this by replacing check_crash_jsbsim() in sim_ac_jsbsim.c with this version:

bool check_crash_jsbsim(JSBSim::FGFDMExec* FDMExec) {
  double agl = FDMExec->GetPropagate()->GetDistanceAGL(), // in ft
         lat = FDMExec->GetPropagate()->GetLatitude(),    // in rad
         lon = FDMExec->GetPropagate()->GetLongitude();   // in rad

  if (agl < 0) {
    cerr << "Crash detected: agl < 0" << endl << endl;
    return false;
  }
  if (agl > 1e5 || abs(lat) > M_PI_2 || abs(lon) > M_PI) {
    cerr << "Simulation divergence: Lat=" << lat << " rad, lon=" << lon
         << " rad, agl=" << agl << " ft" << endl << endl;
    return false;
  }
  if (isnan(agl) || isnan(lat) || isnan(lon)) {
    cerr << "JSBSim is producing NaNs. Exiting." << endl << endl;
    return false;
  }
  return true;
}

If you indeed have the NaN problem, I'd recommend checking with the devel version of JSBSim. As Sourceforge's CVS is down at the moment, you'll have to use the git repo from

> Interestingly,
> the Battery status indicator reads "12." instead of "12.5".

the JSBSim simulator doesn't do battery modeling. The 12 is hardcoded in sim_ac_fw.c.

Best regards,
Andreas
https://lists.gnu.org/archive/html/paparazzi-devel/2011-02/msg00062.html
Collections were part of the original release of the .NET Framework, when .NET was introduced to the Microsoft programming world. The .NET Framework 2.0 introduced generics to complement the System.Collections namespace and provide a more efficient, better-performing option. Read on to learn more...

Articles Written by Mark Strawmyer

Timing Your C# Code with the Stopwatch Class
This article is a relatively straightforward C# tutorial about how to use a stopwatch in your C# programming in order to help track the execution time of your code. This approach is especially useful when tracking the responsiveness of ASP.NET service calls or integrations to third-party services.

Understanding and Using .NET Attributes
This article focuses on understanding attributes, whether they are default attributes that are already part of the .NET Framework or custom attributes you create for your own purposes as a .NET developer, and how to use them within your code. The examples within will serve as a C# tutorial on attributes.

Conditional Compile Statements
Mark Strawmyer shows you how to use conditional compile statements in your C# and .NET code.

C# Coding Standards and Practices
Explore the often overlooked, yet extremely valuable, art of coding standards and practices, including a definition of common areas, along with links to examples.

.NET Framework: Use Your Own Cache Wrapper to Help Performance
See how you can use the .NET Framework to create your own wrapper classes in C# programming to help boost your application performance when accessing reference or other look-up type data that you frequently use.
http://www.codeguru.com/member.php/Mark+Strawmyer/
On May 5, 2005, at 5:57 AM, Madan US wrote: > > <D:post-commit-status>POST-COMMIT temp message</D:post-commit-status> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > like that!!!! > D: is the DAV: xml namespace, you're not allowed to invent new DAV: elements. :-) But we *can* invent new S:elements. In fact, we just started doing it in svn 1.2; the client now sends its list of lock-tokens in the MERGE request inside a new svn: element. So the only trickiness here is dealing with the protocol compatibility issues. Obviously, we can make a 1.3 client notice the new element if the server is new enough to send it. But if we add a new svn: element to the MERGE response, will older clients get upset and choke on it? I would try testing a 1.0 and 1.1 client against this idea. And Branko is right: hooks are always run synchronously; that's why we so often recommend to users that their post-commit hook run "command &" on unix or "start command" on windows. --------------------------------------------------------------------- To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org For additional commands, e-mail: dev-help@subversion.tigris.org Received on Thu May 5 15:07:44 2005 This is an archived mail posted to the Subversion Dev mailing list.
http://svn.haxx.se/dev/archive-2005-05/0200.shtml
Changes in Apache Libcloud v2.0

Replacement of httplib with requests

Apache Libcloud supports Python 2.6, 2.7, 3.3 and beyond. To achieve this, a package was written within the Libcloud library to create a generic HTTP client for Python 2 and 3. This package had a custom implementation of a certificate store, searching and TLS preference configuration. One of the first errors to greet new users of Libcloud would be "No CA Certificates were found in CA_CERTS_PATH."...

In 2.0 this implementation has been replaced with the requests package, and SSL verification should work against any publicly signed HTTPS endpoint by default, without having to provide a CA cert store. Other changes include:

- Enabling HTTP redirects
- Allowing both global and driver-specific HTTP proxy configuration
- Consolidation of the LibcloudHTTPSConnection and LibcloudHTTPConnection classes into a single class, LibcloudConnection
- Support for streaming responses
- Support for mocking HTTP responses without having to mock the Connection class
- 10% typical performance improvement with the use of persistent TCP connections for each driver instance
- Access to the low-level TCP session is no longer available. Access to .read() on a raw connection will bind around requests' body or iter_content methods.
- Temporary removal of the S3 very-large-file support using the custom multi-part APIs. This will be added back in subsequent release candidates.

Allow redirects is enabled by default

HTTP redirects are allowed by default in 2.0. To disable redirects, set this global variable to False:

import libcloud.http
libcloud.http.ALLOW_REDIRECTS = False

HTTP/HTTPS Proxies

Enabling a HTTP/HTTPS proxy is still supported, and is accessed via the driver's connection property or via the 'http_proxy' environment variable. Applying it to a driver will set the proxy for that driver only; using the environment variable will make a global change.
# option 1: set the environment variable (global change)
import os
os.environ['http_proxy'] = ''

# option 2: set the proxy for a single driver instance
driver.connection.connection.set_http_proxy(proxy_url='')

Adding support for Python 3.6 and deprecation of Python 3.2

In Apache Libcloud 2.0.0, Python 3.6 is now supported as a primary distribution. Python 3.2 support has been dropped in this release, and users should upgrade to 3.3 or a newer version of Python.

SSL CA certificates are now bundled with the package

In Apache Libcloud 2.0.0, the Mozilla Trusted Root Store is bundled with the package, as part of the requests package bundle. This means that users no longer have to set the path to a CA file, either by installing the certifi package, downloading a PEM file, or providing a directory in an environment variable. All connections in Libcloud assume HTTPS by default; now with 2.0.0, if those HTTPS endpoints have a certificate signed by a trusted CA authority, they will work with Libcloud by default.

Providing a custom client-side certificate, for example for a development server or a HTTPS proxy, is still supported by providing a value to libcloud.security.CA_CERTS_PATH. This code example would set a HTTP/HTTPS proxy and use a client-generated certificate to verify:

import os
os.environ['http_proxy'] = ''

import libcloud.security
libcloud.security.VERIFY_SSL_CERT = True
libcloud.security.CA_CERTS_PATH = '/Users/anthonyshaw/charles.pem'

Providing a list of CA trusts is no longer supported

In Apache Libcloud 2.0.0, if you provide a list of more than 1 path or certificate file in libcloud.security.CA_CERTS_PATH, you will receive a warning and only the first path will be used. This path should be to a .cert or .pem file. The environment variable REQUESTS_CA_BUNDLE can be used to access the requests library's list of trusted CAs.

Performance improvements and introduction of sessions

Each instance of libcloud.common.base.Connection will have a LibcloudConnection instance under the connection property.
In 1.5.0 and earlier, there would be 2 connection class instances, LibcloudHttpConnection and LibcloudHttpsConnection, stored as an instance property conn_classes. In 2.0.0 this has been replaced with a single type, libcloud.common.base.LibcloudHTTPConnection, that handles both HTTP and HTTPS connections.

def test():
    import libcloud
    import libcloud.compute.providers
    d = libcloud.get_driver(libcloud.DriverType.COMPUTE,
                            libcloud.DriverType.COMPUTE.DIMENSIONDATA)
    instance = d('anthony', 'mypassword!', 'dd-au')
    instance.list_nodes()   # is paged
    instance.list_images()  # is paged

if __name__ == '__main__':
    import timeit
    print(timeit.timeit("test()", setup="from __main__ import test", number=5))

This simple test shows a 10% performance improvement between Libcloud 1.5.0 and 2.0.0.

Changes to the storage API

Support for Buffered IO Streams

The method upload_object_via_stream now supports file objects, BytesIO, StringIO and generators as the iterator.

with open('my_file_to_upload', 'rb') as iterator:
    obj = driver.upload_object_via_stream(iterator=iterator,
                                          container=containers[0],
                                          object_name='me.jpg',
                                          extra=extra)

Other minor changes

- libcloud.common.base.Connection will now use urljoin to combine the request_path and method URLs. This means that the URL action will always have a leading slash.
- The underlying connection classes do not assume HTTP if a non-standard port is used. They will use the preference set in the secure flag to the initializer of Connection.
- The storage download_object_as_stream method no longer buffers out file streams twice.
http://libcloud.readthedocs.io/en/latest/other/changes_in_2_0.html
Introduction

Even if we don't buy into the whole Agile methodology, we, as developers, are increasingly following Agile strategies in our code. This is most likely to involve continuous refactoring, continuous integration, and test-driven development. Where the database is concerned we will, by contrast, follow the more traditional 'Big Design Up-Front' approach, in which the design is completed before the solution is implemented. If your requirements are extremely stable, immaculately analyzed, and you follow a waterfall process throughout the development life cycle, this may work. But the chances aren't good. The same reasons that make a convincing argument for following Agile processes in your development life cycle also hold true as arguments for following Agile methodology with the database. These reasons, in a nutshell, are that:

- Requirements are rarely well-defined.
- Requirements will almost certainly change.
- Agile processes become increasingly advantageous the more dynamic the requirements.

While up-front design can make development easier, this will only happen if the requirements remain stable. If the requirements change faster than the application can be designed and implemented, the project will be caught running full-pelt in a hamster-exercise-wheel from which exit is difficult. Not to mention that, in the end, you may not actually deliver what the users want. At the very least, they wind up waiting longer than they want. If, on the other hand, you can use Agile techniques to respond to changes in requirements during your development life cycle, the chances of a timely delivery of the project are enhanced, but only if the same flexibility can be achieved in parallel with the development of the supporting database.

Evolutionary Database Design

In a recent article, I wrote about the practice of Database Refactoring.
Scott Ambler adapts Martin Fowler's definition of refactoring to define a database refactoring as "a simple change to a database schema that improves its design while retaining both its behavioral and informational semantics". He takes a broad definition of database schema that includes not just the database structure but also views, stored procedures and triggers. Together with Pramodkumar J. Sadalage, he wrote the book 'Refactoring Databases: Evolutionary Database Design', a part of which was published by Simple-Talk here. Scott maintains the site. They are attempting to bring the refactoring discipline to database design, to try to overcome the current difficulties in making changes to databases as part of an evolutionary development process.

Much of the inflexibility of the development process, even when faced with changing requirements, has been cultural rather than technical. This is especially true of the database. Changing stored procedures and triggers is usually not too controversial, but it is typical to find that any suggestion of making changes to database structures that are already in production will be met with hostile ridicule. There is often a good reason for this, especially if nobody clearly knows all of the applications that will be affected. If you perform a refactoring such as 'Move Column', this will quickly bring to light the affected applications, from the complaints of anguished users. This is a real risk, but it also highlights that refactoring the database may be long overdue. In fact, if not knowing what will be affected by making a change deters you from making a change that is needed, you may have a bigger problem on your hands than you originally thought. As developers, how do we move beyond this?

What is involved in Database Refactoring?

Refactoring a database is more complex than refactoring code. A code-refactoring exercise will generally be self-contained within a single application; a database refactoring will often involve many applications.
Each of these applications will often have a different release schedule. This means that the database must support applications that have been updated to accommodate the change, as well as applications that have not. To support both, we need to devise the means to allow the database to work simultaneously with both types of application. This kind of 'split personality' will certainly be needed in the transitional stage while the affected applications are updated. Sometimes, this may take more than a year.

There are a couple of strategies that we can follow to minimize this pain. Some of these we must adopt from the beginning; some can be added in after the fact. As much as possible, you should limit the number of applications that directly access your database. We will often set up a separate database just for reporting that is "fed" from the transaction system. This keeps the reporting system separate from the transaction system. The Extract Transform Load (ETL) processes that move the data from the transaction database to the reporting database shield the reporting system from changes in the transaction system. Only the ETL processes, and not every report, need to be updated when we change the transaction system.

We can follow similar approaches for other transaction systems that need access to our database. Instead of allowing another application to insert data into your database directly, keep them separate and only communicate through import / export files, or require applications to access your data through published web service interfaces or similar APIs. This limits the scope of a change: applications that are interacting with your database do not need to be changed, only the interfaces that they use. If you don't already have such structures in place, you may need to introduce triggers or views to simulate the original database structures while the affected applications are updated.
In the meantime, we also have to keep track of which affected applications have been updated and which still need to be updated. Once we know that everyone has been updated, we will be able to remove the supporting mechanism that exposes different APIs of the database to different applications. For instance, to perform a 'Move Column' refactoring, you might follow these steps:

- Add the column to the destination table
- Add an update trigger to the original table that will update the destination table
- Test
- Add an update trigger to the destination table that will update the original table
- Test
- Track as each affected system is updated
- Remove the original column and the triggers
- Test

The Refactoring Databases book steps through dozens of similar examples of refactorings, which are well worth studying, along with the methodology that the authors have developed.

This sounds like a lot of work. Why bother?

If the application development can be supple, and the database can't, then there are bound to be problems. The fault lines are easy to spot. At one place where I worked, any idea of changing a database structure that was already in Production was frowned upon as severely as if you were making code changes directly in Production. We could add a new column, but a table that was no longer needed could not be dropped. Forget about dropping an obsolete column, and if a column had a bad name, we were forever doomed to live with the bad name. With a lot of effort, we could convert a one-to-many relationship to a many-to-many relationship, but we were stuck with the original foreign keys even if they were no longer needed. Over time, the data model degraded to sparsely populated tables riddled with columns that were no longer used. Eventually, we even resorted to reusing existing columns. It was easier to press an existing column back into service than it was to push through a new column. You were given the third degree if you even wanted to add a new column.
"Are you sure you need this column?" "You were once sure that you would need this other column!" "Why do you think you will need this new column when you no longer need this other column?" This would be accompanied by drawn-out discussions of where to add a new column. Should this be part of the Borrower record, the BorrowerDetail record, or the BorrowerIncome record? The stakes were high if you got it wrong. In this particular application, there was one BorrowerDetail record for the loan, one Borrower record for each borrower on the loan application, and a BorrowerIncome record for each source of income. If you added it to the wrong table, you could end up with lots of brittle logic to cover up the mistake.

We were plagued with multiple naming conventions as new best practices came into fashion. Consistency was just a pipe dream. Eventually we defined a standard that all table relationships would be defined as many-to-many, just in case it was eventually needed. After all, one-to-one, one-to-many, and many-to-one are all special cases of the more generalized many-to-many relationship. This sounded very reasonable at the time, but it does not take much imagination to see how it can lead to chaos. While you may often really need to define a many-to-many relationship, you don't always. We needed a lot of extra code to enforce the relationships and constraints through code that the database could easily have supported had it been properly defined.

OK, I am exaggerating a little. The situation was not quite this bad, but it was close. A great deal of energy is wasted making development work when the database design is set rigid.

Best Practices

Is there any way that database developers and DBAs can solve the problem of an application with an evolving data schema, and a database that has to accommodate existing applications? There are several ways that we can minimize the pain when we refactor databases.
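Before turning to those practices, the transitional stage of the 'Move Column' refactoring from the previous section can be sketched concretely: two synchronizing triggers keep the old and new locations consistent while both old and updated applications are live. This is a minimal illustration using Python and SQLite; the customer / customer_detail tables and the phone column are hypothetical, and a SQL Server implementation would use T-SQL trigger syntax instead.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()

# Original schema: 'phone' lives on customer, but belongs on customer_detail.
c.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, phone TEXT)")
c.execute("CREATE TABLE customer_detail (customer_id INTEGER PRIMARY KEY, phone TEXT)")

# Trigger on the original table keeps the destination in sync, so
# applications still writing to customer.phone keep working.
c.execute("""
CREATE TRIGGER sync_phone_to_detail AFTER UPDATE OF phone ON customer
BEGIN
    UPDATE customer_detail SET phone = NEW.phone WHERE customer_id = NEW.id;
END""")

# Trigger on the destination table keeps the original in sync, so
# applications already updated to use customer_detail also keep working.
# (SQLite's recursive_triggers pragma is off by default, which stops
# the two triggers from firing each other forever.)
c.execute("""
CREATE TRIGGER sync_phone_to_customer AFTER UPDATE OF phone ON customer_detail
BEGIN
    UPDATE customer SET phone = NEW.phone WHERE id = NEW.customer_id;
END""")

c.execute("INSERT INTO customer VALUES (1, 'Ann', '555-0100')")
c.execute("INSERT INTO customer_detail VALUES (1, '555-0100')")

# An old application updates the original column...
c.execute("UPDATE customer SET phone = '555-0199' WHERE id = 1")
# ...and the new location sees the change.
print(c.execute("SELECT phone FROM customer_detail WHERE customer_id = 1").fetchone()[0])
```

Once every affected application has been tracked and updated, the final step is simply to drop the triggers and the original column.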
Database Encapsulation

This is basically the discipline of creating a database abstraction layer (DAL), a way of reducing the interdependencies between your application and your data access logic. We are all familiar with encapsulation as one of the fundamental tenets of object-oriented programming. From database encapsulation, we get several key advantages:

- All data access logic is centrally located.
- You can focus on application logic without having to worry about data access, and then focus on data access logic without getting bogged down in application logic.
- The better the database structures and application structures are separated, the more easily each can be modified without affecting the other.

There are a couple of guiding principles that our database encapsulation needs to follow to achieve these benefits:

- Application code outside of the DAL should need to know nothing about the actual database being used, or its implementation details. When writing application logic, you shouldn't even care whether it is an Oracle database or a SQL Server database.
- Every object returned from the DAL should be a POCO (Plain Old CLR Object); this means no DataTable, no DataSets, and no DataReaders - nothing from any of the System.Data namespaces.
- Queries should return Lists of the POCOs exposed by the DAL.

Although basing a DAL on stored procedures and views is effective, it is not the only alternative. ORMs are another way of abstracting the database layer. ORM tools such as NHibernate, ADO.NET Entity Framework or SubSonic each make excellent data encapsulation layers. While this shields most of the application logic from the database details and schema, it does not shield the developer from needing to know and understand the database and schema. In fact, the more the developer knows about the database, and the better we understand the schema, the easier our lives will be.
Common Development Guidelines

Guidelines should include:

- Standardize on a set of acceptable data types and when they are to be used:
- The number of characters for common fields like phone numbers, city names, descriptions, etc.
- The numeric precision for common fields like currency, percentages, ratios, etc.
- The precision for date fields
- The acceptable ranges for data
- Standardize on naming conventions:
- Acceptable abbreviations
- Acceptable acronyms
- Whether or not to use underscores
- Standardize on a primary key strategy
- Avoid common database smells:
- Multi-purpose columns: a column serving more than one purpose, such as a "name" column being used to store the first name of a customer or the DBA ("doing business as") name for a company. Extra code is needed to ensure that the column is used the "right way".
- Multi-purpose tables: a table serving more than one purpose. This happens when two or more entities are stored in the same table. The problem is that many field combinations may never be used, extra code is needed to ensure that the table is used properly, and proper database constraints cannot be defined.
- Redundant data is always a problem. Storing the same piece of data in multiple places guarantees that there will be consistency problems.
- Tables with too many columns are a lot like procedures with too many lines of code: they lack cohesion. Often this means that related columns need to be extracted to a new table. This will help normalize the data model.
- "Smart" columns that need to be parsed to get the full details out. For example, storing city, state, and ZIP in a single CSZ field instead of three separate fields. Data validation is complicated, and the parsing logic is duplicated throughout the system. It is almost always better to store this data unparsed, just as it is always better to store the components of a calculation and not just the result.
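The "smart column" smell deserves a concrete illustration. The sketch below (plain Python, invented data) shows the parsing burden a combined city/state/ZIP field pushes onto every consumer, and how easily such parsing fails on values the regex author did not anticipate; with three separate columns, none of this code would exist.

```python
import re

# The parsing logic every consumer of a "CSZ" column inherits:
CSZ_PATTERN = re.compile(r"^(?P<city>.+?),\s*(?P<state>[A-Z]{2})\s+(?P<zip>\d{5})$")


def parse_csz(csz):
    """Split a combined city/state/ZIP string into its components."""
    m = CSZ_PATTERN.match(csz)
    if not m:
        return None  # every caller must also handle this failure case
    return m.group("city"), m.group("state"), m.group("zip")


ok = parse_csz("Springfield, IL 62701")                   # the happy path
missing_comma = parse_csz("Springfield IL 62701")         # silently rejected
zip_plus_four = parse_csz("Springfield, IL 62701-1234")   # also rejected
```

Both failing inputs are perfectly reasonable addresses; with City, State, and Zip stored as separate columns, they could never have become malformed in the first place.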
Test Driven Database Development (TDDD)

Test-driven development for our code is nothing new, but the techniques of testing the DAL from the application layer are not as well known. We will often ignore the database altogether, testing our business logic against mock objects. But that leaves us skipping the foundation that our application is built on.

One approach is to use the unit testing framework that tests your application logic to test the mapping for the ORM. This could take the form of a test for each persistent object that simply retrieves a record and ensures that no errors are thrown. If you are using stored procedures instead of an ORM, your testing approach may center on invoking each stored procedure. You can, at least, test to verify that the parameters are the same. You can also verify that the columns in the result set have not changed.

You may want to set up a test-bed that will insert a record, update the record, retrieve the record verifying the updates, and delete the record when you are through. This will ensure that all of your CRUD operations are in place and valid. You will also need to test the concurrency of these operations under stress, and be certain that they are atomic and consistent under high load. You may also want to include tests to verify that key reference data is configured properly. Alex Kuznetsov has explored these issues at length over several articles.

It is important to test from the perspective of both your code and the database, to ensure not only that the data access logic works but that it works in the context of your application. Obviously these tests will take longer to run than tests that run in memory against mock objects. I will often set these up in a separate test project. You don't need to run them as often as your continuous-integration suite, but they should be a regular part of your development process.
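The insert/update/retrieve/delete test-bed described above can be sketched with the Python standard library alone. This version exercises a throwaway in-memory SQLite table rather than a real DAL or stored procedures, so treat the table and column names as placeholders for your own schema.

```python
import sqlite3
import unittest


class CrudRoundTripTest(unittest.TestCase):
    """Insert a record, update it, retrieve it verifying the update, delete it."""

    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE Widget (WidgetId INTEGER PRIMARY KEY, Name TEXT)")

    def test_crud_round_trip(self):
        cur = self.conn.execute(
            "INSERT INTO Widget (Name) VALUES (?)", ("old",))
        wid = cur.lastrowid

        self.conn.execute(
            "UPDATE Widget SET Name = ? WHERE WidgetId = ?", ("new", wid))

        (name,) = self.conn.execute(
            "SELECT Name FROM Widget WHERE WidgetId = ?", (wid,)).fetchone()
        self.assertEqual(name, "new")        # the update really took

        self.conn.execute("DELETE FROM Widget WHERE WidgetId = ?", (wid,))
        remaining = self.conn.execute(
            "SELECT COUNT(*) FROM Widget").fetchone()[0]
        self.assertEqual(remaining, 0)       # nothing left behind


suite = unittest.defaultTestLoader.loadTestsFromTestCase(CrudRoundTripTest)
result = unittest.TestResult()
suite.run(result)
```

Concurrency and reference-data checks would be additional test cases in the same suite; the round trip above is just the minimal baseline.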
Configuration Management of Database Artifacts

Most shops have adopted strong configuration management for their code artifacts. We wouldn't dream of not having source code under version control. It is usually a foregone conclusion that version control of some sort is available for your code, but version control for your database artifacts is a luxury that far too few teams enjoy. Red Gate provides a tool, SQL Source Control 2.0, that will allow you to integrate with your existing version control system. This also brings the power of SQL Compare and Data Compare into the mix, to quickly identify which revisions a database is missing and fix it. Adding version control for your database artifacts to your development life cycle will greatly improve the quality of your development process, and allow you to do continuous integration.

Train Developers in Basic Data Skills and Train DBAs in Basic Development Skills

On his web site, Scott Ambler talks about what he calls the Cultural Impedance Mismatch. We are all familiar with the technological impedance mismatch that exists between relational databases and object-oriented programs. We often have similar problems interacting with the individuals responsible for developing or maintaining them. This problem stems from a lack of understanding on both sides of the issues confronting the other side. Some signs that you might have this problem include:

- DBAs who believe that developers are the biggest threat to their databases
- Developers who will resort to anything to avoid seeking help from a DBA
- Developers who believe that because they're using a persistence framework they don't need to understand anything about the underlying data technology
- DBAs who complain about the data messes created by application developers but are reluctant to get involved in improving the training

The only way to combat this is training on both sides. We must learn what the other side does, and gain an appreciation for their skills and challenges.
Only by working together from a position of respect can we solve the challenges our industry faces. We get caught up playing blame games among ourselves, but from the outside looking in, we are all blamed for failed projects; no one else draws the distinctions that we do.

Conclusion

Refactoring the database is the next step in developing an evolutionary approach to application development. It is more challenging than refactoring code, but there are also great rewards. Imagine how much more smoothly your project would run if, instead of forcing a database to support an application it was not designed for, you could evolve the database to keep up with the evolution of the application. No matter how much we may wish it were different, requirements change. We need every tool we can get, and refactoring the database is a great one to have at our disposal.
https://www.red-gate.com/simple-talk/opinion/opinion-pieces/a-developers-guide-to-refactoring-databases/
When you invoke basic code completion, AppCode analyses the context and suggests the choices that are reachable from the current caret position. Code completion covers supported and custom file types. However, AppCode does not recognize the structure of custom file types, and suggests completion options regardless of whether a specific type is appropriate in the current context.

To apply smart-type completion to part of a parameter or a variable declaration, choose Code | Completion | SmartType from the main menu, or press the corresponding shortcut. A construct element required in the current context is added, and the caret moves to the next editing position.

Examples

The command is helpful in numerous scenarios, including auto-closing parentheses, adding semicolons, and more. Complete Statement works with the following language constructs:

- Types and type members: class, namespace, enum, and enum class.
- Statements: if/else, while, do, for, switch/case, catch.

Below you can find a number of examples of applying the Complete Statement command in different contexts.

Completing tag names

AppCode automatically completes tag and attribute names and values in the following file types:

- HTML/XHTML
- XML/XSL

Completing tag names

- Press < and start typing the tag name. AppCode displays the list of tag names appropriate in the current context. Use the Up and Down arrow keys to scroll through the list.
- Press the corresponding shortcut to accept a selection from the list. AppCode automatically inserts the mandatory attributes according to the schema.

Inserting a taglib declaration

- Start typing a tag and press the completion shortcut.
- Select a tag from the list. The uri of the taglib it belongs to is displayed in brackets.
- Select the desired taglib and press the shortcut. AppCode adds the declaration of the selected taglib.

Importing a taglib declaration

- Start typing a taglib prefix and press the completion shortcut.
- Select a taglib from the list and press the shortcut.
AppCode will show suggestions that include the characters you've entered in any position. This makes the use of wildcards unnecessary. In the case of CamelCase or snake_case names, type the initial letters only.

You can use the Quick Information View by pressing the corresponding shortcut when you select an entry in the suggestions list.

Sort entries in the suggestions list

You can sort the suggestions list alphabetically or by relevance. To toggle between these modes, click the corresponding icon in the lower-right corner of the list. AppCode will remember your choice. You can change the default behavior on the Code Completion settings page.

View code hierarchy

You can view code hierarchy when you've selected an entry from the suggestions list:

- Press the corresponding shortcut to view the type hierarchy.
- Press the corresponding shortcut to view the call hierarchy.

Using method parameters placeholders

When you choose a method call from the suggestions list, the IDE inserts placeholders for argument values that include the name and type of each parameter.
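As a toy illustration of the matching idea described above (typing only the initial letters of each CamelCase hump or snake_case word), here is a small matcher in Python. It is not JetBrains' actual algorithm, just a sketch of the behaviour:

```python
import re


def initials(identifier):
    """First letter of each CamelCase hump or snake_case word, lowercased."""
    if "_" in identifier:
        words = [w for w in identifier.split("_") if w]
    else:
        # Split "fooBarBaz" into ["foo", "Bar", "Baz"].
        words = re.findall(r"[A-Z]?[a-z0-9]+|[A-Z]+(?![a-z])", identifier)
    return "".join(w[0].lower() for w in words)


def hump_match(typed, identifier):
    """True if the typed initials select this identifier."""
    return initials(identifier).startswith(typed.lower())


m1 = hump_match("fbb", "fooBarBaz")     # matches: f-b-b
m2 = hump_match("fb", "FOO_BAR_BAZ")    # snake_case works the same way
m3 = hump_match("fz", "fooBarBaz")      # 'z' is not a hump initial
```

Real IDE matchers are more permissive (they also match characters inside a hump), but the initial-letter case is the one the documentation highlights.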
https://www.jetbrains.com/help/objc/2017.2/auto-completing-code.html
I am very new to C++, and am struggling with these problems. I must enter in a 10 integer array, and then reverse the array. I do not want to make a function, or use characters, just simple code. This is what I am thinking is along the lines of the answer:

#include <iostream>
using namespace std;

int main (){
    int SourceArray[10]; // this will be the original array
    int DestArray[10];   // this will be the array that the reverse goes into
    int i;
    cout << "Enter a 10 Integer array with Positive Integers" << endl;
    for(int i=0; i<10; i++){
        cin >> SourceArray[i];
    }
    cout << "Your reversed array is: " << endl;
    for(int i=10; i >=0; i--){
        cin >> SourceArray[i];
        SourceArray[i] = DestArray[i];
    }

I know this has many mistakes, but I just cannot figure out arrays; I am having a terrible time figuring them out. But I want it to be as simple as above, with no other libraries etc...

Question 2. I must make a 10 floating point array (similar to above) and show the positive integers, negative integers, max and min, sum, and sum of absolute values. Any way to get me started on this? Oh, and I'm supposed to have them shown in a fixed format with 2 decimals, and the results for average, sum, and other sum are supposed to be in scientific... Any help would be appreciated; I have searched the forums and internet with no simple answer. THANKS A TON!!
https://www.daniweb.com/programming/software-development/threads/112385/a-few-noob-questions
I am building a Django application that exposes a REST API by which users can query my application's models. I'm following the instructions here.

My route looks like this in myApp's urls.py:

from rest_framework import routers

router = routers.DefaultRouter()
router.register(r'myObjects/(?P<id>\d+)/?$', views.MyObjectsViewSet)

url(r'^api/', include(router.urls)),

My model looks like this:

class MyObject(models.Model):
    name = models.TextField()

My serializer looks like this:

class MyObjectSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = MyObject
        fields = ('id', 'name',)

My viewset looks like this:

When I hit /api/myObjects/60/ I get the following error:

base_name argument not specified, and could not automatically determine the name from the viewset, as it does not have a .model or .queryset attribute.

I understand from here that I need a base_name parameter on my route. But from the docs, it is unclear to me what the value of that base_name parameter should be. Can someone please tell me what the route should look like with the base_name?

Try doing this in your urls.py. The third parameter, 'Person', can be anything you want:

router.register(r'person/food', views.PersonViewSet, 'Person')

Let me explain why we need a base_name in the first place, and then let's go into the possible values of base_name.

If you have ever used Django urls without the rest framework (DRF), you would have specified them like this:

urlpatterns = [
    url(r'myObjects/(?P<id>\d+)/?$', views.MyObjectsListView.as_view(), name="myobject-list"),
    url(r'myObjects/(?P<id>\d+)/?$', views.MyObjectsDetailView.as_view(), name="myobject-detail"),
]

Here, there is a name parameter which is used to identify the url in the current namespace (the app). This is exactly what django-rest-framework tries to do automatically, since DRF knows whether the view is a list or detail view (because of the viewset).
It just needs a prefix to differentiate the urls. That's the purpose of base_name: it is that prefix. In most scenarios, you can give the url or resource name as the base_name. In your case, base_name="myobject". DRF will generate base_name + view type as the name parameter, like myobject-list and myobject-detail.

Note: usually, base_name is obtained automatically from the queryset field of the view, since it is the same for all view types in a viewset. But if you specify the get_queryset method instead of queryset, that possibly means you have a different queryset for different view types (like list and detail). So DRF will ask you to specify a common base_name for all the view types of a resource.

Maybe you just need to set the base_name parameter for your router with the name of the object, MyObject in your case:

router.register(r'myObjects/(?P<id>\d+)/?$', views.MyObjectsViewSet, base_name="MyObject")

An alternative solution might be to use a ModelViewSet, which will derive the basename automatically from the model. Just make sure to tell it which model to use. Because ModelViewSet extends GenericAPIView, you'll normally need to provide at least the queryset and serializer_class attributes, or the model attribute shortcut.

Simply mention queryset = MyObjects.objects.all(), like this:

class MyObjectsViewSet(viewsets.ViewSet):
    queryset = MyObjects.objects.all()

in your corresponding viewset in views.py, instead of mentioning it under def retrieve()... It worked for me 🙂

Say you have two functions in your views.py to query a bunch of employees: one to query just the names (employeenameviewset) and another to query addresses as well (employeeinfoviewset). Then in your urls.py, add them like so:

router.register(r'employee_names', views.employeenameviewset, basename="employees")
router.register(r'employee_details', views.employeeinfoviewset, basename="employees")

Using the same basename, the rest of the url will be created automatically by Django, like it does with a URLconf.
In fact, this is based on URLconf.
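To make the naming scheme concrete, here is a tiny stdlib-only simulation of how a router derives URL names from a basename. It mirrors the {basename}-{view} pattern described in the answers above; it is an illustration, not DRF's actual implementation.

```python
def route_names(basename):
    # Mimic the URL names a DRF-style router registers for a viewset:
    # one name per view type, each prefixed with the basename.
    return {view: f"{basename}-{view}" for view in ("list", "detail")}


names = route_names("myobject")
```

These are exactly the names you would pass to reverse() in a real Django project, e.g. reverse("myobject-detail", args=[60]).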
https://techstalking.com/programming/python/what-base_name-parameter-do-i-need-in-my-route-to-make-this-django-api-work/
Hi, I have a small program written for Linux and OSX which I need to compile for Windows. The problem is simple: I have a class defining a member function "min". When I try to compile it under Windows, I get

warning #945: type qualifier ignored
    inline const Label2D& min(const Label2D &other)

and

error: inline specifier allowed on function declarations only
    inline const Label2D& min(const Label2D &other)

and some more, which all point to the fact that "min" is already defined as a macro. The question is: when does this happen, and is there a way to turn it off? I'm not using any "using namespace std;" directives inside the code, and this error never occurred on OSX or Linux.

Compiler switches on Windows are "/c /O2 /Ob2 /EHsc /MD /GS /fp:fast /W1 /TP /Zm1000 /Qparallel /Qtbb /Qopenmp", plus some includes and defines like "WIN32".

Any ideas?

Cheers
Patrick

Hi again, I found it. It seems that defining NOMINMAX solves the problem. I didn't use any windows header and was confused where this came from...

Cheers
Patrick
https://community.intel.com/t5/Intel-C-Compiler/min-macro-with-Microsoft-OS/td-p/836172
Draft.js is a great way to implement rich text editors with React. It can be a little unclear, though, what you should do when you want to display your editor content as plain HTML. In this post we will learn how to do just that, by converting our editor state to HTML that can be displayed without the Draft.js editor.

I just published a class where I teach more about Draft.js. Best part is that you can get it for free! Read more.

Displaying ContentState as HTML

The Draft.js docs state the following: "Note that the Draft library does not currently provide utilities to convert to and from markdown or markup, since different clients may have different requirements for these formats. We instead provide JavaScript objects that can be converted to other formats as needed."

What this means is that Draft.js does not provide utilities for converting the editor content to HTML. Instead we need to use a different library for that. There are a bunch of options to choose from; I like to use draft-js-export-html.

draft-js-export-html provides a stateToHTML method that generates an HTML representation for a given ContentState object. Using it is quite straightforward. Let's look at an example.

Example

In the example below, we have a plain Draft editor, and we display its contents as HTML below the editor. The conversion from ContentState to HTML is done in the onChange handler (if you are not familiar with the getCurrentContent() function, it returns the ContentState object from an EditorState object).
import React from "react";
import { Editor, EditorState } from "draft-js";
import { stateToHTML } from "draft-js-export-html";

class ExampleEditor extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      editorState: EditorState.createEmpty()
    };
    this.onChange = editorState => {
      this.setState({
        editorState,
        editorContentHtml: stateToHTML(editorState.getCurrentContent())
      });
    };
  }

  render() {
    return (
      <div>
        <div className="editor-container" style={{ border: "1px solid #000" }}>
          <Editor
            editorState={this.state.editorState}
            onChange={this.onChange}
          />
        </div>
        <h4>Editor content as HTML</h4>
        <pre>{this.state.editorContentHtml}</pre>
      </div>
    );
  }
}

export default ExampleEditor;

So first we import stateToHTML from draft-js-export-html. Then, in the onChange handler, we generate the HTML version of the ContentState and save it to the component's state. We display the generated HTML in the render method. Since the generating of the HTML is done in the onChange handler, we can see the updated HTML as we make changes to the editor.

Conclusion

We used the stateToHTML function from the draft-js-export-html library to generate HTML out of a ContentState object. This was a clean and easy way to convert the contents of the editor to HTML. I created a codesandbox for the example above so you can test it out by yourself. I also added another example, with an editor that has some rich text editing features, to the codesandbox. You can find the codesandbox here.

HTML works well for displaying purposes, but if you want to store your editor content for a later use, HTML is not the ideal way to do it. For that you should read a post I wrote on How to store Draft.js content. Also don't forget to sign up for the CodePulse newsletter below to stay up to date on the latest posts and other cool stuff we have to offer! And of course, if you have any questions or comments I would be happy to hear them, so go ahead and drop a comment below!
Originally published at codepulse.blog on November 28, 2018.
https://dev.to/tumee/how-to-display-draft-js-content-as-html-2g4g
Introduction

Having built a HomeKit-compatible light [1], I thought it was time to build a sensor too. Rather than build a finished product, I wanted a proof-of-concept to test reliability and convenience. So the biggest design concern was simplicity and speed of construction.

One slightly longer-term project is to automate the lights at home, so that they come on when it gets dark. Some sort of ambient light sensor seems a key part of this. I am also quite interested in monitoring the environment at home, so ideally I'd like temperature and humidity sensors too.

The Enviro pHAT

To keep things simple, I wanted to use off-the-shelf hardware and so acquired an Enviro pHAT [2] board from Pimoroni. This board lacks a humidity sensor, but has a convenient Python library to simplify the software. The board also measures motion and barometric pressure, but I'm ignoring these for now. Happily, Pimoroni provide a nice Python library which talks to the board [3].

Temperature sensing is done by the BMP280 [4] pressure sensor, which isn't ideal. The data sheet says:

Temperature measured by the internal temperature sensor. This temperature value depends on the PCB temperature, sensor element self-heating and ambient temperature and is typically above ambient temperature.

Besides these issues, the sensor is reasonably close to the CPU on the Pi, which gets quite warm. The net effect of all this is that the sensor reads significantly high: as I write this, a thermometer reads about 22°C, the BMP280 about 30°C. Although these problems could be mitigated by calibration, for real work a different sensor placed some distance from the Pi is probably warranted.

Light is sensed by a TCS3472 [5]. This is a full RGB sensor, but I only use the total brightness value. The HomeKit documentation says I should provide a value in lux: I am just using the number returned by the Python library. Subjectively, I want to turn on the lights when the level falls to about 100.
Software

You can grab all the code from GitHub [6]. As with the light, Ivan Kalchev's HAP-python [7] library handles all the HomeKit stuff. Thank you again, Ivan!

The main code for the accessory is shown below:

from pyhap.accessory import Accessory
from pyhap.const import CATEGORY_SENSOR

from envirophat import light, weather


class Ephat(Accessory):

    category = CATEGORY_SENSOR

    def __init__(self, driver, *args, **kwargs):
        super().__init__(driver, *args, **kwargs)

        chars = {
            'LightSensor':       [('CurrentAmbientLightLevel', lambda: light.light())],
            'TemperatureSensor': [('CurrentTemperature',       lambda: weather.temperature())],
            'Switch':            [('On',                       lambda: light.light() < 100)],
        }

        self.chars = []
        for sname, charlist in chars.items():
            cnames = [name for (name, _) in charlist]
            service = self.add_preload_service(sname, chars=cnames)
            for (name, getter) in charlist:
                c = service.configure_char(name)
                self.chars.append((c, getter))

    @Accessory.run_at_interval(3)
    def run(self):
        for (char, getter) in self.chars:
            v = getter()
            char.set_value(v)

The local chars dictionary defines all the sensors. We walk this structure both to initialize the HAP Accessory and to compile a list of characteristics and callbacks in the Accessory's chars property. Note that the names for the services and characteristics, e.g. LightSensor and CurrentTemperature, must match the official Apple standards. You can't just invent your own (which is why I didn't add a characteristic for atmospheric pressure).

You will also see that I've added a virtual Switch characteristic which turns on when the light level falls below a threshold. This makes it easier to automate things in the Home app: just tell the light to come on when the Switch closes. I really should add other virtual switches with slightly different thresholds, so that different lights turn on at slightly different times.

Finally, we set up a periodic task (here every three seconds) to make new measurements and update the Accessory's state.
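As a refinement of the virtual Switch idea (my addition, not part of the original accessory), a little hysteresis stops a light level hovering around the threshold from toggling the switch on every three-second poll. Plain Python, no HAP or sensor dependencies:

```python
class ThresholdSwitch:
    """A virtual switch with hysteresis: turns on below `low`, off above
    `high`, and holds its current state for readings in between."""

    def __init__(self, low=90, high=110):
        self.low = low
        self.high = high
        self.on = False

    def update(self, lux):
        if lux < self.low:
            self.on = True       # definitely dark: lights on
        elif lux > self.high:
            self.on = False      # definitely bright: lights off
        # between low and high: keep the previous state
        return self.on


sw = ThresholdSwitch()
states = [sw.update(lux) for lux in (200, 105, 95, 80, 95, 105, 120)]
```

Dropping a class like this in as the Switch's getter (instead of the bare light.light() < 100 lambda) would also be a natural place to hang the several-thresholds idea mentioned above.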
Walkthrough

The notes below assume you've set up the Pi roughly along these lines [8]. The Enviro pHAT sensors communicate with the Pi over the I²C bus, so you'll need to enable that, e.g. by running raspi-config.

Now install the dependencies:

$ sudo apt-get install libavahi-compat-libdnssd-dev git python3-envirophat
$ pip3 install HAP-python

You will notice that HAP-python is installed without QR-code support. Although QR codes are convenient if you attach the accessory to your Home from the command line, the QR code is unreadable in system logs (and makes a real mess of them).

Once python3-envirophat has been installed, you can use Pimoroni's test code to check that the hardware is working:

$ wget <url of Pimoroni's all.py example>
$ python3 all.py

Finally grab the Accessory code from GitHub, and run it:

$ git clone <repository url>
$ cd mjo-homekit/code
$ python3 ephat.py

You should then be able to add the Accessory in the Home app on your iPhone.

systemd

If you want to start all this automatically, you can use the systemd script in mjo-homekit/systemd/ephat.service.

Discussion

In essence, that's all there is to this. The sensor works reliably with a minimum of fuss. Clearly things could be improved: it would be nice to collect more data, and with better fidelity.

There is one issue: sometimes the sensor gets 'stuck' when powered down and then restarted. You can follow this on GitHub [9].

References

- 1. ../06/homekit-light.html
- 2.
- 3.
- 4.
- 5.
- 6.
- 7.
- 8. ../06/rpi-setup.html
- 9.
https://www.mjoldfield.com/atelier/2018/07/homekit-ephat.html
/* tilde.h: Externally available variables and function in libtilde.a. */

/* Copyright (C) 1992 */

#if !defined (_TILDE_H_)
#  define _TILDE_H_

#ifdef __cplusplus
extern "C" {
#endif

typedef char *tilde_hook_func_t PARAMS((char *));

/* If non-null, this contains the address of a function that the
   application wants called before trying the standard tilde expansions.
   The function is called with the text sans tilde, and returns a
   malloc()'ed string which is the expansion, or a NULL pointer if the
   expansion fails. */
extern tilde_hook_func_t *tilde_expansion_preexpansion_hook;

/* If non-null, this is called when the standard tilde expansion fails. */
extern tilde_hook_func_t *tilde_expansion_failure_hook;

/* When non-null, this is a NULL terminated array of strings which are
   duplicates for a tilde prefix.  Bash uses this to expand `=~' and
   `:~'. */
extern char **tilde_additional_prefixes;

/* When non-null, this is a NULL terminated array of strings which match
   the end of a username, instead of just "/".  Bash sets this to `:'
   and `=~'. */
extern char **tilde_additional_suffixes;

/* Return a new string which is the result of tilde expanding STRING. */
extern char *tilde_expand PARAMS((const char *));

/* Do the work of tilde expansion on FILENAME.  FILENAME starts with a
   tilde.  If there is no expansion, call tilde_expansion_failure_hook. */
extern char *tilde_expand_word PARAMS((const char *));

#ifdef __cplusplus
}
#endif

#endif /* _TILDE_H_ */
http://opensource.apple.com/source/gdb/gdb-1344/src/readline/tilde.h
Today's lab will focus on using the Gradescope system and simple programs in Python. Software tools needed: a web browser and the Python IDLE programming environment.

CSci 127 has a laboratory, 1001E North, dedicated for its use. The room has a flexible set-up to encourage group work, and laptop computers that can be checked out for use in the room only. When you enter the room, hand your Hunter ID to the undergraduate teaching assistant in exchange for a laptop computer. At the end of lab, make sure to return your computer to its docking station so that it can be charged.

The laptops run the Ubuntu Linux operating system. When you open the laptop, choose the "Computer Science Guest" account (the password is: 1001E!88). On the left-hand side is a bar of icons, including icons for a browser (for accessing webpages) and the terminal window (for writing commands and launching programs).

When you launch the browser in the lab, you will see the standard HunterNet webpage. Fill in the form with your Hunter credentials (the same that you would use to access the wifi from your own computer or phone) to access the internet. If you get a message that you are not connected to the internet, click on the internet symbol (empty quarter circle) in the upper-right corner of the toolbar. Hover over "More Networks" and then click on "HunterNet". The wifi symbol will blink with concentric lines. When it stops blinking and becomes solid lines, an internet connection has been established, and you can then reload the webpage to type in your Hunter credentials.

This course will use the on-line Blackboard system for course announcements, lecture previews, and posting grades. Blackboard should be accessible through your CUNYfirst account (see the Hunter ICIT Blackboard page for directions on using the system and how to get help). The lecture previews can be found under the Content menu (left-hand side of the Home screen).
Chrome: There were known bugs using the previous version of Blackboard with the Chrome browser. In particular, the Chrome browser often would freeze during quizzes. These have reportedly been fixed in the new version.

Timing Out: If the system times out and locks your attempt (this happens rarely, when the browser or PC crashes), contact the instructor so they can clear the attempt and you can try again.

We will be using the IDLE programming environment for Python, since it is very simple and comes with all distributions of Python (if you would prefer to use another programming environment, Spyder is loaded on the lab machines).

To launch IDLE, type idle3 (followed by an enter/return) in the terminal window. In the shell window, try:

print("Hello, World!")

Instead of using the shell window (where we can try things immediately), let's use a text window, where we can save our program for later and submit it to Gradescope (this is the basis of the first program):

#Name:  ...your name here...
#Date:  August 27, 2018
#This program prints:  Hello, World!
print("Hello, World!")

This course will use the on-line Gradescope system for submitting work electronically. An email invitation to the course was sent to your email address (we used the one saved for you on CUNYfirst as of Thursday, 24 January). If you did not receive the email, one of the teaching staff can regenerate it for you.

Now that you have just submitted your first program, let's try some other Python commands. Here's a quick demo (click the triangle to run the program).

A quick overview of the above program:

Now, let's write the same program in IDLE:

import turtle
tia = turtle.Turtle()
for i in range(4):
    tia.forward(150)
    tia.right(90)

Edit your program so that it draws an octagon instead, with a header like:

#Name:  ...your name here...
#Date:  August 25, 2017
#This program draws an octagon.

Run your program after editing to make sure you do not have any typos.

To review, we introduced the turtle commands:
In addition to the ones that control movement, you can also change the color of the turtle, change the size of the drawing pen, and change the background color (by making a window or screen object and changing its color).

A complete list of turtle commands is part of the Python 3 documentation.

Since the lab computers are shared, student files are regularly removed from the computer. Any of your work that you would like to save, you should email to yourself, put in your dropbox, or save on a USB drive.

If you finish the lab early, now is a great time to get a head start on the programming problems due next week. There are instructors to help you, and you already have Python up and running. The Programming Problem List has problem descriptions, suggested reading, and due dates next to each problem.
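As a small aside on the turtle exercises above: the square turns 90° at each corner because a square's exterior angle is 360/4, and the same loop draws any regular polygon once you compute that turn. A quick check of the arithmetic in plain Python (no turtle window needed):

```python
def turn_angle(sides):
    """Exterior angle a turtle must turn at each corner of a regular polygon."""
    return 360 / sides


square_turn = turn_angle(4)    # the 90-degree turns used in the square above
octagon_turn = turn_angle(8)   # the turn needed for the octagon exercise
```

Swapping range(4) for range(8) and tia.right(90) for tia.right(45) turns the square program into the octagon one.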
https://stjohn.github.io/teaching/csci127/s19/lab1.html
NAME | SYNOPSIS | DESCRIPTION | RETURN VALUES | ERRORS | ATTRIBUTES | SEE ALSO | NOTES

SYNOPSIS

#include <wchar.h>

long int wcstol(const wchar_t *nptr, wchar_t **endptr, int base);

#include <widec.h>

long int wstol(const wchar_t *nptr, wchar_t **endptr, int base);

DESCRIPTION

The wcstol() and wstol() functions convert the initial portion of the wide-character string pointed to by nptr to long int representation. They first decompose the input wide-character string into three parts: an initial, possibly empty, sequence of white-space wide-character codes (as specified by iswspace(3C)); a subject sequence interpreted as an integer represented in some radix determined by the value of base; and a final wide-character string of one or more unrecognized wide-character codes, including the terminating null wide-character code. They then attempt to convert the subject sequence to an integer and return the result.

The watol() function is equivalent to wstol(str, (wchar_t **)NULL, 10). The watoll() function is the long-long (double long) version of watol(). The watoi() function is equivalent to (int)watol().

RETURN VALUES

Upon successful completion, wcstol() and wstol() return the converted value, if any. If no conversion could be performed, 0 is returned, and errno may be set to indicate the error. If the correct value is outside the range of representable values, {LONG_MAX} or {LONG_MIN} is returned (according to the sign of the value), and errno is set to ERANGE.

ERRORS

The wcstol() and wstol() functions will fail if:

EINVAL: The value of base is not supported.
ERANGE: The value to be returned is not representable.

The wcstol() and wstol() functions may fail if:

EINVAL: No conversion could be performed.

ATTRIBUTES

See attributes(5) for descriptions of the following attributes:

SEE ALSO

iswalpha(3C), iswspace(3C), scanf(3C), wcstod(3C), attributes(5)

NOTES

Because 0, {LONG_MIN}, and {LONG_MAX} are returned on error and are also valid returns on success, an application wishing to check for error situations should set errno to 0, call wcstol() or wstol(), then check errno and, if it is non-zero, assume an error has occurred. Truncation from long long to long can take place upon assignment or by an explicit cast.
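As an illustration of the errno protocol described in the NOTES section, here is a hedged sketch that calls the C library's wcstol() from Python via ctypes. This assumes a Linux/glibc system; the man page itself documents the C-level interface.

```python
import ctypes

libc = ctypes.CDLL(None, use_errno=True)  # symbols of the already-loaded C library
libc.wcstol.restype = ctypes.c_long
libc.wcstol.argtypes = [ctypes.c_wchar_p,
                        ctypes.POINTER(ctypes.c_wchar_p),
                        ctypes.c_int]

buf = ctypes.create_unicode_buffer("  1234xyz")  # keep the buffer alive ourselves
end = ctypes.c_wchar_p()

ctypes.set_errno(0)                  # NOTES: clear errno before the call
value = libc.wcstol(buf, ctypes.byref(end), 10)
err = ctypes.get_errno()             # non-zero here would signal ERANGE/EINVAL

print(value)      # converted initial portion: 1234
print(end.value)  # endptr points at the unrecognized tail: "xyz"
```

Note how leading whitespace is skipped and endptr is left pointing at the first unrecognized wide character, exactly as the DESCRIPTION specifies.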
https://docs.oracle.com/cd/E19683-01/816-5214/6mbcfdlit/index.html
BaseServlet - servlet optimization

The role of a servlet

Generally speaking, a servlet is used to receive the client's request: it receives the request data, then calls the underlying service to process that data and produce a result.

The traditional way of writing servlets

As you can see from the picture, we have only two classes, a user class and a task class, and the functions are very simple: log in and register, and add, modify, and query tasks. Yet there are already five servlets, so every time we add a function in the future we will add another servlet. This is very redundant, and the servlets for users and for tasks are written together, making it difficult to distinguish what is what and increasing the difficulty of later maintenance.

Optimization ideas

MVC three-tier architecture

Can we design an easy-to-manage scheme for unified handling? Consider the service and dao layers in our MVC three-tier architecture: each concrete class is managed by a corresponding dao or service class. Let's try to do the same and design a BaseServlet. Doesn't that feel much clearer? Each function in a module is then abstracted into a method. (The picture shows the methods in UserServlet.) Our front-end request path is then written accordingly.

How a servlet runs

The client sends a request; init initializes the servlet; the container calls the service method, which automatically distinguishes POST from GET. Since we designed each function as a method above, how do we automatically identify which method to call? We thought of reflection. So let's combine the service method with reflection so that it can dispatch to the right method automatically.

Optimization process

- Originally, we inherited HttpServlet directly. Now we add a BaseServlet layer: our original classes inherit BaseServlet, BaseServlet inherits HttpServlet, and BaseServlet overrides the service method.
/* BaseServlet inherits HttpServlet */
public class BaseServlet extends HttpServlet {

    @Override
    protected void service(HttpServletRequest req, HttpServletResponse resp) {
        // 1. Get the request path
        String requestUrl = req.getRequestURI();
        // 2. Get the method name at the end of the request path (see the URL picture above)
        int index = requestUrl.lastIndexOf('/');
        String methodName = requestUrl.substring(index + 1);
        // 3. Get the bytecode (Class) object of the calling servlet
        Class<? extends BaseServlet> clazz = this.getClass();
        try {
            // 4. Look up the method in the class by name
            Method method = clazz.getMethod(methodName, HttpServletRequest.class, HttpServletResponse.class);
            // 5. Invoke the method
            method.invoke(this, req, resp);
        } catch (NoSuchMethodException | IllegalAccessException | InvocationTargetException e) {
            e.printStackTrace();
        }
    }
}

- The original classes inherit BaseServlet:

/**
 * @program: BaseServlet
 * @description:
 * @author: stop.yc
 * @create: 2022-04-13 15:42
 **/
// Under /user, * matches any method name
@WebServlet("/user/*")
public class UserServlet extends BaseServlet {

    /**
     * @Description: login servlet
     * @Param: [req, resp]
     * @return: void
     * @Author: stop.yc
     * @Date: 2022/4/13
     */
    public void login(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // 1. Get the front-end data from the (POST) request body
        BufferedReader reader = req.getReader();
        String userStr = reader.readLine();
        // 2. ......
    }
}

- Front-end request:

// It's just an example of a URL; adapt the other code to your own.
axios({
    method: "post",
    url: "
    data: _this.user,
}).then(function (resp) {
    // . . .
})
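The reflective dispatch in BaseServlet is not Java-specific. The same pattern in Python uses getattr to resolve the last URL segment to a method at runtime; the class and method names below are hypothetical, for illustration only:

```python
class BaseHandler:
    """Dispatches a request to a method named after the last URL segment."""

    def service(self, request_uri, req, resp):
        # 1. take the method name from the end of the request path
        method_name = request_uri.rsplit("/", 1)[-1]
        # 2. reflection: look the method up by name on the concrete subclass
        method = getattr(self, method_name)
        # 3. invoke it
        return method(req, resp)


class UserHandler(BaseHandler):
    def login(self, req, resp):
        return "logging in " + req["user"]


handler = UserHandler()
print(handler.service("/user/login", {"user": "alice"}, None))  # logging in alice
```

Just as in the Java version, adding a new action only requires adding a method; the dispatch code never changes.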
https://algorithm.zone/blogs/baseservlet-servlet-optimization.html
Link to original article with code snippets (recommended):

As far as I know, chrome.storage saves its keys globally; it is not like localStorage on normal pages, which only works for the current page. To work around this I had to figure out a way of namespacing keys, which I did using template literals. This is a fairly improvised process, so there might be inaccuracies; feel free to let me know :)

Creating the extension

Creating the extension from scratch is pretty straightforward: we just have to add a manifest.json file. We create a popup.html file and use it as if it were a normal HTML page; we can import scripts, add stylesheets, etc. The popup.js file is where we will have the logic for namespacing chrome.storage keys.

Firstly, I have to point out that chrome.storage is an async API, so we will have to use async/await in our main function. Then we use the tabs API, which we enabled earlier in the manifest, to get the URL of the current page, and we wait for the Promise to resolve. Then we use the storage API with a template literal to get the settings only for the current URL. As we have no real way of getting the key from the results, we just take the first element of Object.values(), which returns an array of all the values in the results; in this case that is the settings object we want. Then we substitute the default settings object with the one we got from storage.

To conclude, it works. To set up a new element we have to attach the addEventListener inside the async function, as we'll need the URL for setting up the namespace.

Wrap up

I hope you will find this blog post useful and keep it handy for quick reference. This solution is a little clumsy, but I didn't find any better way and wanted to share it with you. Feel free to send me a DM or mention me on Twitter if you've got any suggestion or fix. You can look at the whole code in this repository.
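The namespacing trick itself is language-agnostic: build the storage key from the page URL so each page gets its own settings entry. A minimal Python sketch of the same key scheme (the real extension does this with a JavaScript template literal over chrome.storage):

```python
storage = {}  # stand-in for the extension's global key/value store


def settings_key(url):
    # one namespaced settings entry per page URL
    return f"{url}::settings"


def save_settings(url, settings):
    storage[settings_key(url)] = settings


def load_settings(url, default):
    # fall back to the defaults the first time a page is visited
    return storage.get(settings_key(url), default)


save_settings("https://example.com/a", {"theme": "dark"})
print(load_settings("https://example.com/a", {"theme": "light"}))  # {'theme': 'dark'}
print(load_settings("https://example.com/b", {"theme": "light"}))  # {'theme': 'light'}
```

The point is that a global store behaves like a per-page store as long as every read and write goes through the same key-building function.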
https://dev.to/datsgabs/namespacing-chrome-storage-for-page-dependant-settings-for-your-chrome-extension-4dm9
I am trying to create an ordered dictionary from a split string. How do I maintain the order of the split string? Sorry, my original example was confusing and contradicted the idea of an ordered dictionary. This is a different problem, but I am not sure how to split the string as such. My sample file "practice_split.txt" is as follows:

§1 text for chapter 1 §2 text for chapter 2 §3 text for chapter 3

I want this output:

OrderedDict([('§1', 'text for chapter 1'), ('§2', 'text for chapter 2'), ('§3', 'text for chapter 3')])

but my code currently produces:

OrderedDict([('1 text for chapter 1 ', '\xc2\xa7'), ('\xc2\xa7', '3 text for chapter 3'), ('2 text for chapter 2 ', '\xc2\xa7')])

# -*- coding: utf-8 -*-
import codecs
import collections
import re

with codecs.open('practice_split.txt', mode='r', encoding='utf-8') as document:
    o_dict = collections.OrderedDict()
    for line in document:
        conv = line.encode('utf-8')
        a = re.split('(§)', conv)
        a = a[1:len(a)]
        for i in range(1, len(a) - 1):
            o_dict[a[i]] = a[i+1]
print o_dict

From my understanding of your code, your loop is incorrect. You want the first § with the first text entry. You also want to skip the § elements as keys to your dictionary, so you need a step of 2 in the loop. Finally, you may want to strip spaces off the start/end of the text.

for i in range(1, len(a), 2):
    o_dict["{}{}".format(a[i - 1], i / 2 + 1)] = a[i].strip()

print o_dict
for k, v in o_dict.iteritems():
    print k.decode('utf-8'), v

Output:

OrderedDict([('\xc2\xa71', 'text for chapter 1'), ('\xc2\xa72', 'text for chapter 2'), ('\xc2\xa73', 'text for chapter 3')])
§1 text for chapter 1
§2 text for chapter 2
§3 text for chapter 3

Edit: I changed my code to reflect the edits to OP's question.
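An alternative sketch in Python 3, where the encode/decode dance is unnecessary, captures each chapter marker and its text directly with one regular expression. The pairing logic differs from the accepted answer but produces the output the question asks for:

```python
import collections
import re

text = "§1 text for chapter 1 §2 text for chapter 2 §3 text for chapter 3"

o_dict = collections.OrderedDict()
# each match pairs a chapter number with the text up to the next § (or the end)
for num, body in re.findall(r"§(\d+)\s+(.*?)(?=§|$)", text):
    o_dict["§" + num] = body.strip()

print(o_dict)
# OrderedDict([('§1', 'text for chapter 1'), ('§2', 'text for chapter 2'),
#              ('§3', 'text for chapter 3')])
```

Because re.findall scans left to right and OrderedDict preserves insertion order, the chapters stay in file order automatically. (Since Python 3.7, a plain dict would preserve the order as well.)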
https://codedump.io/share/5BalwnHKrPRr/1/how-do-i-maintain-the-order-when-splitting-a-string
Hi all,

I have an issue when I try to run my code to check the speedup Intel Python makes.

My system:

Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 30
On-line CPU(s) list: 0-29
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 30
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 42
Model name: Intel Xeon E312xx (Sandy Bridge, IBRS update)
Stepping: 1
CPU MHz: 2194.710
BogoMIPS: 4389.42
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
L3 cache: 16384K
NUMA node0 CPU(s): 0-29

My code:

from sklearn.svm import SVC
from sklearn.datasets import load_digits
from time import time

svm_sklearn = SVC(kernel="rbf", gamma="scale", C=0.5, probability=True)

digits = load_digits()
X, y = digits.data, digits.target

start = time()
svm_sklearn = svm_sklearn.fit(X, y)
end = time()
print(end - start)  # output: 0.141261...

t = time()
print(svm_sklearn.score(X, y))  # output: 0.9905397885364496
print(svm_sklearn.score(X, y))  # output: 0.9905397885364496
print(svm_sklearn.score(X, y))  # output: 0.9905397885364496
print(time() - t, '(s)')

from daal4py.sklearn import patch_sklearn
patch_sklearn()  # <-- apply patch
from sklearn.svm import SVC

svm_d4p = SVC(kernel="rbf", gamma="scale", C=0.5, probability=True)

start = time()
svm_d4p = svm_d4p.fit(X, y)
end = time()
print(end - start)  # output: 0.032536...

t = time()
print(svm_d4p.score(X, y))  # output: 0.9905397885364496
print(svm_d4p.score(X, y))  # output: 0.9905397885364496
print(svm_d4p.score(X, y))  # output: 0.9905397885364496
print(time() - t, '(s)')

and its result:

1.0682997703552246
0.9905397885364496
0.9905397885364496
0.9905397885364496
0.6014969348907471 (s)
Intel(R) Data Analytics Acceleration Library (Intel(R) DAAL) solvers for sklearn enabled:
0.9806723594665527
0.9905397885364496
0.9905397885364496
0.9905397885364496
0.6032438278198242 (s)

I can't understand this.
Speeds are similar, but I have 30 cores.

conda list:

# Name Version Build Channel
absl-py 0.10.0 py37hc8dfbb8_1 <unknown>
aiohttp 3.6.3 py38h1e0a361_2 <unknown>
asn1crypto 1.3.0 py37_0 intel
astunparse 1.6.3 py_0 <unknown>
async-timeout 3.0.1 py37_0 <unknown>
attrs 20.2.0 py_0 <unknown>
blinker 1.4 py37_0 <unknown>
boto 2.49.0 py_0 <unknown>
boto3 1.16.9 pyhd8ed1ab_0 <unknown>
botocore 1.19.9 pyhd3deb0d_0 <unknown>
bzip2 1.0.8 h14c3975_2 intel
c-ares 1.16.1 h516909a_3 <unknown>
cachetools 4.1.1 py_0 <unknown>
certifi 2020.4.5.2 py37_0 intel
cffi 1.14.0 py37h14c3975_2 intel
chardet 3.0.4 py37_3 intel
click 7.1.2 py_0 <unknown>
conda 4.8.3 py37_0 intel
conda-package-handling 1.4.1 py37_2 intel
cryptography 2.9.2 py37_0 intel
cycler 0.10.0 py37_7 intel
cython 0.29.17 py37h6ebd63d_0 intel
daal 2020.2 intel_254 <unknown>
daal4py 0.2020.2 py37h533f8aa_7 intel
docutils 0.16 pypi_0 pypi
et-xmlfile 1.0.1 py_0 <unknown>
fastapi 0.60.1 pypi_0 pypi
fastbpe 0.1.0 pypi_0 pypi
flit 2.3.0 pypi_0 pypi
flit-core 2.3.0 pypi_0 pypi
freetype 2.10.2 0 intel
funcsigs 1.0.2 py37_7 intel
gast 0.3.3 py_0 <unknown>
gensim 3.7.3 py37he1b5a44_1 <unknown>
google-auth 1.22.0 py_0 <unknown>
google-auth-oauthlib 0.4.1 py_2 <unknown>
google-pasta 0.2.0 pyh8c360ce_0 <unknown>
grpcio 1.31.0 py37hb0870dc_0 <unknown>
h11 0.9.0 pypi_0 pypi
h5py 2.10.0 nompi_py37hf7afa78_105 <unknown>
hdf5 1.10.6 hb1b8bf9_0 <unknown>
httptools 0.1.1 pypi_0 pypi
icc_rt 2020.2 intel_254 <unknown>
idna 2.9 py37_0 intel
impi_rt 2019.8 intel_254 <unknown>
importlib-metadata 2.0.0 py37hc8dfbb8_0 <unknown>
intel-openmp 2020.2 intel_254 <unknown>
intelpython 2020.2 0 intel
ipp 2020.2 intel_254 <unknown>
jdcal 1.4.1 py_0 <unknown>
jmespath 0.10.0 py_0 <unknown>
joblib 0.15.1 py37_0 intel
keras 2.4.3 py_0 <unknown>
keras-base 2.4.3 py_0 <unknown>
keras-preprocessing 1.1.0 py_0 <unknown>
kiwisolver 1.2.0 py37hf484d3e_0 intel
libarchive 3.4.2 h62408e4_0 <unknown>
libffi 3.3 11 intel
libgcc-ng 9.1.0 hdf63c60_0 intel
libgfortran-ng 7.3.0 hdf63c60_0 <unknown>
libpng 1.6.37 4 intel
libprotobuf 3.13.0.1 h200bbdf_0 <unknown>
libstdcxx-ng 9.1.0 hdf63c60_0 intel
libxml2 2.9.10 h14c3975_0 intel
llvmlite 0.32.1 py37h75308e0_0 intel
lz4-c 1.9.2 hf484d3e_1 intel
lzo 2.10 h14c3975_4 intel
markdown 3.3.2 py37_0 <unknown>
matplotlib 3.1.2 py37hf484d3e_5 intel
mkl 2020.2 intel_254 <unknown>
mkl-service 2.3.0 py37_4 intel
mkl_fft 1.1.0 py37h6ebd63d_3 intel
mkl_random 1.1.1 py37h6ebd63d_3 intel
mpi4py 3.0.3 py37hf484d3e_7 intel
multidict 4.7.6 py37h7b6447c_1 <unknown>
mysql-connector-python 8.0.22 pypi_0 pypi
nltk 3.5 pypi_0 pypi
numba 0.49.1 np118py37hf484d3e_2 intel
numexpr 2.7.0 py37_2 intel
numpy 1.18.5 py37h6ebd63d_5 intel
numpy-base 1.18.5 py37_5 intel
oauthlib 3.1.0 py_0 <unknown>
openpyxl 3.0.5 pypi_0 pypi
openssl 1.1.1g h14c3975_1 intel
opt_einsum 3.1.0 py_0 <unknown>
pandas 0.25.3 py37hf484d3e_6 intel
pip 20.1 py37_0 intel
protobuf 3.13.0.1 py37hb809cae_1 <unknown>
pyasn1 0.4.8 py_0 <unknown>
pyasn1-modules 0.2.8 py_0 <unknown>
pycosat 0.6.3 py37_5 intel
pycparser 2.20 py37_1 intel
pydantic 1.6.1 pypi_0 pypi
pyeditline 2.0.1 py37_0 intel
pyjwt 1.7.1 py_0 <unknown>
pymysql 0.10.1 pypi_0 pypi
pymysql-pool 0.3.4 pypi_0 pypi
pyopenssl 19.1.0 py37_1 intel
pyparsing 2.4.7 py37_1 intel
pysocks 1.7.0 py37_1 intel
python 3.7.7 hf484d3e_13 intel
python-crfsuite 0.9.7 py37h99015e2_1 <unknown>
python-dateutil 2.8.1 py37_1 intel
python-libarchive-c 2.8 py37_13 intel
pytoml 0.1.21 pypi_0 pypi
pytz 2020.1 py37_0 intel
pyvi 0.1 pypi_0 pypi
pyyaml 5.3 py37_0 intel
regex 2019.8.19 pypi_0 pypi
requests 2.23.0 py37_4 intel
requests-oauthlib 1.3.0 pyh9f0ad1d_0 <unknown>
rsa 4.6 pyh9f0ad1d_0 <unknown>
ruamel_yaml 0.15.99 py37_5 intel
scikit-learn 0.23.1 py37h6ebd63d_0 intel
scipy 1.4.1 py37h6ebd63d_7 intel
setuptools 47.3.0 py37_0 intel
six 1.15.0 py37_0 intel
sklearn-crfsuite 0.3.6 pyh9f0ad1d_0 <unknown>
smart_open 2.1.0 py_0 <unknown>
smp 0.1.4 py37_0 intel
sqlite 3.32.1 h14c3975_1 intel
starlette 0.13.6 pypi_0 pypi
tabulate 0.8.7 pyh9f0ad1d_0 <unknown>
tbb 2020.3 intel_254 <unknown>
tbb4py 2020.3 py37_intel_0 <unknown>
tcl 8.6.9 h14c3975_2 intel
tensorboard 2.2.1 pyh532a8cf_0 <unknown>
tensorboard-plugin-wit 1.6.0 py_0 <unknown>
tensorflow 2.2.0 py37_0 <unknown>
tensorflow-base 2.2.0 0 <unknown>
tensorflow-estimator 2.2.0 pyh208ff02_0 <unknown>
tensorflow-mkl 2.2.0 h4fcabd2_0 <unknown>
termcolor 1.1.0 py37_1 <unknown>
threadpoolctl 2.1.0 pyh5ca1d4c_0 <unknown>
tk 8.6.9 6 intel
tqdm 4.39.0 py37_2 intel
unidecode 1.1.1 pypi_0 pypi
urllib3 1.25.9 py37_0 intel
uvicorn 0.11.6 pypi_0 pypi
uvloop 0.14.0 pypi_0 pypi
websockets 8.1 pypi_0 pypi
werkzeug 1.0.1 pyh9f0ad1d_0 <unknown>
wheel 0.34.2 py37_4 intel
wrapt 1.11.2 py37h8f50634_1 <unknown>
xgboost 1.1.1 497_gcd3d14apy37_0 intel
xlrd 1.2.0 pypi_0 pypi
xz 5.2.5 h14c3975_0 intel
yaml 0.1.7 6 intel
yarl 1.6.2 py37h8f50634_0 <unknown>
zipp 3.3.1 py_0 <unknown>
zlib 1.2.11.1 h14c3975_1 intel
zstd 1.4.4 hf484d3e_1 intel

I think this is useful. Can you give me advice, please? Thanks for reading.

Hi,

Thanks for reaching out to us. Could you please try the same after exporting the command below and check whether there is any improvement in time:

export USE_DAAL4PY_SKLEARN=1

We can see that you are using old Intel hardware. We tried the same on the latest Intel architecture (Cascade Lake). After exporting the above command, we observed an improvement in time. We got the following result:

Intel(R) Data Analytics Acceleration Library (Intel(R) DAAL) solvers for sklearn enabled:
0.41217684745788574
0.9905397885364496
0.9905397885364496
0.9905397885364496
0.42245054244995117 (s)
Intel(R) Data Analytics Acceleration Library (Intel(R) DAAL) solvers for sklearn enabled:
0.39907169342041016
0.9910962715637173
0.9910962715637173
0.9910962715637173
0.3754143714904785 (s)

Also, you can try with a bigger dataset to check whether there is a significant improvement in time.

Thanks.

Hi,

Thanks for your reply.
I did as below:

export USE_DAAL4PY_SKLEARN=1
source activate root
python3.7 test.py

and the output is:

Intel(R) Data Analytics Acceleration Library (Intel(R) DAAL) solvers for sklearn enabled:
1.0459799766540527
0.9905397885364496
0.9905397885364496
0.9905397885364496
0.631990909576416 (s)
Intel(R) Data Analytics Acceleration Library (Intel(R) DAAL) solvers for sklearn enabled:
1.02339506149292
0.9905397885364496
0.9905397885364496
0.9905397885364496
0.6148135662078857 (s)

I think there is no change in time. When I don't use the command "export USE_DAAL4PY_SKLEARN=1", my SVM training time is the same, or sometimes better. Shouldn't the program do better than that? My code is from a sample on your GitHub that runs very well. I look forward to more.

Thanks!

Hi,

No improvement in time may be because you are using old Intel architecture. We will check with the concerned team regarding this issue and get back to you soon.

Thanks

Hi,

I am an engineer looking into your case. Please also let me know which OS you have if the solutions I outline below do not work. There are a few options you can try:

1) Ensure that the environment variable is turned on only when you run accelerated scikit-learn. If you turn on the environment variable before running the whole script, it will turn on Intel accelerations for both versions, regardless of whether you've included the monkey-patch programmatically or not (you can see it's turned on for both sklearn versions in the script, as the printout says "solvers enabled" twice). If you separate the non-accelerated scikit-learn and the accelerated scikit-learn into two scripts, then turn on the environment variable or monkey-patch only for the Intel-accelerated one, you should be able to see a difference. Also note that the toy scikit-learn dataset used in this sample is fairly small, which can also explain why you may not see a difference (it may not be large enough to show a difference on your hardware).
You can also separate the non-accelerated and accelerated scikit-learn into different environments and then try running them (put the non-accelerated scikit-learn in a separate environment without IDP).

2) I notice that you are using IDP/daal4py 2020 Update 2. Please consider updating IDP to version 2020 Update 4, or daal4py/Intel scikit-learn to version 2020 Update 3 directly. There were updates made to the scikit-learn accelerations in these releases, which may also allow you to gain more performance.

3) You can also check whether Intel-accelerated scikit-learn is being utilized with verbose mode (it will show you a series of print statements indicating which implementation is being called). I suggest trying this while the non-accelerated scikit-learn is commented out. In your command-line shell, use the following command:

# for Linux or macOS
export IDP_SKLEARN_VERBOSE=INFO

# for Windows
set IDP_SKLEARN_VERBOSE=INFO

Please let me know if this helps; also refer to the daal4py scikit-learn documentation for more information.

Best,
Rachel

Hi,

I will try your solutions soon. I have one more question. I installed the Intel Distribution for Python hoping to speed up my machine learning code (sklearn) and deep learning code (with TensorFlow), but my Intel hardware is old and not working as well as I wished. So I want to ask whether my deep learning code can still run better when my system does not have a GPU.

Thanks

Hi,

Please let me know if my solutions helped resolve your scikit-learn issues, or if you have any other problems with scikit-learn performance related to this issue.

I suggest posting your deep learning performance inquiries separately on the Intel Optimized AI Frameworks Forum. That forum has better-equipped experts to assist you with optimizing your deep learning code. Please note that Intel Distribution for Python includes Intel optimizations for NumPy, SciPy, scikit-learn, and daal4py.
Intel Optimized TensorFlow is considered a separate product that is not included in or optimized by Intel Distribution for Python.

Best,
Rachel

Hi,

After I updated the software, I think it worked. Training time in my example is only 0.58 s and testing time is 0.6 s. That is a significant improvement for my task. Thanks.

Hi,

I am glad to hear that you can now see the performance speedup in your code. Please also look at the daal4py and accelerated scikit-learn API documentation for reference. Is your classical ML issue resolved?

Best,
Rachel

Hi, my issue was solved by updating the Intel Distribution for Python version. Thank you very much for your support.

Hi,

No problem; I am glad your issue was resolved. Thanks for the confirmation. This thread will no longer be supported by Intel; please raise a new thread if you have any further issues.

Best,
Rachel
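A side note on the benchmarking used in this thread: a single time.time() measurement is noisy for short runs; time.perf_counter() with a best-of-N loop (which is what timeit does internally) gives steadier comparisons. A generic sketch, independent of sklearn/daal4py:

```python
import time


def best_of(runs, func, *args, **kwargs):
    # Report the fastest of several runs; the minimum filters out
    # scheduler and cache noise better than a single measurement.
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        func(*args, **kwargs)
        best = min(best, time.perf_counter() - start)
    return best


baseline = best_of(5, sum, range(100_000))
print(f"{baseline:.6f} s")
```

Timing each fit this way, in separate processes for the patched and unpatched versions, makes the before/after comparison in the thread much easier to trust.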
https://community.intel.com/t5/Intel-Distribution-for-Python/Optimization-Sklearn-svm-SVC-not-working/td-p/1225633
Hi Everybody,

In relation to Jabber support for DotGNU, it has been found that we really need some *real* support for Jabber in Portable.NET. So we're thinking about implementing a Jabber# library for it. After giving much thought to how to implement Jabber, I have a plan about how to go about it.

System.Net.WebRequest should be extended to form a DotGNU.Net.JabberWebRequest, like System.Net.HttpWebRequest. The DotGNU namespace should contain all the "innovations" we make.

It is also possible that someone from Jabber.Net already has such an implementation of the Jabber protocol, so I've mailed the alpha geek Joe HildeBrand <address@hidden> asking about the details of their work. If that project provides a Jabber.Client.dll (or some other stuff), we may be able to abstract out the details in that. Until System.Xml is finished (or usable), we will have to write code with our eyes closed, hoping that Jabber.Net will also include the client code stuff.

Priorities for Network Development (High : two weeks)
----------------------------------
* Socket.Select() internal call; read pnet/engine/lib_socket.c (Hacker level 3)
* pnetlib/System/Net/NetworkStream.cs; use the Platform.SocketMethods class and implement the Stream functions using the recv and send in it (Hacker level 2)
* pnetlib/System/Net/Dns.cs & internal calls; really don't know the requirements or procedures (rhys: do you have uncommitted code here?)
* Array declare woes in the compiler :( ; a quick fix in the form of IL helper classes, i.e. a helpers assembly which has ArrayHelper.NewByteArray(len); see for example

Jabber Specific (medium : next month)
---------------
* pnetlib/System/Uri.cs; implement the Jabber addressing/URI model here
* finish XML stuff
* implement SHA1 for Jabber auth (or was it MD5?)
* do the base64 stuff to make it send binary
* needs XPath?
Rpc Stuff (low : in the next 4 months)
---------
(after all, that's what Jabber is used for)
* A [WebService] attribute to mark a static function as RPC-visible (Hacker level 1)
* A reflection-based RPC server (Hacker level 2)
* Implement MethodInfo.GetParameters()

Check out my Pnet demos at for an example of what Pnet can do (and some tricks up my sleeve). This shows how we can work around most of cscc's limitations....

Gopal
--
The difference between insanity and genius is measured by success
https://lists.gnu.org/archive/html/dotgnu-general/2002-04/msg00343.html
In this session we’ll review what we’ve done, reorganize the code, and then join together the separate parts into one unified program.

Review

Quick recap of palindrome things we’ve done so far:

- First we wrote isPalindrome, the basic palindrome test: word == reverse word tells us whether a word is spelled the same forward and backward.
- We wrote a main action to turn our function into an interactive program that somebody could execute and use.

We also made some improvements to the basic palindrome test.

- The nonemptyPal function rejects empty inputs, because if you haven’t typed anything at all, then that’s not really a palindrome.
- The verbose function returns a nice output message instead of just printing “True” or “False”.
- The isPalindromeIgnoringCase function ignores capitalization, so that we can capitalize the first letter in a word and still have it count as a palindrome.
- The isPalindromePhrase function ignores spaces and punctuation, so that we can write “taco cat” as two words.

Some of these improvements we made separately, though; we never got around to writing one big program that has all of these features. So, what we’re going to do right now is bring them all together into one bigger program that does everything.

Before we get to that, though, we really need to spend some time on cleanup and organization. This is getting to be way too much code, and a bit of tidiness will go a long way toward helping us keep track of what we have, what’s where, and what we still need to do. This code could really benefit from some refactoring.

Here is what our main.hs file currently looks like:

import Data.Char

isPalindrome :: String -> Bool
isPalindrome word = word == reverse word

nonemptyPal :: String -> Maybe Bool
nonemptyPal word = case word of
    [] -> Nothing
    _ -> Just (isPalindrome word)
myHead :: [a] -> a
myHead (first:rest) = first

myTail :: [a] -> [a]
myTail xs = case xs of
    [] -> []
    (first : rest) -> rest

allLowerCase :: String -> String
allLowerCase word = myMap toLower word

allLower :: String -> String
allLower = myMap toLower

myMap :: (a -> b) -> [a] -> [b]
myMap func list = case list of
    [] -> []
    first : remainder -> func first : myMap func remainder

isPalindromeIgnoringCase :: String -> Bool
isPalindromeIgnoringCase word = isPalindrome (allLowerCase word)

isPalindromePhrase :: String -> Bool
isPalindromePhrase phrase = isPalindrome (myFilter notSpace phrase)

withoutSpaces :: String -> String
withoutSpaces phrase = case phrase of
    [] -> []
    ' ' : remainder -> withoutSpaces remainder
    first : remainder -> first : withoutSpaces remainder

myFilter :: (Char -> Bool) -> String -> String
myFilter predicate string = case string of
    [] -> []
    first : remainder ->
        if (predicate first)
            then (first : myFilter predicate remainder)
            else (myFilter predicate remainder)

notSpace :: Char -> Bool
notSpace x = not (x == ' ')

notPunctuation :: Char -> Bool
notPunctuation x = not (isPunctuation x)

Junk removal

The first thing we can do is remove some things. We wrote some functions for practice that can be replaced by Prelude functions:

- myMap can be replaced with map.
- myFilter can be replaced with filter.

Start by finding every place where myMap appears in the code (other than the definition of myMap), and change each occurrence to map. Do not delete the myMap function yet.

Before:

allLowerCase word = myMap toLower word

After:

allLowerCase word = map toLower word

Save main.hs and :reload it in GHCi. When you refactor, try to work in small steps, and reload after each step to make sure that all of the code still compiles. When reloading succeeds, this confirms that we were correct in our understanding that map is a suitable replacement for our myMap function. Now that we are no longer using myMap, remove its definition.
Reload after this step as well, to confirm that all of the usages of myMap have been eliminated. If the reload fails, you may have mistakenly missed one of the replacements in the previous step. If it succeeds, then you have gotten them all.

Do the same thing with myFilter.

The myHead and myTail functions can also be removed; we only wrote them for practice, and we never used them. Remove these two definitions, and :reload to confirm that we were not using them.

We can also remove withoutSpaces, because the isPalindromePhrase function now uses filter notSpace instead.

allLower is the same as allLowerCase; we had written two versions of the same function to demonstrate two different ways to write it. Remove allLower, which is the one we aren’t using.

Organization

We’ve said that the order of definitions within a file doesn’t matter at all. This is true, to the compiler. But it does matter a lot to us: it affects the order in which we read the code. We can use ordering and comments to communicate to readers and to ourselves which definitions are most closely related to each other.

This program’s definitions can be separated pretty cleanly into two distinct categories:

- The part of the code that deals with defining what a palindrome is and how to test it;
- The interactivity: how we should prompt the user for a message, and how we display the results back to the user.

The organization could perhaps be improved, then, by moving main and verbose into their own section of the file and labelling the two sections with a comment. The file now looks like this:

import Data.Char

-- The interactive program --
-- What a palindrome is --

isPalindrome :: String -> Bool
isPalindrome word = word == reverse word

nonemptyPal :: String -> Maybe Bool
nonemptyPal word = case word of
    [] -> Nothing
    _ -> Just (isPalindrome word)

allLowerCase :: String -> String
allLowerCase word = map toLower word

isPalindromeIgnoringCase :: String -> Bool
isPalindromeIgnoringCase word = isPalindrome (allLowerCase word)

isPalindromePhrase :: String -> Bool
isPalindromePhrase phrase = isPalindrome (filter notSpace phrase)

notSpace :: Char -> Bool
notSpace x = not (x == ' ')

notPunctuation :: Char -> Bool
notPunctuation x = not (isPunctuation x)

The best way to divide up code into sections is, of course, a subjective judgement. But there are a couple of reasons why we believe this grouping is a good one.

The “what a palindrome is” code is all pure functions, and the interactive program involves IO. A lot of programs end up being separable in this way, into an I/O portion that deals with the program’s “behavior” and a pure portion that deals with its “logic”.

There is a further difference between these two parts of the code beyond the superficial observation that an IO type appears in one section and not in the other: we suspect it is very likely that the times and reasons we might want to update the code in one section or the other are quite distinct.

- Once we’ve settled on a definition of what a palindrome is, for instance, maybe we’re never going to need to change that code anymore, and it can just stay there, untouched, indefinitely, while we spend the rest of our time fiddling with the details of how the interactivity should work and how the messages should be displayed.
- Or, if we do want to rethink the logic and redefine what it means for something to be a palindrome, then we know where to go: we’ll need to edit the code in the “what a palindrome is” section, and we can be fairly confident that we won’t have to deal with the “interactive program” section to make the change.

Naming

We do not want to dwell more than necessary on finding the perfect name for everything, but there is one change of name that may be particularly helpful in this code, because the present name is misleading. We called this function “isPalindrome” because it was our first attempt at defining what a palindrome is. We subsequently changed our mind about what exactly a palindrome is when we decided that this simple test wasn’t good enough – but the function called “isPalindrome” is still this old insufficient one. Now, we don’t want to get rid of this function, because we use it in several places, and it still serves a purpose. But what we can do is rename this function to something that better describes what it actually does. Then we can free up the “isPalindrome” name to be the name of what we end up assembling at the end, our most complete definition of what we think constitutes a palindrome. So, what does this function really do? It doesn’t tell us if a string is a palindrome – it tells us if a string is its own reverse. So “isOwnReverse” seems like a fine name.

Before:

isPalindrome :: String -> Bool
isPalindrome word = word == reverse word

After:

isOwnReverse :: String -> Bool
isOwnReverse word = word == reverse word

Then, in each place where isPalindrome is used elsewhere in the code, we must replace it with the new name, isOwnReverse.

Two modules

What’s nice about the separation of this module into two sections is that we can then start thinking about the relationship between the two sections – What are the points where they interact with each other? We need only one point of interaction between the sections.
This is the function that we want to call isPalindrome. This will be the function that we haven’t written yet that combines all of the features that we wrote in the previous lessons. isPalindrome is going to constitute the entire relationship between the interactive program and the definition of palindromes. Modify the definition of verbose so that it uses this yet-to-be-defined isPalindrome function:

verbose :: String -> String
verbose word =
  case (isPalindrome word) of
    Nothing -> "Please enter a word."
    Just False -> "Sorry, this word is not a palindrome."
    Just True -> "Congratulations, this word is a palindrome!"

Of all the definitions in the “what a palindrome is” section, isPalindrome is the only one that will appear in the “interactive program” section. To fully convince ourselves of this fact, and to make the division between the two sections of code apparent to anyone who is reading our code in the future, we can take all of this palindrome code in the second section and split it off into a different module. We’re going to call this module Pal (short for ‘palindrome’). Create a new file called Pal.hs. Begin this file as follows:

module Pal where

The file name should match the module name, including the capitalization of the letter P. Strictly speaking, the file name does not have to match the module name, but doing so is helpful because it allows GHCi to find the file automatically. This opening line is how we specify that the name of the module defined by this file is Pal. We did not have a line like this in main.hs because when you omit the module line, the name of the module defaults to Main, which is what we wanted in that case. When you want to write a module whose name is anything other than Main, you need to include this module ... where bit at the top. Move the entire “what a palindrome is” section from main.hs into Pal.hs. Also move the import Data.Char line from main.hs to Pal.hs, because the code that uses functions from Data.Char is now in the Pal module. In the video, you can see what happens when we accidentally forget to move the import. Whenever we move code between files, we have to make sure we also copy or move any relevant import statements.
The new Pal.hs file should look like this:

module Pal where

import Data.Char

isPalindrome :: String -> Maybe Bool
isPalindrome = undefined

isOwnReverse :: String -> Bool
isOwnReverse word = word == reverse word

nonemptyPal :: String -> Maybe Bool
nonemptyPal word =
  case word of
    [] -> Nothing
    _ -> Just (isOwnReverse word)

allLowerCase :: String -> String
allLowerCase word = map toLower word

isPalindromeIgnoringCase :: String -> Bool
isPalindromeIgnoringCase word = isOwnReverse (allLowerCase word)

isPalindromePhrase :: String -> Bool
isPalindromePhrase phrase = isOwnReverse (filter notSpace phrase)

notSpace :: Char -> Bool
notSpace x = not (x == ' ')

notPunctuation :: Char -> Bool
notPunctuation x = not (isPunctuation x)

Then go back to main.hs and add import Pal at the top of the file. This imports all of the definitions in the Pal module: isPalindrome, isOwnReverse, nonemptyPal, etc. and makes them all available for use within the Main module. The main.hs file should now look like this:

import Pal

-- The interactive program --

main :: IO ()
main = do
  word <- getLine
  print (verbose word)

verbose :: String -> String
verbose word =
  case (isPalindrome word) of
    Nothing -> "Please enter a word."
    Just False -> "Sorry, this word is not a palindrome."
    Just True -> "Congratulations, this word is a palindrome!"

At this point, if you :reload in GHCi, both modules should load successfully. There is one interesting tweak that we can make at the place where main.hs imports the Pal module. Here’s what it looks like now:

import Pal

And we said that this imports the entire Pal module. But isPalindrome is the only function from Pal that we actually need in main.hs. So we can assert that in the code. In parentheses at the end of the import line, we may list the specific things that we want to import. In this case, only one thing: isPalindrome.
import Pal (isPalindrome)

If that compiles successfully (it does), that confirms our suspicion that isPalindrome is the one link between these two modules. We have no further changes to make to the interactive portion of this program, so we can close main.hs and focus our attention solely on what we’ve moved into Pal.hs. We are now left with the slightly smaller problem of bringing together all of our different thoughts about the true definition of a palindrome.

Function composition

Back away from the code we’ve written for a moment and think in the abstract about what we’ve decided. There are three main steps that we need to put together:

- Normalization – the removal of extraneous characters from the input – e.g. the transformation of “Madam, I’m Adam!” into “madamimadam”.
- Rejection of empty inputs – the transformation of "" into Nothing and of e.g. "cat" into Just "cat".
- The basic test to see whether a string is its own reverse – "cat" is not, but "tacocat" is.

Notice that the order of the three steps matters. We need to do the normalization before rejecting empty inputs, for example, because if the input is a string consisting entirely of spaces, then the normalization process will turn that into an empty input which should subsequently be rejected by step two.

The output from step 1 needs to be the input to step 2. The output from step 2, in turn, needs to be the input to step 3. It is very nice when a problem decomposes in this way! Sometimes we describe this as a “pipeline” of steps, a chain of things that we need to do one after another. We can write this as a chain of three nested function applications. isOwnReverse is already written, but we’ll still need to write the other two functions. Before we go ahead and write real definitions for rejectEmpty and normalize, we should :reload in GHCi to verify that the concept we’ve outlined above passes the type checker. It does not! There are two error messages; we will focus on the second one.
Pal.hs:8:37: error:
    • Couldn't match expected type ‘String’
                  with actual type ‘Maybe String’
    • In the first argument of ‘isOwnReverse’

The problem, it says, is the argument to isOwnReverse.

- The expected type is String, because this is the input type for the isOwnReverse function. The function type is String -> Bool, so the argument should always be a String.
- The actual type – the type of the expression that we have used as the argument to isOwnReverse – is Maybe String. This is because the type of the rejectEmpty function is String -> Maybe String.

So, what do we need to do? For step 3, what we really need is not a String -> Bool function, but a function that can accept a Maybe String and return a Maybe Bool. If we then use isOwnReverseMaybe instead of isOwnReverse, then this will work out. So now let’s write isOwnReverseMaybe. As usual, we can start by writing a case expression that lists the two cases of Maybe: Nothing and Just. Now think about what the results for each case should be. (If you find something dissatisfying about having to write this isOwnReverseMaybe function, do not despair; we will get to the more concise approach a few lessons from now.)

An input of Nothing signifies that the input had been rejected, so we want to return Nothing here for the final output as well, since the input is still rejected. If we do have a string input, then we want to apply isOwnReverse to that string. We need a Maybe Bool result, and isOwnReverse gives us a Bool, so we have to wrap that up in a Just constructor to give it the right type.

Reload to confirm that this code loads successfully. Then let’s go back and finish implementing rejectEmpty and normalize.

rejectEmpty

rejectEmpty is going to be similar to nonemptyPal. As a reminder, this is what nonemptyPal looks like:

nonemptyPal :: String -> Maybe Bool
nonemptyPal word =
  case word of
    [] -> Nothing
    _ -> Just (isOwnReverse word)

The rejectEmpty function is just going to do a bit less.
The only difference is that we didn’t apply isOwnReverse to the word this time, because we’ve deferred that part to step 3 in the pipeline (1. normalize; 2. rejectEmpty; 3. isOwnReverse). Expressing the palindrome test as the composition of three separate steps allows each of the three constituent functions to perform a smaller job. We may now eliminate the nonemptyPal function.

normalize

Next, the normalize function that removes unwanted details from the input. Again, we can find that this decomposes into a series of three steps.

- Remove spaces (“taco cat” → “tacocat”)
- Remove punctuation (“Hip, hip, hooray!” → “Hip hip hooray”)
- Convert to lower case (“Cat” → “cat”)

In this case, it so happens that the order in which we perform these steps doesn’t matter at all. Each of the three is a String -> String function, and the result will be the same no matter what order we compose them in. Now we’re done! Try it out with an input that hits all of our potential pitfalls.

λ> main
Madam, I'm Adam!
"Congratulations, this word is a palindrome!"

The dot operator

When programmers speak of “composition”, in the most general sense we are describing any means of combining multiple things into one bigger thing. We are often more specifically referring to combining things of the same type to produce another value of that type. For example: Combining some functions to produce a function that applies them all one after the other. We’ve now seen two definitions like this: isPalindrome and normalize. We often like to write definitions like this using the (.) operator. (See the API documentation for (.).) This is a small function in Prelude that produces the composition of two functions. Its definition looks like this:

(.) :: (b -> c) -> (a -> b) -> a -> c
(.) f g x = f (g x)

Here’s how we wrote the code above:

normalize string =
  filter notPunctuation (filter notSpace (allLowerCase string))

isPalindrome string =
  isOwnReverseMaybe (rejectEmpty (normalize string))

And here’s what it looks like written using the function composition operator (each of our examples here uses the (.) operator twice, because there are three functions to join together, not just two):

normalize = filter notPunctuation . filter notSpace . allLowerCase

isPalindrome = isOwnReverseMaybe . rejectEmpty . normalize

The difference is only aesthetic; we sometimes prefer to use (.) because it is slightly more concise, and because it saves us from the small burden of choosing a name for the parameter (notice that the string parameter disappears in the revised code).
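Assembling every piece from this lesson, the whole palindrome section (minus the interactive part) fits in one self-contained sketch. This is our own reconstruction of where the code stands at this point, with a small main added for demonstration; note that the last two definitions use isOwnReverseMaybe, the wrapper described earlier, so that the composition type-checks:

```haskell
import Data.Char (isPunctuation, toLower)

isOwnReverse :: String -> Bool
isOwnReverse word = word == reverse word

rejectEmpty :: String -> Maybe String
rejectEmpty word =
  case word of
    [] -> Nothing
    _ -> Just word

notSpace :: Char -> Bool
notSpace x = not (x == ' ')

notPunctuation :: Char -> Bool
notPunctuation x = not (isPunctuation x)

allLowerCase :: String -> String
allLowerCase word = map toLower word

-- The three cleanup steps can run in any order; all are String -> String.
normalize :: String -> String
normalize = filter notPunctuation . filter notSpace . allLowerCase

-- Lift isOwnReverse so that it passes a rejection (Nothing) through.
isOwnReverseMaybe :: Maybe String -> Maybe Bool
isOwnReverseMaybe maybeWord =
  case maybeWord of
    Nothing -> Nothing
    Just word -> Just (isOwnReverse word)

isPalindrome :: String -> Maybe Bool
isPalindrome = isOwnReverseMaybe . rejectEmpty . normalize

main :: IO ()
main = do
  print (isPalindrome "Madam, I'm Adam!")  -- Just True
  print (isPalindrome "")                  -- Nothing
```

Loading this in GHCi and calling isPalindrome directly gives the same Maybe Bool results that verbose then renders as friendly messages.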
https://typeclasses.com/beginner-crash-course/reorganization
We're looking at using Microsoft's ESB Guidance package to implement a message bus. The bus will be the core of the client's planned Service Oriented Infrastructure, and all existing apps will route communication through ESB. There'll be a few posts coming up covering what it provides, how to use it and some simple walkthroughs, but I'll start with a basic overview and installation tips.

Overview

Ignore the name - ESB (Enterprise Service Bus) Guidance is not a set of documents giving you best-practice guidance. It's a Microsoft-coded framework which sits on top of BizTalk 2006 R2 and provides services typically seen in custom service bus solutions (dynamic endpoint resolution, namespace resolution, policy-based messaging etc.). It's published by the Patterns & Practices team, so it comes with full source and samples, and no licensing costs are involved. An exception handling layer is built into the framework, which supports attempted message repair and resubmission, and fault logging. The framework logs via BAM and the Guidance comes with a sample Management Portal, providing a rich Web interface for monitoring the health and performance of your service applications. The main download is from MSDN: , which contains source, samples, MSIs, and help file source. There's also a site on CodePlex which has a compiled help file in the Release:. The Patterns & Practices home page for ESB Guidance is here:. The help files provide a lot of detailed information on the framework itself and the samples provided.

Installation

As of version 1.0 (November 2007), installing ESB Guidance is not a straightforward task. The MSDN download comprises a single MSI file, but this just unpacks a raft of other MSIs (along with source files and help files), which need separate installation and configuration. The installation process is documented in the ESBDocs.chm help file, but that assumes you're doing a complete installation of all components.
There are a lot of prerequisites (SQL Server 2005 with SP2, VS 2005 with SP2, BizTalk 2006 R2, Enterprise Library 3.1, .Net 3.0 and various hotfixes), and if you want to use UDDI for endpoint resolution (ESB can use other sources, e.g. sets of BRE policies) you'll want to be running from Windows Server 2003 as UDDI Services isn't part of XP. Depending on how much of this you have already installed, you'll need to allow at least a day for your first installation. A friendly guide to installing ESB Guidance which describes what the components are and has a checklist of tasks and prerequisites is attached to Peter Kelcey's post here: I've used this as the basis for successful installations, but found a couple of issues:

Installing the Samples

If you've followed the installation guide and installed from source, your GAC assemblies will have a different public key from the release files, so you won't be able to install the sample rules from GlobalBank.ESB.Policies.msi (the help files suggest copying your assemblies over the one in the \bin directory, but that won't help as the MSI has keys in the policies that you can't change). The source for the samples uses the correct key, so you can build and deploy from VS to create the sample app and populate the BRE components. A few of the samples have dependencies, so I installed in the following order: Verify the GlobalBank.ESB app contains all seven policies, and then you're ready to start trying to get the samples to work.

Community Resources

There are a few blogs out there, but I haven't found comprehensive coverage about using the framework in anger. Mikael Håkansson has useful posts about using and extending the framework:. If you know of any more useful resources, leave a comment and I'll add them to the post.
http://geekswithblogs.net/EltonStoneman/archive/2008/04/06/microsoft-esb-guidance-getting-started--installation.aspx
Dynamic / anonymous types in C# were a great improvement to the framework, made in version 3.0. I use them a lot when doing AJAX callbacks from JavaScript to ASP.NET MVC controllers, not to forget the extensive use of anonymous types already built into ASP.NET MVC. Then yesterday, one case where I absolutely needed to use anonymous types was in my application's logging service. I want to be able to save behaviors/actions as well as errors. I have two separate tables, but both behaviors and errors can provide a set of details. To do the actual logging, I call an implementation of this interface: In the LogBehaviorWithData method, I wanted to specify behaviorData as XML, since the column in the database is an XML column. I do this so that I'm able to query the table and use XPath to filter on behavior data. That requires me to send XML to the method, and I don't want to fool around manually with an XmlDocument or something similar. I was looking around the internet for a way to serialize an anonymous type to XML, and came across a few questions on StackOverflow. On the first one, the accepted answer claimed there's no way to do that – even though there was a fine answer below it – and the second didn't provide a solution. I took the code provided by Matthew Whited in his excellent answer (which, I believe, should have been the accepted answer to the question). It worked out of the box, except for arrays, so that needed some extensions.

How to use it

It's simply an *extension method* of the object type, called ToXml().
And it is used like this:

object d = new { Username = "martin", Roles = new[] { "Developer", "Administrator" } };
XElement xml = d.ToXml();
string xmlString = xml.ToString();

The output is a beautifully formatted XML string:

<object>
  <Username>martin</Username>
  <Roles>
    <RolesChild>Developer</RolesChild>
    <RolesChild>Administrator</RolesChild>
  </Roles>
</object>

To make it more database friendly, you can omit the formatting by specifying SaveOptions in the ToString method call:

string xmlString = xml.ToString(SaveOptions.DisableFormatting);

The actual code

The actual code is quite simple, yet there's some fiddling around with different types and such. I guess the names of child elements could also use some improvement, preferably changing the collection name to a *singular representation* and using that as the element name of its children.

using System;
using System.Linq;
using System.Reflection;
using System.Xml;
using System.Xml.Linq;

That way it is possible to serialize an anonymous type in C# to XML.

Download

Download the class from my SkyDrive.
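The full class is behind the SkyDrive link above; in case that link rots, here is a condensed sketch in the same spirit as Whited's reflection-based approach, including the array handling described in this post. The class and method names are ours, and edge cases (dates, nulls, nested generics) are only partially handled:

```csharp
using System.Collections;
using System.Xml.Linq;

public static class ObjectToXmlExtensions
{
    // Entry point: serialize any object (anonymous types included) to an XElement.
    public static XElement ToXml(this object input) => ToXml(input, null);

    private static XElement ToXml(object input, string elementName)
    {
        if (string.IsNullOrEmpty(elementName)) elementName = "object";
        var element = new XElement(elementName);
        if (input == null) return element;

        var type = input.GetType();
        if (type.IsPrimitive || input is string || input is decimal)
        {
            // Simple values become the element's text content.
            element.Value = input.ToString();
        }
        else if (input is IEnumerable sequence)
        {
            // Arrays and other collections: one child element per item.
            foreach (var item in sequence)
                element.Add(ToXml(item, elementName + "Child"));
        }
        else
        {
            // Complex types: recurse over the public properties.
            foreach (var property in type.GetProperties())
                element.Add(ToXml(property.GetValue(input, null), property.Name));
        }
        return element;
    }
}
```

With the example object from above, this sketch produces the same shape of output: an object element containing Username, and a Roles element with one RolesChild per array entry.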
http://martinnormark.com/serialize-c-dynamic-and-anonymous-types-to-xml/
Results 1 to 3 of 3

Thread: String to strings? Help please

Hi guys, I'm pretty new to Java so bear with me as I make a complete fool of myself. I'm trying to start out by making a simple calculator, capable of the functions plus, minus, divide, multiply. I had no trouble doing this within Eclipse, however turning it into a windowed program made it a lot harder for me. The basic problem I'm having is using the method:

String = JOptionPane.showInputDialog("text")

Mine looks like this:

Code:
inStr1 = JOptionPane.showInputDialog("Please Input an Equation\n" +
    "* = multiply, / = Divide,\n" +
    "Include spaces and Decimals (Using Doubles!)\n" +
    "For example: 2.0 + 4.2, Then ENTER!");

A good chunk of the problematic area, as you can see I tried playing around with a few things but I am completely lost at this point lol, thanks in advance.

Code:
import java.util.*;
import javax.swing.*;
import java.io.*;
import java.util.regex.*;

public class Calculator {
    static Scanner console = new Scanner(System.in);

    public static void main(String[] args) {
        // System.out.println("Please Input an Equation\n" +
        // "* = multiply, / = Divide,\n" + <--- Non-Windowed
        // "Include spaces and Decimals (Using Doubles!)\n" +
        // "For example: 2.0 + 4.2, Then ENTER!");

        double num1;
        double num2;
        double sum;
        String sign;
        String num1a;
        String num2a;
        String inStr1;
        String inStr2;
        String outStr;
        int repeats;

        num1a = "a";
        num2a = "b";
        sum = 1;

        inStr1 = JOptionPane.showInputDialog("Please Input an Equation\n" +
            "* = multiply, / = Divide,\n" +
            "Include spaces and Decimals (Using Doubles!)\n" +
            "For example: 2.0 + 4.2, Then ENTER!");

        String data = inStr1;
        String[] values = data.split(" ");
        num1a = String[0];
        sign = String[1];
        num2a = String[2];
        num1 = Double.parseDouble(num1a);
        num2 = Double.parseDouble(num2a);
        // num1 = console.nextDouble(); <--- Old code (The none window
        // sign = console.next(); way)
        // num2 =
console.nextDouble();

        if (sign.equals("+")){
            sum = num1 + num2;
        }
        if (sign.equals("-")){
            sum = num1 - num2;
        }
        if (sign.equals("*")){
            sum = num1 * num2;
        }
        if (sign.equals("/")){
            sum = num1 / num2;
        }

This isn't javascript, it's java. Moving to the Java forum. Given the rules you have specified here, you can tokenize and cast the strings. Pulling from an array of String broken on a space is also sufficient. This is almost entirely correct. The problem is you cannot pull from String[x] as String is a datatype, not a variable. You need to pull from values[x] instead. Next to this, a try/catch should be used:

Code:
try {
    String[] values = data.split(" ");
    num1a = values[0];
    sign = values[1];
    num2a = values[2];
    num1 = Double.parseDouble(num1a);
    num2 = Double.parseDouble(num2a);

    double dResult = 0.0;
    if (sign.equals("+")) {
        dResult = num1 + num2;
    } else if (sign.equals("-")) {
        dResult = num1 - num2;
    } else if (sign.equals("*")) {
        dResult = num1 * num2;
    } else if (sign.equals("/")) {
        dResult = num1 / num2;
    }
    JOptionPane.showMessageDialog(null, "The result is: " + dResult);
} catch (NumberFormatException ex) {
    JOptionPane.showMessageDialog(null,
        "There has been an error in input: " + ex.getMessage(),
        "Error", JOptionPane.ERROR_MESSAGE);
} catch (ArrayIndexOutOfBoundsException ex) {
    JOptionPane.showMessageDialog(null,
        "There appears to be an error in input format (double op double expected)",
        "Error", JOptionPane.ERROR_MESSAGE);
}

Hey, didn't notice I posted it in Java Script. Thought I had it in Java Forums. My bad and sorry for the inconvenience. That was extremely helpful and I learned something new :] The calculator now works fine for everything - aside from input errors.
Two more questions I had though; if someone can answer, I'll make a new thread for them and link it here:

Last edited by Jposemato; 10-09-2012 at 07:30 AM.
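For anyone finding this thread later, the reply's parsing approach boils down to a small class that runs without the Swing dialogs (the class and method names here are our own, not from the thread):

```java
public class SimpleCalculator {

    // Evaluate input of the form "number operator number", e.g. "2.0 + 4.2".
    static double evaluate(String data) {
        String[] values = data.split(" ");
        double num1 = Double.parseDouble(values[0]);
        String sign = values[1];
        double num2 = Double.parseDouble(values[2]);
        if (sign.equals("+")) {
            return num1 + num2;
        } else if (sign.equals("-")) {
            return num1 - num2;
        } else if (sign.equals("*")) {
            return num1 * num2;
        } else if (sign.equals("/")) {
            return num1 / num2;
        }
        throw new IllegalArgumentException("Unknown operator: " + sign);
    }

    public static void main(String[] args) {
        System.out.println(evaluate("2.0 * 3.0"));  // prints 6.0
        System.out.println(evaluate("10.0 / 4.0")); // prints 2.5
    }
}
```

Malformed input still raises NumberFormatException or ArrayIndexOutOfBoundsException, which is exactly what the try/catch in the reply above is for.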
http://www.codingforums.com/java-and-jsp/275336-string-strings-help-please.html?s=af5ddb0ff95dd65c63710e5a5a3bb786
A manager-worker library. More...

#include "work_queue.h"

Go to the source code of this file.

A manager-worker library. The work queue provides an implementation of the manager-worker computing model using TCP sockets, Unix applications, and files as intermediate buffers. A manager process uses work_queue_json_create to create a queue, then work_queue_json_submit to submit tasks. Once tasks are running, call work_queue_json_wait to wait for completion.

Create a new work_queue object.

Submit a task to a queue. Once a task is submitted to a queue, it is no longer under the user's control and should not be inspected until returned via work_queue_wait. Once returned, it is safe to re-submit the same task object via work_queue_submit.

task document: (only "command_line" is required.)

{
  "command_line" : string,
  "input_files" : array of objects with one object per input file (see file document below),
  "output_files" : array of objects with one object per output file (see file document below),
  "environment" : object with environment variable names and values (see environment document below),
  "tag" : string  # arbitrary string to identify the task by the user.
}

file document:

{
  "local_name" : string,   # name of the file at the machine running the manager
  "remote_name" : string,  # name the file local_name is copied to/from at the machine running the task.
  "flags" : {
    "cache" : boolean,     # whether the file should be cached at the worker. Default is false.
    "watch" : boolean      # For output files only. Whether appends to the file should be sent as they occur. Default is false.
  }
}

environment document:

{
  string : string,  # name and value of an environment variable to be set for the task.
  string : string,
  ...
}

Wait for a task to complete.

{
  "command_line" : string,
  "tag" : string,
  "output" : string,
  "taskid" : integer,
  "return_status" : integer,
  "result" : integer
}

Remove a task from the queue.

Get the status for a given work queue.
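Putting the schema together, a concrete task document passed to work_queue_json_submit might look like the following. All of the file names and values here are made up for illustration:

```json
{
  "command_line": "./gzip < my-file > my-file.gz",
  "input_files": [
    {
      "local_name": "gzip",
      "remote_name": "gzip",
      "flags": { "cache": true }
    },
    {
      "local_name": "my-file",
      "remote_name": "my-file",
      "flags": { "cache": false }
    }
  ],
  "output_files": [
    {
      "local_name": "my-file.gz",
      "remote_name": "my-file.gz",
      "flags": { "watch": false }
    }
  ],
  "environment": { "GZIP": "-9" },
  "tag": "compress-my-file"
}
```

The result document returned by work_queue_json_wait would then carry the same "tag" along with the task's output, taskid, and exit-status fields listed above.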
https://ccl.cse.nd.edu/software/manuals/api/html/work__queue__json_8h.html
We now have regular-expression based chunkers and tokenizers in LingPipe. They work by compiling a regex using java.util.regex.Pattern and then running a java.util.regex.Matcher’s find() method over the inputs and pulling out the matches. Recently, I (Bob) have been tuning part-of-speech taggers for French trained on the French Treebank. I thought writing a regex-based tokenizer would make sense to match the way the treebank itself tokenized as best as possible. Unfortunately, it’s impossible to write a pure tokenizer to match because, like the Penn Treebank and Penn BioIE projects, the annotators decided to use syntactic and semantic contextual information in deciding when to group a sequence of characters into a token. For instance, hyphens after prefixes (a lexical syntactic issue) are coded one way and hyphens before suffixes another; in the Penn Treebank, periods at the end of sentences (a contextual decision) are coded as separate tokens whereas those appearing sentence internally are coded as part of the token they follow. It turns out that regular expressions don’t work the way I thought they would. I wanted to write a bunch of regexes and then or (|) them together to produce a larger regular expression that would greedily match as much as it could in each chunk. Let’s consider a simple example:

(a|b)+|(a|c)+

And consider running a find against the string "aacac". What do we get? Let’s ask Java.

import java.util.regex.Pattern;
import java.util.regex.Matcher;

public class Regex {

    public static void main(String[] args) {
        test("(a|b)+|(a|c)+", "aacac");
    }

    static void test(String regex, String input) {
        Pattern pattern = Pattern.compile(regex);
        Matcher matcher = pattern.matcher(input);
        matcher.find();
        System.out.println("regex=" + regex
            + " input=" + input
            + " found=" + matcher.group());
    }
}

This just sets up the pattern based on the regex, generates a matcher from the input and then runs find on the matcher and prints out the first thing found. What’ll it do?
c:\carp\temp>javac Regex.java

c:\carp\temp>java -cp . Regex
regex=(a|b)+|(a|c)+ input=aacac found=aa

Ouch. I was expecting the whole thing to match. Apparently, that’s what a POSIX regex would do. But Java follows the Perl model, in which eagerness overcomes greediness. Specifically, if the first disjunct of an alternation matches, the matcher does not try to find a longer match in the second disjunct. Unfortunately, there are no greediness/reluctance modifiers for the disjunct. So what do we do? Refactor the regex, of course. How about this one?

a*(b(a|b)*|c(a|c)*)?

This should do the trick of matching the longest possible sequence of alternating as and bs or alternating as and cs. Sure enough, adding the following to the main() method:

test("a*(b(a|b)*|c(a|c)*)?", "aacac");

produces the expected output:

regex=a*(b(a|b)*|c(a|c)*)? input=aacac found=aacac
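Both experiments can be packed into one self-contained class for anyone who wants to reproduce the behavior; the class and helper names below are our own, not LingPipe's:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GreedyAlternation {

    // Return the first find() match of regex in input, or null if there is none.
    static String firstMatch(String regex, String input) {
        Matcher matcher = Pattern.compile(regex).matcher(input);
        return matcher.find() ? matcher.group() : null;
    }

    public static void main(String[] args) {
        // Perl-style alternation commits to the first disjunct that matches:
        System.out.println(firstMatch("(a|b)+|(a|c)+", "aacac"));        // prints aa
        // The refactored pattern consumes the whole string:
        System.out.println(firstMatch("a*(b(a|b)*|c(a|c)*)?", "aacac")); // prints aacac
    }
}
```

The second call confirms that moving the shared a* prefix out of the alternation, as described above, restores the POSIX-style longest match without any greediness modifiers.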
http://lingpipe-blog.com/2008/05/07/tokenization-vs-eager-regular-expressions/
@@@ THIS DOCUMENT NEEDS TO BE REWORKED AND FINISHED ACCORDING TO THE pubrules. It is an Editor's draft copy. Copyright © 2007 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.. This is an editor's draft of a document planned to be published as an Interest Group note of the Semantic Web Education and Outreach Interest giving a tutorial explaining decisions of the TAG for semantic web beginners. This is an HTML conversion of DFKI Technical Memo TM-07-01, Cool URIs for the Semantic Web (PDF), reviewed by the Technical Architecture Group TAG. of the POWDER Use Cases and Requirements, developed by the POWDER Working Group as part of the Semantic Web Activity, to aid public discussion and solicit feedback on the group's aims. The group is particularly keen to learn of other potential use cases or additional features that should be considered for POWDER. This document was developed by the Semantic Web Education and Outreach (SWEO) Interest-sweo-ig of SWEO. The Semantic Web is envisioned as a decentralised world-wide information space for sharing machine-readable data with a minimum of integration costs. Its two core challenges are the distributed modelling of the world with a shared data model, and the infrastructure where data and schemas can be published, found and used. A basic question is thus how to publish information about resources in a way that allows interested users and software applications to find them. On the Semantic Web, all information has to be expressed as statements about resources, like the members of the company Acme are Alice and Bob or Bob's telephone number is "+1 800 262 or this Web page was created by Alice. Resources are identified by Uniform Resource Identifiers (URIs) [RFC3986]. This modelling approach is the Resource Description Framework (RDF) [RDFPrimer]. At the same time, Web documents have always been addressed with Uniform Resource Locators (URLs). Web site of ACME Inc., we may use. 
But what URI identifies the company as an organisation, not a Web site? [RDFPrimer]. We also assume some familiarity with the HTTP protocol [RFC2616]. Wikipedia's article [WP, each of the pages mentioned above are Web documents. Every Web document has its own URI. Note that a Web document is not the same as a file: A single Web document can be available in many different formats and languages, and a single file, for example a PHP script, may be responsible for generating a large number of Web documents with different URIs. A Web document is defined as something that has a URI and can return representations (responses in a format such as HTML or JPEG or RDF) in response to HTTP requests. In technical literature, such as Architecture of the World Wide Veb, Volume One [AWWW], the term information resource is used instead of Web document. On the traditional Web, URIs were used primarily for Web documents—to link to them, and to access them in a browser. In short, to locate a Web document—hence the term URL (Uniform Resource Locator). The notion of resource identity was not so important on the traditional Web, a URL simply identifies whatever we see when we type it into a browser. Today's Web clients and servers use the HTTP protocol [RFC2616] to request representations of Web documents and send back the responses. HTTP has a powerful mechanism for offering different formats and language versions of the same Web document: content negotiation. When a user agent (e.g. a browser) makes an HTTP request,: GET /people/alice HTTP/1.1 Host: Accept: text/html, application/xhtml+xml Accept-Language: en, de The server could answer: HTTP/1.1 200 OK Content-Type: text/html Content-Language: en followed by the content of the HTML document in English. 
Content negotiation [TAG-Alt] is often implemented with a twist: Instead of a direct answer, the server redirects to another URL where the appropriate version is found: HTTP/1.1 302 Found Location: The redirect is indicated by a special status code, here 302 Found. The client would now send another HTTP request to the new URL. By having separate URLs for all versions, this approach allows Web authors to link directly to a specific version. RDF/XML, the standard serialisation format of RDF, has its own content type too, application/rdf+xml. Content negotiation thus allows publishers to serve HTML versions of a Web document to traditional Web browsers and RDF versions to Semantic Web-enabled user agents. And it information publishing system—as a lookup service for resource descriptions. Whenever a URI is mentioned, we can look it up to retrieve a description containing relevant information and links to related data. This is so important that we make it our number one requirement for good URIs: Let's assume ACME Inc. wants to publish contact data of their employees on the Semantic Web so their business partners can import it into their address books. For example, the published data would contain these statements about Alice, written here in N3 syntax [N3]: : Is the homepage of Alice also named “Alice”? Has the homepage an email address? And why has the homepage a homepage? So we need another URI. (For in-depth treatments of this issue, see Tim Berners-Lee [HTTP-URI2] and David Booth [Booth]). Therefore our second requirement: We note that our requirements seem to conflict with each other. If we can't use URLs of documents to identify real-world object, then how can we retrieve a description about real-world objects based on their URL? The challenge is to find a solution that allows us to find the describing documents if we have just the resource's URI, using standard Web technologies. 
The following picture shows the desired relationships between a resource and its describing documents:

Another question is where to draw the line between traditional Web documents and other, non-document resources. According to W3C guidelines ([AWWW], section 2.2.), the 303 See Other status code can be used to distinguish such other resources from regular Web documents. Since 303 is a redirect status code, the server can also give the location of a document that describes the resource. If, on the other hand, a request is answered with one of the usual status codes in the 2XX range, like 200 OK, then the client knows that the URI identifies a Web document. This practice has been embraced by the W3C's Technical Architecture Group in its httpRange-14 resolution [httpRange].

If ACME adopts this solution, they could use these URIs to represent the company, Alice and Bob: The Web server would be configured to answer requests to all these URIs with a 303 status code and a Location HTTP header that provides the URL of a document that describes the resource. The following picture shows the redirects for the 303 URI solution:

The second solution uses "hash URIs": URIs that contain a fragment, the part after the hash symbol (#). When a client retrieves a hash URI, the HTTP protocol requires the fragment part to be stripped off before requesting the URI from the server. A hash URI therefore cannot be retrieved directly as a Web document, so it can be used to identify a non-document resource without ambiguity. The following picture shows the hash URI approach without content negotiation:

Alternatively, content negotiation (see Section 2.1.) could be employed to redirect from the about URI to separate HTML and RDF documents. Again, the 303 See Other status code must be used. (Otherwise, a client could conclude that the hash URI refers to a part of the HTML document.) The following picture shows the hash URI approach with content negotiation:

Which approach is better? It depends. Hash URIs have the advantage of reducing the number of necessary HTTP round-trips, which in turn reduces access latency. A family of URIs can share the same non-hash part. The descriptions of #product123, #product456, and any other resources sharing that part are retrieved with a single request. There is a counter-effect, too. A client interested only in #product123 will inadvertently load the data for all other resources as well, because they are in the same file.
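The fragment stripping that makes hash URIs work is available directly in Python's standard library via urllib.parse.urldefrag. The URI below is a hypothetical example in the spirit of the ones discussed above.

```python
from urllib.parse import urldefrag

uri = "http://www.example.com/about#product123"
document_url, fragment = urldefrag(uri)

# The HTTP request goes to the document URL; the fragment never
# reaches the server, which is why a hash URI cannot name a Web document.
print(document_url)  # → http://www.example.com/about
print(fragment)      # → product123
```

A Semantic Web client would fetch `document_url` and then look inside the returned RDF for statements about the full hash URI.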
But the large number of redirects may cause higher latency. Hash URIs, in contrast, can be deployed simply by uploading static RDF files to a Web server, without any special server configuration. This makes them popular for quick-and-dirty RDF publication. 303 URIs should be used for large sets of data that are, or may grow, beyond the point where it is practical to serve all related resources in a single document. If in doubt, it's better to use the more flexible 303 URI approach.

The best resource identifiers don't just provide descriptions for people and machines, but are designed with simplicity, stability and manageability in mind, as explained by Tim Berners-Lee in Cool URIs don't change [Cool] and by the W3C Team in Common HTTP Implementation Problems ([CHIPS], sections 1 and 3).

All the URIs related to a single real-world object—resource identifier, RDF document URL, HTML document URL—should be explicitly linked with each other to help information consumers understand their relation. For example, in the 303 URI solution for ACME, there are three URIs related to Alice; two of them are Web document URLs. The RDF document might contain these statements (expressed in N3):

<> foaf:page <> ;
   rdfs:isDefinedBy <> ;
   a foaf:Person ;
   foaf:name "Alice" ;
   foaf:mbox <mailto:alice@acme.com> ;
   ...

The document makes statements about Alice, the person, using the resource identifier. The first two properties relate the resource identifier to the two document URLs. The foaf:page statement links it to the HTML document. This allows RDF-aware clients to find a human-readable version of the resource and, at the same time, by linking the page to its topic, defines useful metadata about that HTML document. The rdfs:isDefinedBy statement links the person to the document containing its RDF description and allows RDF browsers to distinguish this main resource from other auxiliary resources that just happen to be mentioned in the document. We use rdfs:isDefinedBy instead of its weaker superproperty rdfs:seeAlso because the linked description is authoritative.
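The linking pattern just described is mechanical enough to generate. The sketch below emits the N3 statements for a resource given its three URIs; all URIs and the helper name are hypothetical, chosen only for illustration.

```python
def linking_triples(resource, html_url, rdf_url, name):
    """Return N3 statements linking a resource identifier to its
    HTML and RDF document URLs, following the pattern in the text."""
    return (
        f"<{resource}> foaf:page <{html_url}> ;\n"
        f"    rdfs:isDefinedBy <{rdf_url}> ;\n"
        f"    a foaf:Person ;\n"
        f'    foaf:name "{name}" .'
    )

print(linking_triples("http://www.example.com/id/alice",
                      "http://www.example.com/people/alice.html",
                      "http://www.example.com/data/alice.rdf",
                      "Alice"))
```

A production system would use an RDF library rather than string formatting, but the output makes the three-URI relationship concrete.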
The following illustration shows how the RDF and HTML documents should relate the three URIs to each other.

Not all projects that work with Semantic Web technologies make their data available on the Web. But a growing number of projects follow the practices described here. This section gives a few examples.

ECS Southampton. The School of Electronics and Computer Science at the University of Southampton has a Semantic Web site that employs the 303 solution and is a great example of Semantic Web engineering. It is documented in the ECS URI System Specification [ECS]. Separate subdomains are used for HTML documents, RDF documents, and resource identifiers. Take these examples: Entering the first URI into a normal Web browser redirects to an HTML page about Wendy Hall. It presents a Web view of all available data on her. The page also links to her URI and to her RDF document.

D2R Server is an open-source application that can be used to publish data from relational databases on the Semantic Web in accordance with these guidelines. It employs the 303 solution and content negotiation. For example, the D2R Server publishing the DBLP Bibliography Database publishes several hundred thousand bibliographical records and information about their authors. Example URIs, again connected via 303 redirects: The RDF document for Chris Bizer is a SPARQL query result from the server's SPARQL endpoint:

DESCRIBE+%3Chttp%3A%2F%2Fwww4.wiwiss.fu-berlin.de%2Fdblp%2Fresource%2Fperson%2F315759%3E

The SPARQL query encoded in this URI is:

DESCRIBE <http://www4.wiwiss.fu-berlin.de/dblp/resource/person/315759>

This shows how a SPARQL endpoint can be used as a convenient method of serving resource descriptions.

Semantic MediaWiki is an open-source Semantic Wiki engine. Authors can use special wiki syntax to put semantic attributes and relationships into wiki articles. For each article, the software generates a 303 URI that identifies the article's topic, and serves RDF descriptions generated from the attributes and relationships.
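The percent-encoding seen in the D2R SPARQL URL above can be reproduced with Python's standard library. The endpoint path below is an assumed example; the point is the encoding of the DESCRIBE query into a query-string parameter.

```python
from urllib.parse import quote

resource = "http://www4.wiwiss.fu-berlin.de/dblp/resource/person/315759"
query = f"DESCRIBE <{resource}>"

# safe="" forces '/' and ':' to be encoded as well, matching the URL shown above.
encoded = quote(query, safe="")
url = "http://www4.wiwiss.fu-berlin.de/dblp/sparql?query=" + encoded
print(url)
```

Decoding the result with urllib.parse.unquote recovers the original DESCRIBE query, which is how the plain-text form shown above was obtained.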
Semantic MediaWiki drives the OntoWorld wiki. It has an article about the city of Karlsruhe: The URI of the RDF description is not cool, because it exposes the implementation (php) and refers redundantly to RDF in the path and in the query. A better URI would be, for example:

There is an effort underway [SMW] that calls for the adoption of Semantic MediaWiki as the software that runs Wikipedia. This would turn Wikipedia into a repository of identifiers with community-agreed descriptions.

Many other approaches have been suggested over the years. While most of them are appropriate in special circumstances, we feel that they do not fit the criteria from Section 3, which are to be on the Web and to be unambiguous. Therefore they are not adequate as general solutions for building a standards-based, non-fragmented, decentralized Semantic Web. We will discuss two of these approaches in some detail.

HTTP URIs already identify Web resources and Web documents, not other kinds of resources. Shouldn't we create a new URI scheme to identify other resources? Then we could easily distinguish them from Web documents just by looking at the first characters of the URI. For example, the info scheme can be used to identify books based on an LCCN number: info:lccn/2002022641. Here are examples of such new URI schemes; a more complete list is provided by Thompson and Orchard in URNs, Namespaces and Registries [TAG-URNs]:

- info URIs, e.g. for the Library of Congress catalogue (info:lccn/2002022641) and the Dewey decimal system (info:ddc/22/eng//004.678).
- tag URIs, e.g. tag:hawke.org,2001-06-05:Taiko.
- xri URIs, e.g. xri://@Jones.and.Company/(+phone.number) or xri://northgate.library.example.com/(urn:isbn:0-395-36341-1).

To be truly useful, a new scheme must also define a protocol for accessing more information about the identified resource. For example, the ftp:// URI scheme identifies resources (files on an FTP server), and also comes with a protocol for accessing them (the FTP protocol). Some of the new URI schemes provide no such protocol at all.
Others provide a Web service that allows retrieval of descriptions using the HTTP protocol. The identifier is passed to the service, which looks up the information in a central database or in a federated way. The problem here is that a failure in this service renders the system unusable. Another drawback can be a dependence on a standardization body. To register new parts in the info: space, a standardization body has to be contacted. This, or paying a license fee before creating a new URI, slows down adoption. In some cases a standardization body is desirable to ensure that all URIs are unique (e.g. with ISBNs). But this can be achieved using HTTP URIs inside an HTTP namespace owned and managed by the standardization organization. The problems with new URI schemes are discussed at length in URNs, Namespaces and Registries.

A second approach radically solves the URI problem by doing away with URIs altogether: instead of naming resources with a URI, anonymous nodes are used, and are described with information that allows us to find the right one. A person, for example, could be described with her name, date of birth, and social security number. These pieces of information should be sufficient to uniquely identify a person. A popular practice is the use of a person's email address as a uniquely identifying piece of information. The foaf:mbox property is used in Friend of a Friend (FOAF) profiles for this purpose. In OWL, this kind of property is known as an Inverse Functional Property (IFP). When an agent encounters two resources with the same email address, it can infer that both refer to the same person and can treat them as one.

But how to be on the Web with this approach? How to enable agents to download more data about resources we mention? There is a best practice to achieve this goal: provide not only the IFP of the resource (e.g.
the person's email address), but also an rdfs:seeAlso property that points to a Web address of an RDF document with further information about it. We see that HTTP URIs are still used to identify the location where to download more information. Furthermore, we now need several pieces of information to refer to a resource: the IFP value and the RDF document location. The simple act of linking by using a URI has become a process involving several moving parts, and this increases the risk of broken links and makes implementation more cumbersome. Regarding FOAF's practice of avoiding URIs for people, we agree with Tim Berners-Lee's advice: "Go ahead and give yourself a URI. You deserve it!"

Resource names on the Semantic Web should fulfill two requirements. First, a description of the identified resource should be retrievable with standard Web technologies. Second, a naming scheme should not confuse documents and the things described by the documents. We have described two approaches that fulfill these requirements, both based on the HTTP URI scheme and protocol. One is to use the 303 HTTP status code to redirect from the resource identifier to the describing document. The other is to use "hash URIs" to identify resources, exploiting the fact that hash URIs are retrieved by dropping the part after the hash and retrieving the remaining part.

The requirement to distinguish between resources and their descriptions increases the need for coordination between multiple URIs. Some useful techniques are: embedding links to RDF data in HTML documents, using RDF statements to describe the relationship between the URIs, and using content negotiation to redirect to an appropriate description of a resource.

Many thanks to Tim Berners-Lee, who helped us understand the TAG solution by answering chat requests.
Special thanks go to Stuart Williams (HP Labs, member of TAG), who reviewed this document thoroughly and provided essential feedback about many sentences that were (accidentally) contrary to the TAG's view. We wish to thank everyone who has reviewed drafts of this document, especially Chris Bizer and Gunnar AAstrand Grimnes. This work was supported by the German Federal Ministry of Education, Science, Research and Technology (bmb+f) (Grants 01 IW C01, Project EPOS: Evolving Personal to Organizational Memories; and 01 AK 702B, Project InterVal: Internet and Value Chains) and by the European Union IST fund (Grant FP6-027705, Project Nepomuk).
http://www.w3.org/2001/sw/sweo/public/2007/cooluris/doc-20071008.html
Re: Copy project files from one machine to the next
- From: ralf <marty.overdear@xxxxxxxxxxxxxx>
- Date: Mon, 10 Oct 2005 13:09:33 -0700

OK, solved my problem, or at least found a workaround. Doesn't seem like this would be the way it was intended, but anyway, here is what I had to do. After copying the project under the wwwroot directory, I created a new solution. I had to go into the project directory for the project to import and modify the vbproj.webinfo file. The port that the project is supposed to run off of is coded in there. 80 is the default, and if it's 80, just use C:\localhost\etc. If it's something other than 80, I had to use C:\localhost:8080\etc, where 8080 is the port you're using. Then, after modifying, just add the existing web project to your solution and it will open up.
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.vb/2005-10/msg00652.html
From: Jonathan Turkanis (technews_at_[hidden])
Date: 2004-09-09 19:51:42

"Pavel Vozenilek" <pavel_vozenilek_at_[hidden]> wrote in message news:chq3qp$pc4$2_at_sea.gmane.org...
> "Jonathan Turkanis" wrote:
> > ____________________________________________________
> > Here's a suggestion for a concept which might be reusable
> > in this way (view with fixed-width font):
> >
> > Concept: Resettable
> >
> > Valid Expressions | Return Type | Semantics
> > --------------------------------------------------------------------
> > t.reset(discard)  | bool        | resets t to the state it had
> >                   |             | upon construction, discarding
> >                   |             | cached data if discard is true.
> >                   |             | Returns true for success.
>
> Counterexample: the radio link pipeline has a
> member counting # of passed bytes, and a member that keeps
> output rate in sync with current air conditions.
>
> If you want to discard data in the pipeline, you would reset all
> these settings as a side-effect.

I see.

> I have a feeling there should be two concepts:
> - reset(bool discard)
> - discard()
> ____________________________________________________
> > So when I define the convenience base
> > classes, I want to make the i/o category refine openable,
> > closable and resettable, and provide default no-op implementations.
> > Now, if the user only implements reset, I'd like boost::io::open and
> > boost::io::close to call reset.
> >
> > ... variant with dummy instance + member function compare
> > ... variant with CRTP
>
> I do not have a clear picture: when io::open() is called, the
> real type of the filter is not known?

Right now, there's no function open. And in principle, there never needs to be, since a component can set an 'open' flag upon each i/o operation, and clear it when close() is called. I'm thinking Openable might be a good addition, as a convenience. If so, it would be called as soon as a chain becomes complete.
> > > - it is not really certain that flush would flush: for example, a filter removing
> > > certain words may wait until a word gets finished. What is their flush()
> > > semantic?
> >
> > This is an important point. When I originally implemented Flushable, flush()
> > was just a 'suggestion' to flush buffers. So users could never be sure there
> > wasn't any cached data remaining. I did it this way because for many filters,
> > as you mentioned, you can't force a complete flush without violating data integrity.
> > But this wimpy version of flush turned out not to be very useful, which is
> > partly why I eliminated it.
> >
> > If I restore the concept, I will probably have flush return true only if all
> > cached data could be safely sent downstream. This means that
> > sometimes flushing a filtering_ostream will fail, and in those cases
> > you won't be able to insert or remove filters while i/o is in progress.
>
> Maybe bool flush(bool force = false).
> E.g. close() should probably call flush(true) to avoid data loss.
>
> If a filter decides individually that its current data just cannot
> be flushed, it could ignore the 'force' flag.
>
> flush() returns true if the flush was complete. The user then may
> discard the rest of the data sitting in caches.

Sounds good.
_______________________________________________
> > > > > - halt temporarily/restart (for output streams only?).
> > > > > This action may be propagated downstream since some
> > > > > filters may have busy sub-threads of their own
> > > >
> > > > I'm not sure I understand.
> > >
> > > Equivalent to putting a null sink on the top of the pipeline
> > > (and maybe after each part of the pipeline, if the part acts
> > > asynchronously).
> > >
> > > Not sure now if it is worth the trouble.
> >
> > Neither am I ;-). Maybe if you can think of an important use case ...
>
> Only a contrived one: someone has a reference to one and only one
> member of the pipeline, not to the initial data source or end data sink.
> This one may want to halt the flow temporarily but doesn't want
> any dependencies on the rest of the application.
>
> E.g. a module that keeps a TCP/IP throughput limit and manages
> multiple opened sockets. The user registers socket_streams
> and they could all be halted/restarted by the module.
>
> (Here the halt means stop reading/sending, not a null-sink
> equivalent. The make-it-null-sink looks like yet another thing.)

I think this could be done currently just by keeping a reference to the filter. I'm fairly confident we can work out reasonable semantics for these operations. To me, the most important question is what should happen if not all the components in a chain model the concept. I'm very nervous about assuming that a no-op is a reasonable default behavior, so I'm inclined to say the operations must fail in that case. But doesn't that, in effect, force people who want to write reusable components to provide implementations for a long list of operations?
________________________________________________
> > > > > - generate a string description of what is in the stream chain
> > > > > (like stream::dump() for debugging)
> > > >
> > > > The character sequences, or the filter/resource sequence?
> > >
> > > The latter. E.g. to print debug info on the console.
> >
> > This is a good idea for a debug-mode feature, but I think it only
> > really makes sense if I implement flushable and allow filters to
> > be swapped in and out during i/o. Otherwise, it should be pretty
> > obvious which filters are in a chain just by looking at the sequence
> > of pushes.
>
> And a good idea for the maintenance programmer.

True.

> ____________________________________________________
> A few notes from reading the sources:
>
> 1. scope_guard.hpp: maybe this file could be moved into
> boost/detail and used by multi_index + iostreams until
> something gets boostified.

I'm for this. Unfortunately, my simplified scope_guard isn't working on CW8.3 (though it passes the regression tests I've written for it.)
So I'll probably use Joaquín's.

> 2. utility/select_by_size.hpp: would it be possible to use
> PP local iteration here to make it a bit simpler?

Probably. I didn't know about local iteration when I wrote it ;-)

> 3. io/zlib.hpp:
>
> #ifdef BOOST_MSVC
> # pragma warning(push)
> # pragma warning(disable:4251 4231 4660) // Dependencies not exported.
> #endif
>
> could rather be
>
> #if BOOST_WORKAROUND(BOOST_MSVC, <= ....)
> # pragma warning(push)
> # pragma warning(disable:4251 4231 4660) // Dependencies not exported.
> #endif

Good. I guess I should use TESTED_AT here.

> 4. the
>
> #if (defined _MSC_VER) && (_MSC_VER >= 1200)
> # pragma once
> #endif
>
> should be added everywhere.
> I do not see it in the sources I have here, maybe it's an old version.

It was a relatively recent addition. It seems to be on the web; maybe I didn't update the zips. Anyway, the version you have should be almost identical. (I actually used

#if defined(_MSC_VER) && (_MSC_VER >= 1020)
# pragma once
#endif

which I copied from somewhere; this is a cruel joke because I doubt the library will ever work for _MSC_VER < 1300.)

> 5. io/io_traits.hpp:
>
> #define BOOST_SELECT_BY_SIZE_MAX_CASE 9
>
> ==>
>
> #ifndef BOOST_SELECT_BY_SIZE_MAX_CASE
> # define BOOST_SELECT_BY_SIZE_MAX_CASE 9
> #endif

The docs for select-by-size are here: . Maybe I should add it to the current lib docs. The usage is supposed to be

#define BOOST_SELECT_BY_SIZE_MAX_CASE xxx
#include <boost/utility/select_by_size.hpp>

Including the header undef's BOOST_SELECT_BY_SIZE_MAX_CASE.

> 6. docs "Function Template close": the link to the header is broken.

Thanks. When I wrote that page, operations was still in boost::detail.

> In this page: something like UML's sequence or collaboration
> diagram could be added.

Good idea. Closable is actually the hardest concept to document.

> (I am a visual type so I always ask for pictures and code examples.)
>
> 7.
windows_posix_config.hpp: I find the macros
> BOOST_WINDOWS/BOOST_POSIX too general for iostreams.
> Either they should be in Boost.Config or something like
> BOOST_IO_WINDOWS should be used.

That's one of the other changes I made late. I had been borrowing BOOST_WINDOWS/BOOST_POSIX from Boost.Filesystem, but didn't realize until recently that cygwin users are supposed to be able to pick either configuration. That definitely doesn't work for my library.

> It may also fail on exotics such as AS400.

Do you mean neither configuration will work in this case? I have to figure out graceful ways for mmap and file descriptors to fail on unsupported systems.

> 8. Maybe the library files could be more structured.
> Right now its directories contain 32, 24, 11 and 9 files
> and it may be confusing to orient oneself in them.
>
> There may be subdirectories such as utils/, filters/,
> filters/compression/ etc.

I'm leaning in that direction. If the library is accepted, I may ask your advice on organization after I decide what other filters to include.

> 9. Maybe assert() could be replaced with BOOST_ASSERT().

Okay. I always forget about BOOST_ASSERT. There doesn't even seem to be a link to it from the libraries page or from Boost.Utility.

> 10. disable_warnings.hpp: maybe this and other similar files could
> be merged into iostream_config.hpp.

I like to disable warnings at the beginning of a file, and enable them again at the end. I only use this in a few places.

> (Others: enable_stream.hpp.)

Okay.

> 11. details/assert_convertible.hpp:
>
> The macro here doesn't feel very useful.
> A straight BOOST_STATIC_ASSERT(is_convertible....)
> would take the same space and would convey more information,
> and immediately.

I'll probably use BOOST_MPL_ASSERT(is_convertible<..>), now that it's available.

> 12. detail/access_control.hpp: this feels as if it could be in boost/utility/
> or in boost/detail/.
>
> An example(s) could be provided in the header.

I'm glad to hear you think it may be useful.

> 13.
detail/buffer.hpp: should have > #include <boost/noncopyable.hpp> > > > Commenting nitpick: maybe instead of > > // Template name: buffer > // Description: Character buffer. > // Template paramters: > // Ch - The character type. > // Alloc - The Allocator type. > // > template< typename Ch, > typename Alloc = std::allocator<Ch> > > class basic_buffer : private noncopyable { > > > could be > > // what it is used for.... > template< typename Ch, > typename Alloc = std::allocator<Ch> > > class basic_buffer : private noncopyable { > > Too many comments here are obvious and this makes > reader easy to skip them all. Agreed. I have a semi-standard way of documenting templates, which sometime results in the sort of uninformative comments you quote above. But I don't follow this pattern consistently, so there's not much point. > OTOH it should be explained why there's basic_buffer > and buffer and why its functionality isn't merged into one > coherent class. This is far from obvious. Okay. For the record: - The extra pointers aren't needed most of the time, so basic_buffer is used for (pretty trivial) space-saving and to emphasize that only the limited interface is used. > E.g. the design allows swap(basic_buffer, buffer). I guess I could disable that. > It should be explained why std::vector or boost::array > isn't enough. - vector<Ch> initializes each character - boost::array is statically sized > Maybe these class(es) could be factored out into boost/details/ > or standalone mini-library. > > 14. detail/config.hpp: the trick with > > #ifndef BOOST_IO_NO_SCOPE_GUARD > > I have feeling it should be removed. Compilers who cannot > handle even scope guard are not worth to support. > Or the scope_guard for these could be dummy. Unfortunately, my scopeguard fails to work on one pretty good compiler. I assume I'll either fix it or use Joaquín's version. Either way, these conditional sections will be removed. But I need to be able to run the regression tests in the mean time. 
> The number of macros in the iostreams library would
> better be reduced.
>
> The BOOST_IO_NO_FULL_SMART_ADAPTER_SUPPORT
> macro: it should be explained in a code comment what it means.
> Maybe something like it could be moved into Boost.Config,
> or maybe something like this is already in Boost.Config.

This is for Borland, which seems to go into an infinite loop of template instantiations without this workaround. I'm not sure what the underlying problem is.

> BOOST_IO_DECL: this should be in Boost.Config
> as BOOST_DECLSPEC. Other libraries (regex) have
> their own macros and it is all one big mess.

But it uses BOOST_IO_DYN_LINK. Are you saying users shouldn't be able to link dynamically to selected boost libraries? Maybe it could be BOOST_DECLSPEC(IO).

> 15. converting_chain.hpp: how brackets are positioned
> would better be unified. Here I see the
> if (xxxxxxxxx)
> {
>     ....
> }

I don't see that in converting_chain.hpp. I typically write if's that way only if the condition takes multiple lines.

> and elsewhere Kernighan notation,
> and it makes me wonder whether it has some
> hidden meaning.

Are you perhaps talking about the distinction between

void f()
{
}

and

void f() { }

? I prefer the latter when defining functions within a class body, because it seems more readable. But I admit I am somewhat inconsistent.

> > Agreed.

converting_chain, converting_streambuf and converting_stream are not yet up and running.

> 16. double_object.hpp:
>
> typo in source "simalr"

Thanks.

> Wouldn't it make sense to have this in the compressed_pair library?

I find it very useful, but I wasn't sure if others would. Adding it to compressed_pair makes sense. Maybe I'll start by putting it in detail.

> 17. forwarding.hpp: the macro
> BOOST_IO_DEFINE_FORWARDING_FUNCTIONS
> is used in exactly one place. Maybe it could be defined/used/undefined
> here to make the overall structure simpler.

That's probably a good idea.
Originally it was part of detail/push, which could be useful to end-users (when I document it). I guess when I factored out the forwarding part I didn't realize that it had limited utility.

> 18. io/detail/streambufs/ : there are temporary files
> indirect_streambuf.hpp.bak.hpp and indirect_streambuf.~hpp.

These are gone now.

> 19. details/chain.hpp:
>
> #if defined(BOOST_MSVC) && _MSC_VER == 1300
> virtual ~chain_base() { } // If omitted, some tests fail on VC7.0. Why?
> #endif
>
> doesn't this change the semantics of the class?

Theoretically. But it's never supposed to be used for run-time polymorphism. And it's an implementation detail.

> An ASCII class diagram could be here.

I guess that's a reasonable request, since chain is the heart of the filtering implementation.

> 20. detail/iterator_traits.hpp: looking at the specializations:
> would it make sense to have unsigned char/signed char
> versions as well?

I guess it wouldn't hurt. But is char_traits typically specialized for these types?

> (There are more places with explicit instantiations that would
> need a possible update.)

I don't see any. Unless you mean std::char_traits ;-)

> 21. test lzo.cpp refers to non-existing boost/io/lzo.hpp.

Right. I got rid of it because of copyright issues.

> 22. io/file.hpp: what exactly is the reason to have a pimpl in
> the basic_file_resource class?
>
> I do not see headers that don't need to be included,
> I do not see dynamic switching of pimpls, I do not see
> eager or lazy optimizations. I see only overhead and complexity.

Exception safety. I left this out of the latest rationale, but it's an important part I'm going to put back in. Several iterations ago, the usage would have been:

filtering_istream in;
in.push(new gzip_decompressor());
in.push(new file_source("hello.gz"));

Generalizing this convention of passing by pointer and transferring ownership led to exception safety problems in other parts of the library.
So the current convention is that filters and resources are passed by value, and so must be copy constructible. (Streams and stream buffers are stored by reference by default, and the same effect can be achieved for an arbitrary component using boost::ref().) So, to answer your question: basic_file_resource wraps a basic_filebuf, which is generally non-copyable, so I used a shared_ptr.

> 23. io/memmap_file.hpp: why is there a pimpl in the mapped_file_resource class?

To avoid having to include operating system headers from a header file.

> 24. io/regex_filter.hpp, function do_filter() contains:
>
> void do_filter(const vector_type& src, vector_type& dest)
> {
>     ......
>     iterator first(&src[0], &src[0] + src.size(), re_, flags_);
>
> Is the &src[0] safe if the vector is empty? I don't know if the standard
> allows it, it just caught my eye.

Good point. I don't know if it's safe. But it's easy enough to handle this case separately.

Thanks for the very detailed criticism!

> /Pavel

Jonathan

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2004/09/71698.php
import os
import shutil

os.chdir('C:\\')

# Make sure you add your source and destination path below
dir_src = "C:\\source\\"
dir_dst = "C:\\destination\\"

for filename in os.listdir(dir_src):
    if filename.endswith('.mdb'):
        shutil.copy(dir_src + filename, dir_dst)
        print(filename)

If you need to construct both paths (source and destination), then probably os.listdir() with os.path.join() is better:

#!python3
import glob
import os
import shutil

dir_src = 'C:/source'
dir_dst = 'C:/destination'

for filename in glob.iglob(os.path.join(dir_src, '*.mdb')):
    print(filename)
    shutil.copy(filename, dir_dst)
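On Python 3.4 and later, the same task can be sketched with pathlib, which avoids manual string joining. The paths and the helper function name are examples, not part of the original answer.

```python
import shutil
from pathlib import Path

def copy_mdb(src: Path, dst: Path) -> list:
    """Copy every .mdb file from src to dst; return the copied file names."""
    copied = []
    for path in src.glob("*.mdb"):
        shutil.copy(path, dst / path.name)  # dst / name joins the path safely
        copied.append(path.name)
    return copied

# Example paths; adjust to your machine.
dir_src = Path("C:/source")
dir_dst = Path("C:/destination")
if dir_src.exists():
    for name in copy_mdb(dir_src, dir_dst):
        print(name)
```

Path.glob does the filtering that the endswith() check did above, and the `/` operator replaces os.path.join.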
https://www.experts-exchange.com/questions/28975523/Copy-Files-Python.html
Ever wanted different behavior between DEBUG and RELEASE builds? But without having to fork your code or resort to preprocessor directives? Well, let me introduce you to the Conditional attribute. The Conditional attribute is simple: the method isn't executed unless a particular string is defined at compile time. Let's look at an example:

using System.Diagnostics;

[Conditional("DEBUG")]
static void method1() { /* ... */ }

[Conditional("custom")]
static void method2() { /* ... */ }

By default Visual Studio defines "DEBUG" for debug builds, so method1 would work. But method2 would be replaced with a no-op unless "custom" was defined during the build process. (You can add your custom symbols on the Build application designer page.) This way you can change program behavior with a simple build flag, which can be extremely useful in certain situations.
http://blogs.msdn.com/b/chrsmith/archive/2005/07/07/the-conditional-attribute.aspx?Redirected=true
Yesterday I built my first microservice (a RESTful API) using Go, and I wanted to collect a few of my thoughts on the experience here before I forgot them. The project, Scribo, is intended to aid in my research by collecting data about a specific network that I’m looking to build distributed systems for. I do have something running, which will need to evolve a lot, and it could be helpful to know where it started.

When I first sat down to do this project, I thought it was pretty straightforward. I watched “Writing JSON REST APIs in Go (Go from A to Z)” and read through the “Making a RESTful JSON API in Go” tutorial. I was going to deploy the service on Heroku, add testing with Ginkgo and Gomega, and use continuous integration with Travis-CI. I was used to doing the same kind of thing with Flask or Django and figured it couldn’t take that long. After a full day of coding, I did manage to do all the things I mentioned above, but with a number of angsty decisions that have caused me to write this post. There were three main holdups that caused me to have trouble moving forward quickly:

- The choice of a RESTful API framework
- Structuring the project
- Managing dependencies and versions

Briefly I want to go over how each went down and the choices I made.

Framework

At the moment I’ve ended up using Gorilla mux, though it was in pretty strong contention with go-json-rest. Note that these frameworks were the two proposed in both of the tutorials I mentioned earlier. I saw but did not consider Martini, which is no longer maintained, and Gin, which apparently is Martini-like but faster. I was warned off of these frameworks by a post by Stephen Searles, even though the majority of tutorials on the first page of Google results mentioned and used these frameworks. I think the response by Code Gangsta to Searles' criticism highlights the trouble that I had selecting a framework.
I was expecting to come in and have to perform some hoop jumping to select a framework, sort of like Flask vs. Django or Sinatra vs. Rails. I hoped that I would have been easily steered away from projects like Bottle (not a bad project, just not very popular) simply because of the number of tutorials. The issue is that Go is so new, and Go developers come from other communities, that idiomatic Go frameworks are still pretty tough to write because a lot of thought has to go into what that means. Moreover, Go’s standard library, namely net/http, is so good that you don’t really have to build a lot on top of it (whereas you would never build a web app directly on top of Python’s HTTPServer). Go is intended to be the compilation of small, lightweight packages that are very good at the one thing they do well. It is not intended for large frameworks. Even Gorilla seems a bit too large in this context. I guess what I want is some small lightweight Resource API like the one described in “A RESTful Microframework in Go” — which I intend to build for my platform. Since we can’t be expected to build all these small components on our own, this led me to the next problem: packaging.
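To illustrate the earlier point that net/http alone gets you most of the way there, here is a minimal JSON endpoint sketch using nothing outside the standard library; the node type and values are illustrative, not Scribo's actual schema:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// node is an illustrative resource this sketch serves.
type node struct {
	Name string `json:"name"`
	IP   string `json:"ip"`
}

// nodesHandler writes a JSON array of nodes using only the standard library.
func nodesHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode([]node{{Name: "alpha", IP: "10.0.0.1"}})
}

// serveNodes exercises the handler in-process and returns the body it wrote.
func serveNodes() string {
	rec := httptest.NewRecorder()
	nodesHandler(rec, httptest.NewRequest("GET", "/nodes", nil))
	return rec.Body.String()
}

func main() {
	fmt.Print(serveNodes())
}
```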
Moreover, I don’t like my code to be at the top level of the repository. I need some organization for large projects that don’t span repositories, and would like things to be in a subfolder (maybe I’ll get over this and be a better Go programmer). Based on Heroku’s suggestions, I followed the advice of Ben Johnson in “Structuring Applications in Go”. I put my main.go in a cmd folder so that it wouldn’t be built automatically on go get. I still forced my library into a subpackage, which requires me to specify ./... for most go commands to recursively search the directory for Go code. I’m decently ok with how things are now, but still not wholly comfortable.

Also - this is a web application, so I need to add HTML, CSS, and JavaScript files. Where to put those? Right now they’re in the root of the repository, but honestly this doesn’t feel right. I just wanted to create a small and simple one page app to view the microservice under the hood. The essential problem was that I couldn’t find a single example of a web app built using Go. This may just be a matter of me not being able to Google for it correctly, but I still need those examples!

Dependency Management

Apparently there was some discussion in Go between when I first started using it (1.3) and when I came back to it (1.6), and during Go 1.5 there was a “vendor experiment”. Vendoring is a mechanism of preserving specific dependency requirements for a project by including them (usually in a subfolder called vendor) in the source version control for your project. This is opposed to other mechanisms where you simply specify the version of the dependency you want and can fetch it (e.g. with go get) during the build process. From what I can tell, the vendor experiment won, and dependency management tools like Godep and The Vendor tool for Go had to do a bit of reorganizing.
Because of Travis-CI and Heroku (which automatically look for a folder in your project called Godeps, created by the godep save command), I went with Godep over anything else. Still, I’m not happy with this solution. I have no guide about which projects to select or use. Moreover, my src/github.com directory is getting filled up with a TON of projects. I feel like more investigation needs to be done here as well.

Conclusion

Yesterday I was super excited; today I’m nervous but ready. I have a lot of questions, but I hope that I’ll be moving forward to doing some serious Go programming in the future. I hope to one day be as good a Go programmer as I am a Python programmer, so that I can naturally create fast, effective systems.
https://bbengfort.github.io/2016/05/a-microservice-in-go/
Using Java in Talend

Java is a hugely popular and incredibly rich programming language. Talend is a Java code generator which makes use of many open source Java libraries, so this means that Talend functionality can easily be extended by integrating Java code into Talend jobs. The Java representation allows you to transform Java object instances. You can import Java classes individually, from a folder, or in a JAR file, and the Java importer will create structure definitions from each class. At runtime, you can provide Java object(s) as the source(s) of a transformation or accept them as the result(s). This section contains recipes that show some of the techniques for making use of Java within Talend jobs.

Introduction

For many data integration requirements, the standard Talend components provide the means to process the data from start to end without needing to use Java code apart from in tMap. For more complex requirements, it is often necessary to add additional Java logic to a job, and in other cases adding custom Java code will provide a simpler, more elegant, or more efficient solution than using the standard components.

Performing one-off pieces of logic using tJava

The tJava component allows one-off logic to be added to a job. Common uses of tJava include setting global or context variables prior to the main data processing stages and printing logging messages.

Getting ready

Open the job jo_cook_ch05_0000_tJava.

How to achieve it…

- Open the tJava component.
- Type in the following code:

System.out.println("Executing job "+jobName+" at "+TalendDate.getDate("CCYY-MM-dd HH:mm:ss"));

- Run the job. You will see that a message is printed showing the job name and the date and time of execution.

How it works…

If you examine the code, you will see that the Java code is simply added to the generated code as it is. This is why you must remember to add ; to the end of the line to avoid compilation errors.
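Outside of Talend, the logging line from this recipe could be written in plain Java roughly as follows, assuming Talend's CCYY-MM-dd pattern corresponds to yyyy-MM-dd in SimpleDateFormat:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class LogJobStart {

    // Format the current timestamp the way TalendDate.getDate("CCYY-MM-dd HH:mm:ss") would.
    static String startMessage(String jobName) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        return "Executing job " + jobName + " at " + fmt.format(new Date());
    }

    public static void main(String[] args) {
        System.out.println(startMessage("jo_cook_ch05_0000_tJava"));
    }
}
```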
Setting the context and globalMap variables using tJava

Although this recipe is centered on the use of tJava, it also acts as a convenient means of illustrating how the context and globalMap variables can be directly referenced from within the majority of Talend components.

Getting ready

Open jo_cook_ch05_0010_tJavaContextGlobalMap, then open the context panel, and you should see a variable named testValue.

How to achieve it…

- Open tJava_1 and type in the following code:

System.out.println("tJava_1");
context.testValue = "testValue is now initialized";
globalMap.put("gmTestValue", "gmTestValue is now initialized");

- Open tJava_2 and type in the following code:

System.out.println("tJava_2");
System.out.println("context.testValue is: " + context.testValue);
System.out.println("gmTestValue is: " + (String) globalMap.get("gmTestValue"));

- Run the job. You will see that the variables initialized in the first tJava are printed correctly in the second.

How it works…

The context and globalMap variables are stored as globally available Java HashMaps, meaning that they hold keyed values. This enables these values to be referenced within any of the other components, such as tMap, tFixedFlowInput, and tFileInputDelimited.

There's more…

This recipe shows variables being set in a one-off fashion using tJava. It is worth noting that the same principles apply to tJavaRow. Because tJavaRow is called for every row processed, it is possible to create a global variable for a row that can be referenced by all components in a flow. This can be useful when pre and post field values are required for comparison purposes later in the flow. Storing values in globalMap avoids the need to create additional schema columns.

Adding complex logic into a flow using tJavaRow

The tJavaRow component allows Java logic to be performed for every record within a flow.

Getting ready

Open the job jo_cook_ch05_0020_tJavaRow.
How to achieve it…

- Add the tJavaRow and tLogRow components.
- Link the flows as shown in the following screenshot:
- Open the schema and you will see that there are no fields in the output. Highlight name, dateOfBirth, and age, and click on the single arrow.
- Use the + button to add new columns cleansedName (String) and rowCount (Integer), so that the schema looks like the following:
- Close the schema by pressing OK and then press the Generate code button in the main tJavaRow screen. The generated code will be as follows:

//Code generated according to input schema and output schema
output_row.name = input_row.name;
output_row.dateOfBirth = input_row.dateOfBirth;
output_row.age = input_row.timestamp;
output_row.cleansedName = input_row.age;
output_row.rowCount = input_row.age;

- Change the row output_row.age = input_row.timestamp to read output_row.age = input_row.age.
- Remove the rows for cleansedName and rowCount, and replace them with the following code:

if (input_row.name.startsWith("J ")) {
    output_row.cleansedName = StringHandling.EREPLACE(input_row.name, "J ", "James ");
}
if (input_row.name.startsWith("Jo ")) {
    output_row.cleansedName = StringHandling.EREPLACE(input_row.name, "Jo ", "Joanne ");
}
output_row.rowCount = Numeric.sequence("s1", 1, 1);

- Run the job. You will see that "J " and "Jo " have been replaced, and each row now has a rowCount value.

How it works…

The tJavaRow component is much like a 1 input to 1 output tMap, in that input columns can be ignored and new columns can be added to the output. Once the output fields have been defined, the Generate code button will create a Java mapping for every output field. If the names are the same, then it will map correctly. If input fields are not found or are named differently, then it will automatically map the field in the same position in the input or the last known input field, so be careful when using this option if you have removed fields.
In some cases, it is best to propagate all fields, generate the mappings, and then remove unwanted fields and mappings.

Tip: Also, be aware that the Generate code option will remove all code in the window. If you have code that you wish to keep, then ensure that you copy it into a text editor before regenerating the code.

As you can also see from the code that was added, it is possible to use Talend's own functions (StringHandling.EREPLACE, Numeric.sequence) in the Java components, along with any other normal Java syntax, like the if statement and the startsWith String method.

Importing JAR files to allow use of external Java classes

Talend has a rich set of functions and libraries available within its suite. However, you may want to use libraries provided by a data or API vendor; for example, if you want to fetch data from Google AdWords, then you may want to import the library/jar files provided by Google into Talend. Occasionally, during development, it is necessary (or simpler) to make use of Java classes that aren't already included within Talend. These may be pre-existing Java code such as financial calculations, or open source libraries, such as those provided by The Apache Software Foundation.

In this example, we will make use of a simple Java class ExternalValidations and its ExternalValidateCustomerName method. This class performs the following simple validation:

if (customerName.startsWith("J ")) {
    return customerName.replace("J ", "James ");
} else {
    if (customerName.startsWith("Jo ")) {
        return customerName.replace("Jo ", "Joanne ");
    } else {
        return customerName;
    }
}

Getting ready

Open job jo_cook_ch05_0050_externalClasses.

How to do it…

- Create a code routine called externalValidation.
- Right-click and select the option Edit routine Libraries.
- In the next dialogue, click on New.
- Select the option Browse a library file, and browse to the cookbookData folder, which contains a sub-folder named externalJar. Click on the jar, then click OK to confirm.
The import dialogue should now look like the following:

- Return to the job, open tJavaRow, and click on the Advanced settings tab.
- Add the following code:

import talendExternalClass.ExternalValidations;

- Return to the Basic tab and add the following code:

output_row.validatedName = ExternalValidations.ExternalValidateCustomerName(input_row.name);

- Run the job. You will see that the validations have taken place, and the customer names have been changed.

Note: If you get an error when running this job, then it is possibly because the new class has not been set up as a dependency automatically.

How it works…

The code routine externalValidations is a dummy routine used to attach the external jar file and make it available for all jobs in the project. In order to use the classes in the JAR file, it is necessary to add an import statement within the tJavaRow so that the code knows where to find the methods.

There's more…

An alternative method of achieving this for just a single job is to use the tLibraryLoad components at the start of the job to define the location of the external libraries and the JAR files required.
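As a stand-alone sketch, reconstructed from the snippet shown in this recipe rather than taken from the cookbook's jar, the external class could look like this:

```java
public class ExternalValidations {

    // Expand abbreviated first names: "J " becomes "James ", "Jo " becomes "Joanne ".
    public static String ExternalValidateCustomerName(String customerName) {
        if (customerName.startsWith("J ")) {
            return customerName.replace("J ", "James ");
        }
        if (customerName.startsWith("Jo ")) {
            return customerName.replace("Jo ", "Joanne ");
        }
        return customerName;
    }

    public static void main(String[] args) {
        System.out.println(ExternalValidateCustomerName("J Smith"));  // James Smith
        System.out.println(ExternalValidateCustomerName("Jo Smith")); // Joanne Smith
    }
}
```

Note that "Jo Smith" does not start with "J " (the second character is "o", not a space), so the order of the two checks works as intended.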
http://mindmajix.com/talend/using-java
In this tutorial, I will provide a brief overview of the proportional navigation family of guidance laws, but more importantly, we will go over some sample code so you can get straight to trying it out in your own game. This tutorial (and the accompanying demonstration video) is written using World in Conflict MW Mod. Sample code is written in Python for MW Mod's FLINT Missile System.

History of Proportional Navigation (PN)

PN is the foundation of modern homing guidance in virtually every guided missile in use today. The theory of PN was developed by the US Navy during World War II, as desperate measures were needed to protect their ships from Japanese kamikaze attacks, and gun-based defense systems (even automated fire-director-controlled guns) simply did not have the range to take out the target at a safe distance, and easily became overwhelmed by saturation. Automatic homing missiles were urgently needed to defend ships at sea. Although the theory of PN was apparently known by the Germans during WWII, no applications were reported and the war ended shortly thereafter. The US Navy's experimental Lark missile was the first to implement PN, soon followed by Sparrow and the venerable AIM-9 Sidewinder after the war.

PN is cheap to implement and has been demonstrated to provide effective homing guidance -- this is because PN does not require geometry information such as range to target and target speed (though it can also benefit from them), making it ideal to implement on simpler "range-denied" missiles, such as passively heat-seeking IR missiles and semi-active laser homing variants. Not only that, the theory of using Line of Sight (LOS) information to develop a collision course is such a powerful concept that virtually every advanced guidance law used today shares its ancestry back to PN.
If you ever see people attempting to code so-called "advanced target homing" missiles in games using quadratic equations and trigonometry, you can clearly see that they're reinventing the wheel and have some learning to do :-) PN totally laughs at, and beats out, every other form of homing guidance used by games, which integrate states by solving what is essentially high school geometry.

Understanding Theory of PN

PN works on the principle of "Constant Bearing, Decreasing Range" (CBDR): when two objects are heading in the same direction with no change in Line of Sight (bearing) angle, the objects *will* collide. The LOS is an imaginary sight-line between you and the target; when the target is moving from left to right in your field of view, it is said that the LOS angle is changing from left to right. If, however, you were to also run from left to right and accelerate appropriately to keep the target centered in your field of view, it is then said that the rate at which the LOS angle is changing (the LOS rotation rate) is zero ("null"). Continuing to run with the LOS rate maintained at zero will result in intercept and "lead pursuit" collision with the object you're chasing.

Mathematically, PN is stated as follows:

Commanded Acceleration = N * Vc * LOS_Rate

N = unitless navigation gain (constant) -- between 3 and 5
Vc = closing velocity (or "range closing rate")
LOS_Rate = LOS rotation rate

To describe this along a two-dimensional missile-target engagement geometry:

When working with PN, it is important to understand that the missile's acceleration is commanded (Acmd) normal to (aka perpendicular to) the LOS, and proportional to the LOS rotation rate. The LOS, as stated earlier, is the sight-line between the missile and the target, which would be the "Rtm" line in the above diagram. The LOS rotation rate is the angular rate at which the LOS line changes -- denoted by the over-dot theta "LOS" grey angle in the diagram above.
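To get a feel for the formula, here is a quick numeric sketch in Python; the input values are purely illustrative:

```python
def pn_accel(n, vc, los_rate):
    """Commanded lateral acceleration for classic PN: N * Vc * LOS_Rate."""
    return n * vc * los_rate

# N = 3, closing at 300 m/s, LOS rotating at 0.02 rad/s:
a_cmd = pn_accel(3.0, 300.0, 0.02)  # about 18 m/s^2, roughly 1.8 g of lateral pull
```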
Prerequisites

Before we get started, we first assume that you already have very basic knowledge of implementing homing missiles in a game. If you are this far, you probably know that in order for your game physics to work, you need some sort of integration scheme (e.g. Euler, Verlet, Runge-Kutta 4, etc.) that provides step-by-step numerical integration at every frame. The FLINT missile system in WiC MW uses Velocity-Verlet, but essentially, what it all comes down to is simple: you step forward one step (or frame) at a time and solve for your states. If you need help with game physics and integrators, check out Gafferongames.com

Implementing PN in Game

Now, to implement PN, we need to fill in the blanks from the above equation "An = N * Vc * LOS_Rate." Let's discuss how to obtain the individual input variables to calculate our required lateral acceleration (latax).

1. Obtaining the LOS Rotation Rate (LOS_Rate)

As explained above, the LOS rotation rate is the rate at which the target is crossing your field of view, or more exactly, the rate at which the sight-line angle is changing. Obtaining LOS_Rate is easy, especially in a game environment. In real life, the missile seeker sits on a gimbaled gyro -- the seeker rotates in the gyro to keep the target locked on; the rate at which it rotates is your LOS rate. In real life, engineers have to contend with seeker noise contamination, but in games, we're essentially working in a noise-free environment.

First, you need to obtain the directional vector between the missile and the target (remember "Rtm" in the above diagram?) -- this is your LOS:

RTM_new = math.Vector3( targetPosition ) - math.Vector3( missilePosition )

You will measure and record the LOS at every frame; you now need to get the difference between the new RTM (RTM_new) you just measured and the LOS obtained from the previous frame (RTM_old) -- this is your change in LOS (LOS_Delta).
RTM_new.Normalize()
RTM_old.Normalize()
LOS_Delta = ( RTM_new - RTM_old )
LOS_Rate = LOS_Delta.VectorLength()

2. Closing Velocity (Vc)

Closing velocity (Vc) is the rate at which the missile and the target are closing onto one another. In other words, as the missile is traveling toward its target, it gets closer to the target, no? So, the rate at which our missile is closing the distance to its target is called the "range closing rate" or simply "closing velocity." This range rate is mathematically defined as follows:

Vc = -Rtm_overdot

-Rtm_overdot = negative rate of change of the distance from the missile to the target.

Now, this raises a curious question: How do you obtain 'range rate' on passive heat-seeking missiles that have no radar to measure distance? Well, you guesstimate it (lol) -- that's what the early rudimentary version of the Sidewinder in 1953 did. It doesn't work very well against accelerating or maneuvering targets, but it was effective enough for what was the world's first PN heat-seeking missile. In modern versions like the AIM-9L, AIM-9X, Stinger, etc., you have computation power available in the seeker to "process" the intensity of IR or the image it sees. As the IR signature gets closer, it gets more intense -- the rate of intensity change is your range closing rate.

On radar-guided missiles, including semi-active homers like the original Sparrow missile, you have better luck! On a radio-frequency based sensor, the seeker can observe the Doppler frequency of the target return to calculate the rate at which the missile is closing onto its target. The Doppler effect is quite measurable on wavelengths as you get closer to or farther from the target. The negative rate of change in Doppler frequency is the closing velocity.

Anyway, now let's get back to our in-game code. Recall from earlier that we derived our LOS_Rate in vector space as the difference between the current frame's missile-target vector (RTM_new) and the previous frame's missile-target vector (RTM_old).
The length of the vector for this difference is denoted as LOS_Rate above. Well, just so it happens, as our missile is closing onto the target, RTM_new is shorter than RTM_old! So the LOS_Rate length itself is a rate of change in missile-target distance. Then, conversely speaking, the negative rate of this distance change is our range closing rate, aka the closing velocity:

Vc = -LOS_Rate

3. Navigation Gain (N)

Navigation gain (or navigation constant) is a designer-chosen unitless variable usually in the range of 3 to 5. The higher the N, the faster your missile will null out heading errors. Generally, it is recommended that N stay between 3 and 5. FLINT missiles in WiC MW use an N of 3; most missiles in real life use 3 as well.

4. Augmented Proportional Navigation (APN)

When implementing PN, it is best practice to focus on the target's acceleration normal to the LOS -- we're using a missile to hit a moving target, after all. What does 'acceleration normal to LOS' mean exactly? Well, as the missile is homing onto the target, the target would most likely move laterally, or 'perpendicular to', the LOS sight-line (a crossing target would present the most movement normal to the LOS). The reality is that, often, the target is not moving at a constant velocity -- it changes direction, slows down, or accelerates. You also have an upward sensible acceleration of 1 g even for non-maneuvering targets if you're simulating gravity. To account for these factors, you add a term to our PN formula by adding "( N * Nt ) / 2":

Commanded Acceleration = N * Vc * LOS_Rate + ( N * Nt ) / 2

Nt = target acceleration (estimated) normal to LOS

Even for targets that do not maneuver, the target's one-g sensible acceleration is multiplied by N/2, producing a more efficient intercept.

Putting it all together - Sample Code

Below is sample FLINT missile system code for the FIM-92 Stinger heat-seeking missile, written in Python.
It is the simplest missile in the game to employ augmented PN, as it's a passive heat-seeking missile.

def GCFLINT_Lib_APN( msl_pos, tgt_pos, msl_pos_previous, tgt_pos_previous, latax, N = None, Nt = None ):
    """
    Augmented Proportional Navigation (APN)
    A_cmd = N * Vc * LOS_Rate + N * Nt / 2

    msl_pos: Missile's new position this frame.
    tgt_pos: Target's new position this frame.
    msl_pos_previous: Mutable object for missile's position previous frame.
    tgt_pos_previous: Mutable object for target's position previous frame.
        Set these objects to "0" during first-time initialization, as we
        haven't yet started recording previous positions.
    latax: Mutable object for returning guidance command.
    N: (float, optional) Navigation gain (3.0 to 5.0)
    Nt: (float, optional) Target acceleration amount normal to LOS
    """
    import wic.common.math as math
    from predictorFCS_flint_includes import *
    from predictorFCS_EXFLINT import *

    if N is None:
        # navigation constant
        N = 3.0
    elif not isinstance(N, float):
        raise TypeError("N must be float")

    if Nt is None:
        # one-g sensible acceleration
        Nt = 9.8 * EXFLINT_TICKTOCK
    elif not isinstance(Nt, float):
        raise TypeError("Nt must be float")

    if msl_pos_previous != 0 and tgt_pos_previous != 0:
        # Get missile-target vectors for the previous and new frames (Rtm)
        RTM_old = ( math.Vector3( tgt_pos_previous ) - msl_pos_previous )
        RTM_new = ( math.Vector3( tgt_pos ) - msl_pos )

        # normalize RTM vectors
        RTM_new.NormalizeSafe()
        RTM_old.NormalizeSafe()

        if RTM_old.Length() == 0:
            LOS_Delta = math.Vector3( 0, 0, 0 )
            LOS_Rate = 0.0
        else:
            LOS_Delta = math.Vector3( RTM_new ) - RTM_old
            LOS_Rate = LOS_Delta.VectorLength()

        # range closing rate
        Vc = -LOS_Rate

        # Now, calculate the final lateral acceleration required for our missile
        # to home into our target.
        latax = RTM_new * N * Vc * LOS_Rate + LOS_Delta * Nt * ( 0.5 * N )

    # Update mutable position objects so we can integrate forward to next frame.
    msl_pos_previous = math.Vector3( msl_pos )
    tgt_pos_previous = math.Vector3( tgt_pos )

    # my job is done, it's now up to EXFLINT.Integrate() to steer the missile.
    return True

Video Demonstration and Wrapping it All Together

The augmented PN (APN) formula above should always be used for any moving targets -- even a non-maneuvering aircraft has an upward one-g sensible acceleration in a proper simulation environment, necessitating the need for APN. Watch the YouTube video accompanying this blog post very closely -- you'll see that the APN-guided missile initially makes a very violent maneuver to snap itself onto the collision path, and then only minor adjustments are made just prior to missile impact. This is how optimal PN is implemented in the real world and how it should be done in realistic combat simulation games.

In the next (near future) tutorial, we will go over advanced guidance laws using theories of optimal control, which are derivatives of PN with more engagement information fed in, such as estimated time-to-intercept and missile-target range.

Feel free to message me on ModDB or YouTube if you have any questions or comments about implementing PN in your game environment.

YouTube HD Link: youtu.be/Osb7anMm1AY

Is that okay with you if I link your article on /r/gamedev ?

Absolutely, please feel free!

Bro, I really really love your post!! can you give me some references for all the approaches that you have developed? please!!

Thanks for sharing such amazing info.
https://www.moddb.com/members/blahdy/blogs/gamedev-introduction-to-proportional-navigation-part-i
See other posts in this series

Fuzzy Logic

In this post I’ll show you how to build an object avoidance behaviour using Fuzzy Logic.

In contrast with traditional logic [Fuzzy Logic] can have varying values, where binary sets have two-valued logic, true or false, fuzzy logic variables may have a truth value that ranges in degree between 0 and 1. – Wikipedia

Since my own knowledge of fuzzy systems is somewhat limited, I’d recommend you read Fuzzy Logic Obstacle Avoidance by Seshi. This is where I found the equations that this post is based on. Like my last post, this code is designed to be ‘plugged’ into my Netduino Rover project as a behaviour.

The behaviour starts by initializing the weights in the FAMM (fuzzy associative memory matrix), and calculating the total sum of these weights. This will be used later. Since this behaviour should not prevent others from being executed, the Execute method returns false.

using System;
using Microsoft.SPOT;
using NetduinoRover.Outputs;
using NetduinoRover.Sensors;

namespace NetduinoRover.Behaviours
{
    public class FuzzyBehaviour : IBehaviour
    {
        private Motor _leftMotor;
        private Motor _rightMotor;
        private RangeSensor _leftSensor;
        private RangeSensor _rightSensor;
        private int[][] _weights = new int[3][];
        private double _sumOfWeights;

        public FuzzyBehaviour(Motor leftMotor, Motor rightMotor, RangeSensor leftSensor, RangeSensor rightSensor)
        {
            _leftMotor = leftMotor;
            _rightMotor = rightMotor;
            _leftSensor = leftSensor;
            _rightSensor = rightSensor;

            // Define FAMM weights (fuzzy associative memory matrix)
            _weights[0] = new int[3] { 3, 4, 5 };
            _weights[1] = new int[3] { 2, 3, 4 };
            _weights[2] = new int[3] { 1, 2, 3 };

            // Calculate sum of weights
            _sumOfWeights = 0;
            for (int x = 0; x < 3; x++)
            {
                for (int y = 0; y < 3; y++)
                {
                    _sumOfWeights += _weights[x][y];
                }
            }
        }

        public bool Execute()
        {
            // Pass the sensor readings into the fuzzy system
            double delta = GetFuzzyResult(_leftSensor.Read(), _rightSensor.Read());

            // Change the motor speeds based on the value of delta
            ChangeDirection(delta);

            return false;
        }

        // More methods to follow...
    }
}

GetFuzzyResult takes both sensors’ readings (in cm) and calculates to what degree each reading belongs to each fuzzy set (Near, Far or VeryFar). To get the fuzzy value, multiply each weight in the FAMM by the left and right membership, then divide the total by the sum of all weights in the FAMM. The output should be on the scale 0.07 (turn left) to 0.15 (turn right). 0.11 means go straight ahead.

private double GetFuzzyResult(int leftDistance, int rightDistance)
{
    // Membership function (left sensor)
    double[] leftMembership = new double[3];
    leftMembership[0] = Near(leftDistance);
    leftMembership[1] = Far(leftDistance);
    leftMembership[2] = VeryFar(leftDistance);

    // Membership function (right sensor)
    double[] rightMembership = new double[3];
    rightMembership[0] = Near(rightDistance);
    rightMembership[1] = Far(rightDistance);
    rightMembership[2] = VeryFar(rightDistance);

    // Defuzzifier
    double total = 0;
    for (int x = 0; x < 3; x++)
    {
        for (int y = 0; y < 3; y++)
        {
            total += _weights[x][y] * (leftMembership[x] * rightMembership[y]);
        }
    }
    return total / _sumOfWeights;
}

Each fuzzy set function calculates to what degree the distance belongs to the set. This is a core concept in fuzzy logic: the idea that things are not true/false; rather, they exist on a scale from 0 to 1. A given distance can be both near and far at the same time, but to different degrees.

private double Near(double distance)
{
    return Bound(-(distance / 50) + 1);
}

private double Far(double distance)
{
    if (distance < 50)
        return Bound(distance / 50);
    else
        return Bound(-(distance / 50) + 2);
}

private double VeryFar(double distance)
{
    return Bound((distance / 50) - 1);
}

The Bound method ensures the supplied value is kept within the range 0 to 1, clamping it as required.
private double Bound(double value) { if (value < 0) return 0; else if (value > 1) return 1; else return value; } Finally, the result of GetFuzzyResults needs to be converted into something the motors can use. Remember, delta will be in the range 0.07 to 0.15 (with 0.11 in the centre). With a little bit of maths, the input is shaped into a percentage of power for each motor. private void ChangeDirection(double delta) { // The scale is now -0.04 to +0.04 delta -= 0.11; // Convert to the scale -0.40 to +0.40 delta *= 10; double leftSpeed = 0.5; // 50% power as a starting point double rightSpeed = 0.5; // 50% power as a starting point leftSpeed -= delta; rightSpeed += delta; // Send steering to motors _leftMotor.SetSpeed(leftSpeed); _rightMotor.SetSpeed(rightSpeed); } With the behaviour complete, it can be plugged into the behaviour stack (see my last post). Here’s a video of the finished rover:
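To experiment with the numbers before deploying to hardware, the same membership functions and defuzzifier can be sketched in plain Python. This is my own prototype, not part of the rover project; only the constants and formulas come from the post.

```python
# Minimal Python sketch of the fuzzy pipeline above (names are mine):
# membership functions, FAMM weights, and the weighted-average defuzzifier.

WEIGHTS = [[3, 4, 5],
           [2, 3, 4],
           [1, 2, 3]]
SUM_OF_WEIGHTS = sum(sum(row) for row in WEIGHTS)  # 27

def bound(value):
    # Clamp a value into the range 0..1
    return max(0.0, min(1.0, value))

def near(d):
    return bound(-(d / 50.0) + 1)

def far(d):
    return bound(d / 50.0) if d < 50 else bound(-(d / 50.0) + 2)

def very_far(d):
    return bound((d / 50.0) - 1)

def get_fuzzy_result(left_distance, right_distance):
    left = [near(left_distance), far(left_distance), very_far(left_distance)]
    right = [near(right_distance), far(right_distance), very_far(right_distance)]
    total = 0.0
    for x in range(3):
        for y in range(3):
            total += WEIGHTS[x][y] * (left[x] * right[y])
    return total / SUM_OF_WEIGHTS

# An obstacle near on the left pushes delta above the 0.11 midpoint;
# an obstacle near on the right pushes it below.
print(get_fuzzy_result(10, 90))
print(get_fuzzy_result(90, 10))
print(get_fuzzy_result(50, 50))  # symmetric readings: straight ahead
```

Running it with symmetric readings confirms the "straight ahead" midpoint, which is a quick sanity check on the weight matrix.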
https://codeoverload.wordpress.com/2012/12/23/building-a-netduino-rover-part-5/
Opened 6 years ago
Closed 6 years ago

#7019 closed (invalid)

ImportError: No module named ImageFile

Description

Depending upon how an environment is configured, the import of ImageFile (from PIL) in django/utils/images.py fails on some hosts (OS X 10.4 'ports'):

import ImageFile
ImportError: No module named ImageFile

However, when images.py is modified to use the following, it works fine:

from PIL import ImageFile

Attachments (0)

Change History (1)

comment:1 Changed 6 years ago by mtredinnick

- Resolution set to invalid
- Status changed from new to closed

This isn't a problem with Django. It's a problem with the way PIL is installed and should be reported to the packager for fixing. PIL should be installed so that the PIL directory is part of the Python module search path (normally done via a PIL.pth file) and thus import ImageFile must work; otherwise it's installed incorrectly.

Note: See TracTickets for help on using tickets.
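Application code that has to run against both installation layouts sometimes guards the import with a fallback. Note this is a general defensive pattern, not the fix the ticket endorses — the ticket's resolution is that the PIL installation itself should be corrected.

```python
# Compatibility shim for the two ways PIL has historically been importable.
# This is a defensive pattern only; the Trac resolution above says the
# correct fix is a proper PIL install (PIL.pth on the module search path).
try:
    from PIL import ImageFile      # package-style install (PIL/ is a package)
except ImportError:
    try:
        import ImageFile           # flat install (PIL.pth adds PIL/ itself)
    except ImportError:
        ImageFile = None           # PIL/Pillow not installed at all
```

With the shim in place, code can check `ImageFile is not None` before using it instead of crashing at import time.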
https://code.djangoproject.com/ticket/7019
HOW-TO:Write Python Scripts XBMC features a Python Scripts Engine and WindowXML application framework (a XML-based widget toolkit for creating GUI window structure) in a similar fashion to Apple Mac OS X Dashboard Widgets and Microsoft Gadgets in Windows Sidebar. So normal users can add new functionality to XBMC themselves (using the easy to learn Python programming language) without an illegal copy of the XDK and without knowledge of the complex C/C++ programming language. Current plugin scripts include functions like Internet-TV and movie-trailer browsers, weather forecast and cinemaguides, TV-guides (EPG), e-mail clients, instant messaging, train-timetables, scripts to front-end control PVR software and hardware (like: MediaPortal, MythTV, TiVo, ReplayTV, Dreambox/DBox2), Internet-radio-station browsers (example SHOUTcast, Xm radio, Sirius Satellite Radio), P2P file-sharing downloaders (BitTorrent), IRC, also casual games (sometimes also referred to as mini-games or party-games) such as Tetris, Snake, Space Invaders, Sudoku, and much more. Please feel free to add more samples of simple scripting functions with comments that you know or figure out while you're learning to script. Adding anything at all no matter how basic, if its not already here add it! someone will more than likely benefit from it. 
The more difficult your snippet please heavily comment it with "#" don't be stingy on the comments you can never have too much information, what's simple to you may make no sense at all to someone else, also URLs that were helpful to what you were doing would be great to add to your snippet or to the bookmark section (python sites, chatrooms, etc): Contents - 1 Python Example - 2 Python resources - 3 Basic Information - 4 Structure - 5 Basic Steps - 6 Code Snippets - 6.1 XBMC Core Player Options - 6.2 XBMC GUI - 6.3 Scaling your script using setCoordinateResolution() - 6.4 String Manipulation - 6.5 Imports - 6.6 HTTP - 7 Script Arguments - 8 Keymapping: How To Map Your Controls In Python - 9 See also 1 Python Example Description: Python won't run the lines that we put a '#' in front of they are considered as “commented out” python will skip them. # import the XBMC libraries so we can use the controls and functions of XBMC import xbmc, xbmcgui # name and create our window class BlahMainWindow(xbmcgui.Window): # and define it as self def __init__(self): # add picture control to our window (self) with a hardcoded path name to picture self.addControl(xbmcgui.ControlImage(0,0,720,480, 'Q:\\scripts\\background.jpg')) # store our window as a short variable for easy of use W = BlahMainWindow() # run our window we created with our background jpeg image W.doModal() # after the window is closed, Destroy it. 
del W 2 Python resources - XBMC Online Manual Python related sections - XBMC Manual – Python Basics - XBMC Manual – Built-in Scripting - Web-Server HTTP-API (HTTPAPI) - XBMC Manual – List of Built In Functions - XBMC Manual – Labels Available In XBMC (InfoLabels) - HOW-TO write Python Scripts for XBMC, a Tutorial - This HOW-TO tutorial should be the perfect place if you about to start making your first python script for XBMC and you do not yet know very much about python, (note though that this tutorial could now be considered a little old-fashioned, but it is still a very good starting point). - Other XBMC Python related articles - XBMC Python Documentation - Updated from XBMC SVN on a regular bases - XBMC HTTP API Commands (quick overview, with link to doc) - XBMC Python emulator for Windows PC (BETA) - XBMC Forum – Python Scripts Development - XBMC Forum – Python Scripts Support and Requests - XBMC Python scripts – Help for French users and coders - Python Documentation on python.org - PLEAC-Python - An Introduction to Python - A Brief Tour of Python - Based on presentation materials by David Beazley (author of Python Essential Reference) - Snyppets - Python snippets 2.3 IRC Channels /server irc.freenode.net /join #xbmc-scripting /server irc.freenode.net /join #python /server irc.freenode.net /join #python-cleese /server irc.freenode.net /join #python-gilliam 3 Basic Information To code python all you basically need is notepad or any other form of texteditor its not very hard to figure out the basics with the documents in Bookmark section above, have a run through them first if you have no clue whats going on in the code snippets to enlighten yourself a bit. – I would suggest looking at the ALEX's Tutorial below first to start, and remember python is all about indents... 4 Structure - Please keep all your files related to your script in its own folder. 
- Name the Main File default.py - If you want a thumbnail call it default.tbn (tbn can be jpg or png) Use this code to find the path you want Root = os.getcwd().replace(";","")+"\\" The Above line will return a folder so its like q:\\scripts\\GoogleScript\\ or where ever the script is located As of the 13th of July, MyScripts now flatterns meaning it will it will no longer shows the folders and their contents if there is a default.py in the folder (won't let you view any deeper) Another nice way to have all your files inside your app is to put them inside a lib/ dir inside it, so "the first level" can only have the default.py and default.tbn files. To have all your sub-scripts in that way you have to put at the head of your script something similar to this code: LIB_DIR = ROOT+"libary\\" sys.path.append(LIB_DIR) import your_own_personal_python_library so you'll be able to import your files as if them were on the root directory of your python app. 5 Basic Steps - Open a text document. - Paste some code in there. Read this tutorial: HOW-TO write Python Scripts for XBMC - Save it to any name you wish with .py on the end for the file extension. (myscript.py) - FTP it to the scripts folder in XBMC (F:/xbmc/scripts/) - In XBMC go to the submenu naturally beside the power button on the skin in Project Mayhem 1. - Click scripts in the submenu. - Find the script you uploaded and run it. 
- “White Button” on the controller for Python debug whilst in scripts window (Added:13-02-05) To find out a lot of stuff of what there is available to control in XBMC take a look at these documents from the CVS: XBMC python documentation 6 Code Snippets (Warning: Some code may require you to first create a window or something along those lines but might not be specified) Thanks to the following contributors: - ThreeZee (on EFNet: ThreeZee) - EnderW - alexsolex - Donno 6.1 XBMC Core Player Options 6.1.1 Play A File If you want to play a file this is how, you dont need to make it a variable but it makes your code a little more cleaner. (this code doesnt need a window but import your xbmc libraries as normal) # variable to contain the file location file = 'q:\\scripts\\Music\\Underworld-BornSlippy.mp3' # tell xbmc to play our file we specified in the above variable xbmc.Player().play(file) 6.1.2 Fetching artist name and song title from currently playing song By using the code example below you can get the song title and artist name of the song currently playing in XBMC. # Import XBMC module import xbmc # First we need to create an object containing the getMusicInfoTag() class from XBMC.Player(). # This is in order to use the same instance of the class twice and not create a new class # for every time we query for information. # This is a bit important to notice, as creating many instances is rarely # a good thing to do (unless you need it, but not in this case). tag = xbmc.Player().getMusicInfoTag() # Now tag contains the getMusicInfoTag() class which then again contains song information. # Now we use this object to get the data by calling functions within that class: artist = tag.getArtist() title = tag.getTitle() # Now you have two strings containing the information. 
An example of what you could do next is to print it: print "Playing: " + artist " - " + title # This will produce i.e: "Playing: AC/DC - Back in black" 6.2 XBMC GUI Also see WindowXML GUI Toolkit/WindowXML 6.2.1 Adding Buttons the Nice Way Using The following addButton and setupButtons import xbmcgui try: Emulating = xbmcgui.Emulating except: Emulating = False class Example(xbmcgui.Window): """ Example Showing Of Using Sub Buttons Module to Create Buttons on a Window """ def __init__(self,): if Emulating: xbmcgui.Window.__init__(self) setupButtons(self,10,10,100,30,"Vert") self.h1 = addButon(self,"Click Me") self.something = addButon(self,"Something") self.btn_quit = addButon(self,"Quit") def onControl(self, c): if self.h1 == c: print "hey" if self.something == c: print "you press med" if self.btn_quit == c: self.close() ### The adding button Code (only really need this bit) def setupButtons(self,x,y,w,h,a="Vert",f=None,nf=None): self.numbut = 0 self.butx = x self.buty = y self.butwidth = w self.butheight = h self.butalign = a self.butfocus_img = f self.butnofocus_img = nf def addButon(self,text): if self.butalign == "Hori": c = xbmcgui.ControlButton(self.butx + (self.numbut * self.butwidth),self.buty,self.butwidth,self.butheight,text,self.butfocus_img,self.butnofocus_img) self.addControl(c) elif self.butalign == "Vert": c = xbmcgui.ControlButton(self.butx ,self.buty + (self.numbut * self.butheight),self.butwidth,self.butheight,text,self.butfocus_img,self.butnofocus_img) self.addControl(c) self.numbut += 1 return c ### The End of adding button Code Z = Example() Z.doModal() del Z 6.3 Scaling your script using setCoordinateResolution() Example: Skinned for PAL resolution # Import the XBMC/XBMCGUI modules. 
import xbmc, xbmcgui # resolution values #1080i = 0 #720p = 1 #480p = 2 #480p16x9 = 3 #ntsc = 4 #ntsc16x9 = 5 pal = 6 #pal16x9 = 7 #pal60 = 8 #pal6016x9 = 9 class MyScript(xbmcgui.Window): def __init__(self): self.setResolution(pal) def setResolution(self, skinnedResolution): # get current resolution currentResolution = self.getResolution() offset = 0 # if current and skinned resolutions differ and skinned resolution is not # 1080i or 720p (they have no 4:3) calculate widescreen offset if currentResolution != skinnedResolution and skinnedResolution > 1: # check if current resolution is 16x9 if currentResolution == 0 or currentResolution % 2: iCur16x9 = 1 else: iCur16x9 = 0 # check if skinned resolution is 16x9 if skinnedResolution % 2: i16x9 = 1 else: i16x9 = 0 # calculate offset offset = iCur16x9 - i16x9 self.setCoordinateResolution(skinnedResolution + offset) # We need to link the class to an object, and doModal to display it. My_Window = MyScript() My_Window.doModal() del My_Window 6.3.1 Scaling your script for any size screen Based on NTSC 720x480 # Import the XBMC/XBMCGUI modules. import xbmc, xbmcgui class MyScript(xbmcgui.Window): def __init__(self): if Emulating: xbmcgui.Window.__init__(self) # This will calculate the actual screen size to 720x480 ratio self.scaleX = self.getWidth() / 720.0 self.scaleY = self.getHeight() / 480.0 self.addControl(xbmcgui.ControlImage(0, 0, int(720 * self.scaleX), int(480 * self.scaleY), "Q:\\scripts\\background.gif")) self.addControl(xbmcgui.ControlImage(int(0 * self.scaleX), int(23 * self.scaleY), int(720 * self.scaleX), int(53 * self.scaleY), "Q:\\scripts\\top.gif")) # Any X (width / left) setting should be changed to * self.scaleX # i.e. The above was: self.addControl(xbmcgui.ControlImage(0, 23, 720, 53, "Q:\\scripts\\top.gif")) # This change will make the controls/images/etc scale to the screen. # We need to link the class to an object, and doModal to display it. 
My_Window = MyScript() My_Window.doModal() del My_Window 6.4 String Manipulation 6.4.1 Add strings of text together # add strings of text together use the "+" 'Cookie' + ' Monster' # If was added to a label etc would look like: Cookie Monster 6.4.2 Split strings of text apart into variables # Our variable of a string we want to split up. data = '1|(123)456-7890|JimmyRay|06262305' # make sure our data is a string and save it in another variable varData = str(data) # Split our string at the character '|' or whatever one you specify # could be a space if you wish and save them in the specified variables i, Name, Number, DT = varData.split('|') 6.4.3 Replace strings of text with other text # letter or text we want to replace current text with rplText = 'cat' # current text we want to replace fndText = 'dog' # String we want to replace those words in strText = 'dogbatdog' # our new string variable which would be 'catbatcat' now # or basically strText.replace('dog', 'cat') strOutput = strText.replace(fndText, rplText) print strOutput 6.4.4 Convert Strings, Integer # Number variable notice no quotes NUM = 43 # String variable notice quotes SNUM = '43' # Convert our variable to STRING S1 = str(NUM) # Convert our variable to INTEGER S2 = int(SNUM) print S1 + S2 6.4.5 Check if text is in a string Note this is case sensitive, so you may want to use mystring.lower() and have the “world” all in lower case. """ The output of this script is world is in the string data is not in the string """ mystring = "Hello world" if "world" in mystring: print "world is in the string" else: print "world is not in the string" if "data" in mystring: print "data is in the string" else: print "data is not in the string" 6.5 Imports 6.5.1 Import Time 6.5.1.1 Delay # import our time class import time # call time with the sleep function for ten seconds or however many wanted # until script continues on running. 
time.sleep(10) 6.5.1.2 System Clock #Import our time class import time # call time with strftime function followed by the formatting we want to use # you can put any character between the format symbols you want EX: '/' Time = str(time.strftime ('%H:%M:%S%p')) Date = str(time.strftime ('%d/%a/%Y')) print Date + Time # END OF CODE # FORMAT DIRECTIVES Directive | Meaning Notes - - - - - - - - - - - - - - - %a Locales abbreviated weekday name. %A Locales full weekday name. %b Locales abbreviated month name. %B Locales full month name. %c Local Local). 6.5.2 Import OS 6.5.2.1 Retrieving the contents of a directory # need to use os functions import os # Get the directory contents and put them in **lstDirList** lstDirList = os.listdir("Q:\\scripts\\") # Cycle through each one, performing the functions within indentation # Example is print, you could do anything with it though. # Keep in mind, this will return files AND directories. for strFileDir in lstDirList: print strFileDir 6.5.2.2 File Exist Check to see if file exists on harddrive import os # Pretty self explanatory but basically we check to see if the Me.png file exist # where we specifiy if not the statement returns false so then we tell our variable # to use a different source ex.) NoPhoto.png. if os.path.isfile('Q:\\scripts\Photos\Me.png') == False: Photo = 'Q:\\scripts\Photos\NoPhoto.png' 6.5.2.3 Write to a text file # LF here stands for LOG FILE you can put anything there doesnt really # matter its basically our variable for the location of our text file. LF = open('Q:\\scripts\\BLAH.LOG', 'a') # Write to our text file the information we have provided and then goto next line in our file. (so if it was in a loop it would write to the next line instead of everything on the same line) LF.write('Some log information blah blah blah' + '\n') # Close our file so no further writing is posible. 
LF.close() 6.5.2.4 Read text file # LF here stands for LOG FILE you can put anything there doesnt really # matter it's basically our variable for the location of our text file. LF = 'Q:\\scripts\\BLAH.LOG' # Opens our Log File to 'r' READ from it. log = open(LF, 'r') # load our text file into an array for easy access. for line in log: # Output all of the array text file to the screen. print(line[:-1]) # Close our text file so no further reading is posible. log.close() 6.5.2.5 Clear text file # LF here stands for LOG FILE you can put anything there doesnt really # matter its basically our variable for the location of our text file. LF = 'Q:\\scripts\\BLAH.LOG' # basically all this is doing is opening our file and write nothing to it so it will be blank. clearfile = open(FL, 'w') # Close our file so no further writing is possible. clearfile.close() 6.5.2.6 Reading a file listname = [] listoptions = [] f = open('Q:\\test.txt',"r") s = f.write() f.close() ls = s.split("\n") # This means each new line will be loaded into it's own array. 
for l in ls: if l != "": item = l.split("=") # The above line will split Something=Hello into item[0] (Something) and item[1] (Hello) listname.append(item[0]) listoptions.append(item[1]) 6.6 HTTP 6.6.1 Read URLS from a web page The following code will search a page for all <a href="urls">Description</a> and store all the urls in one list array and all the descriptions in another import urllib,urllib2 , re # The url in which to use Base_URL = "" #Pre-define global Lists LinkDescription = [] LinkURL = [] WebSock = urllib.urlopen(Base_URL) # Opens a 'Socket' to URL WebHTML = WebSock.read() # Reads Contents of URL and saves to Variable WebSock.close() # Closes connection to url Temp_Web_URL = re.compile('<a href=["](.*)[.]zip["]>', re.IGNORECASE).findall(WebHTML) # Using find all mentions of stuff using regexp to use wildcards Temp_Web_Desc = re.compile('<a href=["].*[.]zip["]>(.*)</a>').findall(WebHTML) # find it for urls, desc in zip(Temp_Web_URL,Temp_Web_Desc): LinkURL.append(urls[9:-2]) # Adds urls to a list # note need to add extention for these links to really work LinkDescription.append(desc) # Adds the descrptions as a array) 6.6.2 Download Files with Progressbar Download a url from the net and saves to a file and shows the progressbar. 
Usage: to call the function use DownloaderClass(url,dest) import urllib, os,re,urllib2 import xbmc,xbmcgui def DownloaderClass(url,dest): dp = xbmcgui.DialogProgress() dp.create("My Script","Downloading File",url) urllib.urlretrieve(url,dest,lambda nb, bs, fs, url=url: _pbhook(nb,bs,fs,url,dp)) def _pbhook(numblocks, blocksize, filesize, url=None,dp=None): try: percent = min((numblocks*blocksize*100)/filesize, 100) print percent dp.update(percent) except: percent = 100 dp.update(percent) if dp.iscanceled(): print "DOWNLOAD CANCELLED" # need to get this part working dp.close() url ='' DownloaderClass(url,"e:\something.txt") 7 Script Arguments 7.1 Passing Arguments to a Script As of 2007/02/24, arguments can be passed into a script using the builtin command XBMC.RunScript. The first parameter this builtin takes is the absolute location of the python script and all additional parameters are passed as arguments to the script. import os import xbmc # get the parent path of the current script path = os.getcwd()[:-1]+"\\" # call a different script with the argument 'Hello World' xbmc.executebuiltin("XBMC.RunScript("+path+"argtest.py,Hello World)") 7.2 Using Arguments from sys.argv The arguments can be accessed from a different script using sys.argv. This is a list of strings that is populated when the script is launched using the builtin command. sys.argv[0] is the name of the script while the rest of the list, sys.argv[1:], are the arguments passed to the script. import xbmcgui import sys count = len(sys.argv) - 1 if count > 0: xbmcgui.Dialog().ok("Status",sys.argv[0] +" called with " + str(count)+" args", "["+", ".join(sys.argv[1:])+"]") else: xbmcgui.Dialog().ok("Status","no arguments specified") 7.3 Script Settings 7.3.1 Script install path This tips is usefull to let user the opportunity to install the script in the path of their choice. 
The tricks is to store the path where the script has just been launched # you'll need to import the os library import os # Then catch the actual path HOME_DIR=os.getcwd() # the returned path is not correct as it finish with a trailer coma HOME_DIR=HOME_DIR[:-1] # will delete the last char ; NOTE : with Win OS you'll not delete this trailing char # to be a good path, add double \ HOME_DIR=HOME_DIR+"\\" # all this can be done with a unique line : HOME_DIR=os.getcwd()[:-1]+"\\" # # Now we can set every path we need for the script PICS_DIR=HOME_DIR+"pics\\" DATA_DIR=HOME_DIR+"datas\\" # and when we need to get for example an image, we just use these previous vars as path : backgroundpic=PICS_DIR+"background_file.png" 8 Keymapping: How To Map Your Controls In Python 8.1 Control Types There are four Control Types available to map in XBMC: Controller, Remote, Mouse, & Keyboard 8.1.1 Control Type: Remote The Remote is the another Control Type that has two methods you can use to map actions to your Xbox conroller button(s): Button Codes & Action Codes (see Key.h for a full list of codes) 8.1.2 Button Code Reference Controls IDs Extra Info ======== === ========== A Button 256 B Button 257 X Button 258 Y Button 259 Start Button 274 Back Button 275 Black Button 260 White Button 261 Left Trigger Button 262 "Pressing the Left Trigger" Left Trigger Analog 278 "Holding down the Left Trigger" Right Trigger Button 263 "Pressing the Right Trigger" Right Trigger Analog 279 "Holding down the Right Trigger" Left ThumbStick 264 "Action is sent when the Left ThumbStick is moved" Left ThumbStick Button 276 Left ThumbStick Up 280 Left ThumbStick Down 281 Left ThumbStick Left 282 Left ThumbStick Right 283 Right ThumbStick 265 "Action is sent when the Right ThumbStick is moved" Right ThumbStick Button 277 Right ThumbStick Up 266 Right ThumbStick Down 267 Right ThumbStick Left 268 Right ThumbStick Right 269 DPad Up 270 DPad Down 271 DPad Left 272 DPad Right 273 To obtain an updated list of 
Action & Button Codes, please see: key.h 9 See also Development:
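As section 6.4.5 above notes, the `in` test is case sensitive, and it suggests lower-casing the string as the workaround. A quick sketch of that workaround in plain Python (it runs outside XBMC too, since it uses no xbmc modules):

```python
# Case-insensitive version of the section 6.4.5 substring check:
# lower-case both sides before testing membership.
mystring = "Hello world"

def contains_ci(haystack, needle):
    return needle.lower() in haystack.lower()

print(contains_ci(mystring, "WORLD"))  # matches despite the case difference
print(contains_ci(mystring, "data"))   # still not found
```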
https://kodi.wiki/index.php?title=HOW-TO_write_Python_Scripts
This C++ program demonstrates the use of set_symmetric_difference in the STL.

Here is the source code of the C++ program to demonstrate set_symmetric_difference in the STL. The C++ program is successfully compiled and run on a Linux system. The program output is also shown below.

/*
 * C++ Program to Implement Set_Symmetric_difference in STL
 */
#include <iostream>
#include <algorithm>
#include <vector>
using namespace std;

int main()
{
    int f[] = {5, 10, 15, 20, 25};
    int s[] = {50, 40, 30, 20, 10};
    vector<int> v(10);
    vector<int>::iterator it;

    sort(f, f + 5);
    sort(s, s + 5);

    it = set_symmetric_difference(f, f + 5, s, s + 5, v.begin());
    v.resize(it - v.begin());

    cout << "The symmetric difference has " << v.size() << " elements:" << endl;
    for (it = v.begin(); it != v.end(); ++it)
        cout << *it << " ";
    cout << endl;
    return 0;
}

$ g++ set_symmetric_difference.cpp
$ a.out
The symmetric difference has 6 elements:
5 15 25 30 40 50

------------------
(program exited with code: 0) Press return to continue

Sanfoundry Global Education & Learning Series – 1000 C++ Programs. If you wish to look at all C++ Programming examples, go to C++ Programs. If you liked this C++ Program, kindly share, recommend or like below!
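For comparison, the same computation is a one-liner in Python using the built-in set type's symmetric-difference operator. This is my own illustration, not part of the original program; note that sets also remove the need to pre-sort the inputs.

```python
# Symmetric difference of the same two arrays as the C++ program above,
# using Python's set type and the ^ (symmetric difference) operator.
f = [5, 10, 15, 20, 25]
s = [50, 40, 30, 20, 10]

result = sorted(set(f) ^ set(s))
print("The symmetric difference has", len(result), "elements:")
print(result)  # [5, 15, 25, 30, 40, 50]
```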
http://www.sanfoundry.com/cpp-program-implement-set-symmetric-difference-stl/
21 August 2009 - 11:03 PM

Posted 22 August 2009 - 09:13 AM

#include "wiimote.h"
#include <sys/types.h>   // for _stat
#include <sys/stat.h>    // "
#include <process.h>     // for _beginthreadex()
#include <wtypes.h>
#include <setupapi.h>
#include <ddk/wdm.h>

extern "C"
{
# ifdef __MINGW32__
#   include <ddk/hidsdi.h>  // from WinDDK
# else
#   include <api/hidsdi.h>
# endif
}
https://www.gamedev.net/topic/545127-trouble-with-hidpih-from-wdk-trying-to-use-wiiyourself/
On Tuesday 15 March 2005 03:55 pm, Steve Grubb. > > But more interesting...what if a -1 was sent for fklen? > > + if (req->fklen) { > + ret = -ENOMEM; > + filterkey = kmalloc(req->fklen, GFP_KERNEL); > > Kaboom... I went ahead and just made then __u32. > > Also a nit, you have a structure audit_watch and a function audit_watch. > > In audit.c in the audit_receive_msg function, > @@ -413,6 +416,12 @@ static int audit_receive_msg(struct sk_b > err = -EOPNOTSUPP; > #endif > break; > + case AUDIT_WATCH_INS: > + case AUDIT_WATCH_REM: > + err = audit_receive_watch(nlh->nlmsg_type, > + NETLINK_CB(skb).pid, > + uid, seq, data); > + break; > > Shouldn't there be some checking of the packet like so (may not be 100% > right, but you should see what I mean): > > if (nlh->nlmsg_len != sizeof(struct audit_watch)) > return -EINVAL I'll put something like this in here. > > before sending it into audit_receive_watch? And then shouldn't some > reality checks be done like making sure no file name is greater than > MAX_PATH and the filterkey is reasonable size before using them in > audit_insert_watch? Well I suppose it depends on what you mean by "use" -- There are reality checks in there, when attempting to create the watch here: static struct audit_watch *audit_create_watch(const char *name, const char *filterkey, __u32 perms) { struct audit_watch *err = NULL; struct audit_watch *watch = NULL; err = ERR_PTR(-EINVAL); if (!name || strlen(name) + 1 > PATH_MAX) goto audit_create_watch_fail; if (filterkey && strlen(filterkey) + 1 > AUDIT_FILTERKEY_MAX) goto audit_create_watch_fail; if (perms > 15) goto audit_create_watch_fail; ..... } In the beginning of audit_insert_watch(), I'm merely bringing them from user space to kernel space. Then when I goto create the watchlist entry (wentry) that will hold the watch, if I fail upon creating the watch, I drop out. 
However, I suppose I should do all the user space->kernel space transitioning in in audit_receive_watch() for both audit_insert_watch() and audit_remove_watch(). That seems to make more sense. > > Also, how do you list the watches? I was looking in userspace code and only > see insert and remove, no listing. > Yeah this is the infamous feature that's missing that needs to be added :) > That's what I see for now... > Thanks. The patch is mostly done. It'll be out tommorow. Much of the feedback has been incorporated both by you and Stephen. I've got a couple more things to add/change. I've also added a couple of my own things -- mostly what I hope to be better locking around audit_insert/remove_watch() > -Steve > > -- > Linux-audit mailing list > Linux-audit redhat com > -tim
https://www.redhat.com/archives/linux-audit/2005-March/msg00122.html
In this Python tutorial, you'll learn how to move files and folders from one location to another. After reading this article, you'll learn:

- How to move single and multiple files using the shutil.move() method
- Move files that match a pattern (wildcard)
- Move an entire directory

Steps to Move a File in Python

The Python shutil module offers several functions to perform high-level operations on files and collections of files. We can move files using the shutil.move() method. The below steps show how to move a file from one directory to another.

- Find the path of a file

We can move a file using both a relative path and an absolute path. The path is the location of the file on the disk. An absolute path contains the complete directory list required to locate the file. For example, /home/Pynative/sales.txt is an absolute path to locate sales.txt.

- Use the shutil.move() function

The shutil.move() function is used to move a file from one directory to another. First, import the shutil module and pass a source file path and destination directory path to the move(src, dst) function.

- Use the os.listdir() and shutil.move() functions to move all files

Suppose you want to move all/multiple files from one directory to another: use the os.listdir() function to list all files of the source folder, then iterate the list using a for loop and move each file using the move() function.

Example: Move a Single File

Use the shutil.move() method to move a file permanently from one folder to another.

shutil.move(source, destination, copy_function=copy2)

- source: The path of the source file which needs to be moved.
- destination: The path of the destination directory.
- copy_function: Moving a file is nothing but copying it to a new location and deleting it from the source. This parameter is the function used for copying, and its default value is shutil.copy2(). It could be any other function like copy() or copyfile().
In this example, we are moving the sales.txt file from the report folder to the account folder. import shutil # absolute path src_path = r"E:\pynative\reports\sales.txt" dst_path = r"E:\pynative\account\sales.txt" shutil.move(src_path, dst_path) Note: - The move() function returns the path of the file you have moved. - If your destination path matches another file, the existing file will be overwritten. - It will create a new directory if a specified destination path doesn’t exist while moving file. Move File and Rename Let’s assume your want to move a file, but the same file name already exists in the destination path. In such cases, you can transfer the file by renaming it. Let’s see how to move a file and change its name. - Store source and destination directory path into two separate variables - Store file name into another variable - Check if the file exists in the destination folder - If yes, Construct a new name for a file and then pass that name to the shutil.move()method. Suppose we want to move sales.csv into a folder called to account, and if it exists, rename it to sales_new.csv and move it. import os import shutil src_folder = r"E:\pynative\reports\\" dst_folder = r"E:\pynative\account\\" file_name = 'sales.csv' # check if file exist in destination if os.path.exists(dst_folder + file_name): # Split name and extension data = os.path.splitext(file_name) only_name = data[0] extension = data[1] # Adding the new name new_base = only_name + '_new' + extension # construct full file path new_name = os.path.join(dst_folder, new_base) # move file shutil.move(src_folder + file_name, new_name) else: shutil.move(src_folder + file_name, dst_folder + file_name) Move All Files From A Directory Sometimes we want to move all files from one directory to another. Follow the below steps to move all files from a directory. -.move()method to move the current file to the destination folder path. Example: Move all files from the report folder into a account folder. 
import os
import shutil

source_folder = r"E:\pynative\reports\\"
destination_folder = r"E:\pynative\account\\"

# fetch all files
for file_name in os.listdir(source_folder):
    # construct full file path
    source = source_folder + file_name
    destination = destination_folder + file_name
    # move only files
    if os.path.isfile(source):
        shutil.move(source, destination)
        print('Moved:', file_name)

Our code moved two files. Here is a list of the files in the destination directory:

- profits.txt
- revenue.txt
- expense.txt

Use the os.listdir(dst_folder) function to list all files present in the destination directory to verify the result.

Move Multiple Files

Let's assume you want to move only a few files. In this example, we will see how to move files present in a list from a specific folder into a destination folder.

import shutil

source_folder = r"E:\pynative\reports\\"
destination_folder = r"E:\pynative\account\\"

files_to_move = ['profit.csv', 'revenue.csv']

# iterate files
for file in files_to_move:
    # construct full file path
    source = source_folder + file
    destination = destination_folder + file
    # move file
    shutil.move(source, destination)
    print('Moved:', file)

Output:

Moved: profit.csv
Moved: revenue.csv

Move Files Matching a Pattern (Wildcard)

Suppose you want to move files whose names contain a specific string. The Python glob module, part of the Python Standard Library, is used to find the files and folders whose names follow a specific pattern.

glob.glob(pathname, *, recursive=False)

- We can use wildcard characters for pattern matching. The glob.glob() method returns a list of files or folders that match the pattern specified in the pathname argument.
- Next, use a loop to move each file using the shutil.move() function.

Refer to this to use different wildcards to construct different patterns.

Move files based on file extension

In this example, we will move files which have the .txt extension.
import glob
import os
import shutil

src_folder = r"E:\pynative\report"
dst_folder = r"E:\pynative\account\\"

# Search files with .txt extension in source directory
# (a raw string avoids invalid-escape warnings for the backslash)
pattern = r"\*.txt"
files = glob.glob(src_folder + pattern)

# move the files with txt extension
for file in files:
    # extract file name from file path
    file_name = os.path.basename(file)
    shutil.move(file, dst_folder + file_name)
    print('Moved:', file)

Output:

Moved: E:\pynative\report\revenue.txt
Moved: E:\pynative\report\sales.txt

Move Files based on filename

Let's see how to move a file whose name starts with a specific string.

import glob
import os
import shutil

src_folder = r"E:\pynative\reports"
dst_folder = r"E:\pynative\account\\"

# move files whose names start with the string 'emp'
pattern = src_folder + r"\emp*"

for file in glob.iglob(pattern, recursive=True):
    # extract file name from file path
    file_name = os.path.basename(file)
    shutil.move(file, dst_folder + file_name)
    print('Moved:', file)

Output:

Moved: E:\pynative\reports\emp.txt
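The backslash-based patterns above are Windows-specific; the same matching logic can be checked portably in a scratch directory with os.path.join (the file names below are made up for the demonstration):

```python
import glob
import os
import tempfile

# scratch folder with a few empty files
folder = tempfile.mkdtemp()
for name in ("emp.txt", "emp2.csv", "sales.txt"):
    open(os.path.join(folder, name), "w").close()

# all .txt files
txt_files = sorted(os.path.basename(p)
                   for p in glob.glob(os.path.join(folder, "*.txt")))
print(txt_files)  # ['emp.txt', 'sales.txt']

# names starting with 'emp'
emp_files = sorted(os.path.basename(p)
                   for p in glob.glob(os.path.join(folder, "emp*")))
print(emp_files)  # ['emp.txt', 'emp2.csv']
```

os.path.join inserts the correct separator for the current platform, so the same code runs on Windows and Linux alike.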
https://pynative.com/python-move-files/
CC-MAIN-2021-39
refinedweb
1,109
58.79
/src

In directory 23jxhf1.ch3.sourceforge.com:/tmp/cvs-serv7844

Modified Files:
	simp.lisp

Log Message:
Checking for mtimes expressions as a base in expressions like
(a*b)^q*(a*b)^r, where q+r=1.

Related bug report:
ID: 826623 "simplifer returns %i*%i"

Tested with GCL 2.6.8 and CLISP 2.44. No problems with the testsuite.

Index: simp.lisp
===================================================================
RCS file: /cvsroot/maxima/maxima/src/simp.lisp,v
retrieving revision 1.78
retrieving revision 1.79
diff -u -d -r1.78 -r1.79
--- simp.lisp	22 May 2009 10:43:35 -0000	1.78
+++ simp.lisp	28 May 2009 22:39:30 -0000	1.79
@@ -1771,7 +1771,15 @@
 	  ((maxima-constantp (car x))
 	   (go const))
 	  ((onep1 w)
-	   (return (rplaca (cdr fm) (car x))))
+	   (cond ((mtimesp (car x))
+		  ;; A base which is a mtimes expression.
+		  ;; Remove the factor from the lists of products.
+		  (rplacd fm (cddr fm))
+		  ;; Multiply the factors of the base with
+		  ;; the list of all remaining products.
+		  (setq rulesw t)
+		  (return (muln (nconc y (cdar x)) t)))
+		 (t (return (rplaca (cdr fm) (car x))))))
 	  (t (go spcheck))))
 	 ((or (maxima-constantp (car x))
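The algebra behind the patch — that (a*b)^q * (a*b)^r collapses to a*b when q+r = 1, and in particular that %i*%i should simplify to -1 — can be checked numerically. This is plain Python for illustration, not Maxima or Lisp:

```python
# the bug report's case: %i * %i was left unsimplified; it should be -1
product = 1j * 1j
print(product == -1)  # True

# (a*b)^q * (a*b)^r == (a*b)^(q+r); with q + r = 1 the product
# collapses to a*b (checked numerically for positive a, b)
a, b, q, r = 3.0, 2.0, 0.25, 0.75
lhs = (a * b) ** q * (a * b) ** r
print(abs(lhs - a * b) < 1e-9)  # True
```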
https://sourceforge.net/p/maxima/mailman/maxima-commits/thread/E1M9oG4-00022k-FO@23jxhf1.ch3.sourceforge.com/
Use Azure Toolkit for Eclipse to create Apache Spark applications for an HDInsight cluster

Use HDInsight Tools in Azure Toolkit for Eclipse to develop Apache Spark applications written in Scala and submit them to an Azure HDInsight Spark cluster, directly from the Eclipse IDE. You can use the HDInsight Tools plug-in in a few different ways:

- To develop and submit a Scala Spark application on an HDInsight Spark cluster.
- To access your Azure HDInsight Spark cluster resources.
- To develop and run a Scala Spark application locally.

Important: You can use this tool to create and submit applications only for an HDInsight Spark cluster on Linux.

Prerequisites

- Apache Spark cluster on HDInsight. For instructions, see Create Apache Spark clusters in Azure HDInsight.
- Oracle Java Development Kit version 8, which is used for the Eclipse IDE runtime. You can download it from the Oracle website.
- Eclipse IDE. This article uses Eclipse Neon. You can install it from the Eclipse website.

Install HDInsight Tools in Azure Toolkit for Eclipse and the Scala plug-in

Install Azure Toolkit for Eclipse

HDInsight Tools for Eclipse is available as part of Azure Toolkit for Eclipse. For installation instructions, see Installing Azure Toolkit for Eclipse.

Install the Scala plug-in

When you open Eclipse, HDInsight Tools automatically detects whether you installed the Scala plug-in. Select OK to continue, and then follow the instructions to install the plug-in from the Eclipse Marketplace.

You can either sign in to your Azure subscription, or link an HDInsight cluster using an Ambari username/password or domain-joined credentials, to get started.

Start the Eclipse IDE and open Azure Explorer. On the Window menu, select Show View, and then select Other. In the dialog box that opens, expand Azure, select Azure Explorer, and then select OK. Right-click the Azure node, and then select Sign in. In the Azure Sign In dialog box, choose the authentication method, select Sign in, and enter your Azure credentials.
After you're signed in, the Select Subscriptions dialog box lists all the Azure subscriptions associated with the credentials. Click Select to close the dialog box. On the Azure Explorer tab, expand HDInsight to see the HDInsight Spark clusters under your subscription. You can further expand a cluster name node to see the resources (for example, storage accounts) associated with the cluster.

Link a cluster

You can link a normal cluster by using the Ambari-managed username. Similarly, for a domain-joined HDInsight cluster, you can link by using the domain and username, such as user1@contoso.com.

Select Link a cluster from Azure Explorer. Enter Cluster Name, User Name, and Password, then click the OK button to link the cluster. Optionally, enter Storage Account and Storage Key, and then select Storage Container for the storage explorer to work in the left tree view.

Note: We use the linked storage key, username, and password if the cluster is both signed in to from the Azure subscription and linked.

You can see the linked cluster under the HDInsight node after clicking the OK button, if the input information is correct. Now you can submit an application to this linked cluster. You can also unlink a cluster from Azure Explorer.

Set up a Spark Scala project for an HDInsight Spark cluster

In the Eclipse IDE workspace, select File, select New, and then select Project. In the New Project wizard, expand HDInsight, select Spark on HDInsight (Scala), and then select Next. The Scala project creation wizard automatically detects whether you installed the Scala plug-in. Select OK to continue downloading the Scala plug-in, and then follow the instructions to restart Eclipse.

In the New HDInsight Scala Project dialog box, provide the following values, and then select Next:

- Enter a name for the project.
- In the JRE area, make sure that Use an execution environment JRE is set to JavaSE-1.7 or later.
- In the Spark Library area, you can choose the Use Maven to configure Spark SDK option.
Our tool integrates the proper versions of the Spark SDK and Scala SDK. You can also choose the Add Spark SDK manually option, and download and add the Spark SDK manually. In the next dialog box, select Finish.

Create a Scala application for an HDInsight Spark cluster

In the Eclipse IDE, from Package Explorer, expand the project that you created earlier, right-click src, point to New, and then select Other. In the Select a wizard dialog box, expand Scala Wizards, select Scala Object, and then select Next. In the Create New File dialog box, enter a name for the object, and then select Finish. Paste the following code in the text editor:

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

object MyClusterApp{
    def main (arg: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("MyClusterApp")
        val sc = new SparkContext(conf)

        val rdd = sc.textFile("wasb:///HdiSamples/HdiSamples/SensorSampleData/hvac/HVAC.csv")

        //find the rows that have only one digit in the seventh column in the CSV
        val rdd1 = rdd.filter(s => s.split(",")(6).length() == 1)

        rdd1.saveAsTextFile("wasb:///HVACOut")
    }
}

Run the application on an HDInsight Spark cluster:

a. From Package Explorer, right-click the project name, and then select Submit Spark Application to HDInsight.

b. In the Spark Submission dialog box, provide the following values, and then select Submit:

- For Cluster Name, select the HDInsight Spark cluster on which you want to run your application.
- Select an artifact from the Eclipse project, or select one from a hard drive. The default value depends on the item that you right-click from Package Explorer.
- In the Main class name drop-down list, the submission wizard displays all object names from your project. Select or enter the one that you want to run. If you selected an artifact from a hard drive, you must enter the main class name manually.
Because the application code in this example does not require any command-line arguments or reference JARs or files, you can leave the remaining text boxes empty.

The Spark Submission tab should start displaying the progress. You can stop the application by selecting the red button in the Spark Submission window. You can also view the logs for this specific application run by selecting the globe icon (denoted by the blue box in the image).

Access and manage HDInsight Spark clusters by using HDInsight Tools in Azure Toolkit for Eclipse

You can perform various operations by using HDInsight Tools, including accessing the job output.

Access the job view

In Azure Explorer, expand HDInsight, expand the Spark cluster name, and then select Jobs. Select the Jobs node. If the Java version is lower than 1.8, HDInsight Tools automatically reminds you to install the E(fx)clipse plug-in. Select OK to continue, and then follow the wizard to install it from the Eclipse Marketplace and restart Eclipse.

Open the Job View from the Jobs node. In the right pane, the Spark Job View tab displays all the applications that were run on the cluster. Select the name of the application for which you want to see more details. You can then take any of these actions:

- Hover over the job graph. It displays basic info about the running job.
- Select the job graph, and you can see the stages and info that every job generates.
- Select the Log tab to view frequently used logs, including Driver Stderr, Driver Stdout, and Directory Info.
- Open the Spark history UI and the Apache Hadoop YARN UI (at the application level) by selecting the hyperlinks at the top of the window.

Access the storage container for the cluster

In Azure Explorer, expand the HDInsight root node to see a list of HDInsight Spark clusters that are available. Expand the cluster name to see the storage account and the default storage container for the cluster. Select the storage container name associated with the cluster.
In the right pane, double-click the HVACOut folder. Open one of the part- files to see the output of the application.

Access the Spark history server

- In Azure Explorer, right-click your Spark cluster name, and then select Open Spark History UI. When you're prompted, enter the admin credentials for the cluster. You specified these while provisioning the cluster.
- In the Spark history server dashboard, you use the application name to look for the application that you just finished running. In the preceding code, you set the application name by using val conf = new SparkConf().setAppName("MyClusterApp"). So, your Spark application name was MyClusterApp.

Start the Apache Ambari portal

- In Azure Explorer, right-click your Spark cluster name, and then select Open Cluster Management Portal (Ambari).
- When you're prompted, enter the admin credentials for the cluster. You specified these while provisioning the cluster.

Manage Azure subscriptions

By default, HDInsight Tools in Azure Toolkit for Eclipse lists the Spark clusters from all your Azure subscriptions. If necessary, you can specify the subscriptions for which you want to access the cluster.

- In Azure Explorer, right-click the Azure root node, and then select Manage Subscriptions.
- In the dialog box, clear the check boxes for the subscriptions that you don't want to access, and then select Close. You can also select Sign Out if you want to sign out of your Azure subscription.

Run a Spark Scala application locally

You can use HDInsight Tools in Azure Toolkit for Eclipse to run Spark Scala applications locally on your workstation. Typically, these applications don't need access to cluster resources such as a storage container, and you can run and test them locally.

Prerequisite

While you're running the local Spark Scala application on a Windows computer, you might get an exception as explained in SPARK-2356. This exception occurs because WinUtils.exe is missing in Windows.
To resolve this error, you need to download the executable to a location like C:\WinUtils\bin, and then add the environment variable HADOOP_HOME and set the value of the variable to C:\WinUtils.

Run a local Spark Scala application

Start Eclipse and create a project. In the New Project dialog box, make the following choices, and then select Next:

- In the left pane, select HDInsight.
- In the right pane, select Spark on HDInsight Local Run Sample (Scala).

To provide the project details, follow steps 3 through 6 from the earlier section Set up a Spark Scala project for an HDInsight Spark cluster. The template adds a sample code (LogQuery) under the src folder that you can run locally on your computer. Right-click the LogQuery application, point to Run As, and then select 1 Scala Application. Output like this appears on the Console tab.

Reader-only role

When users submit a job to a cluster with reader-only role permission, Ambari credentials are required.

Link cluster from context menu

From Azure Explorer, expand HDInsight to view the HDInsight clusters that are in your subscription. The clusters marked "Role:Reader" have only reader-only role permission. Right-click a cluster with reader-only role permission. Select Link this cluster from the context menu to link the cluster. Enter the Ambari username and password. If the cluster is linked successfully, HDInsight will be refreshed. The state of the cluster will become linked.

Link cluster by expanding Jobs node

Click the Jobs node; a Cluster Job Access Denied window pops up. Click Link this cluster to link the cluster.

Link cluster from Spark Submission window

Create an HDInsight project. Right-click the package. Then select Submit Spark Application to HDInsight. Select a cluster which has reader-only role permission for Cluster Name. A warning message appears. You can click Link this cluster to link the cluster.

View Storage Accounts

For clusters with reader-only role permission, click the Storage Accounts node; a Storage Access Denied window pops up.
For linked clusters, click the Storage Accounts node; a Storage Access Denied window pops up.

Known problems

When linking a cluster, I would suggest you provide the storage credential. There are two modes to submit jobs. If the storage credential is provided, batch mode will be used to submit the job. Otherwise, interactive mode will be used. If the cluster is busy, you might get the error below.

Creating and running applications

- Create a standalone application using Scala
- Run jobs remotely on an Apache Spark cluster using Apache Livy

Tools and extensions

- Use Azure Toolkit for IntelliJ to create and submit Spark Scala applications
- Use Azure Toolkit for IntelliJ to debug Apache Spark applications remotely through VPN
- Use Azure Toolkit for IntelliJ to debug Apache Spark applications remotely through SSH
- Managing resources
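The Scala sample earlier in this article keeps CSV rows whose seventh column contains a single character. That predicate can be sanity-checked in plain Python before submitting a cluster job (the sample rows below are invented, not the HVAC data):

```python
# rows with 7 comma-separated fields; the seventh field is index 6
rows = [
    "a,b,c,d,e,f,7",    # single digit  -> kept
    "a,b,c,d,e,f,42",   # two digits    -> dropped
    "a,b,c,d,e,f,9",    # single digit  -> kept
]

# same predicate as the Scala filter: s.split(",")(6).length() == 1
kept = [s for s in rows if len(s.split(",")[6]) == 1]
print(len(kept))  # 2
```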
https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-eclipse-tool-plugin
Created on 2007-05-07 17:09 by k0wax, last changed 2009-03-15 15:39 by mrabarnett. This issue is now closed.

---
if map[x][y].overlay:
    map[x][y].overlay.blit(x,y)
---

... and ...

---
if map[x][y].overlay as ob:
    ob.blit(x, y)
---

the second one looks much more fun I think.

Toss your idea out on python-ideas. It isn't horrible. And it helps with while-statement issues. FWIW, I have a patch for it. :)

Hi, I like this idea. I've put together a short patch that will implement inline assignment:

if f() -> name:
    use(name)

or more powerfully:

if f() -> name == 'spam':
    usespam(name)

the old syntax

if something as x:

is still available if that is what is desired.

if (f() == 'spam') -> name:
    newname = name.replace('p', 'h')

Patched against Py3k. Please kick the tires; I've added some tests and developed a PEP.

What's wrong with this?

ob = map[x][y].overlay
if ob:
    ob.blit(x, y)

Is this proposal just about saving one line? If we allow this, how many of the following will be allowed?

if expr as name: <block>
while expr as name: <block>
expr as name  # alternative to "name = expr"

Frankly, the only one that seems to be useful to me is the second. As for using "->", please no, there are plenty of languages that use line noise, but Python doesn't need to be one of them.

> If we allow this, how many of the following will be allowed?
> if expr as name: <block>
> while expr as name: <block>
> expr as name # alternative to "name = expr"

This patch implements your final point: expr as name (albeit with a nominal '->' RARROW rather than 'as'). The patch creates a new expression, assexp (assignment expression). There is no need to implement this for countless other if/while/for constructs, because they accept expressions and this assignment is an expression. (Note it is a patch for a different behaviour than the OP suggested.)

> As for using "->", please no, there are plenty of languages that use
> line noise, but Python doesn't need to be one of them.
I have begun a discussion about this on python-ideas to give it some air, as suggested by Raymond. We can always close the issue as 'wont fix' if it doesn't get off the ground. This issue (although addressing an old concern dating back to the beginning of Python) has been sitting unloved for 9 or so months, and I felt that we should at least resolve it. Cheers, Jervis

> Regarding the proposed syntax:
> if (f() == 'spam') -> name:
>     newname = name.replace('p', 'h')
> Surely that should assign the *bool* result of comparing f()
> with 'spam' to name? Doing anything else is opening the door to a
> world of pain.

You are correct. It does assign the result of the bool. I have made an error in creating the example. This is what happens when I copy and paste and don't check the result. It should read:

if f -> name:
    # use name, (pointless example but in line with the OP's suggestion)

Thanks for picking this up.

At the moment binding occurs either right-to-left with "=", eg. x = y where "x" is the new name, or left-to-right, eg. import x as y where "y" is the new name. If the order is to be right-to-left then using "as" seems to be the best choice. On the other hand, if there should be a form of binding explicitly for use in an expression, in order to prevent accidental use of "=", then the order should probably be the same as "=", ie right-to-left, and a new symbol is needed (using punctuation feels preferable somehow, because "=" uses punctuation). The only symbol I can think of is "~=". How does this:

if ob ~= map[x][y].overlay:
    ob.blit(x, y)

look compared to:

if map[x][y].overlay as ob:
    ob.blit(x, y)

IMHO, of course.

Matthew suggested ~= instead of -> or "as". I dislike this because ~= first makes me think of "approximately equal to", and then it makes me think of augmented assignment, and only then do I remember that although ~ is used in Python for bitwise-not, ~= is not a legal augmented assignment.
Try the patch, you can make changes (for those that aren't aware) by changing the token in Grammar/Grammar to whatever you wish. It is easy to do and you need only recompile after this step. example: assexp: xor_expr ['->' xor_expr] could become assexp: xor_expr ['magic' xor_expr] >>> 'hello' magic words 'hello' >>> words 'hello' Note that Mr Barnett may need to look at other fixes to get his '~=' idea off the ground (tokenizer.c and specifically adding a new token) I've recommended that we close this issue. Cheers, Jervis Rejecting this after discussion on python-ideas: Overview of some of the major objections here: Just for the record, I wasn't happy with "~=" either, and I have no problem with just forgetting the whole idea.
http://bugs.python.org/issue1714448
Opened 5 years ago
Closed 5 years ago
Last modified 5 years ago

#17253 closed New feature (wontfix)

Add foreign object in memory without saving to database

Description (last modified by lukeplant)

Similar to the following post

class Group:
    name = models.Chars()

    def save(self):
        super(Group, self).save()
        # access foreign objects
        members = self.groupmember_set.all()

class GroupMember:
    group = models.ForeignKey(Group)
    member = models.ForeignKey(User)

I have a page to allow people to create a Group, and invite existing users to become members. When the form is submitted, Group data and GroupMember data are submitted together. I would like to override the Group's save() function, and in that save() function I need to access the GroupMember data.

Example code:

g = Group(name='abc')
gm1 = GroupMember(member=user1, group=g)
gm2 = GroupMember(member=user2, group=g)
g.groupmember_set.add(gm1)  # Add to memory, I do not want to save to db immediately
g.groupmember_set.add(gm2)  # Add to memory, I do not want to save to db immediately
g.save()
gm1.save()
gm2.save()

Since the .add() function saves the related object to the db immediately, this causes an error. I do not want to save the Group object first, because it would trigger save() twice.

Change History (3)

comment:1 Changed 5 years ago by lukeplant

Formatting fixed, please use 'Preview' to check formatting, thanks.

If we don't save M2M objects to the DB immediately when running add(), when do we save them? save() doesn't do that; we would need a 'flush-everything-to-the-database' call, which we don't have. Adding one would require a fundamental change to the way that the ORM works - essentially something like the unit-of-work pattern in SQLAlchemy. I'm therefore closing WONTFIX.

Note that saving the Group object first doesn't necessarily mean you need to call save() twice - the add() calls do not need to be followed by save(). (I'm guessing you may have reasons why it is this way in your case, but I don't think the situation is forced on you by Django.)

comment:2 Changed 5 years ago by cyberkoa@…

Sorry that I need to call save() twice. And my case is not as complex as M2M, by the way.

Actually my case is as below: we are creating a sports community website. When a user creates a Sport Group, at the same time the user can invite other users to join as members. In this case, each invited user is created as a record in GroupMember, with the status field set to "invited".

When this Group is created, we want to notify the users who have set their favourite sport to the same sport as the new Group, but skip those "invited" members.

If I do it in views.py, it can be done. However, I would like to do it in models.py, because I feel that this code belongs there. Therefore, I plan to override save() in Group. In this case, I need to know the "invited" users inside save(), so that I can skip them from the notification. If I save the Group before adding the GroupMembers, then inside save() I can't retrieve this information (can I pass parameters?).

Example: I created a football Group

class Group:
    def save(self):
        super().save()
        # notify all users who have 'football' as favourite sport,
        # but skip the invited users
        notify()

comment:3 Changed 5 years ago by akaariai

Why not add a method send_invites to the Group model, and first save the group, then add the members, and then call send_invites? Having a side effect of sending invites in .save() isn't IMHO the way to go. It wouldn't be surprising if you hit a situation where you accidentally send invites because of that side effect.

In short: the use case you have is solvable without altering the behaviour of .add().
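akaariai's suggestion of an explicit send_invites() step, instead of a side effect inside save(), might be sketched like this (plain Python with hypothetical names, not actual Django ORM code):

```python
class Group:
    """Minimal stand-in for the Django model discussed in the ticket."""

    def __init__(self, name):
        self.name = name
        self.members = []    # (user, status) pairs
        self.notified = []

    def add_member(self, user, status):
        self.members.append((user, status))

    def send_invites(self, all_fans):
        # explicit step: notify fans of the sport, skipping users
        # who were already invited as members
        invited = {u for u, s in self.members if s == "invited"}
        for user in all_fans:
            if user not in invited:
                self.notified.append(user)

g = Group("football")
g.add_member("alice", "invited")
g.send_invites(["alice", "bob", "carol"])
print(g.notified)  # ['bob', 'carol']
```

Keeping the notification in its own method avoids accidental sends every time save() runs, which is exactly the pitfall the closing comment warns about.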
https://code.djangoproject.com/ticket/17253
#include <mw/vwsdef.h>

Identifies an application view using two unique identifiers (UIDs): a view UID and an application UID. The application UID is required so that the application associated with the view can be started if it is not already running.

A unique application ID (or application UID). Uniquely identifies the application associated with the view.

Constructs a TVwsViewId object, and initialises both the application UID and the view UID to NULL.

Constructs a new TVwsViewId object from an existing one. This simply performs a member-wise copy: each member variable of the passed-in object is individually copied to the corresponding member variable of the new object.

Constructs a TVwsViewId object with the specified application UID and view UID.

Checks whether the TVwsViewId object being operated upon and the TVwsViewId object specified are different. Returns true if either the application UIDs or the view UIDs are different; otherwise returns false.

Checks whether the TVwsViewId object being operated upon and the TVwsViewId object specified are the same. Returns true if both the application UIDs and both the view UIDs are the same; otherwise returns false.
http://devlib.symbian.slions.net/belle/GUID-C6E5F800-0637-419E-8FE5-1EBB40E725AA/GUID-3DEA9A17-CB50-3DCD-87AC-0E91B377FB0E.html
Writing CGI Scripts in Python

Before putting your CGI scripts on-line, you should be sure that they're really clean, by testing them carefully, especially in near-bounds or out-of-bounds conditions. A script that crashes in the middle of its job can cause large problems, like data inconsistency in a database application. You can eliminate most of the problems by running your script from the command line, then testing it from your HTTP daemon.

First, you have to remember that Python is an interpreted language. This means that several syntax errors will not be discovered until run time. You must be sure your script has been tested in every part of the control flow. You can do that by generating parameter sets that you will hardcode at the beginning of your script. Then, be sure that incorrect input cannot lead to incorrect behaviour of your script. Don't expect that all parameters received by your script will be meaningful. They can be corrupted during communication, or some hacker could try to obtain more data than normally allowed.

Listing 5 shows a different version of our Hello World script and demonstrates the following features:

- Tuples: Tuples are arrays consisting of a number of values separated by commas. Output tuples are enclosed in parentheses. The localtime() function returns a tuple, which can be assigned to one variable (that variable becomes a tuple). Or, as in this script, individual elements of the tuple can be assigned at one time to several variables.
- The elif ("else if") statement.

Listing 5 has two syntax errors that are not detected when the interpreter loads the script, but will crash it when executed. It will crash at Christmas, because there is a call to a Christmas() function which has not been defined, and it will crash again on New Year's Day, because in addition to "Happy New Year!", it tries to print a "Max" variable which doesn't exist (due perhaps to a cut-and-paste from a script intended to wish someone happy birthday?).
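The tuple behaviour described above - assigning localtime()'s whole result to one variable, or unpacking its leading elements into several - still works the same way in modern Python. Listing 5 itself is not reproduced here, so this is a generic sketch:

```python
import time

# the whole struct_time can go into a single name...
now = time.localtime()

# ...or its leading fields can be unpacked into several variables
year, month, day = now[:3]

print(1 <= month <= 12 and 1 <= day <= 31)  # True
```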
Here is what you'll find in the error_log file if the script is accessed on Christmas:

Traceback (innermost last):
  File "/cgi-bin/buggy.py", line 59, in ?
    Main()
  File "/cgi-bin/buggy.py", line 53, in Main
    Christmas()
NameError: Christmas

The fact that the script seems to execute normally (especially on New Year's Day, since everything that should have been printed is actually printed) can be a pitfall. The script has actually crashed! Of course, in this script, crashing is not a big problem. But in an Intranet application, it could be very harmful. Imagine, for example, a script that displays a message saying it has updated your stock database, but has in fact crashed immediately after giving the message. The user thinks everything is going well, but the data have not been updated.

Let's get back to Listing 4. We've already seen that the generated xbm is not good; but maybe there are other problems. What happens if:

- The script is called with: <img src="?\"> instead of: <img src="? _url=_">?
- The database file counters.gdbm does not exist?
- The access count exceeds 9999?

I suggest you try these, and try your own solutions. For the last situation in the list - the access count exceeds 9999 - there are several solutions; I suggest modifying the DIGITS value if the incremented value in the inc_counter() function has a length that exceeds DIGITS.

How would you see the generated file if your web browser displays nothing? Maybe you could add the following code, replace the call to CGImain() with TSTmain(), and run the script from the command line:

def TSTmain() :
    #######
    url = ""
    counter = get_put_counter( url )
    print_header()
    print_digits_values( counter )
    print_footer()

Listing 6 shows the HTML source for a form we are going to discuss for the remainder of this article. It allows the user to enter some values to perform a query on a database. The action parameter of the form should be adapted to your needs.
For a real application, you should replace localhost by the fully qualified name of your host. The name of the script should also be adapted to call the right thing. Note that the HTML code defines a hidden field (TableName).

Let's start with a script that just echoes values entered by the user (see Listing 7). You'll see that, even if you leave the form empty, two parameters are displayed. The first one is (TableName), a hidden parameter in our form, and the second one is the value of the Submit button (which is also a field).

Notice that:

- The cgi module imported by our scripts is used to parse the input sent by an HTML form. It works with GET and POST methods.
- cgi.SvFormContentDict() builds a dictionary with { field name : field value } couples corresponding to the data encoded by the user.
- cgi.escape() is used to convert special characters into their HTML escape sequence (for example, < becomes &lt;).

Reader comments:

Thanks for the tutorial. :) There is a syntax error in listing 7, in the line "if len( fields ) = 0 :". You probably see it now, it should have been "==" and not "=" - we need the comparison operator, not the assignment operator. -Nobody
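The escaping that cgi.escape() performed in the article's era is done by html.escape() in modern Python (the cgi module was deprecated in Python 3.11 and removed in 3.13):

```python
import html

# special characters become their HTML escape sequences
print(html.escape("<b>Jack & Jill</b>"))
# &lt;b&gt;Jack &amp; Jill&lt;/b&gt;
```

html.escape() also escapes quote characters by default, which cgi.escape() only did when asked.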
http://www.linuxjournal.com/article/1368?page=0,2
CC-MAIN-2014-49
refinedweb
875
63.7
This project was inspired by existing cheminformatics APIs such as the CDK, Marvin, and Frowns. The goal of the project is to produce a full-featured cheminformatics API for the .NET framework. Early-stage development will focus on data structures and classes for file I/O. The first rough class outlines are in the repository. The basic ChemicalStructure object does not contain bonds. A Molecule object is a ChemicalStructure which contains bond data. In the vecmath namespace, the Tuple3D objects are currently classes. They ... The rough draft of the core classes is almost done. If I have the free time, the first files should be added to the repository by Saturday. Copyright © 2009 SourceForge, Inc. All rights reserved.
http://sourceforge.net/projects/chemsharp/
crawl-002
refinedweb
119
78.35
#include <CoreLinuxGuardPool.hpp>

List of all members.

Default constructor. [protected]

createPoolGroup creates a semaphore set with the requested number of semaphores in the group and adds the semaphores to theSemaphores with the initial count and index set properly.

destroyPoolGroup validates that none of the semaphores in the extent are being used and then destroys the extent and all the semaphores associated with it. The method assumes that the group is the last in the vector.

[static] isLocked determines if the object is currently locked. Calls the singleton instance's isSynchronizedLocked.

isSynchronizedLocked resolves whether Synchronized is in a locked state.

lock is called by a guard when control is needed over an object's resource access. Calls the singleton instance's lockSynchronized.

lockSynchronized manages the associations of objects to semaphores in the pool when establishing the guard.

release is called by a guard object during its destruction.

releaseSynchronized manages the associations of objects to semaphores in the pool when releasing a guard.

Run-time interface for changing the extent size. The next time the pool goes into extent processing, this value will be used.
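As a rough illustration of the guard pattern described above (a guard calls lock on construction and release on destruction), here is a hypothetical stand-in; FakePool is not CoreLinux's actual pool, just a minimal sketch of the lock/release contract:

```cpp
#include <cassert>

// Hypothetical stand-in for the pool: lock() and release() bracket
// access, and isLocked() reports the current state.
class FakePool {
public:
    void lock()    { ++locks_; }
    void release() { --locks_; }
    bool isLocked() const { return locks_ > 0; }
private:
    int locks_ = 0;
};

// RAII guard: locking in the constructor and releasing in the
// destructor mirrors the lock/release pair the class reference describes.
class Guard {
public:
    explicit Guard(FakePool& p) : pool_(p) { pool_.lock(); }
    ~Guard() { pool_.release(); }
private:
    FakePool& pool_;
};
```

A guard created on the stack therefore releases the pool automatically when it goes out of scope, even if an exception is thrown.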
http://corelinux.sourceforge.net/cl_classref/class_corelinux__CoreLinuxGuardPool.html
crawl-001
refinedweb
182
50.84
Tip: If you are using floats, make them doubles instead, as C will convert them anyway and give you warnings. Here is an example, EX6_2, with two parameters.

// ex6_2 - a simple function with two parameters a and b, both doubles.
// It calculates the percentage of a to b and returns that value.
#include "stdafx.h"
#include <stdio.h>

double getpercent(double a, double b)
{
    return a / b * 100.0;
}

int main()
{
    // Call the function
    double value = getpercent(10.6, 90.2);
    printf("The percentage is %f", value);
    return 0;
}

The output from this is:

The percentage is 11.751663

Parameter Passing - Pass By Value

In C, pass by value is the mechanism used to pass parameters into a function. A copy is made of the variable and that copy is used. This matters less for int or double variables than for larger data structures like arrays or structs. Because a copy is used, the function cannot alter variables that are passed in: parameters only pass values in, not out. This is called pass by value. The alternative is pass by reference, but you need C++ for that, as C does not support it directly. There is a workaround in C, but it needs pointers; in the next lesson, we'll cover those.

On the next page: Learn about Function Prototypes.
http://cplus.about.com/od/learningc/ss/clessonsix_3.htm
crawl-002
refinedweb
224
76.52
>>>>> On Thu, 21 Aug 2008 08:35:11 +0300, Eli Zaretskii <address@hidden> said:

>> > Please suggest which variables to GCPRO.
>>
>> GCPRO doesn't help here. It just protects Lisp Objects from being
>> collected, but not Lisp String contents from being relocated.

> Really? that's news to me.

Yes. It seems to be a common misunderstanding. This can cause, and actually has caused, nasty bugs that are difficult to reproduce or debug. So every developer should be aware of this.

  GCPRO1 (s);
  p = SDATA (s);
  SOME_OPERATION_INVOLVING_GC; /* e.g., DECODE_FILE, ENCODE_UTF_8 */
  /* p no longer points to valid data if GC happened. */
  /* One should do p = SDATA (s) again before using p. */

> So what means do we have for protecting pointers to Lisp strings
> from GC?

Nothing can prevent Lisp String contents from being relocated by GC.

>> Yes, `nm' is not corrupted if DOS_NT because of copying. But
>> otherwise, it may be corrupted by GC and it is actually used
>> afterwards.
>>
>> 1455   if (1
>> 1456 #ifndef DOS_NT
>> 1457       /* /... alone is not absolute on DOS and Windows.  */
>> 1458       && !IS_DIRECTORY_SEP (nm[0])
>> 1459 #endif

> Is nm the only variable in danger? If so, how about if we simply
> copy it on all platforms?

I think it is one possible solution if properly commented.

YAMAMOTO Mitsuharu
address@hidden
https://lists.gnu.org/archive/html/emacs-devel/2008-08/msg00916.html
CC-MAIN-2016-44
refinedweb
214
69.48
When building a classifier, we assume a large enough training data set with labels is available. This situation is what we call supervised learning. In a real-world setting, such training examples with labels need to be acquired. In any application domain where labeling requires domain expertise, such as in medicine, gathering a large training set with labels is an expensive and time-consuming task. In such cases, it is not uncommon to use a small set of correctly labeled examples to label the rest of the training examples. This type of learning is referred to as semi-supervised learning, and it falls somewhere between supervised and unsupervised learning. Often the term semi-supervised classification is used to describe the process of labeling training examples using a small set of labeled examples, to differentiate it from semi-supervised clustering. In semi-supervised clustering, the goal is to group a given set of examples into different clusters with the condition that certain examples must be clustered together and certain examples must be put in different clusters. In other words, constraints are imposed on the resulting clusters in terms of the cluster memberships of certain specified examples. In this blog post, I am going to illustrate semi-supervised classification, leaving semi-supervised clustering for another post. When we have a small set of labeled examples and we want to rely on them to label a much larger set of unlabeled training examples, we need to make some assumptions. For example, we might assume that training examples close to each other are likely to have similar class labels, an assumption made when applying k-nearest neighbor classification. Instead, one might assume the classes have Gaussian distributions and try to iteratively find the distribution parameters. We must remember that our result will only be as good as our assumptions.
Label Propagation Algorithm (LPA)

One semi-supervised classification method is label propagation, which I will explain here. This method is based on the assumption that examples near each other are likely to have similar class labels. The basic idea of this method is to consider all examples, labeled and unlabeled, as interconnected nodes in a network. Each node in the network tries to propagate its label to other nodes. How much of a node's label influences other nodes is determined by their respective closeness or proximity. We will work through a series of steps to illustrate the working of the label propagation algorithm. Let us consider the following nine training examples, each with two features:

[5.4 3.9], [4.8 3.0], [5.1 3.3], [5.7 2.8], [5.7 3.0], [5.9 3.2], [6.9 3.2], [6.4 2.7], [6.7 3.0]

We know the labels of three examples only. These three examples are shown above in color, each color representing a different class label. What the label propagation algorithm does is try to determine the labels of the six unlabeled examples. The first step is to calculate the closeness between each pair of examples. In LPA, the closeness between examples is measured by w_ij = exp(-d_ij^2 / sigma^2), where d_ij^2 is the squared Euclidean distance between the example pair i-j and sigma is a parameter that scales proximity. Since we have nine examples, we end up with a 9x9 symmetric weight matrix W. The following code snippet shows the calculation of W. The examples are in array X and sigma equals 0.4 (so sigma^2 = 0.16).

import numpy as np
from sklearn.metrics.pairwise import euclidean_distances

D = euclidean_distances(X, X, squared=True)
W = np.exp(-D / 0.16)
print(W)

[[1.   0.   0.06 0.   0.   0.01 0.   0.   0.  ]
 [0.   1.   0.32 0.   0.01 0.   0.   0.   0.  ]
 [0.06 0.32 1.   0.02 0.06 0.02 0.   0.   0.  ]
 [0.   0.   0.02 1.   0.78 0.29 0.   0.04 0.  ]
 [0.   0.01 0.06 0.78 1.   0.61 0.   0.03 0.  ]
 [0.01 0.   0.02 0.29 0.61 1.   0.   0.04 0.01]
 [0.   0.   0.   0.   0.   0.   1.   0.04 0.61]
 [0.   0.   0.   0.04 0.03 0.04 0.04 1.   0.32]
 [0.   0.   0.   0.   0.   0.01 0.61 0.32 1.  ]]

Next, we associate with each node a c-dimensional label vector, c being the number of classes, that reflects the class probabilities associated with the node's training example. In our example, we have three classes. We set the label vectors to ensure that each vector sums to one and that the examples/nodes with known labels have 0's and 1's only in their respective vectors. These initial nine three-dimensional vectors are shown below as a matrix Y.

[[0.33 0.33 0.34]
 [0.33 0.33 0.34]
 [1.   0.   0.  ]
 [0.33 0.33 0.34]
 [0.   1.   0.  ]
 [0.33 0.33 0.34]
 [0.33 0.33 0.34]
 [0.33 0.33 0.34]
 [0.   0.   1.  ]]

Having initialized the label probabilities and determined the transition matrix T (the row-normalized weight matrix W), we are now ready to propagate label information in the network. We do this by updating the Y matrix via the relationship Y <- T.Y. The rows of the updated Y matrix are normalized to ensure the sum of each row equals 1. The rows corresponding to nodes of known labels are reset to have 0's and 1's only in their respective vectors, as the labels of these nodes are known and fixed. The first matrix below shows Y after one round of these steps (in transposed form); the second shows Y after the updates are repeated until convergence.

[[0.36 0.48 1.   0.23 0.   0.25 0.22 0.27 0.  ]
 [0.32 0.26 0.   0.54 1.   0.5  0.22 0.28 0.  ]
 [0.33 0.26 0.   0.23 0.   0.25 0.56 0.46 1.  ]]

[[0.58 0.93 1.   0.04 0.   0.04 0.01 0.02 0.  ]
 [0.24 0.05 0.   0.91 1.   0.89 0.02 0.22 0.  ]
 [0.18 0.02 0.   0.05 0.   0.07 0.97 0.76 1.  ]]
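The update loop just described (multiply by the transition matrix, renormalize rows, clamp the labeled nodes) can be sketched end to end on a toy three-node graph; this is my own minimal illustration, not the blog's nine-example data:

```python
import numpy as np

# Toy graph: node 0 labeled class 0, node 2 labeled class 1, node 1 unlabeled.
# Node 1 is strongly connected to node 0 (0.9) and weakly to node 2 (0.1).
W = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.1],
              [0.0, 0.1, 1.0]])
T = W / W.sum(axis=1, keepdims=True)      # row-normalized transition matrix

Y = np.array([[1.0, 0.0],                 # known: class 0
              [0.5, 0.5],                 # unknown: uniform to start
              [0.0, 1.0]])                # known: class 1

for _ in range(20):
    Y = T @ Y                             # propagate labels
    Y /= Y.sum(axis=1, keepdims=True)     # renormalize each row
    Y[0] = [1.0, 0.0]                     # clamp the labeled nodes
    Y[2] = [0.0, 1.0]

print(Y[1])  # node 1 ends up dominated by class 0, its strong neighbor
```

The unlabeled node converges toward class 0, matching the intuition that labels flow mainly along the strongest edges.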
https://iksinc.online/2019/03/27/how-to-build-a-labelled-training-data-set-with-a-few-labelled-examples/
CC-MAIN-2020-29
refinedweb
1,028
57.2
Windows Mobile 6 and 6.1 both share the same version of the Windows CE kernel (5.2), meaning that previous techniques to determine the version of Windows Mobile on a device need to be modified to differentiate between these two most recent versions. It is not enough to compare the major and minor version numbers of the kernel. One possible technique is to programmatically determine the current AKU in use by the Windows Mobile device, as outlined by this blog posting.

What is an AKU?

An Adoption Kit Update (AKU) is an update to the Windows Mobile operating system which is akin to a service pack for a desktop version of Microsoft Windows. An AKU is usually a vehicle to ship an extra feature or fix required by a specific Windows Mobile device under development. Typically the features enabled by an AKU require specific hardware (such as a new type of keyboard), meaning it does not make sense to make AKUs available to older devices. Occasionally an AKU enables significant features which are of a software nature. For example, AKU 2.0 for Windows Mobile 5.0 introduced the Messaging and Security Feature Pack (MSFP), which enabled closer integration with Exchange Server 2003.

Determining the AKU

Since an AKU typically needs specific hardware and generally doesn't alter the end-user behaviour of a device, it isn't typical to need to detect a device's AKU. However, when you must detect the AKU of a device, you can look within the HKLM\SYSTEM\Versions registry key for a string value called, not surprisingly, Aku. An example of how you may access this registry value is shown below:

using Microsoft.Win32;

private string GetAKUVersion()
{
    RegistryKey key = null;
    string aku;
    try
    {
        key = Registry.LocalMachine.OpenSubKey(@"SYSTEM\Versions");
        aku = (string)key.GetValue("Aku");

        // Most of the time the AKU string is prefixed with a "."
        // so remove it.
        if (aku.StartsWith("."))
            aku = aku.Substring(1);
    }
    finally
    {
        if (key != null)
            key.Close();
    }

    return aku;
}

The Channel9 Windows Mobile Developer Wiki contains a list of AKUs that enables you to match up AKUs with OS build numbers, which is another way to determine which AKU is present.

Sample Application

[Download akudetection.zip - 10 KB]

A small example application is available for download that demonstrates using the GetAKUVersion function outlined above to display the Windows CE kernel and Adoption Kit Update version numbers for the device the application is currently running on. If you run this application on a Windows Mobile 6.1 device, you will notice that the AKU version is reported as 1.0 (or higher), compared to older Windows Mobile 6 devices which have an AKU version below 1.0 (such as 0.4.2). Both versions of Windows Mobile report the use of various builds of the Windows CE 5.2 kernel.

Just noting that the link for akudetection.zip appears to now be bad. Thanks ~Bill

Hi Bill, thanks for the feedback, it's greatly appreciated. I recently moved my blog between servers and it seems some of the content had a few errors after the move. I've fixed the link, and you should now be able to download the code sample. This weekend I'll check this issue isn't present in other posts on this blog (as I plan to complete the migration). Thanks, Christopher Fairbairn

Do you agree with using the build ID to differentiate WM 6.0 and WM 6.1 OS devices? Check this link:

This is what it says on the link above: "Aside from the visual and feature distinctions, the underlying CE versions can be used to differentiate WM 6.0 from WM 6.1. The version of Windows CE in WM 6.0 is 5.2.*, with the final number being a 4 digit build ID (e.g. 5.2.1622 on HTC Wing). In WM 6.1, the CE version is 5.2.* with a 5 digit build number (e.g. 5.2.19216 on Palm Treo 800w)."
http://www.christec.co.nz/blog/archives/337/comment-page-1
CC-MAIN-2013-20
refinedweb
670
66.74
Hello, I have tomcat4 installed, and when I run tomcat with /etc/init.d/tomcat4 start, it uses JAVA_HOME=/usr/lib/kaffe and then tomcat immediately stops running. When I check the status of the server, I get the message that the Tomcat servlet engine is not running but the pid file exists. So, when I change the JAVA_HOME variable using export JAVA_HOME=/usr/java/j2sdk1.4.2_09 and then start tomcat again using /etc/init.d/tomcat4 start, it runs continuously without a problem. After this, when I try to run a JSP file, it doesn't run and comes up with errors:

org.apache.Jasper.JasperException: unable to compile class for jsp
/var/lib/tomcat4/work/Standalone/localhost/../jspfilename.java:8: package servlet_classes does not exist
import servlet_classes.Database;;
^

So, where am I going wrong? Kindly guide me; as you would know, I'm a complete newbie here! Thanks in advance.
http://mail-archives.apache.org/mod_mbox/tomcat-users/200511.mbox/%3C20051117101634.69341.qmail@web50311.mail.yahoo.com%3E
CC-MAIN-2014-15
refinedweb
162
60.92
Introduction

Considering that more and more people write their blogs in Markdown, in this Wagtail tutorial I will show you how to add Markdown support to our Wagtail blog app.

Import the new MarkdownField and MarkdownPanel

Before we start, we should plan how to implement this feature. There are two main points to consider: first, users should be able to edit Markdown in the Wagtail admin page and have the content saved to the database; second, content in Markdown format should be rendered properly in our template pages. To make this Wagtail-Markdown support app reusable, we create a new app called wagtailmd using the command python manage.py startapp wagtailmd, and activate it in INSTALLED_APPS of settings.py.

└── wagtailmd
    ├── __init__.py
    ├── utils.py

Here is the app structure. In utils.py, we create a new Wagtail field and a new Wagtail panel so users can save content in Markdown syntax to the database. Now we edit utils.py:

from django.db.models import TextField
from django.utils.translation import ugettext_lazy as _

from wagtail.admin.edit_handlers import FieldPanel
from wagtail.utils.widgets import WidgetWithScript


class MarkdownField(TextField):
    def __init__(self, **kwargs):
        super(MarkdownField, self).__init__(**kwargs)


class MarkdownPanel(FieldPanel):
    def __init__(self, field_name, classname="", widget=None, **kwargs):
        super(MarkdownPanel, self).__init__(
            field_name,
            classname=classname,
            widget=widget,
            **kwargs
        )

        if self.classname:
            if 'markdown' not in self.classname:
                self.classname += " markdown"
        else:
            self.classname = "markdown"

Here we create MarkdownField; it is actually a built-in Django TextField, so its value is treated as text by Wagtail. What you should notice here is that we add a class name markdown in MarkdownPanel; in the Wagtail admin page this tells us which edit panel is a Markdown editor, and later we will see how to use JavaScript to initialize the Markdown editor on it.
After we create the new field and panel, we can change the blog post body type by editing blog/models.py in this way:

class PostPage(Page):
    body = MarkdownField()

    content_panels = Page.content_panels + [
        MarkdownPanel("body"),
    ]

As you can see, the body of PostPage now uses Markdown syntax, and we also change the panel in content_panels so the online editor becomes a Markdown editor. Remember to migrate the database after the model change:

python manage.py makemigrations
python manage.py migrate

SimpleMDE Markdown Editor

Of course we could use the native textarea as our Markdown editor; however, a powerful Markdown editor helps users who are less familiar with Markdown syntax. That is the reason we import SimpleMDE into our Wagtail blog. We can download the SimpleMDE Markdown editor here and then import the CSS and JS files into our project.

└── utils.py

simplemde.min.css and simplemde.min.js are the files needed to run the SimpleMDE Markdown editor; simplemde.attach.js is a file added by us to insert some custom code. Now we need to inject some code into the admin page of Wagtail, and Wagtail provides a way to do this: on loading, Wagtail will search for any app with the file wagtail_hooks.py and execute its contents.
Create the file wagtail_hooks.py:

from django.conf import settings
from wagtail.core import hooks


@hooks.register('insert_editor_js')
def editor_js():
    s = '<script src="{0}wagtailmd/js/simplemde.min.js"></script>\n'
    s += '<script src="{0}wagtailmd/js/simplemde.attach.js"></script>\n'
    return s.format(settings.STATIC_URL)


@hooks.register('insert_editor_css')
def editor_css():
    s = '<link rel="stylesheet" href="{0}wagtailmd/css/simplemde.min.css">\n'
    s += '<link rel="stylesheet" href="">\n'
    return s.format(settings.STATIC_URL)

We injected the CSS and JS files through Wagtail hooks. simplemde.min.js, simplemde.min.css and font-awesome.min.css are the files needed for SimpleMDE to work. We add custom code in simplemde.attach.js to initialize the Markdown editor; below is the code of simplemde.attach.js:

$(document).ready(function() {
    $(".markdown .field-content textarea").each(function(index, elem) {
        var mde = new SimpleMDE({
            element: elem,
            autofocus: false
        });
        mde.render();
    });
});

The code above searches for elements that have the class value markdown and initializes the textarea; this class value is set in the definition of MarkdownPanel. Now we can edit Markdown content using SimpleMDE and save the content to the database as plain text. Here is a screenshot of the Markdown editor.

Render Markdown

Now we need to render the Markdown content from the database. There are many third-party packages that can be used; here we choose Python-Markdown. First, we install Python-Markdown, which will render the Markdown for us:

pip install Markdown

Like the richtext filter from Wagtail, we can create a custom Django template filter to render the Markdown. Edit the app to create a Django template tag: create the file templatetags/wagtailmd.py, and the app structure will look like this:
├── templatetags
│   ├── __init__.py
│   └── wagtailmd.py
├── utils.py
└── wagtail_hooks.py

from django import template
import markdown

register = template.Library()


@register.filter(name='markdown')
def markdown_filter(value):
    return markdown.markdown(
        value,
        output_format='html5'
    )

As you can see from the code above, we create a Django template filter and use the Python-Markdown API to convert the Markdown to HTML. Next we modify the post template to make it work:

{% extends "blog/base.html" %}
{% load static wagtailcore_tags wagtailimages_tags blogapp_tags wagtailmd %}

{% block content %}
<h1>{{ post.title }}</h1>
<hr>
{{ post.body|markdown|safe }}
<hr>
{% post_tags_list %}
{% endblock %}

In the template above, we load wagtailmd first; then {{ post.body|markdown|safe }} is the magic here, since it processes body as Markdown content. Here is a screenshot of the post page.

Other Wagtail Markdown resources

There is a package, wagtail-markdown, that can help you accomplish the tasks above; it also has a block that lets you use Markdown in a StreamField (an awesome feature I will talk about in the next tutorials). I wrote this so you would have a good understanding of how this works, and we can add more features (LaTeX support) in a bit.

Conclusion

In this Wagtail tutorial, I showed you how to add Markdown support to our Wagtail blog app. You can enable Markdown in your Wagtail project or even a plain Django project, since there is no big difference. To quickly add Markdown support to your own Wagtail project, you can just copy the wagtailmd app and make some modifications, or you can use the wagtail-markdown package.
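Outside of Django, the heart of the filter above is just the markdown.markdown() call; a quick sanity check of what it returns:

```python
import markdown

# The same call the template filter makes, applied to a small input.
html = markdown.markdown("A **bold** word", output_format="html5")
print(html)  # <p>A <strong>bold</strong> word</p>
```

Because the filter returns raw HTML, the |safe filter in the template is what stops Django from escaping it again.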
https://www.accordbox.com/blog/wagtail-tutorials-8-add-markdown-support/
CC-MAIN-2021-31
refinedweb
1,034
58.08
My wife is always losing her phone, so a while ago I wrote a little webservice that I could ping which would perform a "Find My iPhone" on her phone (logging into iCloud etc.); you can find the code for this on GitHub. Now this works great if I'm the one kicking it off, but it's not very easy for my wife to do... so I have just written the following rule:

rule "Find iPhone via Nest"
when
    Item living_room_target_temperature_c changed to 17
then
    sendCommand(Find_Kaths_Phone, ON)
    sendCommand(living_room_target_temperature_c, 20)
end

Whilst I'm sure most will know what this does, for the beginners (like me!): this triggers when the temperature gets set to 17 degrees. It will perform a Find My iPhone, and then set the temperature back to 20! I will be surprised if this isn't my most popular rule in the house!

Using the thermostat as an input device: clever!

It's the only thing I have stuck to a wall not likely to get lost!

This is genius!!! I love it! I wonder if you could do the same sort of thing from a smart Samsung TV to listen for "Where the F*$# is my phone?"

rossdargan, I mentioned before how much I love this idea. So I decided I needed this as well. There are 4 iOS devices in my house so this could come in handy. I made a PHP web service to run on my Linux box to do the same thing. I have a thermostat on the way that will be tied in to do just what you did. Thanks for the creative idea!!! J

Thanks for taking the time to let me know! My wife still loves it (beware: it might take up to a minute for openHAB to pick up on the temp change!)

My pleasure. The thermostat I have coming is a Z-Wave one, so I can set the refresh polling for the item. I don't have any experience with the Nest one. Either way, if it helps me find my phone or the wife's phone, or the kids' iPad, etc., then I am happy.

I'm looking to try out this idea, but have no idea where to begin with installation. How is the item "Find_Kaths_Phone" configured?
Are there any instructions anywhere on how to get a webservice like this up and running? I read the readme on GitHub, but I'm just not familiar with this. Any tips? Thanks for the help!

Sure. All that item does is make a POST request to a server I have:

Switch Find_Kaths_Phone "Find Kaths Phone" (Find) { http=">[ON:POST:]" }

The software that the server is running is what is in the GitHub repo. It's a self-hosted C# program that makes the appropriate web calls to iCloud to perform the Find My iPhone. The code @crankycoder is writing might be more useful if he is willing to open source it, but I don't know what code would work best for you?

Mine does pretty much the same thing, just in PHP. I would be happy to put my code out there. I have a GitHub account; I'll try to get it up there. I just use a rule for mine. So my item looks like this:

Switch FindJasonIphone "Find Jason's Iphone"

and then I have a rule set up:

import org.openhab.core.library.types.*
import org.openhab.model.script.actions.*

rule "Find Jason Phone"
when
    Item FindJasonIphone changed from OFF to ON
then
    logInfo("FindMyIPhone", "Request For Jason's Iphone")
    sendHttpGetRequest("")
    sendCommand(FindJasonIphone, OFF)
end

I just send a GET request and pass in the id for the phone I want to find. Then I turn the switch back off. So it's almost like the useless switch. I love the idea of using the thermostat too!

Anyways, I have the same problem. However, I'm using Alexa/Amazon Echo to trigger the Find My iPhone feature: "Alexa, turn on Q's iPhone". You need to do the reverse - lock the iPhone/iPad. That is my favorite automation feature (not so much my kids though). "Dinner Time Kids! Alexa, Turn off Red iPad Mini". Q

@Jeff_Smeker I put my 2 files out on GitHub. Requires at least PHP 5.4. Let me know if I can help at all.

@crankycoder hey, thanks a bunch for that. Got it working perfectly. And I was able to add it to my Amazon Echo via the Echo Bridge, so I can say "turn on Jeff's phone". Love it!
Thanks again.

That's awesome!!!! Glad to hear it worked out for you.

Hi, is this code still working for you? I've followed your instructions but I get a blank table when commenting out line 21 and running index.php. I've confirmed my login credentials are correct; any ideas?

I have two versions that still work:

This one runs as a Windows service and works fine.

This one runs in a Docker container and uses message queues - probably the easiest to use with openHAB. Follow the instructions in there to install the app using Docker, then add an item like this:

Switch Find_Adams_IPad "Find Adams Ipad" (Find) { mqtt = ">[localbroker:/findphone:command:*:Adam's iPad]" }

Have fun.
https://community.openhab.org/t/find-my-iphone-from-my-nest/5663
CC-MAIN-2017-43
refinedweb
880
81.12
Not many people are using Linux right now, or even programming in it. But it's pretty easy. You hear a lot about gcc, but that's just for C; if you want to use C++ in your programs you need to use g++. Some versions of Linux don't have all the header and library files, like Mandrake. But these are the advanced files like OpenGL and OpenAL. For these files, search online and/or on the site of the people who make your version of Linux.

First you have to open a text or code editor (whatever comes with your version of Linux), something like vi or Advanced Editor. If you're using an editor, make sure it is set to C++. Now code as you normally do. You will find some differences: for example, when a console program closes in Windows it says "Press any key to continue..."; in Linux this does not happen, so you have to use a dummy read so a program like hello world won't just pop up and then close. A simple hello world program is something like this:

#include <iostream>
using namespace std;

int main(void)
{
    char dummy[2];
    cout << "hi\n";
    cin >> dummy;
    return 0;
}

(You should have a blank line at the end of the file, or older versions of g++ give you a warning.)

Now that you have coded something, you have to save it. Make sure it is saved as a .cpp file; that is important. Save it somewhere you know, like your user directory. Go into Terminal or Konsole (whatever your version calls it) and type these commands:

cd <the directory of the file>

If you want to see the files in that directory, type ls. Then compile with:

g++ filename.cpp -o outputname

where filename.cpp is your source file and outputname is where you want the executable to be saved. That will compile your code; if there are any errors or warnings, g++ will tell you about them. Now go to the directory it compiled in and click on the file. If it does not open up, right-click on it and check "Run in terminal".

Please write comments.
http://planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=3219&lngWId=3
CC-MAIN-2018-43
refinedweb
346
90.5
How to Create a Redirect on wikiHow

This wikiHow article will show you how to create a redirect on wikiHow. A redirect connects one page with another, meaning when the redirect is accessed, it loads the target page. With pages in the article namespace, redirects must be interchangeable under the Merge Policy. Learn more about redirects at wikiHow:Redirect.

Steps

- 1. Open the page editor. Click an Edit link when you are on the article.
- 2. Switch to advanced editing. Click the Switch to Advanced Editing link under the title at the top of the editing page. If you're already in the Advanced Editor, skip this step. The Advanced Editor contains one main editing field (text box).
- 3. Delete the existing text in the editing field.
- 4. Type #REDIRECT [[name of target article]].
- 5. Save your changes. Click the Publish button at the bottom of the page to save your changes. The edit summary will automatically be written if you leave the box blank.

It has always been suggested that only New Article Boosters and Administrators create redirects, but if you know the policies well, there is no solid policy against other users creating redirects.
https://www.wikihow.com/Create-a-Redirect-on-wikiHow
CC-MAIN-2018-05
refinedweb
182
65.93
A package to convert Python type annotations into JSON schemas

Project description

pytojsonschema

A package that uses static analysis - ast - to convert Python 3 function type annotations to JSON schemas. This allows you to auto-generate the validation schemas for JSON-RPC backend functions written in Python. Current support is for Python 3.8+ and JSON schema draft 7+.

Getting started

Installation

From a Python 3.8+ environment, run pip install pytojsonschema.

Scan a package

After installing the package, you can open a Python terminal from the root of the repo and run:

import os
import pprint

from pytojsonschema.functions import process_package

pprint.pprint(process_package(os.path.join("test", "example")))

The example package will be scanned and JSON schemas will be generated for all the top-level functions it can find.

Scan a file

You can also target specific files, which won't include the package namespacing in the result value. Following on in the same terminal:

from pytojsonschema.functions import process_file

pprint.pprint(process_file(os.path.join("test", "example", "service.py")))

Include and exclude patterns

Include and exclude unix-like patterns can be used to filter function and module names we want to allow/disallow for scanning. See the difference when you now run this instead:

pprint.pprint(process_package(os.path.join("test", "example"), exclude_patterns=["_*"]))

Similarly, but applied to specific files:

pprint.pprint(process_file(os.path.join("test", "example", "service.py"), exclude_patterns=["_*"]))

Things to take into account:

- An exclude pattern match overrides an include match.
- __init__.py files are not affected by pattern rules and are always scanned. However, you can still filter their internal functions.

Type annotation rules

Fitting Python's typing model to JSON means not everything is allowed in your function signatures. This is a natural restriction that comes with JSON data serialization. Hopefully, most of the useful stuff you need is allowed.
Allowed types

Base types

The basic types bool, int, float, str, None and typing.Any are allowed. You can also build more complex, nested structures with typing.Union, typing.Optional, typing.Dict (only str keys are allowed) and typing.List. All these types have a direct, unambiguous representation in both JSON and JSON schema.

Custom types

Your functions can also use custom types like the ones defined using an assignment of typing.Union, typing.List, typing.Dict and typing.Optional, as in:

ServicePort = typing.Union[int, float]
ServiceConfig = typing.Dict[str, typing.Any]

You can use one of the new Python 3.8 features, typing.TypedDict, to build stronger validation on dict-like objects (class-based syntax only). As you can see, you can chain types with no restrictions:

class Service(typing.TypedDict):
    address: str
    port: ServicePort
    config: ServiceConfig
    tags: typing.List[str]
    debug: bool = False

Note 1: Whilst Python itself will not auto-populate default values, you can use them to make the property not required.

Also, if you need to restrict the choices for a string type, you can use Python enums:

import enum

class HTTPMethod(enum.Enum):
    GET = "GET"
    POST = "POST"
    PATCH = "PATCH"
    DELETE = "DELETE"

def my_func(http_method: HTTPMethod):
    pass  # My code

Note 1: This only works for enums whose values are strings, as that is the only case JSON schema supports.

Note 2: The resulting validation uses the enum values as the valid choices, as that is what JSON schema can understand.

Importing types from other files

You can import these custom types within your package and they will be picked up. However, due to the static nature of the scan, custom types coming from external packages can't be followed and hence are not supported. In other words, you can only share these types within your package, using relative imports. Other static analysis tools like mypy use a repository of stub files to solve this issue.
This is out of the scope for a tiny project like this, at least for now.

Rules

- The functions you want to scan need to be type annotated. Kind of an obvious requirement, right?
- Only the types defined in the previous section can be used. They are the types that can be safely serialised as JSON.
- Function arguments are meant to be passed in key-value format, like a JSON object. This puts a couple of restrictions regarding *args, **kwargs, positional-only and keyword-only arguments.

The following is allowed:

- **kwargs: def func(**kwargs): pass
- keyword-only arguments: def func(*, a): pass

The following is not allowed:

- *args: def func(*args): pass
- positional-only arguments: def func(a, /): pass
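The enum rule described earlier (enum values become the schema's valid choices) is easy to see by building the choice list from the class itself. This is an illustrative sketch of the mapping, not pytojsonschema's actual output format:

```python
import enum

class HTTPMethod(enum.Enum):
    GET = "GET"
    POST = "POST"
    PATCH = "PATCH"
    DELETE = "DELETE"

# A JSON schema "enum" keyword restricts a string to the members' values
schema = {"type": "string", "enum": [member.value for member in HTTPMethod]}
print(schema)  # {'type': 'string', 'enum': ['GET', 'POST', 'PATCH', 'DELETE']}
```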
https://pypi.org/project/pytojsonschema/1.10.0/
1. Download the latest version of the JDK (Java Development Kit) installer. As of this writing, this is available at.

2. Navigate to the directory where the JDK installer was downloaded and double-click the installer's icon to begin the installation process.

3. Follow the prompts to install the JDK. Make a note of the directory to which the JDK is installed. This directory contains a subdirectory called "bin", which is where the file "javac.exe" (the Java compiler) is stored.

4. In any convenient location, create a new directory named "HelloWorld".

5. In the HelloWorld directory, create a new text file named "HelloWorld.java", containing the following text.

public class HelloWorld
{
    public static void main(String[] args)
    {
        System.out.println("Hello world!");
    }
}

6. Still in the HelloWorld directory, create a new text file named "JavaPathAndProgramNameSet.bat", and enter the following text. Substitute the path of the directory where javac.exe is located for the bracketed text. This is the "bin" directory noted in step 3.

set javaPath="[the directory where javac.exe is located]"
for %%* in (.) do (set programName=%%~n*)

7. Still in the HelloWorld directory, create a new text file named "ProgramBuild.bat", containing the following text.

call JavaPathAndProgramNameSet.bat
%javaPath%\javac.exe %programName%.java
pause

8. Double-click the icon for ProgramBuild.bat. A console window will appear and javac.exe will compile HelloWorld.java.

9. After the compilation is complete, a prompt saying "press any key to continue" will appear. Press a key to close the console window.

10. In the HelloWorld directory, a new file called "HelloWorld.class" should now be present. This file contains the "compiled" program. Note, however, that since Java compiles to "bytecode" rather than native machine language, this program cannot be run directly from the host operating system. Instead, the Java Runtime Environment must be used to run it.

11.
In the HelloWorld directory, create a new text file named "ProgramRun.bat", containing the following text.

call JavaPathAndProgramNameSet.bat
%javaPath%\java.exe %programName%
pause

12. In the HelloWorld directory, double-click the icon for ProgramRun.bat to run it. A console window should appear, and the text "Hello world!" should appear somewhere in it.
https://thiscouldbebetter.wordpress.com/2011/03/13/compiling-a-java-program-from-the-command-line/
Thanks, Thomas. I've actually tried copying over the files and wiping only the indices before, but all it does is keep it from erroring. It won't display any of the data in the repository. Perhaps I'm doing something else incorrectly?

On Wed, Oct 8, 2008 at 10:31 AM, Thomas Müller <thomas.mueller@day.com> wrote:
> Hi,
>
> You will also need to copy some files, specially:
> repository/namespaces/*
> repository/nodetypes/*
>
> Regards,
> Thomas
>
> On Wed, Oct 8, 2008 at 5:19 PM, Peter Mondlock <peter.mondlock@gmail.com> wrote:
>> Hi everyone,
>>
>> I just finished integrating Jackrabbit 1.4 into our webapp system,
>> using MSSQL Server as the PersistenceDB. The company wants to move to
>> a different database setup. I'm testing the new deployment and I'm
>> encountering quite a snag.
>>
>> I figured moving data from the old DB to the new one would be as
>> simple as copying the tables/data across and Jackrabbit would "find"
>> the data and reindex it. It connects fine and seems to be ok until it
>> throws a NoSuchItemStateException.
>>
>> The errors at the top of the stack trace contain two "failed to read
>> bundle: 11de7c7b-5cb0-4c8a-9805-29c0a09bc118:
>> java.lang.IllegalStateException: URIIndex not valid?
>> javax.jcr.NamespaceException: URI for index 11 not registered." Then
>> it tells me that the root node has a missing child.
>>
>> So I turned on consistency check/fix and they produce a bunch of:
>> "Error in bundle" with the same "URI for index 11 not registered"
>> message as above. I'm guessing one for every bundle in the table, from
>> the looks of it. It doesn't seem to want to fix the consistency
>> errors, to my dismay.
>>
>> So it appears that just migrating the data isn't the way to do this
>> and I've been googling for 2 days trying to figure out what I should
>> be doing to make this work to no avail. Any and all advice is greatly
>> appreciated.
>>
>> Thank you,
>>
>> Peter
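Thomas's advice amounts to migrating the file-based registries along with the database tables; the "URI for index 11 not registered" errors come from bundles referencing namespace index numbers that only exist in those files. Assuming a default standalone repository home layout (the paths below are illustrative), the copy looks like:

```shell
# Copy the namespace and node type registries from the old repository
# home into the new one; without these, the per-namespace index numbers
# stored inside the bundles cannot be resolved.
cp -r old_repo_home/repository/namespaces new_repo_home/repository/
cp -r old_repo_home/repository/nodetypes  new_repo_home/repository/
```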
http://mail-archives.apache.org/mod_mbox/jackrabbit-users/200810.mbox/%3Ce2e905560810080900y5aaebe9y43ba91d5d1e5a290@mail.gmail.com%3E
Query Extraction

This documentation isn't up to date with the latest version of Gatsby. Outdated areas are:
- queries in dependencies (node_modules) and themes are now extracted as well
- add meta key for hook in JSON in diagram

You can help by making a PR to update this documentation.

Extracting queries from files

Up until now, Gatsby has sourced all nodes into Redux, inferred a schema from them, and created all pages. The next step is to extract and compile all GraphQL queries from your source files. The entrypoint to this phase is query-watcher extractQueries(), which immediately compiles all GraphQL queries by calling into query-compiler.js.

Query compilation

The first thing it does is use babylon-traverse to load all JavaScript files in the site that have GraphQL queries in them. This produces AST results that are passed to the relay-compiler. This accomplishes a couple of things:

- It informs us of any malformed queries, which are promptly reported back to the user.
- It builds a tree of queries and fragments they depend on, and outputs a single optimized query string with the fragments.

After this step, Gatsby will have a map of file paths (of site files with queries in them) to Query Objects, which contain the raw optimized query text as well as other metadata such as the component path and page jsonName. The following diagram shows the flow involved during query compilation.

Store queries in Redux

Gatsby is now in the handleQuery function. If the query is a StaticQuery, Gatsby will call the replaceStaticQuery action to save it to the staticQueryComponents namespace, which is a mapping from a component's path to an object that contains the raw GraphQL query amongst other things. More details can be found in the doc on Static Queries.

Gatsby also removes a component's jsonName from the components Redux namespace. See Page -> Node Dependencies.
If the query is just a normal every-day query (not a StaticQuery), then Gatsby updates its component's query in the Redux components namespace via the replaceComponentQuery action.

Queue for execution

Now that Gatsby has saved your query, it's ready to queue for execution. Query execution is mainly handled by page-query-runner.ts, so Gatsby accomplishes this by passing the component's path to the queueQueryForPathname function.

Now let's learn about Query Execution.
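The extraction step can be caricatured in a few lines. Gatsby's real implementation parses a full Babel AST and runs the result through relay-compiler, so the regex version below is only an illustration of "find the graphql-tagged templates and pull out the query text":

```javascript
// Illustration only: Gatsby uses a Babel AST parse, not a regex.
const source = [
  'import { graphql } from "gatsby";',
  'export const query = graphql`',
  '  query HomePage { site { siteMetadata { title } } }',
  '`;',
].join('\n');

function extractQueries(src) {
  const queries = [];
  const tagged = /graphql`([^`]*)`/g; // match graphql`...` tagged templates
  let match;
  while ((match = tagged.exec(src)) !== null) {
    queries.push(match[1].trim());
  }
  return queries;
}

console.log(extractQueries(source));
// [ 'query HomePage { site { siteMetadata { title } } }' ]
```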
https://www.gatsbyjs.com/docs/query-extraction/
A common display module that you can buy on the internet contains the TM1638 driver chip. I was interested in this one, which uses the TM1637, a more basic version that can only control a display; the TM1638 can also control LEDs, buttons and two displays at the same time.

This is a common anode 4-digit tube display module which uses the TM1637 driver chip; only 2 connections are required to control the 4-digit 8-segment displays.

Here is the module

Features of the module

- Common anode display with four red LED digits
- Powered by 3.3V/5V
- The four-digit tube display module is driven by the TM1637 IC
- Can be used with Arduino devices; two signal lines let the MCU control four 8-segment digits. The brightness of the 8 segments is adjustable

Here is how to hook the module up. The good news is this worked with my LOLIN32 at 3.3V.

Schematic

Code

There is a library for this IC; as usual there is a built-in example, but here is a simple sketch:

#include <TM1637Display.h>

const int CLK = A13; //Set the CLK pin connection to the display
const int DIO = A12; //Set the DIO pin connection to the display

int numCounter = 0;

TM1637Display display(CLK, DIO); //set up the 4-digit display

void setup()
{
  display.setBrightness(0x0a); //set the display to maximum brightness
}

void loop()
{
  for(numCounter = 0; numCounter < 1000; numCounter++) //iterate numCounter
  {
    display.showNumberDec(numCounter); //display the numCounter value
    delay(1000);
  }
}

Links

TM1637 Red Digital Tube LED Display Module & Clock for Arduino LED
https://www.esp32learning.com/code/esp32-and-tm1637-7-segment-display-example.php
Debugger: Undefined command "bb" trying to use Locals and Expressions

Hello,

I can start my application and put breakpoints, but I am not able to see anything in the 'Locals and Expressions' window. I can see in the Debugger Log window that during the process start sequence there is a 'bb ...' command that gives us the "undefined command" message. If I try to Insert New Expression Evaluator I get the same error:

DUMPER FAILED:60^error,data={msg="Undefined command: \"bb" Try "help"."}

I have been looking for the bb command in the GDB command list, but I cannot find it. Is bb really a GDB command? Is there another interpreter between Qt Creator and GDB?

QtCreator 2.8.0, based on Qt 4.8.4, running on Ubuntu 12.04, using GDB 7.5.1.

Thanks and best regards,
Jose

Hello,

As I understand it, Qt Creator calls *dumpers* to visualize the variables and expressions. At least in my installation they seem to be executed through Python. I suppose the undefined bb is part of the dumper commands.

I did not pay attention to previous messages in the Debugger Log window. Going back, before the bb error there are earlier errors related to dumpers and Python:

<24-interpreter-exec console "python execfile('/opt/qt4/share/qtcreator/dumper/gbridge.py')"
<25importPlainDumpers
&"Traceback (most recent call last):\n"
&" File "<string>", line 1, in <module>\n"
&" File "/opt/qt4/share/qtcreator/dumper/gbridge.py", line 9, in <module>\n"
&" import subprocess\n"
&"ImportError: No module named subprocess\n"
&"Error while executing Python code.\n"
24^error,msg="Error while executing Python code."
&"importPlainDumpers\n"
&"Undefined command: "importPlainDumpers". Try "help".\n"
25^error,msg="Undefined command: \"importPlainDumpers". Try "help"."

So Python is failing. Any clue?

Best regards

Hello,

So, we went to pythonland. Here, that is Ubuntu 12.04.
Python version 2.7.3.

If I execute execfile('/opt/qt4/share/qtcreator/dumper/gbridge.py') inside Python from a terminal window, it finds the subprocess module without problem.

If I add "print sys.version_info" as an additional start-debugger command in Qt Creator, it shows me Python 2.7.3. (For that printing I added an 'import sys' command, which worked.) But if I add an 'import subprocess' ... it is not found.

Why does the Python that Qt Creator executes not find the subprocess module? Where is it being disabled?

Thanks and best regards

Hello,

Problem found. We have TWO pythonlands.

Doing a "print sys.path" at Qt Creator debugger start, we get a totally different path from doing it in a terminal window. We are starting Qt Creator using a special script for an embedded target. That script changes the PYTHONHOME variable and redirects it to another, different Python with the same version number ... but different default modules. (Our) Qt Creator uses that Python to call the dumpers that show the variables' values.

To test which Python you are using, add these commands to the Debugger options:

python
import sys
print sys.version_info
print sys.path
end

Then you can see the values in the Debugger Log window.

Thanks and best regards

The second section of this post will help:

Hello,

I have the same problem as reported above. I get this error message when using the debugger in Qt Creator:

&"Traceback (most recent call last):\n"
&" File "<string>", line 1, in <module>\n"
&" File "/home/dirk/Sitara/Qt5.1.1/Tools/QtCreator/share/qtcreator/dumper/gbridge.py", line 9, in <module>\n"
&" import subprocess\n"
&"ImportError: No module named subprocess\n"
&"Error while executing Python code.\n"
1302^error,msg="Error while executing Python code."
&"importPlainDumpers\n"
&"Undefined command: "importPlainDumpers". Try "help".\n"
1303^error,msg="Undefined command: \"importPlainDumpers". Try "help"."

I am using Qt 5.1.1 (Qt Creator) and the TI SDK 6.00 for cross-compiling to the AM335x EVM.
My Python version is, according to "print sys.version_info", 2.7.3. My debugger is an "arm-linux-gnueabihf-gdb", version 7.5.

Can someone help? I didn't figure out how to solve it.

Thanks in advance,
Dirk

@Nazb2: The problem here is that your Python installation does not have the 'subprocess' module, which is used by the LLDB backend and, on the GDB side, only by one of the advanced dumper features (displaying vector data in gnuplot). Since 3.0 beta, Qt Creator should silently disable the gnuplot feature when the subprocess module is not found; prior to that, you can do it manually by removing or commenting out the offending 'import subprocess' line.

Ok, thanks andrep, works fine for me.
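The diagnostic in this thread, comparing the terminal's interpreter with the one Qt Creator launches for its dumpers, boils down to printing three pieces of state. The sketch below is a standalone Python 3 version of the commands quoted above (the forum posts use Python 2 print statements):

```python
import importlib.util
import os
import sys

# The three values that distinguish the two "pythonlands":
print(sys.version_info)                         # interpreter version
print(sys.path)                                 # module search path
print(os.environ.get("PYTHONHOME", "<unset>"))  # the variable the startup script overrode

# Quick check that a stdlib module is importable in this interpreter
assert importlib.util.find_spec("subprocess") is not None
```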
https://forum.qt.io/topic/31649/debugger-undefined-command-bb-trying-to-use-locals-and-expressions/1
This plugin
- rearranges (reorders) class and class member declarations according to a user-specified order and/or method call hierarchy;
- generates section-marking comments; and
- provides several controls for spacing within and between methods and classes.

Version 2.1 adds the following features:

1) A default configuration is loaded when the plugin is first installed. An additional "Configuration" pane allows the user to clear the configuration, reload the default configuration, load the configuration from a file, or save it to a file. (Like other settings changes, 'Cancel' will discard these changes.) If you don't like the default configuration provided, it could be replaced in the plugin.jar file. The file structure of the configuration is just the <![CDATA[]]> element of the IDEA configuration file, although you can also load a configuration from such a file; the enclosing <![CDATA[ element is ignored.

2) Live Rearranger feature providing manual rearrangement capability. It is activated from the Edit menu, or with the default keystroke Ctrl-comma. This pops up a window containing a tree view of the file structure (file, classes, class contents such as fields, methods, and inner classes.) You can drag and drop elements at the same level (with the same parent); this causes them to be reordered. Close the popup window by clicking outside or with any keystroke. If you hold Ctrl-comma until the dialog appears, perform at least one rearrangement, and then release Ctrl-comma, the dialog will close; if you release Ctrl-comma at any point before a rearrangement is made, the dialog will remain open.

Three features are not yet implemented:
- autoscrolling
- automatic expansion of the node containing the current cursor location
- simplification of the tree by removing top-level nodes with only one child (e.g., if the file contains only one class, that class will not be shown in the tree, only its children.)

Thanks to Keith Lea for the request & design details.
Bug fixes:
- "Force blank lines" spacing options are now properly saved and restored. You should recheck your settings.
- Force Spacing logic improved to handle nested anonymous classes. Special cases provided for the last method of a class (ignore "Force Spacing After Method Close Brace") and for the last class in a file (place one newline character after the final brace.)
- Fixed exception in spacing logic. Thanks to Bas Leijdekkers for the report.

I have uploaded this to the plugin manager site but not to the wiki site. (Going out of town for several days; will do it when I get back.) Please let me know if you have any problems or requests.

-Dave

Additional planned plugin features:
1) ability to rearrange entire directories from the project tree.
2) Rearrange a selection, not the entire file.
3) Progress bar.

I'm using this plugin in combination with Tabifier. It seems that every time I run it, extra blank lines are added to certain block delimiters. For example:

// - GETTER / SETTER METHODS -
// - OTHER METHODS -

continuously add blank lines to their sections. Additionally, the

// - INNER CLASSES -

delimiter is added on every invocation.

These behaviors are present when using the default configuration.

Tony

On Wed, 25 Feb 2004 00:10:47 +0000 (UTC), Dave Kriewall wrote:
>This plugin
>- rearranges (reorders) class and class member declarations according to a user-specified order and/or method call hierarchy;
>- generates section-marking comments; and
>- provides several controls for spacing within and between methods and classes.
>snip
>3) Progress bar.

Great work! Dave.
And another request is can u set an option to just layout the public class. many times i use some simple inner class and i don't want them to be populated with structure comments, and most of the times i don't want them to be rearranged. at least please add an option that disable the structure comments for inner classes. Thanks for the suggestions, tc. As a temporary workaround for the inner class comment problem, it might work to go to the General panel, and press the "Generate" global pattern button. This creates a regular expression that matches all generated comments; these should be removed next time you run the rearranger. I will look into adding keyboard support for the live rearranger popup window. Yes, I will add an option to avoid recursive rearrangement of inner classes. (Until just a few versions ago, the plugin couldn't rearrange inner classes.) -Dave Dave, sorry to post on here, don't have an email for you. The following gif shows a bug in rearranger where it considers my getName() method to be other, I'm guessing because there is a toUppercase() call in it, but that should not be the case. Thanks R Attachment(s): rearranger.gif Robert S. Sfeir wrote: Never mind just saw the setting to tell it return one line makes it a getter. Thanks R Robert S. Sfeir wrote: Hum... seems setting even ignore body as long as it has get in the method name still doesn't pull this method into its own group. Further if you keep rearranging (ctrlaltshift+r), it keeps adding a space after the //--- OTHER - // comment. Thanks R Yep, ctrl-shift-alt-r shows the same behaviour Cheers Tony On Wed, 25 Feb 2004 04:04:18 +0000 (UTC), Dave Kriewall <no_mail@jetbrains.com> wrote: Robert, I'll take a look at the getter bug you described. That problem with the // -- OTHER comment which you and others have noticed is due to one or more blank lines preceding the comment. 
When I remove the comment, the blank lines before and after the comment become adjacent and consecutive, and are viewed by the parser as belonging to the subsequent item (method or field). Then when the comment is regenerated, it is emitted in front of the newly grouped blank lines. In effect it moves any blank lines before the comment to after the comment. And to make matters worse, it introduces one more newline character before the comment, to perpetuate the error. :)

So, I'll retract my original suggestion for workaround. I think there are only two workarounds: 1) remove blank lines before the generated comments. 2) delete the comments from the configuration! :)

I'm out of town until Tuesday but will try to have all these issues fixed by mid next week. If the current version (2.1) of the plugin is too annoying, version 2.0 is still available at the wiki site. ()

-Dave

Dave Kriewall wrote:

Dave, I can only speak for myself, but I couldn't even consider going back to 2.0 now. I didn't expect it, but the live drag-n-drop rearranging has turned out to be a killer feature for me! Thanks so much for all your hard work on this plugin - I'm developing with even more pleasure because of it.

There seems to be a problem with the toolbar icons in the live rearrange popup. They just show up as tiny (1x1 pixel) squares for me. I've attached a screenshot showing the problem.

Finally, a suggestion: a bit more visual feedback while dragging methods/fields would be good. I'm thinking something like the Mozilla bookmarks manager, with a line showing where the dropped item will be inserted. I wrote some code a few years ago that did this; I can resurrect it and send it to you if you think it will help.

Thanks again, Vil.
-- Vilya Harvey vilya.harvey@digitalsteps.com / digital steps / (W) +44 (0)1483 469 480 (M) +44 (0)7816 678 457 Attachment(s): screenshot.png A suggestion for a usability enhancement from a dumb user ;) I've been having problems using this plugin for a while - it never seemed to arrange things in the way I wanted - but I've not had a chance to properly work out why 'til now. What was happening was that I'd always end up with some methods at the end of my file no matter what I did. Then I realised what it was - I'd set up all the conditions for special methods, e.g. abstract, getter/setters, inner classes etc. but that everything else that didn't satisfy these conditions was sent to the end of the file. Once I set up an item for non-specific methods it all worked beautifully. So in essence there is an implicit item in the list "All other methods" which is always at the end, but invisible. I was just thinking it might stop others getting confused like me if an explicit undeletable element was added for "All other methods"; it would also be more convenient for the user to have this already defined as it would be one less entry to construct and is always used. Maybe if I worked from the default layout that is now provided I wouldn't have got this problem, and because of that you might not want to bother, but I just thought I'd suggest it. Cheers again for the great plugin! :) N. Robert, It occurred to me that perhaps you set the default getter/setter definition but did not change the definition in your rule. When you create a rule, I copy the default g/s definition into it. Subsequent changes to the default g/s definition don't affect existing rules. (This is probably bad -- I should really have a flag or something that indicates whether to use the global definition or the rule's overriding definition.) 
Anyway, could you either 1) set your default g/s definition to "ignore body," delete your method-matching rule, and recreate it; or 2) go to your method-matching rule's getter/setter definition and ensure that it says "ignore body." Thanks, -Dave You're right; all unmatched items match a default rule (which matches everything - fields, methods, inner classes.) The default rule shows up if you select "confirm rearrangement" on the General options pane and "Show Rules". (I hoped this would be a good diagnostic tool.) Anyway, you would see all your unmatched items showing up there. So perhaps I should add an undeleteable "default rule" entry ("matches everything") with a fixed priority of zero. For existing configurations, I can automatically add it to the end of the list of rules with the next version of the plugin. The default rule could appear anywhere in the list, i.e. it is moveable; but must appear somewhere. Then you could even do fancy stuff like put the default stuff somewhere besides the end. Any objections? Only one I can think of is a little more clutter to the rules list. -Dave Hi Vil, Wow - those icons are so small they hardly showed up on my screen. I suppose you only found them because a tooltip appeared? :) I had plans to use those icons on the existing "confirm before rearrangement" dialog (instead of the clunky checkboxes); now I'm glad I didn't get it done. I haven't any idea why they would be so small. They're just a JCheckbox with an icon assigned. Maybe there's some size attribute I need to set. If I can't reproduce it (which is likely), could I give you a test version? Yes, if you can send me some old code that illustrates the 'bookmark manager' behaviour you described, I'll try to add it. Java D&D offers the ability to show an Image with the cursor, but it isn't supported on Windows (and perhaps other platforms). So I abandoned that approach. But I agree, the D&D feedback is minimal. 
Thanks, -Dave Dave Kriewall wrote: I was wondering what those strange dots were, then I found that if I had my mouse pointer in exactly the right place a tool-tip came up... :) Most likely the icon-loading code (are you using ImageIcons?) is failing to find them on my system, for some reason, but I'm not sure why that would be happening. Will do. It may not be pretty... ;) Vil. -- Vilya Harvey vilya.harvey@digitalsteps.com / digital steps / (W) +44 (0)1483 469 480 (M) +44 (0)7816 678 457 Hey, I didn't even see this released until just now, I thought you were still working on Live Rearrange. It works just like it should so far, it works great. It seems like you got the modality and stuff problems fixed. Thanks for implementing this for me! :) Where I can get this 2.1 version? I can only see version 2.0 at the wiki pages. Is my proxy playing tricks with me? Hi Otto, Use the Plugin Manager in IDEA 4.0 (under File...Settings...IDE Settings), or go to. I had to rush out of town last Wednesday and didn't have time to upload 2.1 to the wiki site. Should be a newer version in both places in a day or two. -Dave Hi, First, thanks for the great plugin! It's exactly what I have wished to see in IDEA for a long time. Some suggestions: Could you support multiselection in the Class Member Order and similar lists? That would make it faster to edit the configuration. Could it be possible to enforce some empty lines before and after a separator comment when it is created by the rearranger? If I include the empty lines in the separator comment itself then it will not match my old comments that do not always have empty lines around them, and thus the comments will be inserted twice. It would also be nice if extra empty lines adjacent to added separator comments would be deleted. Perhaps just add a setting for number of empty lines to enforce before and after separator comments. 
Or alternatively, allow the user to specify the number of preceding and trailing empty lines around each separator comment (more control, but also more settings to edit).

Dave Kriewall wrote:

I've resurrected that code and checked that it still works. It was written for JDK 1.3, so it doesn't take advantage of any of the DnD simplifications that 1.4 introduced (in fact, the bulk of the code is for providing a simpler DnD API). Hopefully you'll be able to find something useful in it. If you email me directly, I'll send the code straight to you rather than clutter the list with it.

No worries - glad to be giving something back!

Vil.
--
Vilya Harvey
vilya.harvey@digitalsteps.com / digital steps /
(W) +44 (0)1483 469 480 (M) +44 (0)7816 678 457

1) Regarding multiselection support, are you planning to move up/down or delete a set of rules? "Add" and "Edit" (and maybe "Duplicate") seem out of place with more than one rule selected; perhaps I should gray them out.

2) In working out the separator comment bugs reported above, I arrived at a solution similar to what you are suggesting. Here's what will happen:

- When looking for existing comments, disregard empty lines in the text and in the comment. When an existing comment is found, remove it and all adjacent blank lines. (This prevents comments from being duplicated just because of differences in blank lines.) Specifically, anything matching the pattern "\n* <separator comment> \n*" will be removed.

- Insert the new comment text, including any blank lines specified. There was a bug that prevented blank lines from being saved in a comment. That's fixed (in 2.2), so now you can specify any number of preceding or trailing blank lines just by putting them in the comment text.

- I'm not planning on removing "internal" blank lines; so a comment like

// first line of separator comment; next is blank

// third line of separator comment

will have to match exactly.
-Dave > 1) Regarding multiselection support, are you planning > to move up/down or delete a set of rules? "Add" and > "Edit" (and maybe "Duplicate") seem out of place with > more than one rule selected; perhaps I should gray > them out. Yes, I'm planning to move rules. Greying out non-applicable commands on multiselection sounds reasonable. > 2) In working out the separator comment bugs reported > above, I arrived at a solution similar to what you > are suggesting. Here's what will happen: <snip> Sounds good to me! Hi Vil, please check out the toolbar icons in version 2.2. The case did not exactly match that of the icon .png resources in the plugin jar file. So I am guessing that it worked for me on Windows because case is ignored in filenames, but failed for you on a Linux or other "case sensitive filename" platform. If they still don't work, I added some special log messages. Let me know and I'll give you the details. -Dave Dave Kriewall wrote: I doubt that was the problem, since I'm on Windows 2000 SP2. :-/ I can now see the "Show Parameter Types" icon. The others all seem to be the correct size, but are completely invisible (i.e. as if transparency was set to 100%). The mouse-over highlighting appears for them though. I extracted the icons from rearranger.jar and wrote a little program to view them. All of them displayed correctly in my little test program (using the ImageIcon class to load the icons), so I can rule out video driver issues. What are the log messages I should be looking out for? Vil. -- Vilya Harvey vilya.harvey@digitalsteps.com / digital steps / (W) +44 (0)1483 469 480 (M) +44 (0)7816 678 457. It puzzles me that you can see one of the icons but not the rest. Only thing that could really be different between them at the .png level is transparency, but even then you should see something. I suppose you could try substituting the icon that you can see for the ones you can't in the rearranger.jar file. 
If it appears in every position, then it must be a problem with the creation of the icons themselves. Please also try a normal (automatic) rearrangement with "confirm before rearranging" option (general pane) set. I've just replaced all those checkboxes with the icons. Let me know if you see any of those. Very weird.. Thanks, -Dave Dave Kriewall wrote: >. Have done this, and was able to fix the problem as a result! You were right about it being to do with case sensitivity in the file names: the png files in the jar had different combinations of capitalisation for their names and their extension (i.e. the "Show Rules" icon was Showrules.PNG in the jar, whereas the "Show Parameter Types" icon - the one that worked - was ShowParamTypes.png). I changed the file names so that all of them were camel-case with a lower-case ".png" at the end and recreated the jar; now all of them appear as expected. I suspect that you're running the plugin from a directory, rather than a jar file...? Looks like file names in jars are case sensitive, regardless of whether the local file system is or not. Cheers! Vil. -- Vilya Harvey vilya.harvey@digitalsteps.com / digital steps / (W) +44 (0)1483 469 480 (M) +44 (0)7816 678 457
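Dave's comment-removal rule from earlier in the thread (strip the separator comment together with any blank lines around it, then re-emit it) can be sketched with java.util.regex. This is an illustration of the idea, not the plugin's actual code:

```java
import java.util.regex.Pattern;

public class SeparatorStrip {
    public static void main(String[] args) {
        String text = "int a;\n\n\n// -- OTHER METHODS --\n\nvoid foo() {}\n";

        // Remove the separator comment plus all adjacent blank lines, so
        // regenerating the comment cannot accumulate extra newlines.
        Pattern separator = Pattern.compile("\\n*// -- OTHER METHODS --\\n*");
        String cleaned = separator.matcher(text).replaceAll("\n");

        System.out.print(cleaned);  // prints: int a; / void foo() {} on two lines
    }
}
```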
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206786405--ANN-Rearranger-plugin-new-version-2-1
Hi Peter / Prashant, It's my understanding that I can use the full name everywhere or put an import statement at the top. However, when I put an import statement in Intellij it goes gray which indicates it's an unused import statement. Anyway, I replaced all class references with their full name. I did a negative lookbehind regular expression search in my source to see there were no occurrences of MyAppletExtension which were not com.trilogycomms. MyAppletExtension. The only occurrence is public class MyAppletExtension extends JApplet However I still get that message that it is not found. I also looked at my launch applet code and I was using MyAppletExtension.class so I qualified that as above without result. Prashant, I have not been able to figure out how to get Intellij to produce a build script, I wrote my own. Thanks, Tom. -----Original Message----- From: Peter Reilly [mailto:peter.kitt.reilly@gmail.com] Sent: 30 October 2006 20:59 To: Ant Users List Subject: Re: Package prefix with ant build & ClassNotFoundException The package of the class has changed, therefore the classname has changed from ClassName to package.ClassName, therefore the applet needs to be changed to refer to the new classname. Peter > Tom Corcoran wrote: > > I am using IntelliJ and have added a package prefix to my swing project. > > If my prefix is "com.mycompany", this means I can have my source in > > <src> rather than <src/com/mycompany>. At the moment I can't make the > > physical change due to source control issues. > > > > > > > > An ant build works fine but when I run the applet I get a console > > java.lang.ClassNotFoundException MyAppletExtension > > (where MyAppletExtension extends JApplet). > > > > > > > > I am assuming it can't find the class because of the prefix, but maybe I > > am wrong? > > > > > > > > The package prefix is contained in a Intellij inc file. What do I need > > to change in the build file (see below) so it can find the classes? 
> > > > > > > > Thanks a lot, > > > > > > Tom. > > > > > > > > Here's the ant parameters: > > > > <property name="src" location="src"/> > > <property name="build" location="classes"/> > > <property name="dist" location="jar"/> > > > > with targets : > > > > <target name="compile"> > > <javac srcdir="${src}" destdir="${build}"/> > > </target> > > > > <target name="dist" depends="compile"> > > <jar jarfile="${dist}/MyApplet.jar" basedir="${build}"/> > > </target> --------------------------------------------------------------------- To unsubscribe, e-mail: user-unsubscribe@ant.apache.org For additional commands, e-mail: user-help@ant.apache.org
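As Peter points out, adding the package prefix changes the class name the applet must be launched with. A hypothetical applet declaration illustrating the fix (the prefix "com.mycompany", jar name, and dimensions are illustrative, not taken from the thread):

```html
<!-- Hypothetical example: after adding the package prefix "com.mycompany",
     the code attribute must name the fully qualified class. -->
<applet code="com.mycompany.MyAppletExtension"
        archive="MyApplet.jar"
        width="400" height="300">
</applet>
```

The jar task in the posted build file packages ${build} wholesale, so as long as javac writes the compiled classes into ${build}/com/mycompany (which it does when destdir is set and the sources declare the package), only the applet declaration should need updating.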
http://mail-archives.apache.org/mod_mbox/ant-user/200611.mbox/%3C99A439FA0587124D93BF64A2D1C544ED3D1308@AARDVARK2.trilogycomms.local%3E
04 May 2004 The SCWCD certification exam can be taken only by Sun Certified Programmers for Java 2 platform. The SCWCD certification consists of 13 main objectives dealing with servlets as well as JSP pages, using JavaBeans components in JSP pages, developing and using custom tags, and dealing with some important J2EE design patterns. Each chapter contains mock questions in the pattern of the SCWCD exam. These questions demonstrate the use of the ideas covered in that objective. Explanations about the correct and incorrect choices are included to give you a better understanding. HTTP methods The HTTP methods indicate the purpose of an HTTP request made by a client to a server. The four most common HTTP methods are GET, POST, PUT, and HEAD. Let's look at the features of these methods and how they are triggered. GET method The GET method is used to retrieve a resource (like an image or an HTML page) from the server, which is specified in the request URL. When the user types the request URL into the browser's location field or clicks on a hyperlink, the GET method is triggered. If a <form> tag is used, the method attribute can be specified as " GET " to cause the browser to send a GET request. Even if no method attribute is specified, the browser uses the GET method by default. We can pass request parameters by having a query string appended to the request URL, which is a set of name-value pairs separated by an "&" character. For instance: http://server/App/StudentServlet?studname=Tom&studno=123 Here we have passed the parameters studname and studno, which have the values "Tom" and "123" respectively. Because the data passed using the GET method is visible inside the URL, it is not advisable to send sensitive information in this manner. The other restrictions for the GET method are that it can pass only text data and not more than 255 characters. POST method The purpose of the POST method is to "post" or send information to the server.
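The query-string format above can be reproduced in plain Java. This sketch (class and method names are my own, not from the article) uses java.net.URLEncoder, which applies the same encoding a browser uses when submitting a GET form:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class QueryString {
    // Build a GET query string from name-value pairs, URL-encoding each part.
    public static String build(String[][] params) {
        StringBuilder sb = new StringBuilder();
        try {
            for (String[] p : params) {
                if (sb.length() > 0) sb.append('&');
                sb.append(URLEncoder.encode(p[0], "UTF-8"))
                  .append('=')
                  .append(URLEncoder.encode(p[1], "UTF-8"));
            }
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // UTF-8 is always available
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Reproduces the studname/studno example from the text.
        System.out.println(build(new String[][]{{"studname", "Tom"}, {"studno", "123"}}));
    }
}
```

URLEncoder also handles characters that are not allowed in URLs; a space inside a value, for example, is encoded as "+".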
It is possible to send an unlimited amount of data as part of a POST request, and the type of data can be binary or text. This method is usually used for sending bulk data, such as uploading files or updating databases. The method attribute of the <form> tag can be specified as " POST " to cause the browser to send a POST request to the server. Because the request parameters are sent as part of the request body, they are not visible as part of the request URL, unlike the case with the GET method. PUT method The PUT method adds a resource to the server and is mainly used for publishing pages. It is similar to a POST request, because both are directed at server-side resources. However, the difference is that the POST method causes a resource on the server to process the request, while the PUT method associates the request data with a URL on the server. The method attribute of the <form> tag can be specified as " PUT " to cause the browser to send a PUT request to the server. HEAD method The HEAD method is used to retrieve the headers of a particular resource on the server. You would typically use HEAD for getting the last modified time or content type of a resource. It can save bandwidth because the meta-information about the resource is obtained without transferring the resource itself. The method attribute of the <form> tag can be specified as " HEAD " to cause the browser to send a HEAD request to the server. A PUT request is dispatched to the servlet's doPut() method. Servlet lifecycle The servlet lifecycle consists of a series of events, which define how the servlet is loaded and instantiated, initialized, how it handles requests from clients, and how it is taken out of service. The getParameter() method returns a single value of the named parameter. For parameters that have more than one value, the getParameterValues() method is used. The getParameterNames() method is useful when the parameter names are not known; it gives the names of all the parameters as an Enumeration.
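A sketch of how these parameter methods might be used together in a doPost() method. The servlet and parameter names here are hypothetical, and the snippet assumes the javax.servlet API is on the classpath; it cannot run outside a servlet container:

```java
import java.io.IOException;
import java.util.Enumeration;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class StudentServlet extends HttpServlet {
    protected void doPost(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        String name = req.getParameter("studname");              // single-valued parameter
        String[] locations = req.getParameterValues("location"); // all values, or null if absent
        Enumeration names = req.getParameterNames();             // every parameter name
        // ... use the values to build the response ...
    }
}
```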
For instance: This method sets the content type of the response that is sent to the client. The default value is "text/html." The getWriter() method returns a PrintWriter object that can send character text to the client. The getOutputStream() method returns a ServletOutputStream suitable for writing binary data in the response. Either of these methods can be used to write the response, but not both. If you call getWriter() after calling getOutputStream() or vice versa, an IllegalStateException will be thrown. Redirecting requests It is possible to send a temporary redirect message to the browser, which directs it to This method can accept relative URLs; the servlet container will convert the relative URL to an absolute URL before sending the response to the client. If this method is called after the response is committed, IllegalStateException is thrown. The forward() method allows a servlet to do some processing of its own before the request is sent to another resource that generates the response. The forward() method should not be called after the response is committed, in which case it throws IllegalStateException. Object attributes A servlet can store data in three different scopes: request, session, and context. Data is stored as key value pairs, where the key is a String object and the value is any object. These data objects are called attributes. The attribute values persist as long as the scope is valid. The ServletRequest, HttpSession(), and ServletContext() methods provide the following methods to get, set, and remove attributes: The attributes set within the request scope can be shared with other resources by forwarding the request. However, the attributes are available only for the life of the request. A servlet can share session attributes with other resources that are serving a request for the same client session. The attributes are available only while the client is still active. 
The context scope is common for all the resources that are part of the same Web application, so the objects stored within a context can be shared between all these resources. These are available throughout the life of the Web application. Sample questions 2 Question 1: You need to create a database connection in your application after reading the username, password, and database server URL from the deployment descriptor. Which will be the best place to do this? Choices: • A. Servlet constructor • B. init() method • C. service() method • D. doGet() method • E. doPost() method Correct choice: • B Explanation: The init() method is invoked once and only once by the container, so the creation of the database connection will be done only once, which is appropriate. The service(), doGet(), and doPost() methods might be called many times by the container. The username, password, and URL are to be read from the deployment descriptor. These initialization parameters are contained in the ServletConfig object, which is passed to the init() method. That is why we need to use the init() method instead of the constructor for this purpose, even though the constructor is also called only once. Question 2: A user can select multiple locations from a list box on an HTML form. Which of the following methods can be used to retrieve all the selected locations? • A. getParameter() • B. getParameters() • C. getParameterValues() • D. getParamValues() • E. None of the above Correct choice: • C Explanation: The getParameterValues() method returns an array of String objects containing all the values of the named parameter, making it the right choice when multiple locations can be selected. The getParameter() method returns only a single value, and there are no getParameters() or getParamValues() methods in the servlet API. Application structure A Web application exists in a structured hierarchy of directories, which is defined by the Java Servlet Specification. The root directory of the Web application contains all the public resources, such as images, HTML pages, and so on, stored directly or within subfolders. A special directory called WEB-INF exists, which contains any files that are not publicly accessible to clients.
A META-INF directory will be present in the WAR file, which contains information useful to Java Archive tools. This directory must not be publicly accessible, though its contents can be retrieved in the servlet code using the getResource and getResourceAsStream calls on the ServletContext interface. Deployment descriptor The deployment descriptor must be a valid XML file, named web.xml, and placed in the WEB-INF subdirectory of the Web application. This file stores the configuration information of the Web application. The order in which the configuration elements must appear is important and is specified by the deployment descriptor DTD, which is available from java.sun.com (see Resources ). The root element of the deployment descriptor is the <web-app> element; all other elements are contained within it. <servlet-name>: The servlet's unique name within the Web application is specified by the <servlet-name> element. The clients can access the servlet by specifying this name in the URL. It is possible to configure the same servlet class under different names. <servlet-class>: The fully qualified class name used by the servlet container to instantiate the servlet is specified by the <servlet-class> element. The following code demonstrates the use of the <servlet> element within the deployment descriptor: <servlet> <servlet-name> TestServlet </servlet-name> <servlet-class> com.whiz.TestServlet </servlet-class> <init-param> <param-name>country</param-name> <param-value>India</param-value> </init-param> </servlet> Servlet mappings In some cases, it might be required to map different URL patterns to the same servlet. For this, we use the <servlet-mapping> element. The container uses the following rules to select the servlet for a request: 1. The container will first try to find an exact match of the request path to the path of a servlet. 2. The container will recursively try to match the longest path-prefix. This is done by stepping down the path tree a directory at a time, using the "/" character as a path separator. The longest match determines the servlet selected. 3. An extension is defined as the part of the last segment after the last "."
character. If the last segment in the URL path contains an extension (for instance, .jsp), the servlet container will try to match a servlet that handles requests for the extension. 4. If none of the previous three rules results in a servlet match, the container will attempt to serve content appropriate for the resource requested. If a "default" servlet is defined for the application, it will be used. <servlet-mapping> <servlet-name>servlet1</servlet-name> <url-pattern>/my/test/*</url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>servlet2</servlet-name> <url-pattern>/another </url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>servlet3</servlet-name> <url-pattern>*.tst </url-pattern> </servlet-mapping> A string beginning with a "/" character and ending with a "/*" postfix is used for path mapping. If the request path is /my/test/index.html, then servlet1 is invoked to handle the request. Here the match occurs as was described in step 2 above. If the request path is /another, then servlet2 services the request. Here the matching occurs as was described in step 1 above. But when the path is /another/file1.tst, servlet3 is chosen. This is because the URL mapping for servlet2 requires an exact match, which is not available, so the extension mapping as described above in step 3 is chosen. You also saw how the servlet details like name, class, and initialization parameters are specified in the deployment descriptor. Finally, you learned about how to specify mappings between URL patterns and the servlets to be invoked. Sample questions 3 Question 1: Which of the following are not child elements of the <servlet> element in the deployment descriptor? • A. <servlet-mapping> • B. <error-page> • C. <servlet-name> • D. <servlet-class> • E.
<init-param> Correct choices: • A and B Explanation: The <servlet-name> element defines the name for the servlet, and the <servlet-class> element specifies the Java class name that should be used to instantiate the servlet. The <init-param> element is used to pass initialization parameters to the servlet. The <servlet-mapping> element is used to specify which URL patterns should be handled by the servlet. The <error-page> element can be used to specify the error pages to be used for certain exceptions or error codes. Question 2: Which of the following requests will not be serviced by MyServlet (assume that the Web application name is test)? <servlet-mapping> <servlet-name> MyServlet </servlet-name> <url-pattern> /my/my/* </url-pattern> </servlet-mapping> • A. /test/my/my/my • B. /test/my/my/a/b • C. /test/my/my/a.jsp • D. /test/my/a.jsp • E. /test/my/my.jsp Correct choices: • D and E Explanation: To match a request URL with a servlet, the servlet container identifies the context path and then evaluates the remaining part of the request URL with the servlet mappings specified in the deployment descriptor. It tries to recursively match the longest path by stepping down the request URI path tree a directory at a time, using the "/" character as a path separator, and determining if there is a match with a servlet. If there is a match, the matching part of the request URL is the servlet path and the remaining part is the path info. In this case, when the container encounters any request with the path "/test/my/my," it maps that request to MyServlet. In choices A, B, and C, this path is present, hence they are serviced by MyServlet. Choices D and E do not have this complete path, so they are not serviced. Context There is one instance of the ServletContext interface associated with each Web application deployed into a servlet container. If the container is distributed over multiple JVMs, a Web application will have an instance of the ServletContext for each VM.
The servlet context is initialized when the Web application is loaded, and is contained in the ServletConfig object that is passed to the init() method. Servlets extending the GenericServlet class (directly or indirectly) can invoke the getServletContext() method to get the context reference, because GenericServlet implements the ServletConfig interface. The following code specifies the name of the company as the context parameter: <context-param> <param-name>CompanyName</param-name> <param-value> IBM </param-value> <description> Name of the company </description> </context-param> We can access the value of the CompanyName parameter from the servlet code as follows: String name=getServletContext().getInitParameter("CompanyName"); ServletContextListener Implementations of the ServletContextListener interface receive notifications about changes to the servlet context of the Web application of which they are part. The following methods are defined in the ServletContextListener interface: contextInitialized(), which is invoked when the Web application is ready to process requests, and contextDestroyed(), which is invoked when the servlet context is about to be shut down. ServletContextAttributeListener The ServletContextAttributeListener interface can be implemented to receive notifications of changes to the servlet context attribute list. The following methods are provided by this interface: attributeAdded(), attributeRemoved(), and attributeReplaced(), each of which receives a ServletContextAttributeEvent. HttpSessionAttributeListener We can store attributes in the HttpSession object, which are valid until the session terminates.
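A minimal listener implementation might look like the following sketch. The class name matches the com.whiz.MyServletContextListener registered in the <listener> example; the log messages are my own, and the javax.servlet API is assumed:

```java
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class MyServletContextListener implements ServletContextListener {
    public void contextInitialized(ServletContextEvent sce) {
        // Called once, when the Web application is ready to serve requests.
        sce.getServletContext().log("Application started");
    }

    public void contextDestroyed(ServletContextEvent sce) {
        // Called once, just before the context is shut down.
        sce.getServletContext().log("Application stopping");
    }
}
```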
The HttpSessionAttributeListener interface can be implemented in order to get notifications of changes to the attribute lists of sessions within the Web application: it defines the attributeAdded(), attributeRemoved(), and attributeReplaced() methods, each of which receives an HttpSessionBindingEvent. The <listener> element has only one <listener-class> sub-element whose value is specified as the fully qualified class name of the listener class as shown in the following code: <listener> <listener-class>com.whiz.MyServletContextListener </listener-class> </listener> <listener> <listener-class>com.whiz.MyServletContextAttributeListener </listener-class> </listener> Distributed applications A Web application can be marked distributable, by specifying the <distributable> element within the <web-app> element. Then the servlet container distributes the application across multiple JVMs. Scalability and failover support are some of the advantages of distributing applications. In cases where the container is distributed over many VMs, a Web application will have an instance of the ServletContext for each VM. However, the default ServletContext is non-distributable and must only exist in one VM. As the context exists locally in the JVM (where created), the ServletContext object attributes are not shared between JVMs. Any information that needs to be shared has to be placed into a session, stored in a database, or set in an Enterprise JavaBeans component. However, servlet context initialization parameters are available in all JVMs, because these are specified in the deployment descriptor. ServletContext events are not propagated from one JVM to another. All requests that are part of a session must be handled by one virtual machine at a time. HttpSession events, like context events, may also not be propagated between JVMs. Also note that because the container may run in more than one JVM, the developer cannot depend on static variables for storing application state.
Sample questions 4 Question 1: Following is the deployment descriptor entry for a Web application using servlet context initialization parameters: <web-app> ... <context-param> <param-name>Bill Gates</param-name> // xxx </context-param> ... </web-app> Which element should replace the line marked // xxx? • A. <param-size> • B. <param-type> • C. <param-value> • D. <param-class> Correct choice: • C Explanation: The <context-param> element contains a <param-name> and a <param-value> sub-element; the other elements do not exist in the deployment descriptor. Question 2: Which method is invoked to notify a listener when the servlet context has been initialized? • A. ServletContextListener.contextInitialized() • B. ServletContextListener.contextCreated() • C. HttpServletContextListener.contextCreated() • D. HttpServletContextListener.contextInitialized() • E. None of the above Correct choice: • A Explanation: The ServletContextListener interface defines the contextInitialized() method, which the container calls when the context is ready; there is no contextCreated() method and no HttpServletContextListener interface. Exception handling When a Web application causes errors at the server side, the errors must be handled appropriately and a suitable response must be sent to the end user. In this section, we discuss the programmatic and declarative exception handling techniques used to provide presentable error pages. The first version of the sendError() method sends an error response page, showing the given status code. The second version also displays a descriptive message. For example, sendError() can be used to report a 404 error page when handling a FileNotFoundException. RequestDispatcher When an error occurs, you can use RequestDispatcher to forward a request to another resource to handle the error. The error attributes can be set in the request before it is dispatched to the error page. Throwing exceptions The service methods in the servlet class declare only ServletException and IOException in their throws clauses, so we can throw only the subclasses of ServletException, IOException, or RuntimeException from these methods. All other exceptions should be wrapped as ServletException and the root cause of the exception set to the original exception before being propagated.
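A sketch of sendError() in action. The file-serving logic and path are hypothetical, but the pattern of translating a FileNotFoundException into a 404 response follows the description above (javax.servlet API assumed):

```java
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class FileServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        try {
            FileInputStream in = new FileInputStream("/data/report.txt"); // illustrative path
            // ... copy the file contents to the response ...
            in.close();
        } catch (FileNotFoundException e) {
            // Second form of sendError(): status code plus a descriptive message.
            res.sendError(HttpServletResponse.SC_NOT_FOUND, "The requested file was not found");
        }
    }
}
```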
The following configuration maps the error code 404 to error.jsp and SQLException to ErrorServlet: <error-page> <error-code>404</error-code> <location>/error.jsp</location> </error-page> <error-page> <exception-type>java.sql.SQLException</exception-type> <location>/error/ErrorServlet</location> </error-page> Logging errors It might be required to report errors and other debug information from the Web application for later analysis. The first version of the log() method writes the specified message, while the second version writes an explanatory message and a stack trace for a given Throwable exception to the servlet log file. Note that the name and type of the servlet log file is specific to the servlet container. This section reviewed the methods for logging exceptions and related messages to the application's log file. Sample questions 5 Question 1: In which interface is the sendError() method defined? • A. HttpServletRequest • B. HttpServletResponse • C. ServletRequest • D. ServletResponse • E. None of the above Correct choice: • B Explanation: The sendError() method is defined in the HttpServletResponse interface. Question 2: Consider the following two <error-page> declarations: 1. <web-app> ... <error-page> <error-code>404</error-code> <location>/404.html</location> </error-page> ... </web-app> 2. <web-app> ... <error-page> <exception-type>java.sun.com.MyException</exception-type> <location>/404.html</location> </error-page> ... </web-app> The error-code element contains an HTTP error code, e.g., 404. The exception-type element contains a fully qualified class name of a Java exception type. The location element contains the location of the resource in the Web application. According to the deployment descriptor DTD, the <error-page> tag must contain either the error-code or exception-type element, plus the location element. Thus both of the declarations in the question are valid.
When a user first makes a request to a site, a new session object is created and a unique session ID is assigned to it. The session ID, which is then passed as part of every request, matches the user with the session object. Servlets can add attributes to session objects or read the stored attribute values. Session tracking gives servlets the ability to maintain state and user information across multiple page requests. The servlet container uses the HttpSession interface to create a session between an HTTP client and the server. HttpSession getSession() HttpSession getSession(boolean create) Both the methods return the current session associated with this request. The first method creates a new session, if there is no existing session. The second version creates a new session only if there is no existing session and the boolean argument is true. A servlet typically retrieves the session from the current request and then reads or writes attributes, such as an Integer, in the session. Terminating a session Sessions may get invalidated automatically due to a session timeout or can be explicitly ended. When a session terminates, the session object and the information stored in it are lost permanently. Session timeout It is possible to use the deployment descriptor to set a time period for the session. If the client is inactive for this duration, the session is automatically invalidated. The <session-timeout> element defines the default session timeout interval (in minutes) for all sessions created in the Web application. A negative value or zero value causes the session never to expire. The following setting in the deployment descriptor causes the session timeout to be set to 10 minutes: <session-config> <session-timeout>10</session-timeout> </session-config> You can also programmatically set a session timeout period.
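A sketch of retrieving the session and storing an Integer attribute, as described above. The attribute name is illustrative, and the snippet assumes it runs inside a service method with the javax.servlet API available:

```java
// Inside doGet()/doPost(); req is the HttpServletRequest.
HttpSession session = req.getSession();                 // creates a session if none exists
session.setAttribute("visitCount", Integer.valueOf(1)); // store an Integer in session scope
Integer count = (Integer) session.getAttribute("visitCount"); // read it back later
```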
The following method provided by the HttpSession interface can be used for setting the timeout period (in seconds) for the current session: setMaxInactiveInterval(int interval). If a negative value is passed to this method, the session will never time out. URL rewriting Sessions are made possible through the exchange of a unique token known as the session ID, between the requesting client and the server. If cookies are enabled in the client browser, the session ID will be included in the cookie sent with the HTTP request/response. For browsers that do not support cookies, we use a technique called URL rewriting to enable session handling. If URL rewriting is used, then the session ID should be appended to the URLs, including hyperlinks that require access to the session and also the responses from the server. The encodeURL() method encodes the specified URL by including the session ID in it, or, if encoding is not needed, returns the URL unchanged. The encodeRedirectURL() method encodes the specified URL for use in the sendRedirect() method of HttpServletResponse. This method also returns the URL unchanged if encoding is not required. URL rewriting must be consistently used to support clients that do not support or accept cookies to prevent loss of session information. Sample questions 6 Question 1: Which of the following will ensure that the session never gets invalidated automatically? Correct choices: • A, C, and E Explanation: A <session-timeout> value of 0 or less means that the session will never expire, so choices A and E are correct. The <session-timeout> element is a sub-element of session-config. The setMaxInactiveInterval() method of HttpSession specifies the number of seconds between client requests before the servlet container will invalidate this session. A negative value (not 0) is required to ensure that the session never expires, so choice C is also correct. Question 2: How should you design a class whose objects need to be notified whenever they are added to or removed from the session? Such a class should implement the HttpSessionBindingListener interface; its valueBound() and valueUnbound() methods are invoked when the object is bound to or unbound from a session.
Security Secure communication is essential to protect sensitive data, including personal information, passed to and from a Web application. Here you'll explore the important security concepts and configurations to overcome the security issues in servlet-based Web applications. Security issues Authentication is the means by which communicating entities verify their identities to each other. The username/password combination is usually used for authenticating the user. Data integrity proves that information has not been modified by a third party while in transit. The correctness and originality are usually verified by signing the transmitted information. Auditing is the process of keeping a record or log of system activities, so as to monitor users and their actions in the network, such as who accessed certain resources, which users logged on and off from the system, and the like. A Web site may be attacked to extract sensitive information, to simply crash the server, or for many other reasons. A denial-of-service attack is characterized by an explicit attempt by hackers to prevent genuine users of a service from accessing a Web site by overloading the server with too many fake requests. Authentication mechanisms A Web client can authenticate a user to a Web server using one of the following mechanisms: HTTP basic authentication, HTTP digest authentication, HTTPS client authentication (CLIENT-CERT), or form-based authentication. Form-based authentication Form-based authentication allows a developer to control the look and feel of the login screens. The login form must contain fields for entering a username and password. These fields must be named j_username and j_password, respectively. The <login-config> element specifies the authentication method that should be used, the realm name that should be used for this application, and the attributes that are needed by the form. It has three sub-elements: <auth-method>, <realm-name>, and <form-login-config>. The <realm-name> element specifies the realm name to be used; this is required only in the case of HTTP basic authorization.
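A hypothetical login page and the matching deployment descriptor entry might look like this. The page names are illustrative; the j_security_check action and the j_username/j_password field names are fixed by the servlet specification:

```html
<!-- login.html -->
<form method="POST" action="j_security_check">
  Username: <input type="text" name="j_username"/>
  Password: <input type="password" name="j_password"/>
  <input type="submit" value="Log in"/>
</form>
```

```xml
<login-config>
  <auth-method>FORM</auth-method>
  <form-login-config>
    <form-login-page>/login.html</form-login-page>
    <form-error-page>/loginError.html</form-error-page>
  </form-login-config>
</login-config>
```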
The <form-login-config> element specifies the login page URL and the error page URL to be used, if form-based authentication is used. The following configuration sets up HTTP basic authentication: <login-config> <auth-method>BASIC</auth-method> <realm-name>student</realm-name> </login-config> Security constraints A security constraint determines who is authorized to access the resources of a Web application. security-constraint The <security-constraint> element is used to associate security constraints with one or more Web resource collections. The sub-elements of <security-constraint> are <display-name>, <web-resource-collection>, <auth-constraint>, and <user-data-constraint>. web-resource-collection The <web-resource-collection> element specifies a collection of resources to which this security constraint will be applied. Its sub-elements are <web-resource-name>, <description>, <url-pattern>, and <http-method>, as described here: • <web-resource-name> specifies the name of the resource. • <description> provides a textual description of the collection. • <url-pattern> specifies the URL patterns to which the constraint applies. • <http-method> names the HTTP methods (such as GET or POST) to which the constraint applies. The following configuration specifies that the POST method of MarksServlet will be subject to the security constraints of the application: <security-constraint> <web-resource-collection> <web-resource-name> marks </web-resource-name> <url-pattern> /servlet/MarksServlet </url-pattern> <http-method>POST</http-method> </web-resource-collection> </security-constraint> auth-constraint The <auth-constraint> element specifies which security roles can access the resources to which the security constraint applies. Its sub-elements are <description> and <role-name>.
The following code indicates that users belonging to the role "teacher" would be given access to the resources that are protected by the security constraint: <auth-constraint> <description>Only for teachers</description> <role-name>teacher</role-name> </auth-constraint> To specify that all roles can access the secure resources, specify the asterisk (*) character: <auth-constraint> <role-name>*</role-name> </auth-constraint> user-data-constraint The <user-data-constraint> element specifies how the data transmitted between the client and the server should be protected. Its sub-elements are <description> and <transport-guarantee>. <user-data-constraint> <description> Integral Transmission </description> <transport-guarantee>INTEGRAL</transport-guarantee> </user-data-constraint> Sample questions 7 Question 1: Which authentication mechanism uses the client's public key certificate to authenticate the user? • A. BASIC • B. DIGEST • C. FORM • D. CLIENT-CERT • E. None of the above Correct choice: • D Explanation: HTTPS client authentication (CLIENT-CERT) authenticates the user with the client's public key certificate. Like the basic authentication type, HTTP digest authentication authenticates a user based on a username and password. It is more secure; the user information is encrypted before it's sent to the server. Hence choice B is incorrect. Question 2: Which of the following are valid values for the <transport-guarantee> element? • A. NONE • B. AUTHORIZED • C. INTEGRAL • D. AUTHENTICATED • E. CONFIDENTIAL Correct choices: • A, C, and E Explanation: Choice C implies that the Web application requires the data transmission to have data integrity, whereas choice E implies that the Web application requires the data transmission to have data confidentiality. Choice A implies that the application does not need any such guarantee. Plain HTTP is used when the value is set to NONE. HTTPS is used when the value is set to INTEGRAL or CONFIDENTIAL. Choices B and D are incorrect because there are no such values for the <transport-guarantee> element. Thread-safe servlets Typically, the servlet container loads only one instance of a servlet to process client requests. A servlet instance may receive multiple requests simultaneously, and each time the service() method is executed in a different thread.
In this section, we discuss the issues that can arise when multiple threads execute servlet methods, and how to develop thread-safe servlets.

Multi-threaded model

The multi-threaded model, which is used by default, causes the container to use only one instance per servlet declaration. By using a separate thread for each request, efficient processing of client requests is achieved. The figure below illustrates the multi-threaded model for servlets. One client request arrives for servlet1 and two for servlet2. The container spawns one thread for executing the service() method of servlet1 and two for the service() method of servlet2. All the threads execute simultaneously, and the responses generated are sent back to the respective clients.

SingleThreadModel interface

A convenient way of ensuring that no two threads execute a servlet's service() method concurrently is to make the servlet implement the SingleThreadModel interface. The SingleThreadModel interface does not define any methods. The servlet container guarantees single-threaded access either by synchronizing access to a single instance of the servlet or by maintaining a pool of servlet instances and dispatching each new request to a free instance. The figure below illustrates the situation when servlet2 implements the SingleThreadModel interface. Two client requests arrive for servlet2; here the container uses a different instance of servlet2 to service each of the two requests.

However, this technique has its own disadvantages. If access to the servlet is synchronized, the requests get serviced one after the other, which can cause a severe performance bottleneck. Maintaining multiple servlet instances consumes time and memory. And even though multiple threads cannot enter the service() method simultaneously, thread safety issues are not completely taken care of: static variables, attributes stored in session and context scopes, and so on are still shared between multiple instances.
Also, instance variables cannot be used to share data among multiple requests, because the instances serving each request might be different.

Local variables

Local variables are always thread safe, because each thread works on its own copy of them. They cannot be used to share data between threads, because their scope is limited to the method in which they are declared.

Instance variables

Instance variables are not thread safe in the multi-threaded servlet model. In the case of servlets implementing SingleThreadModel, instance variables are accessed by only one thread at a time.

Static variables

Static variables are shared by all instances and all threads of a servlet, so they are not thread safe unless they are read-only (for example, declared final).

Context scope

The ServletContext object is shared by all the servlets of a Web application, so multiple threads can set and get attributes simultaneously from this object. Implementing the SingleThreadModel interface makes no difference in this case. Thus the context attributes are not thread safe.

Session scope

The HttpSession object is shared by multiple threads that service requests belonging to the same session, so the session attributes are also not thread safe. Just as with context attributes, the threading model has no impact on this behavior.

Request scope

The ServletRequest object is thread safe because it is accessible only locally within the service() method, so the request attributes are safe, irrespective of the threading model.

Sample questions 8

Question 1:
Which of the following variables in the above code are thread safe?
• A. i
• B. session
• C. ctx
• D. req
• E. obj
• F. res

Correct choices:
• A, C, D, and F

Explanation: The static variable i is thread safe only because it is final (it cannot be modified); otherwise it would not have been safe. The request and response objects are scoped only for the lifetime of the request, so they are also thread safe. The session and ServletContext objects can be accessed from multiple threads while processing multiple requests, so they are not thread safe.
However, in this case, access to the ServletContext object is synchronized, so it can be used by only one thread at a time. Still, obj is not thread safe: even though access to the ServletContext object is synchronized, its attributes are not, and they need to be synchronized separately. Hence choices B and E are incorrect, and choices A, C, D, and F are correct.

Question 2:
Correct choices:
• A and D

Explanation: With SingleThreadModel, only one thread executes the servlet's method at a time. So what happens with multiple simultaneous requests? In that case, the container may instantiate multiple instances of the servlet to handle them, so option A is correct and option B is incorrect.

JavaServer Pages

JavaServer Pages (JSP) technology is an extension of the Java Servlet API. JSP pages are typically composed of static HTML/XML components, custom JSP tags, and Java code fragments known as scriptlets. Even though JSP pages can contain business processing logic, they are mainly used for generating dynamic content in the presentation layer. Separation of business logic from presentation logic is one of the main advantages of this technology.

Directives

A JSP directive provides information about the JSP page to the JSP engine. The three types of directives are page, include, and taglib (a directive starts with <%@ and ends with %>):
• The page directive is used to define certain attributes of the JSP page: <%@ page import="java.util.*, com.foo.*" %>
• The include directive is used to include the contents of a file in the JSP page: <%@ include file="/header.jsp" %>
• The taglib directive allows us to use custom tags in the JSP page: <%@ taglib uri="tlds/taglib.tld" prefix="mytag" %>

Declarations

JSP declarations let you define variables and supporting methods that the rest of a JSP page may need. Declarations are enclosed between the <%! and %> sequences:

<%! int sum = 0; %>

Here the variable sum is initialized only once, when the JSP page is loaded.
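The difference between a declaration and ordinary scriptlet code can be sketched in plain Java. The following is an illustrative model of the servlet a container might generate, not actual container output; the class and method names are assumptions made for the sketch:

```java
// A rough sketch of how a JSP container treats declarations vs. scriptlets:
// a <%! %> declaration becomes an instance member initialized once, while
// <% %> scriptlet code runs inside the generated _jspService() on every request.
public class GeneratedPageSketch {
    private int hits = 0;              // from a declaration: initialized once per instance

    public String service() {          // stand-in for the generated _jspService()
        int count = 0;                 // from a scriptlet: a fresh local on every request
        count++;
        hits++;
        return "count=" + count + " hits=" + hits;
    }

    public static void main(String[] args) {
        GeneratedPageSketch page = new GeneratedPageSketch();
        System.out.println(page.service()); // count=1 hits=1
        System.out.println(page.service()); // count=1 hits=2
    }
}
```

Notice that count restarts at zero for every call, while hits accumulates for the lifetime of the instance, which mirrors the "initialized only once" behavior of declarations.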
Scriptlets

Scriptlets are fragments of Java code embedded within <% ... %> tags. They get executed whenever the JSP page is accessed:

<% int count=0;
   count++;
   out.println("Count is "+count); %>

Expressions

An expression is a Java expression that is evaluated when the JSP page is accessed; its value gets printed in the resulting HTML page. JSP expressions are placed within <%= ... %> tags and do not include semicolons:

<%= count %>

The above expression prints out the value of the variable count.

Standard actions

JSP actions are instructions that control the behavior of the servlet engine. The six standard JSP actions are jsp:include, jsp:forward, jsp:useBean, jsp:setProperty, jsp:getProperty, and jsp:plugin. We will discuss actions in more detail in the following sections.

Comments

A JSP comment is of the form <%-- Content to be commented --%>. The body of the comment is ignored completely.

JSP documents

JSP files can use either JSP syntax or XML syntax within their source files; however, you cannot intermix the two syntaxes in a single source file. JSP files using XML syntax are called JSP documents. All JSP documents have a <jsp:root> element within which all the other elements are enclosed. The <jsp:scriptlet> tag is used for scriptlets, the <jsp:expression> tag for expressions, the <jsp:declaration> tag for declarations, and the <jsp:text> tag to embed text within a JSP document. The page directive is represented as <jsp:directive.page> and the include directive as <jsp:directive.include>.

import

The import attribute of a page directive is used to import a Java class into the JSP page. For instance: <%@ page import="java.util.Date" %>

session

The session attribute can have a value of true or false. It specifies whether the page should take part in an HttpSession. The default value is true. For instance: <%@ page session="false" %>

errorPage

The errorPage attribute can be used to delegate exception handling to another JSP page that contains the error handling code.
For instance: <%@ page errorPage="error.jsp" %>

isErrorPage

The isErrorPage attribute specifies whether the current page can act as the error handler for other JSP pages. The default value is false. For instance: <%@ page isErrorPage="true" %>

language

The language attribute specifies the scripting language used by the JSP page; the default value is "java". For instance: <%@ page language="java" %>

extends

The extends attribute specifies the superclass of the servlet class generated for the JSP page. The default value of this attribute is vendor-specific.

buffer

The buffer attribute gives the minimum size of the output buffer that is filled before the content is sent to the client. For instance: <%@ page buffer="16kb" %>

autoFlush

The autoFlush attribute specifies whether the data in the buffer should be sent to the client automatically as soon as the buffer is full. The default value is true. For instance: <%@ page autoFlush="false" %>

JSP lifecycle

The phases of the JSP lifecycle are:
• Translation
• Compilation
• Loading the class
• Instantiating the class
• jspInit() invocation
• _jspService() invocation
• jspDestroy() invocation

Translation

In this phase, the JSP page is read, parsed, and validated. If there are no errors, a Java file containing the servlet class is created.

Compilation

The Java file created in the translation phase is compiled into a class file. All the Java code is validated, and syntax errors are reported in this phase.

jspInit()

The jspInit() method is called only once in the life of the servlet. It is in this method that we perform any initialization required by the servlet.

_jspService()

The request and response objects are passed to this method each time a client request is received for the JSP page. JSP scriptlets and expressions are processed within this method.

jspDestroy()

The jspDestroy() method is called when the servlet instance is taken out of service by the JSP engine. Any cleanup operation, such as releasing resources, can be performed in this method. After this method is called, the servlet cannot serve any further client requests.

Implicit objects

The nine implicit objects available in JSP pages are request, response, out, session, application, config, pageContext, page, and exception. They are declared locally within the generated _jspService() method.
For instance, the following scriptlet code uses a conditional statement to check whether a user's password is valid. If it is valid, the marks are printed using an iterative statement:

<% if(passwordValid) { %>
Welcome, <%= username %>
<% for(int i=0; i<10; i++) { %>
Printing <%= marks[i] %>
<% } } %>

Be careful not to leave out the curly braces at the beginning and end of the Java fragments.

Sample questions 9

Question 1:
What will be the result of accessing the following JSP page, if the associated session does not have an attribute named str?

<%! String str;
    public void jspInit() {
        str = (String)session.getAttribute("str");
    }
%>

• A. "null" is printed
• B. NullPointerException is thrown
• C. Code does not compile
• D. None of the above

Correct choice:
• C

Explanation: The JSP engine declares and initializes nine objects in the _jspService() method. These implicit object variables are application, session, request, response, out, page, pageContext, config, and exception. Because they are declared locally to the _jspService() method, they are not accessible within the jspInit() method, which means this code will not compile. If this code were within the _jspService() method, it would have compiled without errors and printed "null". Hence choices A, B, and D are incorrect, and choice C is correct.

Question 2:
Explanation: This declaration creates an instance variable x and initializes it to 0. Then, in the service() method, you modify it to 10. Next, you declare a local variable named x and give it the value 5. When you print x, it prints the local value 5. When you write this.x, you refer to the instance variable x, which prints 10. Hence choices A, C, and D are incorrect, and choice B is correct.

The include directive

If the relative URL starts with "/", the path is relative to the JSP application's context. If the relative URL starts with a directory or file name, the path is relative to the JSP file. The included file can be a JSP page, HTML file, XML document, or text file.
If the included file is a JSP page, its JSP elements are translated and included (along with any other text) in the including JSP page. Once the included file is translated and included, the translation process resumes with the next line of the including page. For instance, the following JSP page includes the content of the file another.jsp:

<html>
<head>
<title>JSP Include directive</title>
</head>
<body>
This content is statically included.<br />
<%@ include file="another.jsp" %>
</body>
</html>

The including and included pages can access variables and methods defined in the other page; they even share the implicit JSP objects. However, the file attribute of the include directive cannot be an expression. For instance, the following code is not allowed (fileName is an illustrative variable):

<%@ include file="<%= fileName %>" %>

Nor can the file attribute pass parameters to the included page. The include directive is typically used to include banner content, a date, copyright information, or any other content that you might want to reuse in multiple pages.

<jsp:include> action

Dynamically included pages do not share the variables and methods of the including page. The syntax for the jsp:include element is:

<jsp:include page="relativeURL" flush="true" />

The relative URL can be absolute or relative to the current JSP file. Note that the value of the page attribute can be a request-time expression that evaluates to a String representing the relative URL. Because the <jsp:include> element handles both static and dynamic resources, you can use it when you don't know whether the resource is static or dynamic.

<jsp:forward> action

The mechanism for transferring control from a JSP page to another Web component is provided by the jsp:forward element. The forwarded component, which can be an HTML file, a JSP file, or a servlet, sends the reply to the client. The syntax is:

<jsp:forward page="relativeURL" />

The remaining portion of the forwarding JSP file, after the <jsp:forward> element, is not processed.
Note that if any data has already been sent to the client, the <jsp:forward> element will cause an IllegalStateException.

Parameters can be passed to an included or forwarded page with <jsp:param> elements:

<jsp:include page="..." flush="true">
    <jsp:param name="..." value="..." />
    <jsp:param name="..." value="..." />
</jsp:include>

The values passed can be static or dynamic (request-time expressions).

Sample questions 10

Question 1:
This will include the content of Helloworld.jsp within the current JSP file. Select the right choice.
• A. True
• B. False

Correct choice:
Explanation: When you include a file using the include directive, the inclusion is performed at translation time, but request-time attribute values are evaluated at request time, not translation time. Therefore, the value of the file attribute cannot be an expression; it must be a string literal. Also remember that the file attribute cannot pass any parameters to the included page.

Question 2:
Which of the following can be used to include the file another.jsp in the file test.jsp, assuming that there are no errors?

File 1: test.jsp
<%= str %>

File 2: another.jsp

• A. <jsp:directive.include
• B. <%@ include
• E. <jsp:include

Correct choice:
• C

Explanation: Here, another.jsp does not declare the variable str, so it cannot compile on its own. Note that when a JSP file is dynamically included, it is compiled separately, so variables are not shared between the including file and the included one. In this case, dynamic inclusion is not possible, so choices C and D are incorrect (D also has an invalid attribute). Choice A is incorrect because XML syntax and JSP syntax cannot be used on the same page. Choice B is incorrect because the valid attribute for the include directive is file, not page.

JavaBeans components

JavaBeans components (or beans) are Java classes that are portable and reusable, and can be assembled into applications. JSP pages can contain processing and data access logic in the form of scriptlets.
However, if a lot of business logic is handled that way, the JSP page becomes cluttered and difficult to maintain. Instead, you can encapsulate the processing logic within beans and use them through JSP language elements. Any Java class can be a bean if it adheres to the following design rules:
• For each readable property of data type proptype, the bean must have a getter method of the following signature: public proptype getProperty() { }
• For each writable property of data type proptype, the bean must have a setter method of the following signature: public void setProperty(proptype x) { }

In addition, the class must define a constructor that takes no parameters. For instance, the following class encapsulates user information and exposes it using getter and setter methods (the property names are illustrative):

public class UserBean {
    private String name;
    private boolean loggedIn;

    public UserBean() { }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public boolean isLoggedIn() { return loggedIn; }
    public void setLoggedIn(boolean loggedIn) { this.loggedIn = loggedIn; }
}

<jsp:useBean> action

For instance, the following tag declares a bean of type UserBean with id user, in application scope:

<jsp:useBean id="user" class="UserBean" scope="application" />

The value of the id attribute is the identifier used to reference the bean in other JSP elements and scriptlets. The scope of the bean can be application, session, request, or page. The id attribute is mandatory, while scope is optional; the default value of scope is page. The other possible attributes are class, type, and beanName. A subset of these attributes must be present in the <jsp:useBean> action, in one of the following combinations:
• class
• class and type
• type
• beanName and type

Using the class attribute

The tag shown above uses the class attribute. This causes the JSP engine to try to locate an instance of the UserBean class with the id user in the application scope. If it is unable to find a matching instance, a new instance is created with the id user and stored in the application scope.

Using the type attribute

When only the type attribute is used, the JSP engine looks for a bean of the given type within the mentioned scope. In this case, if no existing bean matches the type, no new bean instance is created and an InstantiationException is thrown. Let's discuss the equivalent servlet code generated for beans declared in different scopes.
In the servlet, objects of type HttpServletRequest, HttpSession, and ServletContext implement the request, session, and application scopes, respectively. For a bean declared in request scope, the equivalent code within the service() method would be:

UserBean user = (UserBean)request.getAttribute("user");
if (user == null) {
    user = new UserBean();
    request.setAttribute("user", user);
}

Now consider the code if the bean is declared in the session scope:

HttpSession session = request.getSession();
UserBean user = (UserBean)session.getAttribute("user");
if (user == null) {
    user = new UserBean();
    session.setAttribute("user", user);
}

And for the application scope:

ServletContext context = getServletContext();
UserBean user = (UserBean)context.getAttribute("user");
if (user == null) {
    user = new UserBean();
    context.setAttribute("user", user);
}

<jsp:setProperty> action

The name attribute refers to the id of the bean, and the property attribute refers to the bean property that is to be set; these attributes are mandatory. The value attribute specifies the value to be assigned to the bean property. The param attribute names a request parameter whose value can be used to set the bean property. Obviously, the value and param attributes are never used together.

The following code sets the name property of UserBean to the value Tom:

<jsp:setProperty name="user" property="name" value="Tom" />

To set the value of the name property using the request parameter username, we write:

<jsp:setProperty name="user" property="name" param="username" />

Now assume that the request parameter has the same name as the bean property to be set. In this case, the above code can be shortened to:

<jsp:setProperty name="user" property="name" />

And this code sets all the bean properties from the matching request parameter values:

<jsp:setProperty name="user" property="*" />

If there is no matching request parameter for a particular property, the value of that property is left unchanged; this does not cause any errors.

<jsp:getProperty> action

The following code causes the value of the bean property name to be printed out:

<jsp:getProperty name="user" property="name" />

Beans can also drive navigation when combined with scriptlets. Here, we forward the user to the home page if he is already logged in, and to the error page if he is not (the page names are illustrative):

<% if(user.isLoggedIn()) { %>
    <jsp:forward page="home.jsp" />
<% } else { %>
    <jsp:forward page="error.jsp" />
<% } %>
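The get-or-create lookup that <jsp:useBean> performs against a scope can be sketched container-independently. In this illustrative sketch (not container code), a plain HashMap stands in for a scope object, since every JSP scope is essentially a String-to-Object attribute map:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class UseBeanSketch {
    // Any JSP scope (page, request, session, application) is essentially
    // a String -> Object attribute map; a HashMap stands in for it here.
    public static <T> T getOrCreate(Map<String, Object> scope, String id, Supplier<T> factory) {
        @SuppressWarnings("unchecked")
        T bean = (T) scope.get(id);
        if (bean == null) {            // same null-check the generated servlet performs
            bean = factory.get();
            scope.put(id, bean);
        }
        return bean;
    }

    public static void main(String[] args) {
        Map<String, Object> application = new HashMap<>();
        StringBuilder first = getOrCreate(application, "user", StringBuilder::new);
        StringBuilder second = getOrCreate(application, "user", StringBuilder::new);
        System.out.println(first == second); // true: the stored instance is reused
    }
}
```

The second lookup returns the instance stored by the first, which is exactly why a bean declared with the same id and scope on two pages refers to one shared object.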
Sample questions 11

Question 1:
A user fills out a form in a Web application. The information is then stored in a JavaBeans component, which is used by a JSP page. The first two lines of code for the JSP page are as follows:

Which of the following should be placed at the position //XXX to pass all the form element values to the corresponding JavaBeans component properties (assuming that the form input elements have corresponding variables, with the same names, in the JavaBeans component)?
• A. param="*"
• B. param="All"
• C. property="*"
• D. property="All"
• E. None of the above

Correct choice:
• C

Question 2:
Which of the following uses of the <jsp:useBean> tag for a JSP page that uses the java.sun.com.MyBean JavaBeans component are correct?
• A. />
• B. <jsp:useBean
• C. <jsp:useBean
• D. <jsp:useBean
• E. <jsp:useBean

Correct choices:
• B and C

Explanation: The valid attribute combinations for <jsp:useBean> are class alone; class and type; type alone; and beanName and type.

Custom tags

JSP technology uses XML-like tags to encapsulate the logic that dynamically generates the content for the page. Besides the standard JSP tags, a JSP developer can create custom tags, which encapsulate complex scripting logic. Using custom tags instead of scriptlets promotes reusability, flexibility, and clarity of the JSP page.

Tag libraries

JSP custom tags are distributed in the form of tag libraries. A tag library defines a set of related custom tags and contains the tag handler objects, which are instances of classes that implement special interfaces in the javax.servlet.jsp.tagext package. The JSP engine invokes the appropriate tag handler methods when it encounters a custom tag in the page.

The tag library must be imported into the JSP page, with the taglib directive, before its tags can be used. The uri attribute refers to a URI that uniquely identifies the tag library descriptor (TLD), which describes the set of custom tags associated with the named tag prefix. The prefix that precedes the custom tag name is given by the prefix attribute.
You cannot use the tag prefixes jsp, jspx, java, javax, servlet, sun, or sunw, as these are reserved by Sun Microsystems. You can use more than one taglib directive in a JSP page, but the prefix defined in each must be unique. Tag library descriptor files must have the extension .tld and are stored in the WEB-INF directory of the WAR or in a subdirectory of WEB-INF. We'll now discuss the possible values of the uri attribute.

The value of the uri attribute can be the absolute path to the TLD file, for example:

<%@ taglib uri="/WEB-INF/tld/mylib.tld" prefix="mylib" %>

Alternatively, a short name can be mapped to the TLD location in the deployment descriptor. For instance, the following mapping maps the short name /mylib to /WEB-INF/tld/mylib.tld:

<taglib>
    <taglib-uri>/mylib</taglib-uri>
    <taglib-location>/WEB-INF/tld/mylib.tld</taglib-location>
</taglib>

We can also give the path to a packaged JAR file as the value of the uri attribute. In this case, the JAR file must contain the tag handler classes for all the tags in the library, and the TLD file must be placed in the META-INF directory of the JAR file. The classes implementing the tag handlers can be stored in unpacked form in the WEB-INF/classes subdirectory of the Web application, or packaged into JAR files and stored in the WEB-INF/lib directory.

Empty tags

A custom tag with no body is called an empty tag and is expressed as follows:

<prefix:tag />

Tags with a body

For instance, the following tag gets the username from the request parameter and prints an appropriate welcome message:

<test:welcomeyou>
<% String yourName = request.getParameter("name"); %>
Hello <B> <%= yourName %> </B>
</test:welcomeyou>

For body tags with attributes, the processing of the body by the tag handler can be customized based on the value passed for the attribute:

<test:hello loopcount="3">
Hello World !
</test:hello>

Here, the tag processes the body iteratively; the number of iterations is given by the value of the loopcount attribute.
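The iteration driven by an attribute like loopcount is implemented through the tag handler's lifecycle methods. The following is a container-independent sketch: the constants and method names mirror those of javax.servlet.jsp.tagext, but the class itself is an illustrative stand-in, not a real tag handler:

```java
public class TagLoopSketch {
    // Assumed stand-ins for the constants defined on the real
    // javax.servlet.jsp.tagext Tag and IterationTag interfaces.
    public static final int SKIP_BODY = 0;
    public static final int EVAL_BODY_INCLUDE = 1;
    public static final int EVAL_BODY_AGAIN = 2;

    // Hypothetical handler for a tag like <test:hello loopcount="3">:
    // it asks the "container" to evaluate the body `repeat` times.
    public static class Repeater {
        private final int repeat;
        private int done = 0;

        public Repeater(int repeat) { this.repeat = repeat; }

        public int doStartTag() {
            return repeat > 0 ? EVAL_BODY_INCLUDE : SKIP_BODY;
        }

        public int doAfterBody() {
            return ++done < repeat ? EVAL_BODY_AGAIN : SKIP_BODY;
        }
    }

    // Mimics the container's calling sequence for an iteration tag.
    public static int evaluate(Repeater tag) {
        int bodyEvaluations = 0;
        if (tag.doStartTag() == EVAL_BODY_INCLUDE) {
            do {
                bodyEvaluations++; // the container would process the tag body here
            } while (tag.doAfterBody() == EVAL_BODY_AGAIN);
        }
        return bodyEvaluations;
    }

    public static void main(String[] args) {
        System.out.println(evaluate(new Repeater(3))); // body evaluated 3 times
        System.out.println(evaluate(new Repeater(0))); // body skipped entirely
    }
}
```

The sketch shows the control flow described later for the Tag and IterationTag interfaces: doStartTag() decides whether the body is evaluated at all, and doAfterBody() decides whether it is evaluated again.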
Nested tags

A tag can be nested within a parent tag, as illustrated below:

<test:myOuterTag>
<H1>This is the body of myOuterTag</H1>
<test:repeater repeat="4">
<B>Hello World!</B>
</test:repeater>
</test:myOuterTag>

The nested JSP tag is evaluated first, and its output becomes part of the evaluated body of the outer tag. It is important to note that the opening and closing tags of a nested tag must not overlap those of its parent tag.

Sample questions 12

Question 1:

<taglib>
    <taglib-uri>/myTagLib</taglib-uri>
    <taglib-location>/location/myTagLib.tld</taglib-location>
</taglib>

How would you correctly specify the above tag library in your JSP page?

Explanation: The taglib directive is used to declare a tag library in a JSP page. It has two attributes: uri and prefix. The value of uri is the same as the value of the <taglib-uri> element in the deployment descriptor, where it has been mapped to the location of the library's TLD file. If this mapping is not used, then the uri attribute must directly point to the TLD file using a root-relative URI such as /location/myTagLib.tld.

Question 2:
Which of the following XML syntaxes would you use to import a tag library in a JSP document?
• A. <jsp:directive.taglib>
• B. <jsp:root>
• C. <jsp:taglib>
• D. None of the above

Correct choice:
• B

Explanation: In XML format, the tag library information is provided in the root element itself:

<jsp:root xmlns:test="sample.tld" ...>
.....
</jsp:root>

The attribute-value pair xmlns:test="sample.tld" tells the JSP engine that the page uses custom tags with the prefix test and gives the location of the TLD file. Hence choices A, C, and D are incorrect, and choice B is correct.

The <uri> element uniquely identifies the tag library; its value can be specified for the uri attribute in the taglib directive for the library.
The JSP engine implicitly creates a mapping between the uri and the actual location of the file:

<taglib>
    <tlib-version>1.0</tlib-version>
    <jsp-version>1.2</jsp-version>
    <short-name>test</short-name>
    <uri></uri>
    <tag>
        <name>welcome</name>
        <tag-class>whiz.MyTag</tag-class>
        <body-content>empty</body-content>
        <attribute>
            <name>uname</name>
            <required>true</required>
            <rtexprvalue>false</rtexprvalue>
        </attribute>
    </tag>
</taglib>

Defining tags

Each tag is defined by a <tag> element. The mandatory sub-elements <name> and <tag-class> specify the unique tag name and the tag handler class, respectively. If a tag accepts attributes, the <tag> element should have one or more <attribute> sub-elements. We can indicate that an attribute is mandatory by specifying the value true for its <required> element; if a value is not supplied for a required attribute when the tag is used, a translation error occurs.

The <body-content> element can have one of the following values: empty, JSP, or tagdependent. For tags without a body, we specify the value empty. Both of the following usages are valid for an empty tag:

<test:mytag />
<test:mytag></test:mytag>

For tags that can have valid JSP code (plain text, HTML, scripts, or custom tags) in their body, we specify the value JSP. The following code illustrates the use of a tag with JSP code in its body:

<test:hello loopcount="3">
Hello World !
</test:hello>

When the <body-content> element has the value tagdependent, the body may contain non-JSP content, such as SQL statements. For instance:

<test:myList>
select name, age from users
</test:myList>

When the <body-content> element has the value tagdependent or JSP, the body of the tag may also be empty.

Tag handler interfaces

Tag handler methods defined by these interfaces are called by the JSP engine at various points during the evaluation of the tag.

Tag interface

The Tag interface defines the basic protocol between a tag handler and the JSP container.
It is the base interface for all tag handlers and declares the main lifecycle methods of the tag.

BodyTag interface

The BodyTag interface extends IterationTag by defining additional methods that let a tag handler manipulate the content resulting from the evaluation of its body.

The PageContext class provides methods to access the JSP implicit objects; for example, getRequest(), getSession(), and getServletContext() return the request, session, and application objects, respectively.

Sample questions 13

Question 1:
• A. doStartTag()
• B. doAfterBody()
• C. doEndTag()
• D. release()

Correct choice:
Explanation: Depending on the return value of the doStartTag() method, the container calls the doEndTag() method. doEndTag() decides whether or not to continue evaluating the rest of the JSP page. It returns one of two constants defined in the Tag interface: EVAL_PAGE or SKIP_PAGE. A return value of Tag.EVAL_PAGE indicates that the rest of the JSP page must be evaluated and the output included in the response. A return value of Tag.SKIP_PAGE indicates that the rest of the JSP page must not be evaluated at all and that the JSP engine should return immediately from the current _jspService() method.

The setBodyContent() method is called, and the bodyContent object set, only if doStartTag() returns EVAL_BODY_BUFFERED. The container may reuse a tag instance if a custom tag occurs multiple times in a JSP page; it calls the release() method only when the tag is to be permanently removed from the pool. This method can be used to release the tag handler's resources. The setPageContext() method is the first method called in the lifecycle of a custom tag; the JSP container calls it to pass in the pageContext implicit object of the JSP page in which the tag appears. The doAfterBody() method is the only method defined by the IterationTag interface; it gives the tag handler a chance to reevaluate its body.

J2EE design patterns

In this section, you'll learn about the five important J2EE design patterns covered in the SCWCD exam.
Value Objects

In an Enterprise JavaBeans (EJB) application, each invocation on a session bean or an entity bean is usually a remote method invocation across the network, and such invocations create network overhead. If the server receives calls to retrieve or update individual attribute values from numerous clients, system performance degrades significantly.

A Value Object is a serializable Java object that can be used to retrieve a group of related data in just one remote method invocation. After the enterprise bean returns the Value Object, it is locally available to the client for future access. If a client wishes to update the attributes, it can do so on the local copy of the Value Object and then send the updated object to the server. Note, however, that update requests from multiple clients can corrupt the data.

Model-view-controller

Consider an application that needs to support multiple client types, such as WAP clients and browser-based clients. If we use a single component to interact with the user, manage business processing, and manage the database, the flexibility of the system suffers: whenever support for a new type of view needs to be added, the whole application must be redesigned, and the business logic must be replicated for each client type. The model-view-controller (MVC) pattern addresses this by separating these responsibilities.

The model represents business data and the operations that manage that data. The model notifies views when it changes and lets a view query it about its state. Typically, entity beans play the role of the model in enterprise applications.

The view handles the display styles and user interactions with the system. It updates the data presentation when the model changes, and forwards user input to a controller. In J2EE applications, the view layer includes JSP pages and servlets.

The controller dispatches user requests and selects views for presentation.
It interprets user inputs and maps them into actions to be performed by the model. In a standalone application, user inputs include text entries and button clicks; in a Web application, they arrive as HTTP requests to the Web tier. Session beans or servlets typically represent the controller layer.

Business Delegate

In a J2EE application, client code needs to use the services provided by business components. If the presentation-tier components access the business tier directly, there are several disadvantages: whenever the business service API changes, all the client components must be altered accordingly, and the client code must be aware of the location of the business services.

The Business Delegate object helps to minimize coupling between clients and the business tier. This object encapsulates access to a business service, hiding implementation details such as lookup and access mechanisms. If the interface of the business service changes, only the Business Delegate needs to be modified; the client components are not affected.

Using a Business Delegate also frees the client from the complexities of handling remote method calls. For instance, the delegate can translate network exceptions into user-friendly application exceptions. The Business Delegate may also cache business service results, improving performance by reducing the number of remote calls across the network. The Business Delegate object is also called a client-side facade or proxy.

Front Controller

In the presentation layer of a Web application, multiple user requests need to be handled and forwarded to the appropriate resources for processing. The navigation steps vary according to the user's actions. In addition, the resources need to ensure that the user has been authenticated and is authorized to access the particular resource.
Front Controller is a controlling component that holds the common processing logic that occurs within the presentation tier. It handles client requests and manages security, state management, error handling, and navigation. The Front Controller centralizes control logic that might otherwise be duplicated, and dispatches the requests to appropriate worker components. As the component that provides the initial single point of entry for all client requests, it is also known as a Front Component. Multiple Front Controllers can be designed for different business use cases, which together manage the workflow of a Web application.

Data Access Object

The coupling between the business tier and the database tier can cause difficulties in migrating the application from one data source to another: when this happens, all the business components that access the data source need to be altered accordingly. To overcome these dependencies, the business tier can interact with data sources through a Data Access Object (DAO). The DAO implements the access mechanism required to work with the data source. The business component that relies on the DAO uses the simpler and uniform interface exposed by the DAO for its clients. By acting as an adapter between the component and the data source, the DAO enables isolation of the business components from the data source type, data access method, and connectivity details. Thus the data access logic is uniform and centralized, and database dependencies are minimized by the use of this pattern.

Sample questions

Question 1: Your Web application, which handles all aspects of credit card transactions, requires a component that would receive the requests and dispatch them to appropriate JSP pages. It should manage the workflow and coordinate sequential processing. Centralized control of use cases is preferred. Which design pattern would be best suited to address these concerns?

• A. MVC
• B. Business Delegate
• C. Front Component
• D. Value Object
• E. Facade

Correct choice: C. Front Component (Front Controller) is the design pattern best suited to handle the given requirements. The Front Controller is a component that provides a common point of entry for all client requests. It dispatches the requests to appropriate JSP pages and controls sequential processing. The control of use cases is centralized, and a change in the sequence of steps affects only the Front Controller component. The requirements only specify that workflow should be controlled, so MVC is not the right choice. (If asked about controlling and presenting the data in multiple views, however, MVC should be chosen.) Hence choices A, B, D, and E are incorrect and choice C is correct.

Question 2: Consider a Web application where the client tier needs to exchange data with enterprise beans. All access to an enterprise bean is performed through remote interfaces to the bean. Every call to an enterprise bean is potentially a remote method call with network overhead. In a normal scenario, to read every attribute value of an enterprise bean, the client would make a remote method call. The number of calls made by the client to the enterprise bean impacts network performance. Which of the following design patterns is most suited to solve the above problem?

Correct choice: the Value Object. In the scenario explained above, a single method call is used to send and retrieve the Value Object. When the client requests the business data, the enterprise bean can construct the Value Object, populate it with its attribute values, and pass it by value to the client. When an enterprise bean uses a Value Object, the client makes a single remote method invocation to request the Value Object instead of numerous remote method calls to get individual bean attribute values. Hence choices A, B, and D are incorrect and choice C is correct.

Applying and experimenting with new concepts can reinforce what you learn, build your confidence, and help you do well in the exam.
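The Front Controller's dispatching role from Question 1 can be sketched outside a servlet container as a simple action-to-handler map (the names here are invented for illustration; in a real Web application the controller would be a servlet forwarding to JSP pages):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical handler: processes a request and names the view to show.
interface Handler {
    String handle(String request);
}

class FrontController {
    private final Map<String, Handler> handlers = new HashMap<>();

    void register(String action, Handler handler) {
        handlers.put(action, handler);
    }

    String dispatch(String action, String request) {
        // Common concerns (authentication, logging, error handling) live
        // here once, instead of being duplicated in every page.
        Handler h = handlers.get(action);
        return h == null ? "error.jsp" : h.handle(request);
    }
}
```

Each use case registers its own handler, so the workflow stays centralized in one component.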
The sample exam questions given at the end of each chapter in this tutorial provide insight into what you can expect in the actual exam. I hope this tutorial has been beneficial in your preparation for the SCWCD exam, and I wish you the best of luck on your exam.

Resources

Learn

• Take "Java certification success, Part 1: SCJP" by Pradeep Chopra (developerWorks, November 2003).
• Here you can find the DTD for the Servlet 2.3 deployment descriptor.
• You can also refer to the JSP documentation.
• Here are some useful JSP tutorials from Sun:
  • JSP Tutorial
  • JSP Short Course
  • JSP Java Beans Tutorial
• Learn how custom tags are developed and used. Check out the following links:
  • Tag Libraries Tutorial
  • Jakarta Taglibs Tutorial
• These SCWCD certification guides will help you focus on the exam topics:
  • SCWCD Certification Study Kit (Manning Publications, July 2002) by Hanumant Deshmukh and Jignesh Malavia
  • Professional SCWCD Certification (Wrox Press, November 2002) by Daniel Jepp and Sam Dalton
• QandA/survey.html (starting with the course root).
• .cshrc file: run source ~/.cshrc to get those changes accepted in the same session.
• To compile a .java file to a .class file, you use javac, the Java compiler.
• To run a .class file, you use java, the Java interpreter.
• Each program has a main routine, which is the routine that the interpreter executes.

public class HelloWorld {
  public static void main(String[] args) {
    System.out.println("Hello World!");
  } // main
} // HelloWorld

• The class goes in the file HelloWorld.java.
• javac HelloWorld.java
• java HelloWorld

• public says that our class is publicly accessible. Later, we'll see why and how you might want to have private classes.
• class tells the Java compiler that we're defining a class.
• HelloWorld names the class. In general, the name of your class should correspond to the name of the file that defines the class (and you should only define one class per file).
• public says that our main function is publicly accessible.
• static says that the function belongs to the class as a whole, and need not be associated with a particular object of the class.
• void says that the function has no return value.
• main names the function. Every main function takes the same set of parameters, even if it may not use them.
• String[] indicates that the parameter is an array/vector of strings.
• args names that parameter.
• The body calls the println method of the out object provided by the System class, using the string Hello World! as a parameter.

(define (main args)
  (write "Hello World!"))

Disclaimer: Often, these pages were created "on the fly" with little, if any, proofreading. Any or all of the information on the pages may be incorrect. Please contact me if you notice errors.

Source text last modified Tue Oct 7 16:50:28 1997.
This page generated on Wed Nov 5 12:38:52 1997 by SiteWeaver.
Contact our webmaster at rebelsky@math.grin.edu
One in five Nature readers — mostly scientists — say they up their mental performance with drugs such as Ritalin, Provigil, and Inderal. […] when asked how they felt about professional thinkers using drugs to enhance their cognitive performance, nearly 80% said it should be allowed.

While this report covers a different part of science, these drugs could equally be used by software developers. What is your opinion on this somewhat new development?

Ask OSNews: Use of Brain-Boosting Drugs in Software Industry?

About The Author
Eugenia Loli
Ex-programmer, ex-editor in chief at OSNews.com, now a visual artist/filmmaker. Follow me on Twitter @EugeniaLoli

96 Comments

2008-04-10 10:44 pm czubin
I was given Ritalin when I was a child… because I am what I am.. namely, hyperactive. I don't take them anymore though. But that's mostly because I love doing computer science. I don't need anything to keep me going for hours on end. But I can give you first-hand experience: they simply allow you to start on your work instead of running around screaming to do something more fun. And if it can help any hyperactive person in need for a short period, I'm all for it. Still, anyone without focus problems should best stay away from them.

2008-04-10 11:50 pm Oliver
> I was given Ritalin when I was a child… because I am what I am.. namely, hyperactive.

There are even countries which treat such things with electric shock therapy. So where is the point? It's to _some_ degree nonsense. Of course there are certain hallucinogens as a _last_ resort, like the mentioned methylphenidate. But just for fun, to try something new or to extend some abilities? It's crazy, and people tolerating or promoting it are crazy too. Drugs _are_ drugs, with some exceptions for really ill people.

2008-04-11 12:51 am dgarcia42
> Of course there are certain hallucinogens as a _last_ resort, like the mentioned methylphenidate

Just to point out, Methylphenidate is a Central Nervous System stimulant, not a hallucinogen.
(Just because a small segment of people have hallucinations as a side effect doesn't make it a hallucinogen, as that's not its primary action. I got hallucinations when I shot-gunned 12 redbulls one night – going to peg that as a hallucinogen?)

Edited 2008-04-11 00:53 UTC

2008-04-11 5:30 am kaiwai
I remember when I was at school, kids who were hyperactive were just seen as kids 'with a lot of spirit' or 'very excitable' – there was never any need to drug them up to the eyeballs with medicine. Good old-fashioned running around at the park, playing rugby, bull rush and other pastimes worked all that excess energy out of their system; and we had just as many 'additives' and 'sugars' in our foods. It's been shown time and time again that these (sugar and additives) have NO EFFECT on children. It is all about discipline and how the parents act. Heck, I would call myself an incredibly crap parent, but apparently when I look after kids, they are well behaved after being with me; what do I do? I treat them like little adults. I don't talk to kids in childish language, I give them responsibility, and make them think about what they've done wrong rather than simply giving the typical politically correct 'time out'.

> I don't take them anymore though. But that's mostly because I love doing computer science. I don't need anything to keep me going for hours on end

Yeap, it's all about finding what YOU are passionate about. Who gives a shit about what everyone else thinks; do what you are passionate about. You'll find that if you're passionate about something, you'll be happy to put in the long hours without any need to take any drugs. Then again, it goes back to this *STUPID* idea created by the politically correct that "you can be anything you want" – WRONG! If you don't have the natural ability or inclination towards something, you will suck no matter how hard you work at it. It is delusional thinking that somehow a person with an IQ of 100 can ever be a rocket scientist.
People have talents; people have a natural ability in a particular area. Don't create this hocus-pocus bullcrap that somehow you can turn someone from an average employee into a manager. I've seen it happen: people who have been in organisations for 10 years, with no inclination towards management, being pushed into the position of being a manager! It's stupidity in action! For me, I *KNOW* I suck at maths, and no matter how many hours I spent trying to study it, it never gelled. Now I do know what I am good at – abstract thinking, which helps in the areas which I am interested in, Religious Studies and Philosophy. To somehow *THINK* that drug taking is the solution for failing to understand that you don't have talents in that certain area – it's pathetic and it lacks common sense. Know your limitations, know what you're capable of doing, and face that reality – stop trying to be a rocket scientist when your abilities are geared towards being a landscaper.

Edited 2008-04-11 05:34 UTC

2008-04-11 8:52 am orfanum
Bury that Axe…. It appeared that at school I had 'no natural talent' for languages, and was advised against going to university because I wasn't really up to it. Guess what: things change, experience opens up new paths, and I now hold a PhD in comparative German-British history from a Russell Group university (which kind of requires that you know die deutsche Sprache to a fairly intimate degree). I got First Class marks from the get-go, from the BA onwards. Would you rather live in a caste system? That's pretty good at making sure people know what they can or cannot become, at dictating what apparent 'natural talents' are present in any individual. I am no Superman, but I sure as hell won't listen to homilies from other people telling me how good I am at something. I'll find out for myself, thanks all the same. I may just surprise myself (again), that way.

Edited – PS Admins, the last two posts I have submitted have turned up immediately with two votes against them.
It's nice to be appreciated, but unless this is a new weighting system I have missed information on, you may want to check out the site software, thanks.

Edited 2008-04-11 08:55 UTC

2008-04-11 9:47 am dagw
> Then again, it goes back to this *STUPID* idea created by the politically correct that "you can be anything you want" – WRONG! If you don't have the natural ability or inclination towards something, you will suck no matter how hard you work at it.

I disagree and have seen several examples to the contrary. For example, one person I knew showed no real aptitude for math and failed first-year mathematical analysis god knows how many times at university. But for whatever reason he wanted to be a mathematician, so he refused to give up and kept working on it. And while he'll probably never be a great mathematician, he at least got good enough to get his Masters and be offered a PhD position at a fairly prestigious university. Don't underestimate the power of sheer bloodymindedness. Passion and interest are far more important than natural aptitude.

As a counterpoint to the above story, I met several people at university who figured they'd go for a math degree, not because they were passionate about it, but because they had a natural aptitude for math, got straight A's through high school without having to study, and figured math at university would be easy for them. Long story short, most of them failed most of their exams and had dropped out by the end of the first or second year. In fact, one of those people was me. I found math and physics easy throughout high school, yet dropped out of physics at university with horrible grades because I found I had no passion or interest for the subject. So I gave up on academia and spent a few years doing low-level sysadmin and programming jobs instead.
Later, through a series of events, I found my passion for math rekindled and headed back to university, where this time around I managed to get my Masters in mathematics without any significant problem, because this time I really wanted it and enjoyed the subject.

> Now I do know what I am good at – abstract thinking

Funny you should say that, because if I was to make a list of things you had to be good at to be good at math, abstract thinking would easily top the list. So as such I don't think it's that you suck at math per se; it's probably more likely a combination of bad teaching and a lack of passion and interest from your side. Which is perfectly cool, because if everybody was good at math I'd never get a job.

2008-04-11 2:27 pm bryanv
Bravo! You're spot on. I was diagnosed with a "mild to moderate case of ADD" as a kid. You should have seen the look on the "Doctor's" face when I looked him straight in the eye as a 6th grader and said, "Bullshit. You're just jealous that my brain works faster than yours. Now you're going to drug me so I'll fit into your little box labeled 'normal'. I don't want to be normal. I'm better than normal, my brain is faster than yours, and you'd rather drug me than admit that you're slow."

I was put on Cylert, which gave me the most debilitating headaches I've ever had, made me a virtual zombie, and left me feeling 'slow' _all_the_time_. The strangest thing was that my critical thinking and reasoning skills went to hell in a handbasket, and fast. I was in the advanced math classes, and started failing tests. Not just one or two, -all- of them. Even though I was (for once in my life) doing my homework. I started experimenting with the medicine, and was able to draw a correlation between taking the medicine for a few days, getting headaches, and failing tests.
Funny thing: if I didn't take the medicine for a few days I'd quit doing my homework, but I'd get high B's and A's on all the tests… I found Newsweek magazine articles that listed the most reported side-effects of that medicine as migraine headaches and reduced reasoning abilities. I left it out on the counter for my parents to see. The next day, I flushed the pills in a grand act of defiance right in front of them. I never took another 'pill' for "ADD". That lovely mythical bullshit explanation of a 'disorder' that doesn't exist, and behavior or thought-patterns that are in many ways beneficial. I do self-medicate with caffeine to some extent. Okay, I'm a caffeine addict. I force myself through withdrawal every few months so I can start back on smaller doses of coffee… :-p The caffeine helps me focus on uninteresting material with the same level of rigor that I can focus on things I find highly interesting. And let me tell you, when I find something interesting, just -try- and keep me from thinking about it. Productivity-wise, caffeine is a god-send for those mundane, boring tasks.

2008-04-11 2:46 pm bm3719.

2008-04-11 4:11 pm kaiwai
Do you even know what Religious Studies and Philosophy are about – in terms of attending a public university – University of Canterbury?

2008-04-10 10:54 pm RIchard James13
Are you talking about drugs in general usage for medical conditions, like anaesthetic for operations? Or are you talking about using drugs to alter your abilities as stated in the article? There is a big difference between the two. I certainly support the first; many people need drugs to survive: insulin, penicillin, etc. I'm not so sure I support the second. I think it is dangerous to meddle with things we don't fully understand. I don't think it gives people a greater advantage; some people are born differently and thus have different advantages in different areas. These drugs don't give a better advantage than having been born with better genes.
- 2008-04-11 2:47 am 6c1452
Well, yes. Our present average lifestyle is unnatural, as it happens. And, interestingly enough, the unnatural parts tend not to be very healthy. Coffee is bad for you. Soda is bad for you. Smoking is bad for you (smoking has no positive side effects whatsoever). Not sleeping during the night is bad for you. Looking at a CRT (or LCD) too much is bad for you. Not getting enough sleep is bad for you. Obesity is bad for you. You're going to have a long, uphill struggle if you want to argue that boosters have no detrimental side effects. A reasonable argument might be that an acceptable product or behavior is one the utility of which exceeds its health detriments.

EDIT: A "natural" lifestyle would be hunter-gatherer, and nobody is arguing for going back to that. The health benefits of a few of our unnatural advances (agriculture, medicine, germ theory) easily outweigh all the detrimental practices combined, which is why a smoking couch potato still has a life expectancy about twice that of any population on earth until quite recently.

Edited 2008-04-11 03:00 UTC

- 2008-04-11 11:41 am nevali
> I think you are missing the point.. actually I know so. We are not talking about drugs as in heroin but drugs as in medicine. You can get Ritalin in your local pharmacy; it is used to calm kids…

Heroin is, last I checked, otherwise known as diamorphine—rather widely used in the medical world. The only difference between drugs and medicine is the intent.

To the people saying "drugs are bad!!!11": do you drink alcohol? coke? coffee? eat food containing MSG? what about foods containing tryptophan? See, the lines are very very blurry. One person's "natural" is another's "unrefined". When the active ingredient and the effect is the same, it matters very little. And, for the record, (in response to the earlier comments) smoking has positive effects; it just lacks medical benefits.
- - 2008-04-11 7:32 pm StephenBeDoper
"Doctor Leech: I think you're in luck though. An extraordinary new cure has just been developed for exactly this kind of sordid problem.
Edmund: It wouldn't have anything to do with leeches, would it?
Doctor Leech: I had no idea you were a medical man."

- We seem to always need more.. more money, nicer cars, more attractive partners, bigger houses, more elaborate vacations, slimmer bodies, bigger muscles, better facial features, better sex. Drugs to make us perform better mentally seem to be a logical next step. Or, we could just relax a little…

2008-04-10 10:55 pm WorknMan
> Or, we could just relax a little…

Meanwhile, your competition in the job market (probably a hungry immigrant) will keep burning the midnight oil, studying and learning while you're off relaxing. How long do you plan to stay relevant anyway? Note: I'm not advocating the use of any kind of drugs here, just saying don't sleep for too long.

2008-04-11 12:24 am
2008-04-11 12:51 am

- 2008-04-12 6:16 pm renhoek
> Yes, and LSD expands one's horizons in all trains of thought. It's just not something you sit at a computer and program with, because the slightest distraction will have you staring into the pixelation of a font face for hours.

and that's why steve jobs got the font faces perfect and bill gates f–ked it up.

2008-04-11 12:08 pm Thomas2005
LSD is great for brainstorming; not so great for pumping out code. I am not saying a person can't code while tripping too hard in their own home, but I do not think it would go so well at the office.

- 2008-04-11 7:36 pm sakeniwefu
There is no proven relation between LSD and other hallucinogenic drugs and any disease, mental or otherwise. Not even addiction. Shamans all over the world have used hallucinogenic substances of different origins since the beginning of time, and the fact that they were the shamans means that hallucinations had no negative effects and probably allowed them to gain an edge over the unenhanced townsfolk.
Who knows how many early inventions were more the result of drug-induced lucidity than anything else. But even if their only benefit is the fun, who are you or the government to ban any adult from using them? If the adult does something stupid, let the law sort him out.

2008-04-11 11:39 pm dimosd
> There is no proven relation between LSD and other hallucinogenic drugs and any disease, mental or otherwise. Not even addiction.

1) LSD is not addictive.
2) Ecstasy is, but that's because of its amphetamine properties.
3) I have spent some time talking with 50+ ex-hippies as well as 30+ ex-ravers. I know my facts because I have seen them first hand.

I can honestly tell you that I was scared from what I've seen. Once is not safe enough. My experience happens to agree with science. Common sense is not *necessarily* wrong. I'm sorry if this comes out as a dispute because I don't mean it this way. Please do yourself a favor and stay away from the mind-altering s**t. I was once 18 myself, you know, and thought the same way you do. Most likely you won't just take my word, and that's the tragedy of life.

Edited 2008-04-11 23:51 UTC

This really isn't something new — I've been aware of and tracked, off and on, nootropics (see for a summary) for over two decades. Unfortunately, the "media" has now discovered them, and groups will crawl out of the woodwork to oppose them. Personally, I think that in general it's a wonderful idea and encourage legally unfettered research into them. As with anything, there are risks as well as benefits; these risks certainly need to be considered. However, the risk/benefit equation differs from individual to individual, and thus no general policies should be created about their use. Much of the modern world has become extremely risk-averse, to the detriment of progress and the loss of benefits.
I expect that the usual cast of idiots in government, religious groups, and the mainstream media will do everything they can to eliminate progress and persecute those who would seek it.

2008-04-10 11:27 pm Phloptical
Ok, so then where is the end game? You're going to trust a drug some company is going to give you because they want the software title out 6 weeks quicker? And, news flash… corporations don't give a hairy rat's behind about their employees; they're in it to make you work as hard as possible for as little pay as possible. And if that means jacking their employees up on the latest dope du jour, with little regard as to well-being and safety, then oh well. "There's more slaves, I mean 'employees', where he/she came from." God help Apple employees, if this drug thing goes through. Look, I'm not against a little of the magic lettuce every now and then… but not to increase productivity so my company's stock goes up 1 point. Drugs are supposed to be for recreation and relaxation; when is the world going to figure this out?

- 2008-04-11 7:47 pm StephenBeDoper
> God help Apple employees, if this drug thing goes through.

You know what they say: RDF is a gateway drug.

- - 2008-04-11 8:01 am dagw
> Software developers… isn't their "drug" usually Coca-Cola or energy drinks?

Back in the heady dot-com days of insane salaries and even more insane deadlines, I knew of several developers who did amphetamine to get through those all-night coding sessions. Apparently a small amount, like a quarter gram, was just the thing to get you coding away for an extra 8 hours after your body told you it had to go to sleep. So no, caffeine is far from the only software developer drug. I'll stick to coffee and the occasional drinking binge [1].

[1] I tacke breigh-boostering druggs and tehy rilly seam too helpe me alots!

2008-04-11 12:48 am looncraz
Me too!
I already max out at 5.66 hours of sleep, and have only 6 hours of energy to burn ( I'm in a daze the rest of the time, trying to figure out where I am and what I'm doing ). Any drug that can break these limits is useful! I find caffeine extremely mild, cocaine made me too 'up', and mary jane causes even more dazing. LSD is too uncomfortable ( though I can code and code and code until I get stuck in my reflection which is present in the turned-off monitor ). I have, instead, created a regimen. I have done this now for a few weeks, and I have greatly improved energy levels and stamina. I also no longer need to take any sleep aids!! Also, please note that I drink massive amounts of fluids, and the ibuprofen and Tylenol are in the mix to alleviate pain due to severe back problems. Normally I drink Orange Juice or some other juice, or if I can't drink that I will drink Mountain Dew / Coke / Vault / Root Beer ( when I need to watch caffeine intake levels ). I rarely exceed 32 oz of soda in a day, and normally it is only around 20 oz. But, back to the article: I openly welcome research and unbiased studies into performance-enhancing chemicals/drugs. Of course, we should focus on natural blends; my regimen allows me to feel ginkgo and taurine now, so it is working for me. Not to mention that I now have energy after working for the last 12 hours :-). Yeah!!

–The loon

- 2008-04-11 4:05 pm looncraz
> Aspirin, Tylenol, ibuprofen, caffeine, nicotine and marijuana. I'd be really interested to know what your blood pressure and prothrombin time are. Also, it wouldn't hurt to run that list by a doctor.

The Aspirin / Tylenol / Ibuprofen / Caffeine all come from the same source: a migraine preventative schedule. I failed to mention that, I guess. My blood pressure is 105/68, pulse is 62. It is only not this when I am in a workout, where the blood pressure reaches 120/75 and the pulse can get to the 90s.
I have no clotting issues, thanks to the Aspirin, which alleviates what little I get from smoking ( I'm a light smoker, not even making 1 pack/day, normally around half or less ( I'm too busy to smoke that much ) ). Red wine probably helps the whole equation as well. Having added colloidal silver to the mix, I'm interested in what my next blood-work will look like. Ahh, too bad I don't have insurance; paying for everything kinda sucks, but it's still been cheaper than having insurance ( the best I found was about $400/month because of pre-existing conditions I need covered to make insurance worthwhile ( i.e. multiple prolapsed discs in the back ( count: 6-10, not sure if it has changed ) with degeneration, sciatica, and severe migraine with pain lasting up to 3 or 4 days, exhaustion following it — I'm out for up to a week ) ). My regimen allows me to avoid my headaches and much of the back-pain while directly tackling the problems causing the pain. So far I've actually managed to heal three of my discs without surgery ( though I did the epidural thing (twice), which I won't do again ), and the pain related to my sciatica has improved greatly ( not to mention the improvement in neurological effects ). The trick is in the amount and type of fluids I drink. If I lapse I could get into trouble if I were to follow that regimen exactly. Otherwise I'm doing great, and have had much improvement.

–The loon

Oh, BTW, the peace pipe has been paramount in healing my nerves. But I hate getting too high, so I rarely smoke as much as I should, normally maxing out at 4 tokes in an evening.

Edited 2008-04-11 16:07 UTC

2008-04-11 9:42 am timefortea
:))

2008-04-11 12:37 pm krc_
Check out the following before your next dose of colloidal silver:…

2008-04-11 3:44 pm looncraz
I've read all of that, and I've read the studies.
Argyria is produced by protein-bonded silver, and requires very high doses ( in all reported cases, the users were DRINKING 16 oz of 450 PPM solutions ( which aren't colloids ) of silver nano-particles suspended in filtered water ). My intake is 5 ml of 20 ppm electrically suspended silver, a real colloid ( MesoSilver ), every other day or so. I also don't drink any quantity of it; I use it as a mouthwash / gargle and allow some sub-lingual absorption. I became interested in the stuff after getting strep throat and wanting to avoid antibiotics, because they tend to make me sicker rather than better. I was gargling with salt water, and I got this stuff the second day of symptoms. I gargled once with it, and ( to my surprise, actually ) my throat immediately changed back to its normal color, though I could feel some infection farther down. So I swallowed 15 ml ( in small increments ) and avoided drinking or eating anything for 15 minutes, then I drank some orange juice and a glass of water. Gone! Honest, completely gone! Not to mention how it repaired my gums! I got it for free, but I bought the second bottle. All of my acne cleared up after 3 days ( day 1: 5ml / day 2: 2.5ml ) and my irritable bowel is now in good working order ( after some weeks at low dose ( 1ml daily, 2.5 ml every other day, 5ml on weekends ) to prevent any organ overload due to flushing out "wall-less" bacteria/viruses ). The silver is the only thing I added to get these effects, and my acne comes back after a week of not taking it, but I did not get sick or anything, so the silver isn't weakening my defenses. Colloidal silver still must be used with care, body-weight adjusted, and timed so as not to be taken with certain proteins already present in the blood ( i.e., not taken 1-3 hrs before ) to help prevent the possibility of argyria ( if the doses themselves don't do it ).
Oh, and most of those people with argyria ( if not all ) actually say they developed it only after using it externally, but I take no chances either way.

–The loon

2008-04-11 3:42 pm polaris20
Either this is a troll post, or you're trying to see what major organ gives out first. As for the article, I guess I'm glad I'm in infrastructure and virtualization instead of development, because I would never do lifespan-shortening crap like that for work. My life is family and friends, not my career.

2008-04-11 3:50 pm looncraz
The intake of everything is timed, and I guess I should've mentioned that I don't follow the regimen to the letter on most days. Vit E & C are recommended, as well as the Men's daily multi-Vit. Graviola is simply powdered leaves ( edible, no less ). Most things are in synergy and are balanced with my diet. And I forgot to make it obvious that the Ibuprofen/Aspirin/Tylenol/Caffeine all comes from one pill used for migraine prevention. Those intakes cause vitamin E, A, D, and Calcium depletion ( and potassium, but I eat enough in my diet ), though at levels not high enough to be of any concern. My regimen is the result of careful study and planning ( and testing, of course ).

–The loon

Personally I wouldn't do it, but I don't think the government should be allowed to ban any substance as long as the risks are known, and no one should be barred from ingesting anything they want. The drug war is futile and people are going to do what they want to do anyway. This has been proven time and time again.

2008-04-11 1:13 am sbergman27
Hasn't always been my experience.

2008-04-11 3:17 am Soulbender
You need to do it more often than every 7 years. But of course it's not always the case; I'd still rather have sex than popping a pill. Even if it doesn't revitalize me, it's still a lot more fun.

2008-04-11 3:48 am sbergman27
> You need to do it more often than every 7 years

Perhaps you're right, Doctor.
2008-04-11 4:32 am DrillSgt
“Perhaps you’re right, Doctor.”
LOL.. thanks for sharing that one. I am not sure of the drug use at all. History repeats itself, no? Some of the greatest minds of our times were addicted to drugs. One argument would be it did help them to be creative, I guess. Sigmund Freud was addicted to cocaine, as was Thomas Jefferson and Ben Franklin. LSD was originally used by Dr. Timothy Leary to aid in freeing and opening one’s mind. Marijuana is legally used in some places as a pain reliever and relaxant. For none of the above-mentioned people can it be said that any drugs helped. The fact of the matter is if alcohol, caffeine, and tobacco were just being discovered, they would be outlawed as well. Regardless of the above, any mind-altering substance, including Ritalin, is dangerous, like it or not. In actuality it holds one back by making one “docile”, so they stick with the crowd rather than learn even more. I forgot to mention, before there was Ritalin, there was Albert Einstein, who had ADHD and flunked mathematics…. Just something to ponder.

2008-04-11 1:21 am looncraz
Forgot about that aspect of the daily regimen 🙂 Nary a day will pass (well… half a day) before an endorphin release is triggered. 😉 But that is what the Yohimbe / Epimedium / Gotu Kola complex is for. I take mild amounts for blood flow stimulus to increase oxygenation ( I have become a smoker somehow.. yuck, how in hades?!? ). And I use Graviola primarily as a cancer preventative and mild tranquilizer ( to lessen my energy-wasting hyper-active moments ). Timing is everything! And as such, I more or less time endorphin release. –The loon

2008-04-11 3:33 am sbergman27
Stop smoking! Get at least 9 servings of a wide variety of fruits and vegetables per day. (Shoot for all the colors of the rainbow.) Bonus points for spinach, berries, and cruciferous vegetables. And perhaps a cup of cocoa. (Preferably cocoa not processed by alkali. Spares the polyphenols.) Eat fatty fish regularly.
But choose fish low in mercury. Wild Alaskan salmon is easily available fresh, frozen, canned, or in vacuum sealed pouches, and has low levels of mercury, and high levels of EPA and DHA Omega-3 fats. That’s about the most evidence-based regimen you’re going to find. Edit: BTW, if you insist upon smoking, take care to avoid high levels of beta carotene, as found in supplements. There is evidence to indicate that it substantially *increases* the incidence of lung cancer in smokers. It is not known to have such an effect on nonsmokers, however. Edited 2008-04-11 03:38 UTC

People already use legal performance enhancing drugs, for example, caffeine. My view is that it’s another risk/reward ratio that people are going to have to work out, but should by no means be illegal. As far as companies forcing you to take them… I don’t think a company should be allowed to fire you for any non-directly work related reason. I don’t think, for example, being female on a building site should be a sackable offence. However, they may find that another “doped up” candidate performs better than you and choose them over you. Tough luck, I guess. This is how human society works. Just as, generally, men might get more jobs on building sites because they’re stronger and it’s a physically intense job. Driving a car, for example, is likely far more risky than most of these drugs are, yet many jobs list driving as a requirement. Some people might argue that WiFi fries brains and refuse to work in any office with WiFi. Their choice, but it’s going to lock them out of a fair few office jobs. Society isn’t fair, unfortunately. Currently people are sometimes blessed with better brains than others and get better jobs. In the future, some people will either need to accept that there is an increased risk for increased pay, or go and work in a more menial job that doesn’t require brain enhancing drugs. Personally, I don’t think a drug is particularly special.
It’s just a way of inducing a net-positive psychological effect. I support regulation of addictive drugs, simply because they can tear apart families, screw up childhoods and so on, and sometimes people can make bad choices. Drugs – Nootropic or otherwise – are technology. Their use should be up to the individual. Individuals ought to be responsible for the consequences as well. No bullshit lawsuits, in other words, if some of these turn out to be problematic in the long run. I don’t like the idea of people making this decision for me. A person’s performance and behavior can and ought to be judged and sanctioned appropriately, but those who are not on caffeine, nicotine, antidepressants or otherwise are too often junkies of a far more insidious thing – self-righteousness. I personally don’t think most of these drugs have been studied sufficiently for me to experiment with them, but I certainly do take calculated risks with moderate caffeine consumption from time to time..

2008-04-11 5:51 am Doc Pain
“To clear any myths and misconceptions, drugs are medicine with side effects. And yes, there is real medicine in the world, you just need to know where to look”
I think you can see this differently. In many cases, the substances referred to as “drug” and / or “medicine” are the same. It’s just about the intention – who takes them, and why. I may speak from very personal experience. Because of ICD-10 G47.1, .2 and .4 I had to take Vigil (aka Provigil, Modafinil). This was medicine, because it changed my “abnormal” behaviour into a “normal” one. If a completely healthy man were to take Vigil to extend his working periods, he would change his “normal” behaviour into something “abnormal” again – that would mean drug abuse. Especially in cases of “trained” drug users, you hear them talking about “using the drug” or “controlling the drug”, but usually this contradicts the concept that drugs force you to use them; they take away your control.
Withdrawal symptoms are what your body uses to show you this. Most medicines and drugs have something in common: side effects. So I’d leave the authority to prescribe medicine to a doctor with a lot of experience and healthy common sense. That’s what drug users usually seem to lack. Refer to “THX 1138” for a nice utopia. 🙂 Therefore: I know this article is about drugs … aka crap. But if there were medicine that does boosting, then I’m all for it. Medicine differs from drugs by the argumentation I’ve given above. First, you use drugs to feel better, but later on, you are forced to take them to feel nearly normal. And the side effects… okay, I think we’re all educated enough to know about the danger of drugs, hmkay? 🙂 Coming back to the topic: medicine may help persons to be active in some sectors of the IT industry. Drugs do not automatically enable you to be successful in your job.

2008-04-11 8:31 pm StephenBeDoper
“I am 100% for medicine, and not drugs, unless it is a life threatening situation.”
Problem is, that’s a false dichotomy.

- 2008-04-11 11:15 pm 6c1452
drug 1. Pharmacology. A chemical substance used in the treatment, cure, prevention, or diagnosis of disease or used to otherwise enhance physical or mental well-being.
medicine 1. any substance or substances used in treating disease or illness.
This took 30 seconds with dictionary.com
EDIT: Since you’re obviously dying to be asked, why don’t you tell us about the magical herbal remedy for which you want to change the definition of medicine. Edited 2008-04-11 23:20 UTC

- - 2008-04-12 6:54 am StephenBeDoper
You’re joking, right? This is the only definition / distinction you’ve offered: “drugs are medicine with side effects.” The really obvious question that statement begs is: what existing medicines *don’t* have any side-effects? Besides placebos, that is.

2008-04-11 9:54 pm abraxas
Please elaborate, because that sounds like the biggest bunch of crap I have ever heard. All drugs have some side effects.
There is no difference between “medicine” and “drugs” other than their legality. Take OxyContin for example; it might as well be heroin in a prescription bottle. There are other “medicines” that have been derived from illegal plants like the coca plant and marijuana. The only difference is that it has been packaged and sold to you at an outrageous price compared to the real deal.

- 2008-04-12 7:19 am StephenBeDoper
I really doubt that evasive posturing will win anyone over to your argument.

- 2008-04-12 5:09 pm sbergman27
Seems to me you guys are focusing on the wrong facets.
1. Does the drug have clinically proven benefits?
2. Are those benefits proven for the target population, or some other population?
3. What short term side effects, if any, does it have?
4. What is known of its long term effects?
4a. Does it remain effective?
4b. How confident can we be that we really know the long term effects, both good and bad?
5. How much risk is the subject willing to take to gain the possible benefits?
It’s really all the same questions that should be weighed in deciding, for example, whether to start treatment with any medication. Do the possible benefits outweigh the possible risks? The problem is that the unknowns and risks are usually of a magnitude that, in the absence of a clear medical problem or deficiency, they outweigh the possible benefits. There is no need to split hairs over charged terms like “medicine” vs “drug”. An objective cost/benefit analysis, which takes into consideration both short and long term factors, is much more to the point. The key term being “objective”, which may be a difficult state for the subject to attain; we tend to be biased, and we tend to want things now. This is where the opinion of a third party, trained in the medical field, and familiar with the scientific literature and practical application (from a medical standpoint), is invaluable.
In fact, in the final analysis, I doubt that what OSNews readers decide about the issue in general has any relevance or meaning at all. Each case is unique, and should be treated as such. The decision being made after careful consideration by the relevant parties. Personally, while I make a great effort each day to maintain good diet, exercise, and quality sleep time, brain health being a primary motivation, I don’t feel particularly tempted to seek out pharmaceutical aid at this time.

2008-04-13 12:10 am Shannara
Here are two that are common for babies’ teething and gas:
1. Hyland’s Homeopathic Teething Gel (No anesthetic, thus no chance of baby death.)
2. “Safeway” Infants’ Simethicone Drops (Gas Relief).
I just realized that people who are not parents may not have used any common medicine available. My apologies for assuming you are a parent.

I will be very shortsighted here – not because I think I am, but because most will think I will be. Generally I am against drugs/medications/whatever if not taken for some real medical reason. Wanting to “enhance their cognitive performance” is not a medical reason, and I don’t like the idea. Moreover, I am also something you could call a scientist, and I wouldn’t like to see my co-workers take such drugs, and I wouldn’t like to work with such fellas. You see, if you can’t have a good idea without the drugs, there’s no way on earth you’ll have it with them. Of course, if you happen to have that idea while being medicated, it’s easy to say it was because of the drugs. Which I find stupid. If one really can’t focus on a specific task at hand for a reasonable time, that might just be a case where medical reasons could explain the necessity for such drugs. But that’s not average Joe’s task to decide.
BTW, I usually find that I have pretty good ideas while having beer [well, at least I find them good while drinking those beers )) ] so I should just go to work tomorrow and say I’ll be taking beer on a regular basis from now on because I feel it enhances my cognitive performance.

“…whether children under age 16 should be allowed […] 86% of respondents said they should not. But a third of respondents said they would feel pressure to give such drugs to their children if other children at school were taking them”
All I have to say to that is: fcuking morons. Idiots shouldn’t have children. Are they sure those things don’t cause brain damage?

2008-04-11 9:59 am timefortea
So you are saying that you drink beer for a real medical reason?

2008-04-12 6:01 pm
Throw a pill at everything, regardless of whether it’s actually a problem or not. I would question this just because of the drugs listed… particularly Inderal, which is a SEDATIVE and has a host of health risks associated with it. Seriously, a beta-blocker is the LAST thing you should be taking if you need to CONCENTRATE. I THINK that what was meant is Adderall, which is an amphetamine and a close relative of Ritalin, which is in itself only a half-step away from Methamphetamine. That would make the list be three forms of bennies – oh yeah, good thinking there. (Though if people are indeed making that mistake, then it’s even funnier.) The irritability hours after taking them probably makes home-life miserable, nothing like the shakes and jitters if you mix it with the other commonplace stimulants coders are notorious for, like caffeine and sugar. I’m just SO certain the insomnia associated with these drugs makes a person so productive a week after their abuse… and let’s not forget the increased risk of becoming diabetic, which again goes SO well with the caffeine and junk food coder diet.
Frankly, if you are going to sit around taking amphetamines, you might as well get the cheap over-the-counter DHEA or ephedra sold as diet supplements – since they are damned near the same thing. The question becomes: are these people actually more productive, or do they, in their altered mind-state, just THINK they are more productive… at which point it’s kind of like asking a drunk if they think they are too impaired to drive. Though I’m willing to bet all of these jokers popping pills would probably be WAY more productive if they took the time to slow down, eat right, and get a decent night’s sleep instead of staying up until 3AM playing WoW and watching Adult Swim when they need to be at work at 8AM the next day. Edited 2008-04-11 05:54 UTC

The world is full of very intelligent people. The enormous body of art and science that humans have amassed is a constant reminder of this. Don’t let the fact that there are brilliant people everywhere make you feel stupid. Those who market “smart drugs” are exploiting this insecurity just as surely as cosmetics companies exploit the insecurity of women who only see the most beautiful of the beautiful in media. It is nothing but a confidence scam. There is only one way to make yourself smarter – sustained effort at mental activity that challenges you. If you keep working on difficult skills you will eventually get better at them. There is absolutely no substitute for effort. If it were easy nobody would think you were smart for being good at it. Age and experience will give you the confidence and insight to apply these smarts you’ve earned in ever more creative ways. Soon, others will be so impressed with your abilities that they will wish they could pop a pill to be as smart as you. Edited 2008-04-11 08:13 UTC

“Mental enhancing” substances destroyed the lives/fried the brains of at least 2 good persons I know. It’s time to bust the myth of a safe wonder magic potion that makes everything better. There’s no such thing.
There are only cruel, naive interventions to normal brain neurochemistry. Sorcerer’s Apprentice syndrome. Supporting optimal brain function through nutrients, exercise (mental and physical), fun, and hanging out with interesting people pays off a lot more, IMHO, but it’s not as easy. Even coderz’ drug of choice, caffeine (hell, I love it), gives you something (think fast, concentrate better) and takes back something in return (you ended up writing 100000 lines of… Java? 🙂

All real-life examples I know rather enjoy brain-busting drugs like beer while programming. Especially the *nix crowd seems to have a special relationship with beer, even the girls! I’m not judging anyone; it is just my experience from a couple of places I worked in the media and software industry. Also, the media guys drink much more and also use plenty of other soft drugs when available. Right now I am revising PHP code for a large web app that has been WUI (written under the influence) by a friend. I can’t say I like everything I see there. There is obviously some point where the quality of code suddenly drops sharply with higher levels of intoxication. Weirdest line of code:
if ($somevar) {} else {foobarfunction($somevar0);}
Obviously the braincells that knew how to do simple if-statements were already passing out on the sofa. Pretty embarrassing if this drunk-3am-in-morning code gets into the final product, if you ask me..

2008-04-11 12:36 pm sbergman27
“Right now I am revising PHP-code for a large web-app that has been WUI (written under the influence) by a friend.”
Obviously, the choice of PHP was MUI. Which Python framework are you rewriting it in? Friends don’t let friends use PHP. Edited 2008-04-11 12:37 UTC

- 2008-04-11 9:37 pm sakeniwefu
I agree. I find low alcohol intake helps me concentrate on code by killing other thoughts or worries that get in the way. One/two beers is best. When stuck, it is magic.

I work as a software developer.
After looking for an ‘edge’ (in regards to mental clarity + energy) without going the illicit/illegal route, I had my doctor prescribe Provigil, which was done because of the ‘downer’ effects of some of my blood pressure meds. Personally, I absolutely LOVE the stuff. At a normal dose, I can remain in head-down-pure-focus mode for extended periods of time. I can easily work all night with an extra dose. Of course, I don’t do it often, but it’s been a boon for me.

We could take drugs to enhance our productivity… …or, we could all just have Robert of SkyOS fame do all our software development and go take a super-long vacation to relax.

I used to do amphetamines and other uppers to get big projects slammed out or to go the long haul on a complex problem. You just have to be responsible, which I’m sure to many people is a big shock.

2008-04-11 3:44 pm dimosd
“I used to do amphetamines and other uppers to get big projects slammed out or to go the long haul on a complex problem. You just have to be responsible, which I’m sure to many people is a big shock.”
On the other hand, I doubt most people that get addicted actually expected to get addicted. It was a big shock for them as well.

I just slam a couple of Red Bulls throughout the day. It’s perfectly legal and gives me that awesome $2 crack rush before I bottom out because of the sugar. Then I get the caffeine shakes and headaches. Then I slam another. Kind of reminds me of the Helter Skelter song: “When I get to the bottom I go back to the top of the slide / Where I stop and turn and I go for a ride”

Interesting this subject comes up, because a few days ago I was wondering what really is the difference, ethically, between athletes using performance enhancing drugs and someone sucking down coffee, caffeine, or cigarettes to open their mind up prior to taking a test. I’m not referring to long term effects on the body when those substances are abused but merely the ethics behind them.
The only answers I could find were two: 1) A mental skill challenges only one person – yourself. Athletics is not only against yourself but also against other people’s skill. 2) Society has deemed it perfectly acceptable to use certain enhancers to stimulate the brain while disallowing them for athletics.

Orange juice for me… And proper medicine for headache, flu and other illnesses, only when needed… Even coffee might be too much for me – I don’t even like its taste, and it often only turns my stomach upside down and disturbs my concentration. As to real drugs, well, the definition of a drug or of a dangerous drug depends on current medical knowledge, laws and cultural habits. I remember my friend – who is a doc – used to say that if coffee were a new invention, you probably wouldn’t be able to buy it without a doctor’s prescription… Too bad that in my home country coffee is served almost everywhere. Yuck… Edited 2008-04-14 00:37 UTC

A drug is a drug; it shouldn’t be allowed, in my opinion.
https://www.osnews.com/story/19615/ask-osnews-use-of-brain-boosting-drugs-in-software-industry/
Take an n × n matrix A and a vector x of length n. Now multiply x by A, then multiply the result by A, over and over again. The sequence of vectors generated by this process will converge to an eigenvector of A. (An eigenvector is a vector whose direction is unchanged when multiplied by A. Multiplying by A may stretch or shrink the vector, but it doesn’t rotate it at all. The amount of stretching is called the corresponding eigenvalue.) The eigenvector produced by this process is the eigenvector corresponding to the largest eigenvalue of A, largest in absolute value. This assumes A has a unique eigenvector associated with its largest eigenvalue. It also assumes you’re not spectacularly unlucky in your choice of vector to start with. Assume your starting vector x has some component in the direction of v, the eigenvector corresponding to the largest eigenvalue. (The vectors that don’t have such a component lie in an n−1 dimensional subspace, which has measure zero. So if you pick a starting vector at random, with probability 1 it will have some component in the direction we’re after. That’s what I meant when I said you can’t start with a spectacularly unlucky initial choice.) Each time you multiply by A, the component in the direction of v gets stretched more than the components orthogonal to v. After enough iterations, the component in the direction of v dominates the other components. What does this have to do with Fibonacci numbers? The next number in the Fibonacci sequence is the sum of the previous two. In matrix form this says [Fn+1, Fn]T = A [Fn, Fn−1]T with A = [[1, 1], [1, 0]]. The ratio of consecutive Fibonacci numbers converges to the golden ratio φ because φ is the largest eigenvalue of the matrix above. The first two Fibonacci numbers are 1 and 1, so the Fibonacci sequence corresponds to repeatedly multiplying by the matrix above, starting with the initial vector x = [1 1]T.
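To make the iteration concrete, here is a short sketch (my addition, not from the original post) that multiplies a starting vector by the Fibonacci matrix A = [[1, 1], [1, 0]] repeatedly and watches the ratio of the components approach the golden ratio:

```python
# Power method on the Fibonacci matrix A = [[1, 1], [1, 0]].
# Multiplying (a, b) by A gives (a + b, a), so we can iterate
# without any linear algebra library.
def power_method_ratio(x, steps):
    a, b = x
    for _ in range(steps):
        a, b = a + b, a  # one multiplication by A
    return a / b  # ratio of the two vector components

phi = (1 + 5 ** 0.5) / 2  # golden ratio, about 1.6180339887
print(power_method_ratio((1, 1), 30))  # converges toward phi
print(power_method_ratio((3, 7), 30))  # different start, same limit
```

Any integer start other than (0, 0) works, matching the observation that the unlucky starting vectors form a set of measure zero.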
But you could start with any other vector and the ratio of consecutive terms would converge to the golden ratio, provided you don’t start with a vector orthogonal to [1 φ]T. Starting with any pair of integers, unless both are zero, is enough to avoid this condition, since φ is irrational. We could generalize this approach to look at other sequences defined by a recurrence relation. For example, we could look at the “Tribonacci” numbers. The Tribonacci sequence starts out 1, 1, 2, and then each successive term is the sum of the three previous terms. We can find the limiting ratio of Tribonacci numbers by finding the largest eigenvalue of the matrix [[1, 1, 1], [1, 0, 0], [0, 1, 0]]. This eigenvalue is the largest root of x³ − x² − x − 1 = 0, which is about 1.8393. As before, the starting values hardly matter. Start with any three integers, at least one of them non-zero, and define each successive term to be the sum of the previous three terms. The ratio of consecutive terms in this series will converge to 1.8393. By the way, you could compute the limiting ratio of Tribonacci numbers with the following bit of Python code:

    from scipy import matrix, linalg
    M = matrix([[1, 1, 1], [1, 0, 0], [0, 1, 0]])
    print( linalg.eig(M) )

Update: The next post generalizes this one to n-Fibonacci numbers.

2 thoughts on “Power method and Fibonacci numbers”

I left out some details to simplify the exposition above. For example, I used “convergence” loosely. To get a convergent sequence, you’d need to normalize the vectors. Otherwise the length of the vectors might go to zero or diverge to infinity.

What you have here is a homogeneous second order linear difference equation which can be solved without using any matrices and associated eigenvalues. See
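As a numerical cross-check of the limiting ratio discussed above (a sketch I’ve added, not part of the post): the characteristic-polynomial route mentioned in the comments gives the same Tribonacci constant as the eigenvalue computation, and the recurrence itself agrees:

```python
import numpy as np

# Largest root of the characteristic polynomial x^3 - x^2 - x - 1 = 0.
roots = np.roots([1, -1, -1, -1])
tribonacci_constant = max(roots, key=abs).real
print(round(tribonacci_constant, 4))  # about 1.8393

# Cross-check with the recurrence itself: the ratio of consecutive
# Tribonacci terms approaches the same constant.
a, b, c = 1, 1, 2
for _ in range(40):
    a, b, c = b, c, a + b + c
print(round(c / b, 4))
```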
https://www.johndcook.com/blog/2015/09/05/power-method-and-fibonacci-numbers/
The entity type 'MyType' cannot be added to the model because a query type with the same name already exists.

    public class MyContext : DbContext
    {
        public DbQuery<MyType> MyTypes { get; set; }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            // Exception is thrown here
            // needed b/c table is not named MyTypes
            modelBuilder.Entity<MyType>()
                .ToTable("MyType");
        }
    }

Change DbQuery to DbSet. Query types are used for views, among other things.

    public class MyContext : DbContext
    {
        // DbSet, not DbQuery
        public DbSet<MyType> MyTypes { get; set; }
    }

Apparently you and I had the same problem on the same day :) My issue was that I had my view set up as a DbSet:

    public virtual DbSet<VwVendors> VwVendors { get; set; }

But my mapping was set up as follows:

    modelBuilder.Query<VwVendors>()
        .ToView("vw_Vendors");

My fix was to do the opposite of what you did. I had to change DbSet to DbQuery. Your answer helped me get mine :D
https://entityframeworkcore.com/knowledge-base/57205049/a-query-type-with-the-same-name-already-exists
I recently added comments to a new Django site that I’m working on. Comments pose an interesting problem as they can have a number of “parents”. In my case, the parent might be a user’s “Want”, a respondent’s “Have” or possibly another “Comment.” In the process of researching the best way to architect Wantbox’s comments app, I read about “polymorphic associations“, “exclusive arcs” and Django’s ContentType framework. Using this knowledge, I contemplated recreating the comment wheel, since I wanted my comment form to just be a simple “Stack Overflow-type” comment-only field and not the larger “WordPress-type” name/email/website/comment. As I explored Django’s comments framework deeper, I realized that recreating another comment app was a waste of my time and my end product would be far less feature-rich than Django’s bundled commenting system. Below are my modifications which allowed me to quickly and easily twist Django comments into what I needed.

My Django Comment Modifications:

To customize the default comment form and comment list display, I created a “comments” directory in my root “templates” directory and simply overrode the two default comment templates “form.html” and “list.html”.

My custom “/templates/comments/form.html” (note: the submit button’s value below is reconstructed as the default {% trans "Post" %}, which extraction had garbled):

    {% load comments i18n %}
    {% if user.is_authenticated %}
    <form action="{% comment_form_target %}" method="post">
      {% csrf_token %}
      {% if next %}<input name="next" type="hidden" value="{{ next }}" />{% endif %}
      {% for field in form %}
        {% if field.is_hidden %}
          {{ field }}
        {% else %}
          {% if field.name != "name" and field.name != "email" and field.name != "url" %}
            {% if field.errors %}{{ field.errors }}{% endif %}
            {{ field }}
          {% endif %}
        {% endif %}
      {% endfor %}
      <input class="submit-post" name="post" type="submit" value="{% trans "Post" %}" />
    </form>
    {% else %}
    I'm sorry, but you must be <a href="javascript:alert('send to login page')">logged in</a> to submit comments.
    {% endif %}

Which is only slightly different from the default Django comments form.html, primarily suppressing the display of the not-wanted and not-required “name”, “email” and “url” input fields.

My custom “/templates/comments/list.html”:

    <div class="comment_start"></div>
    {% for comment in comment_list %}
    <div class="comment">
      {{ comment.comment }} (from <a href="javascript:alert('show user profile/stats')">{{ comment.user }}</a> - {{ comment.submit_date|timesince }} ago)
    </div>
    {% endfor %}

In the template where I want to invoke the comments form, I first call {% load comments %} and then {% render_comment_form for [object] %} to show the form, or {% render_comment_list for [object] %} to generate a list of the comments on the object (replace [object] with your appropriate object name). So easy.

This solution is working great for me, and it’s still giving me all the other “free” stuff that comes with Django comments (moderation, flagging, feeds, polymorphic associations, helpful template tags, etc…). The moral of this story: don’t recreate the wheel when an hour or two of research can give it to you for free.

NOTE: This blog post is based on my Stack Overflow answer to Ignacio’s question “How to extend the comments framework (django) by removing unnecesary fields?“

3 Responses to “Customizing Django Comments: Remove Unwanted Fields”

Hey, I did exactly this but the validation errors keep showing – as in the ones which say the fields are required. What do I do to remove those?

yay! useful!

How can I add styling for Bootstrap for this?
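For concreteness, the invocation described in the post looks something like this in a page template (“want” is a placeholder object name of my own, not from the original post):

```django
{% load comments %}

{# List existing comments on the object, then show the posting form. #}
{% render_comment_list for want %}
{% render_comment_form for want %}
```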
http://mitchfournier.com/2010/08/12/customizing-django-comments-remove-unwanted-fields/
Spec URL: SRPM URL:

Description: HylaFAX(tm) is an enterprise-strength fax server supporting Class 1 and 2 fax modems on UNIX systems. It provides spooling services and numerous supporting fax management tools. The fax clients may reside on machines different from the server, and client implementations exist for a number of platforms, including Windows.

This is my first package, and I am seeking a sponsor.

There are too many issues with your specfile to list them one by one. Just to name a few:
- don't use "%define" for name and version
- don't use "%define fc_rel", use "disttag" instead, see
- don't use epoch if not necessary
- don't use "%define initdir /etc/rc.d/init.d". If you really need it, it should be "%{_initrddir}".
- Source0 needs an absolute URL (http://...)
- License field not valid, see
- remove "Packager:", see
- BuildRoot should be "%{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)", see
- are you sure about the BuildRequires/BuildPrereq?
- "Requires: rpm >= 3.0.5" is stupid and can be dropped
- try to use "%configure" instead of "./configure" and use $RPM_OPT_FLAGS correctly and not with make, see
- no need to pass default options to configure (like PAGESIZE)
- change "--with-AWK=/usr/bin/gawk \" to "--with-AWK=%{_bindir}/gawk \"
- same for vgetty and mgetty which will become %{_sbindir}/[v|m]getty
- doesn't use parallel make, see
- use "%defattr(-,root,root,-)" instead of "(-,root,root)"
- remove generic INSTALL from doc section. Not needed when installed from RPM.
- macro usage inconsistent: {initdir} vs. {_initdir} which should be %{_initrddir} anyway, see
- empty %pre section
- /sbin/ldconfig in %post and %postun is superfluous since the package doesn't put shared libs into the linker's path.
- "chkconfig --del" belongs into %preun, see
- Requires(post)/(preun) missing for the scriptlets, see
- no changelog at all, see

I'm not sure if we need to create a system user with a fixed uid/gid and use fedora-usermgmt.
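For the scriptlet-related items above (chkconfig placement and the scriptlet Requires), a rough sketch of the conventional Fedora pattern — illustrative only, not taken from the reviewed spec:

```spec
# Illustrative scriptlet skeleton following Fedora conventions.
Requires(post): /sbin/chkconfig, /sbin/ldconfig
Requires(preun): /sbin/chkconfig, /sbin/service
Requires(postun): /sbin/ldconfig

%post
/sbin/ldconfig
/sbin/chkconfig --add hylafax

%preun
# $1 is 0 only on final removal, not on upgrade
if [ $1 -eq 0 ]; then
    /sbin/service hylafax stop >/dev/null 2>&1 || :
    /sbin/chkconfig --del hylafax
fi

%postun
/sbin/ldconfig
```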
BTW: There already is an outstanding review for hylafax, see bug #145218, but it seems like the reporter has lost interest. AFAIK metamail no longer is required for hylafax.

I have seen bug 145218. It does appear that the reporter lost interest. This request should replace that one. HylaFAX no longer has any dependency on metamail. (It really never did, anyway.) I'll address the SPEC file issues promptly. Thanks.

(In reply to comment #5)
> Okay, I've made changes as suggested, please review the updated files:

I'm going to do a formal review ASAP. From what I see, most things look good now and the package builds in mock, so builddeps are ok. Nevertheless there are a few rpmlint warnings and errors. Let's talk about that later...

>.)
Ok, sounds reasonable.

> Fedora puts mgetty and vgetty in /sbin and not /usr/sbin, and so %{_sbindir}
> cannot be used in those cases.

Right, my bad. And I was wrong about ldconfig, too. Of course there are shared libs; I did not read the file section closely enough.

> I don't understand this: "I'm not sure if we need to create a system user with a
> fixed uid/gid and use fedora-usermgmt." and so I don't know how to respond to it.

To be honest: I don't understand fedora-usermgmt either, there have been endless, controversial discussions lately. The wiki says it's optional, so IMO we should stick with that if everybody agrees.

BTW: Looking forward to seeing (or contributing to?) capi4hylafax once this package has been released.
I was thinking about being that "somebody else", but that CAPI stuff is so horribly broken sometimes that I don't dare maintain it. ;)

REVIEW:

> $ md5sum hylafax-4.2.5.5-1.src.rpm
> 373044fd59ff14554ccda17b2fcca028 hylafax-4.2.5.5-1.src.rpm

Good - MUST Items
- package and specfile naming according to guidelines
- package meets guidelines
- license ok
- license field in spec matches actual license
- license included in source and correctly installed in %doc
- spec written in American English
- spec is legible
- BuildRequires ok, no duplicates and none of the listed exceptions
- no locales to worry about
- ldconfig correctly called for shared libs in %post and %postun
- relocatable
- package owns all directories it creates
- package doesn't own files or directories already owned by other packages
- %files section ok, no duplicates
- permissions ok, correct %defattr
- clean section present and ok
- macro usage consistent
- code, not content
- no large docs
- %doc section ok, docs don't affect runtime
- no headers, static libs or pkgconfig files to worry about
- no libtool archives

Good - SHOULD Items
- package builds in mock (Core 5 i386)
- package seems to work as usual; nevertheless I can't really test it here right now in the absence of a modem
- scriptlets match examples from wiki
- package uses disttag

Needswork - MUST Items
- rpmlint errors and warnings:

> $ rpmlint hylafax-4.2.5.5-1.fc5.src.rpm | sort
> E: hylafax no-%clean-section

This is "[ "$RPM_BUILD_ROOT" != "/" ] && rm -rf $RPM_BUILD_ROOT" confusing rpmlint. I suggest you use the regular "rm -rf $RPM_BUILD_ROOT" (don't worry about the buildhosts ;)) or ignore this message; the %clean section in your spec is valid.

> $ rpmlint hylafax-4.2.5.5-1.fc5.i386.rpm | sort
> E: hylafax executable-marked-as-config-file /etc/cron.daily/hylafax
> E: hylafax executable-marked-as-config-file /etc/cron.hourly/hylafax

safe to ignore, but IMO the cronjobs shouldn't be "noreplace" since they don't store any configuration.
> E: hylafax executable-marked-as-config-file /etc/rc.d/init.d/hylafax

the initscript should not be a config file. It shouldn't be "noreplace" either, since it won't be replaced on updates then.

> E: hylafax explicit-lib-dependency libtiff

remove libtiff from Requires; rpm will find that dependency itself

> E: hylafax invalid-soname /usr/lib/libfaxserver.so.4.2.5.5 libfaxserver.so
> E: hylafax invalid-soname /usr/lib/libfaxutil.so.4.2.5.5 libfaxutil.so

safe to ignore in our case

> E: hylafax non-executable-script /var/spool/hylafax/bin/notify.awk 0444

is this on purpose?

> E: hylafax non-readable /var/spool/hylafax/etc/hosts.hfaxd 0600
> due to ownership of uucp, safe to ignore
>

these are ok, but there are other dirs to worry about. Take a look at /var/spool/hylafax or /var/spool/hylafax/bin: these should be owned by root. IMO all dirs should be owned by root as long as they don't need to be writable by uucp or this doesn't affect the runtime of the program.

> E: hylafax script-without-shellbang /usr/sbin/faxsetup.linux
> E: hylafax script-without-shellbang /var/spool/hylafax/bin/dictionary

These files should start with something like "#! /bin/bash". Fix this upstream ;-)

> W: hylafax devel-file-in-non-devel-package /usr/lib/libfaxserver.so
> W: hylafax devel-file-in-non-devel-package /usr/lib/libfaxutil.so

Usually these files should go into a separate hylafax-devel package. From the review guidelines: "MUST: If a package contains library files with a suffix (e.g. libfoo.so.1.1), then library files that end in .so (without suffix) must go in a -devel package." But I doubt there's much sense rolling a package with only two symlinks inside.

> W: hylafax non-conffile-in-etc /etc/hylafax/faxcover_example_sgi.ps

Safe to ignore.

> W: hylafax no-version-in-last-changelog

append "- <version>-<release>" to every changelog entry, e.g.
* Tue Apr 11 2006 Lee Howard <faxguy at howardsilvan dot com> - 4.2.5.5-1

> W: hylafax service-default-enabled /etc/rc.d/init.d/hylafax

Please change "# chkconfig: 345 95 5" to "# chkconfig: - 95 5" in hylafax.rh.init, see

- License: You can change the license back to "BSD-Style" or "BSD-like" or something like that; I have seen other packages even in Core with that, too. To me the COPYRIGHT looks basically BSD, but I leave it up to you. Just make sure that the COPYRIGHT file is correctly included in %doc.

- Source does not match upstream:

the one included in your rpm
> c4de725b0a2721df02880bf77809d3bd hylafax-4.2.5.5.tar.gz
taken from the URL in Source0
> 6d9886532cbf2c21675ecb802b5ef115 ../downloads/hylafax-4.2.5.5.tar.gz

The source must always match Source0 from the spec.

- package does not compile on current Core 5, I'm attaching a log. Nevertheless it builds in Core 5 mock.

NEEDSWORK

Created attachment 127683 [details]
rpmbuild --rebuild hylafax-4.2.5.5-1.src.rpm

(In reply to comment #8)
Please forget what I wrote about the cronjobs and the initscript and leave them as they are, "%config(noreplace)".

Okay, I'm hoping this took care of the problem. Now please try:
Spec URL: SRPM URL:

Sorry it took so long. The good thing about this is that I had enough time to really test your package. HylaFAX works perfectly here together with a capi4hylafax package I rolled. Still, I don't have an analog modem to test.

GOOD:
- Source matches upstream now
$ md5sum ~/downloads/hylafax-4.3.0.1.tar.gz (from osdn.sf.net)
30f6e56629f6a0ff40846be30a4f4ab8 /home/chris/downloads/hylafax-4.3.0.1.tar.gz
$ md5sum ~/Desktop/hylafax-4.3.0.1.tar.gz (from SRPM)
30f6e56629f6a0ff40846be30a4f4ab8 /home/chris/Desktop/hylafax-4.3.0.1.tar.gz
- License field ok
- explicit dependency on libtiff removed

BAD:
- still can't build this on my Core 5 machine, only mock succeeds. pkg-config still looks in /usr/local/..
- rpmlint errors in detail:

RPM (built in mock from your srpm):
$ rpmlint hylafax-4.3.0.1-1.fc

E: hylafax invalid-soname /usr/lib/libfaxserver.so.4.3.0.1 libfaxserver.so
E: hylafax invalid-soname /usr/lib/libfaxutil.so.4.3.0.1 libfaxutil.so
ok for me, I don't see a reason for changing this or for splitting out two symlinks into a separate devel package.

E: hylafax non-executable-script /var/spool/hylafax/bin/notify.awk 0444
is this on purpose? If not, fix it upstream.

ok due to ownership by uucp, safe to ignore. Nevertheless I suggest the other dirs in /var/spool/hylafax be owned by root.

E: hylafax script-without-shellbang /usr/sbin/faxsetup.linux
ignore or fix upstream

W: hylafax devel-file-in-non-devel-package /usr/lib/libfaxserver.so
W: hylafax devel-file-in-non-devel-package /usr/lib/libfaxutil.so
still no reason for a separate -devel package IMO

W: hylafax incoherent-version-in-changelog 4.2.5.6-1 4.3.0.1-1.fc5
make sure the version field and changelog are matching. Please insert a blank line after every changelog entry. ;)

W: hylafax non-conffile-in-etc /etc/hylafax/faxcover_example_sgi.ps
safe to ignore, otherwise mark this file as %config

SRPM:
$ rpmlint ../hylafax-4.3.0.1-1.src.rpm
E: hylafax no-%clean-section
IMHO you should replace
[ "$RPM_BUILD_ROOT" != "/" ] && rm -rf $RPM_BUILD_ROOT
with a simple
rm -rf $RPM_BUILD_ROOT
Your version is safer for other systems, but this is a Fedora package after all.

Anyway: I realized I'm not allowed to review your package. You are a first-timer and I'm not a sponsor. "The primary Reviewer can be any current package owner, unless the Contributor is a first timer." So I have added the review to the FE-NEEDSPONSOR tracker. You will have to wait for someone to sponsor you. Sorry, there's nothing I can do for you ATM.

Can you please post the Core 5 build errors? I have built this on a Core 5 system myself before posting here, so I'm not sure what the error could be.

What is the best way to attract the attention of a sponsor?
Is waiting all that I can do? Thanks.

(In reply to comment #13)
> What is the best way to attract the attention of a sponsor?

Basically, you need to convince one of the sponsors that:
- you have a genuine interest in FE
- you have a good grasp of FE packaging policies
- you are responsive

To try to demonstrate these facts, you can:
- offer more packages for review
- look through other people's packages and offer useful advice on their packaging. You cannot formally approve a package yet, but you can help bring other packages into good shape for a formal approval.

> Is waiting all that I can do?

Idly waiting is not a good way to find a sponsor.

(In reply to comment #13)
> Can you please post the Core 5 build errors?

Still the same as in comment #9.

> What is the best way to attract the attention of a sponsor? Is waiting all that
> I can do?

I'm afraid yes. You could add comments to other reviews to prove your knowledge and understanding of the guidelines, to show you are worth being sponsored. But IMO no one has a doubt about that. I suggest you wait a bit. If nothing happens, feel free to ask on fedora-extras-list.

Please try these once again... it should now build for you in your FC5 environment. I had to remove %{?_smp_mflags} from the make call.
Spec URL: SRPM URL:

At that... I develop and maintain both hylafax () and iaxmodem (), and I don't have the kind of time to spend reviewing other projects or offering more packages than those that I already develop and maintain, merely to attract the attention of a needed sponsor. I am willing and able to maintain the Fedora Extras HylaFAX distribution, but I really am not in a position to do much more than that for Fedora. If I must do more in order to get this project sponsored, then I guess that I'm not the right person for this project in Fedora. Christoph, if you are able, and if you'd like to take this project, you are welcome to it. I can assist you in whatever you may need.
Otherwise I'm pretty much going to have to wait for a sponsor - either through patience or through nagging people on-list. Thanks.

ping -- I wanted to look at this three times, but each time it was unreachable :-/ -- is this just me having bad luck? If not: can you upload the files somewhere else?

Please try this mirror:

Sorry, I lost track of this one. Christoph, what's the current status?

I really would like to co-maintain hylafax together with Lee. But there are still 3 things to be done before: ;)

3. We need an official co-maintainership policy for Extras. I know you and FESCO are working on it, but AFAIK there's nothing official yet. Please correct me if I'm wrong.

(In reply to comment #20)
> ;)

Can you do the review as well? Then I'll take a last quick look and sponsor him, and we both keep an eye on his commits ;-)

> 3. We need an official co-maintainership policy for extras. I know you and FESCO
> are working on it, but AFAIK there's nothing official yet. Please correct me if
> I'm wrong.

Well, it's in the works, but it's a huge task, and even the minimal stuff (the initial CC list from owners.list) currently doesn't work properly. But that hopefully gets fixed soon.

OK, I will do another formal review ASAP (I'm pretty busy ATM). There are still some things in the specfile I'm not happy with, mainly the changelog and the permissions of some dirs.

Adding FE-NEEDSPONSOR blocker.

Hello, I'm also not an official reviewer, but I ran rpmlint on the source RPM and it returned:

E: hylafax configure-without-libdir-spec
A configure script is run without specifying the libdir. configure options must be augmented with something like --libdir=%{_libdir}.

I think you just need to add "--libdir=%{_libdir}" to your configure command.

HylaFAX's configure script isn't generated by autotools and does not have a --libdir=%{_libdir} option. I'm quite confident that the configure call is correct as it already is in the spec file.

Hi Howard, nice to see you are still around.
Are you willing to maintain HylaFAX with me, if co-maintainership works? I have to admit that I forgot this package; I had a lot of work with my Xfce plugins for 4.4. I'm going to do a complete review this week, I promise. After that, Thorsten will sponsor you.

Yes, I am willing to maintain HylaFAX with you.

The %changelog is very out-of-date.

> %define faxspool %{_var}/spool/hylafax

Since /var is hardcoded pretty much everywhere, e.g. in the initscript, using %{_var} here doesn't add any safety. Just use /var unless it were possible to propagate the value of %{_var} into all relevant files.

> %install
> [ "$RPM_BUILD_ROOT" != "/" ] && rm -rf $RPM_BUILD_ROOT
> %clean
> [ "$RPM_BUILD_ROOT" != "/" ] && rm -rf $RPM_BUILD_ROOT

Both checks are not needed by default and are not reliable. A simple "rm -rf $RPM_BUILD_ROOT" has been used in thousands of packages for many, many years.

> %makeinstall -e \

If %configure is not used, using %makeinstall makes no sense either. Prefer "make install ..." just as is correct for most other packages.

>>>
# Starting from 4.1.6, port/install.sh won't chown/chmod anymore if the current
# user is not root; instead a file root.sh is created with chown/chmod inside.
#
# If you build the rpm as normal user (not root) you get an rpm with all the
# permissions messed up and hylafax will give various weird errors.
#
# The following line fixes that.
#
[ -f root.sh ] && sh root.sh
<<<

If this is true, there are packaging errors left somewhere. This comment in the spec file doesn't sound right, at least. The rpm must build as a normal user and must not rely on chown/chmod. Make sure the %attr(...) settings are complete for any files that really need them, without depending on any execution of chown/chmod. *If*, however (and I believe you do), you only intended to justify why "root.sh" must be executed when installing HylaFAX from a tarball as a non-root user, the comment is just misleading/confusing.
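A minimal sketch of the %install/%clean shape being suggested here, without the root.sh chown/chmod step. This is illustrative only: the DESTDIR variable name and the %attr lines are assumptions on my part (HylaFAX's non-autotools makefiles may use a different install-root variable, and the real %files list is of course much longer):

```spec
%install
rm -rf $RPM_BUILD_ROOT
# DESTDIR is illustrative; HylaFAX's makefiles may expect another variable
make install DESTDIR=$RPM_BUILD_ROOT

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
# explicit %%attr settings instead of relying on root.sh's chown/chmod
%attr(755,root,root) %{_sbindir}/*
%attr(600,uucp,uucp) %config(noreplace) /var/spool/hylafax/etc/hosts.hfaxd
```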
> $ rpm -qlvp hylafax-debuginfo-4.3.0.3-1.i386.rpm
> (contains no files)

This is because in root.sh (and maybe elsewhere, too) the executables are stripped, which should not be done and which makes the debuginfo package useless.

> drwxr-xr-x uucp uucp 0 /var/spool/hylafax/bin
> -rw-r--r-- uucp uucp 14072 /var/spool/hylafax/etc/lutRS18.pcf

Wouldn't root:root ownership suffice?

I've updated the spec file and the SRPM. Please see:

Sorry it took so long once again. There are still some issues with this package, so I don't think it makes much sense to do a review at this point. The main reason is that the SRPM and spec from comment #29 don't match (see below).

1. debuginfo package is still empty:

$ rpm -qpl hylafax-debuginfo-4.3.0.11-1.fc6.i386.rpm
(contains no files)

Why is there no debug info? If there really is none, the debuginfo package should not be built. Please see for more info.

2. Changelog is still out of date. Please update the changelog for all releases and describe the changes you made. This makes it easier for us to track changes. In order to avoid confusion, please increase the release for every new package, even during the review. If we are going to maintain this package together, I'd like you to move the changelog to the end of the spec and insert a blank line between entries for legibility.

3. Ownership of /var/spool/hylafax/bin is still uucp:uucp

$ ls -l /var/spool/hylafax/
...
drwxr-xr-x 2 uucp uucp 4096 21. Okt 00:33 bin
...

This is already fixed in the specfile, but the SRPM is not up to date. We are reviewing SRPMs, not specs, so I can only do a review if you update your package and fix these errors. I still would like root to own more dirs in /var/spool/hylafax, e.g. config and dev.

Christoph, thank you very much for your time. I have updated the spec file and the SRPM, and they match each other this time, too.

Can you please clarify whether this package is HylaFAX or HylaFAX+?
Paul/Darren,

Lest we require ourselves to say that x.org and XFree86 are not both X, let us not try to define that HylaFAX+ and even your own company's "HylaFAX Professional Edition" are not both HylaFAX also. Indeed, as we all (including you) already know, the software being promoted here is the sourceforge flavour of HylaFAX, also known as HylaFAX+.

Lee.

Sorry for the long delay. Howard, the new package doesn't work at all, the permissions are completely screwed up. Have you tested the package yourself before you submitted it?

$ rpmlint hylafax-5.0.0-1.fc
E: hylafax invalid-soname /usr/lib/libfaxserver.so.5.0.0 libfaxserver.so
E: hylafax invalid-soname /usr/lib/libfaxutil.so.5.0.0 libfaxutil.so
E: hylafax non-executable-script /usr/sbin/edit-faxcover 0444
E: hylafax non-executable-script /usr/sbin/faxaddmodem 0444
E: hylafax non-executable-script /usr/sbin/faxcron 0444
E: hylafax non-executable-script /usr/sbin/faxsetup 0444
E: hylafax non-executable-script /usr/sbin/faxsetup.linux 0644
E: hylafax non-executable-script /usr/sbin/hylafax 0444
E: hylafax non-executable-script /usr/sbin/probemodem 0444
E: hylafax non-executable-script /usr/sbin/recvstats 0444
E: hylafax non-executable-script /usr/sbin/xferfaxstats 0444
E: hylafax non-executable-script /var/spool/hylafax/bin/archive 0444
E: hylafax non-executable-script /var/spool/hylafax/bin/common-functions 0444
E: hylafax non-executable-script /var/spool/hylafax/bin/dictionary 0444
E: hylafax non-executable-script /var/spool/hylafax/bin/faxrcvd 0444
E: hylafax non-executable-script /var/spool/hylafax/bin/mkcover 0444
E: hylafax non-executable-script /var/spool/hylafax/bin/notify 0444
E: hylafax non-executable-script /var/spool/hylafax/bin/pcl2fax 0444
E: hylafax non-executable-script /var/spool/hylafax/bin/pdf2fax.gs 0444
E: hylafax non-executable-script /var/spool/hylafax/bin/pollrcvd 0444
E: hylafax non-executable-script /var/spool/hylafax/bin/ps2fax.gs 0444
E: hylafax non-executable-script /var/spool/hylafax/bin/tiff2fax 0444
E: hylafax non-executable-script /var/spool/hylafax/bin/tiff2pdf 0444
E: hylafax non-executable-script /var/spool/hylafax/bin/wedged 0444
E: hylafax script-without-shebang /usr/sbin/faxsetup.linux
W: hylafax devel-file-in-non-devel-package /usr/lib/libfaxserver.so
W: hylafax devel-file-in-non-devel-package /usr/lib/libfaxutil.so
W: hylafax incoherent-version-in-changelog 5.0.0 5.0.0-1.fc6
W: hylafax non-conffile-in-etc /etc/hylafax/faxcover_example_sgi.ps
W: hylafax non-executable-in-bin /usr/sbin/edit-faxcover 0444
W: hylafax non-executable-in-bin /usr/sbin/faxaddmodem 0444
W: hylafax non-executable-in-bin /usr/sbin/faxcron 0444
W: hylafax non-executable-in-bin /usr/sbin/faxsetup 0444
W: hylafax non-executable-in-bin /usr/sbin/faxsetup.linux 0644
W: hylafax non-executable-in-bin /usr/sbin/hylafax 0444
W: hylafax non-executable-in-bin /usr/sbin/probemodem 0444
W: hylafax non-executable-in-bin /usr/sbin/recvstats 0444
W: hylafax non-executable-in-bin /usr/sbin/xferfaxstats 0444

Christoph,

As the proposed package is HylaFAX+, I'd request that it be renamed as such.

Thanks,
Paul

> Have you tested the package yourself before you submitted it?

Yes, I've been using it repeatedly. Here's what I see when I run rpmlint on the built RPM:

[root@dhcp031 i386]# rpmlint hylafax-5.0.1-1.i386.rpm
W: hylafax incoherent-version-in-changelog 5.0.0 5.0.1-1
E: hylafax invalid-soname /usr/lib/libfaxutil.so.5.0.1 libfaxutil.so
E: hylafax invalid-soname /usr/lib/libfaxserver.so.5.0.1
W: hylafax devel-file-in-non-devel-package /usr/lib/libfaxserver.so
E: hylafax non-standard-dir-perm /var/spool/hylafax/pollq 0700
E: hylafax executable-marked-as-config-file /etc/rc.d/init.d/hylafax

(In reply to comment #35)
> Christoph,
>
> As the proposed package is HylaFAX+, I'd request that it be renamed such.

I agree. I think the source should be named hylafax+<version>.tar.gz, too.

(In reply to comment #36)
> > Have you tested the package yourself before you submitted it?
> > Yes, I've been using it repeatedly.

How have you been using this package if /usr/sbin/hylafax is not executable?

> Here's what I see when I run rpmlint on the built RPM:
>
> [root@dhcp031 i386]# rpmlint hylafax-5.0.1-1.i386.rpm

This is obviously not the same package, not the same release. It's not even the same version as the package you submitted in comment #31.

> This is obviously not the same package, not the same release. It's not even
> the same version as the package you have submitted in comment #31.

The software development is moving much faster than progress on this review request.

Thus Apache's webserver is found in a package named "httpd". However, other distributions of Apache's webserver are found in packages named differently, such as "apache-httpd". This also makes complete sense to me because it provides the distribution a means to differentiate between different HTTP servers that it may provide. I do not know if Fedora provides webservers other than Apache's, but assuming it does not, then using the package name "httpd" for Fedora makes complete sense as well, since it is the only HTTP server being provided. The upstream repository will remain named as it is.

(In reply to comment #38)
> The software development is moving much faster than progress on this review
> request.

I am very sorry about that. Please try to see it from my point of view: one reason for this review proceeding so slowly is that it's so confusing: packages don't match the spec file, there was hardly any changelog information at the beginning, lots of rpmlint errors, ...

> This is an easy one.

Why not fix it _before_ submitting the package? As I already said: keeping the changelog up to date makes it easier for me to follow the changes.

>]#

Strange. Mine looks different, see comment #34. I have been building this package several times locally and in mock, always with the same results: files under /usr/sbin are not executable. Can you upload your binaries somewhere?

> >.
This is Core, not Extras. Packages in Core don't necessarily follow the FE Packaging and Naming Guidelines and haven't gone through a review. The apache package doesn't follow the naming guidelines. It's not on me to judge whether it makes sense or not, but picking out an exception from the rule is not a good reason for making more exceptions from that rule.

> The upstream repository will remain named as it is.

I don't have to judge on this either, but IMO this is bad: having two source archives with the same name and potentially even the same version, but with different content inside, is confusing. Once downloaded, it is very hard for people to distinguish which version they have. Maybe someone else one day will submit another review for hylafax(.org).

Christoph, I understand the confusion in this review. I am truly sorry for it, and I wish that I could have somehow known ahead of time how to prevent the confusion... because I certainly would have. The mismatches between packages and specs and such have to do with the development pace and my focus on software development rather than RPM packaging, so I guess it's a chicken-and-egg kind of problem.

I've uploaded the hylafax-5.0.0-1.i386.rpm file that I was using here: Yes, I know the filename is different - that could not be avoided. But the file data is the same.

As far as package/repository naming goes... I understand the httpd naming manner, and I completely understand why it is named that way. Certainly it may not meet the Extras naming criteria - nevertheless, it still makes sense to me and is not confusing, and in fact I probably would have followed the same naming convention in their shoes. I do not see it as an exception to common sense - although, yes, it may be an exception according to Extras naming rules. Certainly the Extras rules can be a subset of common sense. For other examples - not of package naming, specifically - but for naming in general...
postfix and sendmail both have "sendmail" executables (among other competing executable names). Similarly, mgetty-sendfax has a "sendfax" executable that competes with an identically named executable from HylaFAX (which is why HylaFAX isn't in Core in the first place, as the Red Hat 5.2/6.0 maintainers decided to favor mgetty-sendfax and do away with HylaFAX rather than implement a "switching" mechanism as they have done for sendmail/postfix). All of this makes sense to me - and indeed I can see why it would confuse some - but one can understand that, realistically, the purpose of the naming conflicts is a perfect manner of clue-sticking the user that they're looking at conflicting packages, just the same as if they were looking at two packages from the same repository but of different versions.

The HylaFAX+ repository is aptly named "hylafax" because it is, after all, HylaFAX. HylaFAX+ version numbers have always been different from the version numbers at HylaFAX.org. Certainly it is not the only HylaFAX repository, but realize that the hylafax.org repository is, itself, a fork - there almost always have been different repositories (even among the earliest contributors). To say that HylaFAX+ is not HylaFAX is to say that when Alan Cox patches the Linux kernel for Red Hat it no longer is Linux. The sourceforge HylaFAX project is known as HylaFAX+ for those people that have a tough time understanding the issues that I am discussing, and certainly it makes things easier than always saying "the Sourceforge HylaFAX project". That said, you really won't find anyone out there desirous of running both HylaFAX+ and HylaFAX.org for practical reasons.

Realize that Darren's (Paul's) purpose here isn't really to assist the users who will be using HylaFAX (in that they may become upset to find themselves using HylaFAX+ instead of HylaFAX.org software).
Rather, his purpose here is to take measures to prevent users from seeing, as I do, that HylaFAX+ is as much HylaFAX as the software found at HylaFAX.org or SGI-HylaFAX or his own commercial "HylaFAX Enterprise Edition". If he really, truly believed what he is trying to say here, then he wouldn't have named his own product with "HylaFAX".

> As for the package name, it matters not to me if it is called "hylafax" or "hylafax+".

There does not seem to be any objection to renaming the package to be included in Fedora Extras "hylafax+", so that has answered my question and addressed my concern. Not only would it help prevent possible confusion between different software projects, it would be consistent with Lee's own sourceforge website and mailing lists. I agree that it's important to distinguish this package from the hylafax software that has been available from since 1997, especially in the event of its submission to Fedora Extras. Whether hylafax+ is a fork has been discussed ad nauseam on the hylafax mailing list; if Lee would like to discuss it further, that seems a more appropriate place than this ticket.

Paul

I've updated the SPEC and the SRPM: rpmlint gives me these warnings/errors:

E: hylafax invalid-soname /usr/lib/libfaxutil.so.5.0.4 libfaxutil.so
E: hylafax invalid-soname /usr/lib/libfaxserver.so.5.0.4 libfaxserver.so
E: hylafax subdir-in-bin /usr/sbin/faxmail/image
E: hylafax script-without-shebang /usr/sbin/faxsetup.linux
E: hylafax executable-marked-as-config-file /etc/cron.daily/hylafax
E: hylafax subdir-in-bin /usr/sbin/faxmail/application/octet-stream
E: hylafax subdir-in-bin /usr/sbin/faxmail/image/tiff
E: hylafax non-standard-dir-perm /var/spool/hylafax/archive 0700
E: hylafax subdir-in-bin /usr/sbin/faxmail/application/pdf
E: hylafax subdir-in-bin /usr/sbin/faxmail/application

Please let me know what you would like me to do from here. Thanks.

Howard, I will look at the files later this weekend, I promise.
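When dumps like the one above get long, a small shell pipeline helps separate the blocking errors from the often-ignorable warnings. The sample lines below are copied from this review; the counts apply only to this sample:

```shell
# A few rpmlint lines taken from the dump above.
rpmlint_output='E: hylafax invalid-soname /usr/lib/libfaxutil.so.5.0.4 libfaxutil.so
E: hylafax subdir-in-bin /usr/sbin/faxmail/image
E: hylafax script-without-shebang /usr/sbin/faxsetup.linux
E: hylafax non-standard-dir-perm /var/spool/hylafax/archive 0700
W: hylafax non-conffile-in-etc /etc/hylafax/faxcover_example_sgi.ps'

# Errors (E:) usually block the review; warnings (W:) are often ignorable.
errors=$(printf '%s\n' "$rpmlint_output" | grep -c '^E:')
warnings=$(printf '%s\n' "$rpmlint_output" | grep -c '^W:')
echo "errors=$errors warnings=$warnings"
# prints: errors=4 warnings=1
```

Each remaining tag can then be looked up with "rpmlint -I <tag>" for an explanation of what it means.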
There was something seriously wrong with your latest packages (the ones I built and the binary you gave me): the permissions were messed up and files were not executable; the %attr statements don't work as expected. I'll need to investigate this further. Please stay tuned.

Hey Christopher. I am assuming you are reviewing this package... I am setting the fedora-review flag to "?". If this is incorrect, please set it back to " " and reassign to nobody@fedoraproject.org.

This seems to be stalled completely.
Christoph: Are you still interested in reviewing this?
Lee: Are you still interested in submitting this?

Hello Jason. Yes, I am still very interested in submitting this. I'm not quite sure what the stall is for. Getting packages into Fedora Extras seems quite impossible :-)

Well, judging from the quantity of packages that get in every day, I'd say it's far from impossible, but for a first-time maintainer you sure took on a really tough package that has needed a ton of work. And this package still has quite a few issues that need to be worked out, as I get nearly 400 complaints from rpmlint. So I guess it's up to Christoph at this point to complete his review.

Jason, would you mind sharing your rpmlint output? As discussed here, my rpmlint output looks as expected, relatively clean.

Created attachment 158304 [details]
rpmlint output

No problem; here's the full output from my package build scripts.

(In reply to comment #45)
> Christoph: Are you still interested in reviewing this?

Yes, but this really is a hard one, especially as there is confusion (about the name) and regressions. As I said before, the two latest packages don't work at all because of the non-executable files.

(In reply to comment #46)
> Getting packages into Fedora Extras seems quite impossible :-)

Lee, I'm very sorry about that; of course this is partly my fault.
On the other hand: the binary package you gave me in comment #40 simply doesn't work, and I don't think this is because of my machine but because of the package itself. I promise (once again) to look at this more deeply tomorrow.

Ok, let's talk about the errors that need to be fixed upstream first:

E: hylafax binary-or-shlib-defines-rpath /usr/bin/faxalter ['/usr/lib64']
Same for all other binaries. This is a no-go for Fedora, see
Although we could fix this in the spec, I think it should be done properly upstream.

E: hylafax subdir-in-bin /usr/sbin/faxmail
Same for the subdirs of /usr/sbin/faxmail. IMO a no-go too, because it is a violation of the FHS. Should be in /usr/lib/, I guess.

E: hylafax invalid-soname /usr/lib64/libfaxserver.so.5.0.4 libfaxserver.so
E: hylafax invalid-soname /usr/lib64/libfaxutil.so.5.0.4 libfaxutil.so
this is related to
W: hylafax devel-file-in-non-devel-package /usr/lib64/libfaxserver.so
W: hylafax devel-file-in-non-devel-package /usr/lib64/libfaxutil.so
I brought this up on fedora-extras-list back in October 2006, see
and especially Michael's mails

W: hylafax undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.0.4 _Z11vlogWarningPKcP13__va_list_tag
Many more undefined symbols; these need to be fixed too.

W: hylafax unused-direct-shlib-dependency /usr/lib64/libfaxserver.so.5.0.4 /lib64/libcrypt.so.1
Same for a couple more libs, and the same with libfaxutil too.

Sorry, I did not have the time to look into the spec more deeply today, but I think we should focus on fixing these first because the packaging bugs are easier. If you have a question about a specific rpmlint error, run "rpmlint -I <error>".

I've fixed the rpath matter. (HylaFAX source builds allow the user to specify a non-standard library location. If the user specified anything other than /usr/lib, then rpath would be used for portability purposes.
This didn't take 64-bit systems with /usr/lib64 into account, and so I've now modified the source repository upstream to not use rpath when /usr/lib64 is the library location. It will be in the next release.)

I've moved the /usr/sbin/faxmail items to /usr/lib/fax/faxmail instead.

I've removed the DSO symlinks and have linked against the versioned DSO files directly.

I honestly don't understand the undefined-non-weak-symbol warnings. Is this something that's apparently only happening on 64-bit?

I think that I've remedied the unused-direct-shlib-dependency warnings. I'll have to double-check that.

More to come...

After making those changes here is the rpmlint output that I see on an x86_64 system (I'll hang a new SRPM and SPEC file soon and will give the URLs here when I do)...

(Yet, I'm *still* not seeing any undefined-non-weak-symbol errors.)

E: hylafax executable-marked-as-config-file /etc/hylafax/faxmail/application/octet-stream
E: hylafax executable-marked-as-config-file /etc/hylafax/faxmail/application/pdf
E: hylafax executable-marked-as-config-file /etc/hylafax/faxmail/image/tiff

(In reply to comment #52)
> I honestly don't understand the undefined-non-weak-symbol warnings. Is this
> something that's apparently only happening on 64-bit?

Yes, your guess was correct.

(In reply to comment #53)
> After making those changes here is the rpmlint output that I see on an x86_64
> system (I'll hang a new SRPM and SPEC file soon and will give the URLs here when
> I do)...

Make sure to include the following changes to fix the permission issues:

144,145c144,145
< %{_bindir}/*
< %{_sbindir}/*
---
> %attr(755,root,root) %{_bindir}/*
> %attr(755,root,root) %{_sbindir}/*
171c171
< %attr(-,root,root) %{faxspool}/bin/*
---
> %attr(755,root,root) %{faxspool}/bin/*

> (Yet, I'm *still* not seeing any undefined-non-weak-symbol errors.)

Run "rpmlint hylafax" on the installed package.

Okay, here's where I'm at. I won't post another SRPM and SPEC file just yet...
let's see if you can help me through these last rpmlint warnings/errors or if we can agree that they're not blockers.

I'm not sure what to do about these, exactly. These are all executable scripts that are meant to be configurable by the administrator, and we don't want to overwrite the configured script on upgrades.

E: hylafax non-readable /var/spool/hylafax/etc/hosts.hfaxd 0600

This is intentional. Only the owner should be able to read this file.

Again, these are intentional. Only the owner should be accessing these directories. (All activity goes through the server. Local users don't access these files directly.)

E: hylafax script-without-shebang /usr/sbin/faxsetup.linux

This file, faxsetup.linux, is a shell script stub. It is included (via ".") from the invoked faxsetup script.

[root@dhcp006 SPECS]# rpmlint hylafax | sort

We've covered all of these already above. Here are the corresponding build lines:

/usr/bin/g++ -shared -fpic -Wl,-soname,libfaxserver.so.5.1.6 -o libfaxserver.so.5.1.6 \
    UUCPLock.o ServerConfig.o ClassModem.o FaxModem.o Class0.o Class1.o Class10.o Class1Ersatz.o Class1Poll.o Class1Recv.o Class1Send.o Class2.o Class20.o Class21.o Class2Ersatz.o Class2Poll.o Class2Recv.o Class2Send.o CopyQuality.o G3Decoder.o G3Encoder.o MemoryDecoder.o HDLCFrame.o ModemConfig.o NSF.o

/usr/bin/g++ -shared -fpic -Wl,-soname,libfaxutil.so.5.1.6 -o libfaxutil.so.5.1.6 \
    Array.o BoolArray.o Dictionary.o Obj.o PageSize.o RE.o REArray.o REDict.o StackBuffer.o Str.o StrArray.o StrDict.o Dispatcher.o IOHandler.o Sys.o SystemLog.o Timeout.o Fatal.o AtSyntax.o DialRules.o FmtTime.o Sequence.o TimeOfDay.o FaxDB.o TextFormat.o Class2Params.o FaxParams.o SendFaxJob.o SendFaxClient.o TypeRules.o Transport.o InetTransport.o UnixTransport.o SNPPClient.o SNPPJob.o cvtfacility.o fxassert.o \
    -ltiff -lz -L../regex -lregex

Please notice that -lm is not being used here as the rpmlint warning message seems to indicate.
Now, -lm is used elsewhere in the build, but not for these DSOs, and even if I remove -lm from everywhere in the build (it seems unnecessary) this rpmlint warning still occurs. The header file math.h is included in a few source files, but not in any source file that is used to build libfaxutil. So something is very confusing to me here. Any insight would be appreciated. Thanks.

(In reply to comment #55)
> .
Correct, ignore the warning, as I already said in comment #12.
> E: hylafax non-readable /var/spool/hylafax/etc/hosts.hfaxd 0600
Ignore, see comment #12.
>
Ignore.
> E: hylafax script-without-shebang /usr/sbin/faxsetup.linux
>
> This file, faxsetup.linux, is a shell script stub. It is included (via ".")
> from the invoked faxsetup script.
Ok, ignore.
>.
Ok.
> [root@dhcp006 SPECS]# rpmlint hylafax | sort
>.

ldd -r -u /usr/lib64/libfaxserver.so.5.0.4
-- snipped _lots_ of undefined symbols --
Unused direct dependencies:
	/usr/lib64/libjpeg.so.62
	/lib64/libz.so.1
	/lib64/libcrypt.so.1
	/lib64/libutil.so.1
	/lib64/libm.so.6

I haven't looked at this deeper yet; it is not necessarily a blocker. Could you please take a look at the undefined symbols again? As there are no major blockers you can submit a new srpm for review.

Hope we aren't getting any SELinux troubles. What Fedora version are you running with hylafax? Do you have SELinux enabled?

(In reply to comment #37)
> (In reply to comment #35)
> I agree. I think the source should be named hylafax+<version>.tar.gz, too.

Christoph, since you are in agreement with changing the name to hylafax+ and so is Lee: when do you think this will be done? We're looking at submitting HylaFAX for review but would like to clear up this confusion first. Thanks, Paul

We are working on it right now and I'm optimistic that we can finish this review soon. Howard: Please upload the SRPM you sent me some days ago to a location where it is accessible for everybody so others can participate in the review, too.
Remember: Fedora is about openness, so everything should happen in public.

Christoph, I'm in complete agreement about the openness. Understand that in order to make the SRPM and SPEC file public the same way I have in the past, I have to cut a new release upstream. Because there have been some rather serious changes to the DSO build on Linux I have to take some time to test on non-Fedora Linux distros. Please allow me some time (a few days) to do that. Thanks.

Okay, I've uploaded new spec and src.rpm files. They can be gotten here:

Christoph, are you still interested in this? Seems this takes much too much time. Maybe someone else is willing to review this? I can give advice until Christoph is back.

But build is failing on F-8 x86_64:

ldconfig: Can't create temporary cache file /etc/ld.so.cache~: Permission denied
make[1]: *** [installDSO] Error 1

So you need to remove the ldconfig call from the makefile.

Since you have set $RPM_OPT_FLAGS (and they are used), you might be able to remove

%define debug_package %{nil}

so the resulting binaries get stripped.

About "JBIG-in-TIFF conversion support not found": I wonder if this is expected.

7068 Segmentation fault $TIFFBIN/tiffcp -c g4 misc/jbig.tif misc/foo.tif > /dev/null 2>&1

You can prevent timestamp changes by adding

make install CPPROG="cp -p" \
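The ldconfig failure above happens because a mock/chroot build stages files under a build root while the Makefile's install target tries to rewrite the real /etc/ld.so.cache. A sketch of the usual guard (variable and file names are invented; the actual fix in the package is to drop the call and run ldconfig from the %post scriptlet instead):

```shell
# Simulate a staged install: files go under $DESTDIR, not /.
DESTDIR=$(mktemp -d)

install_dso() {
    mkdir -p "$DESTDIR/usr/lib64"
    : > "$DESTDIR/usr/lib64/libfaxutil.so.5.1.11"   # placeholder DSO

    # Only refresh the loader cache for a real (non-staged) install;
    # inside a build root this would fail with EACCES, as seen in mock.
    if [ -z "$DESTDIR" ]; then
        /sbin/ldconfig
    else
        echo "staged install: leaving ldconfig to the package %post scriptlet"
    fi
}

msg=$(install_dso)
echo "$msg"
rm -rf "$DESTDIR"
```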
Basically I'm still interested in this review and in (co)maintaining this package, But there still is a lot of issues with it. Biggest of all is the naming issue. As I said before this package should be named hylafax+, because when Paul submits hylafax for review we will get in trouble. What about packages sitting on top of hylafax(+), e. g. calpi4hylafax? How do we make sure that we are not getting a version race between hylafax+ and hylafax? I have no idea how to handle this, I asked on fedora-maintainers-list but the discussion never was really finished, see Then we still have no debug information (why?), the unused-direct-shlib-dependency, some undefined non weak symbols and the problems Nicolas mentioned in comment #62. So I think I giving up this review. Sorry, maybe someone else is more successful. I'm still willing to help wherever I can. (In reply to comment #62) > But build is failing on F-8 x86_64: > > ldconfig: Can't create temporary cache file /etc/ld.so.cache~: Permission denied > make[1]: *** [installDSO] Error 1 > So you need to removes the ldconfig call from the makefile I can remove the ldconfig, but for the source installation it is needed, and I have seen other packages run ldconfig from their makefiles. Furthermore, I just did an 'rpmbuild --rebuild hylafax-5.1.11-1.src.rpm' on a Fedora 8 x86 system and it turned out just fine: Wrote: /usr/src/redhat/RPMS/x86_64/hylafax-5.1.11-1.fc8.x86_64.rpm >.... These should be fine this way... however, with the --rebuild that I just did I got: [16] Location of sendmail program: /usr/sbin/sendmail > About JBIG-in-TIFF conversion support not found. > I wonder if this is expected.. > 7068 Segmentation fault $TIFFBIN/tiffcp -c g4 misc/jbig.tif misc/foo.tif > > /dev/null 2>&1 This is somewhat expected to happen. I could hide the error more carefully, but the test is appropriate. We have to test to see if libtiff was built with JBIG support or not. 
And in this case it was not built with JBIG support and thus is causing the segfault.

> You can prevent timestamp changes by adding make install CPPROG="cp -p" \

Understood. I'm not sure that this concerns me very much, unless it does concern you.

(In reply to comment #63)
> Howard, I suggest you focus on this review instead of opening another one at
> rpmfusion. You will run into the same problems there as you are doing here.

I feel that I have been focused on this review. I do not understand why progress has not been occurring. I had hoped that at rpmfusion this process would be more attentively supported. I am happy to see this at Fedora or at rpmfusion... I just would like to see it *somewhere*. If I am not doing something that is needed to get this package included then please tell me what is lacking.

> Biggest of all is the naming issue.

Well, I had thought that we resolved the problem by simply calling this package hylafax+. But, from reading this (lengthy) thread... it seems that there is a bigger problem with provides and conflicts, and it's really not so much to do with naming as it has to do with those. On a theoretical level you could potentially see this same issue come up on many levels in the future. RedHat/Fedora have already had to deal with similar situations before (conflicts), and it's been done in different ways: alternatives, version-numbering, renaming, etc. Whether one or none of those is appropriate in this case is what is up for debate, I suppose. I don't really want to get into a big ruckus over this matter, but let me try to explain (again) why I believe that all of that really should not be a factor in this package submission's case.

Basically, back in 2005 I took my direct HylaFAX development away from hylafax.org and continued it at Sourceforge due to issues that are publicly discussed elsewhere and are not particularly relevant here.
From my perspective I was doing HylaFAX development, and if hylafax.org wanted my development work they were free to "port" it to their code tree, and I have been very vigilant about porting any developments at the hylafax.org repository into my code tree (certainly I have omitted a few things by deliberate choice). Basically, as I saw/see it, it's not much different than two branches of a CVS codebase. Sure, there are some (perhaps only minor) differences, but it's really not enough to say that they're different things altogether.

Let's say that package foo decided to split their CVS repository into a "maintenance" branch and a "development" branch... but for whatever (probably even silly) reason some people preferred the development branch and some people preferred the maintenance branch. If that were to happen, would there be some debate at Fedora over which branch of the codebase to use? No, there wouldn't be. The package maintainer would call the shots as far as Fedora is concerned. If the package maintainer chose to stay with the maintenance branch then Fedora would so stay... at least until the maintainer was convinced otherwise. If the package maintainer chose to move to the development branch, then that's what would happen.

Many other distributions already have HylaFAX in their package offerings. Some of the package maintainers are using the HylaFAX+ code base and some are using the hylafax.org code base. And it wasn't that long ago that some were even using the SGI code base. Some distributions are offering both packages (usually by different maintainers), some offer only one.

My point being that this issue is a silly one that is unnecessarily proving to be a road block. Call it hylafax+ or call it hylafax... there is still neither package in Fedora (and Darren/Paul/Arlington apparently has yet to even start up the review request for his, despite his interjections)... and thus there is no reason to worry about conflicts now.

All of that said...
step back a bit and observe things from a distanced perspective. HylaFAX+ usually proves to be the upstream for hylafax.org. And, when code work is done at hylafax.org, then it serves as the upstream for HylaFAX+. There are a few nuances between them that apparently are permanent, but for the most part the two are similar enough (or will be similar enough) that they're more of a "pseudo-fork" (like CVS branches) than they are true forks. Are these nuances enough to warrant two packages in Fedora? If they are, then that is fine... however, right now there is neither.

> Then we still have no debug information (why?)

I don't know. I don't even know what debug information is *supposed* to be there. I'm not even sure why it's important or why it's a problem to not have it there.

> unused-direct-shlib-dependency, some undefined non weak symbols and the problems

I'm fairly sure those things were resolved between then and now. Please refer to the 5.1.11 SPEC and SRPM:

SPEC:
SRPM:

The origin of the empty debuginfos is hylafax stripping all files from inside of its ports/install.sh script if it finds a "strip" program. Try this patch:

--- hylafax.spec.orig	2007-11-08 17:59:02.000000000 +0100
+++ hylafax.spec	2007-11-21 09:11:17.000000000 +0100
@@ -1,8 +1,5 @@
 %define faxspool /var/spool/hylafax
-# The resulting debuginfo package is empty. So we just disable it.
-%define debug_package %{nil}
-
 Summary: HylaFAX(tm) is a enterprise-strength fax server
 Name: hylafax
 Version: 5.1.11
@@ -41,6 +38,7 @@
 %build
 # - Can't use the configure macro because HylaFAX configure script does
 #   not understand the config options used by that macro
+STRIP=':' \
 ./configure \
 	--with-DIR_BIN=%{_bindir} \
 	--with-DIR_SBIN=%{_sbindir} \

Thank you. That appears to have worked for debuginfo.

SPEC:
SRPM:

Wow, this is still going. I had some time so I built it. It builds fine on rawhide; here's the latest rpmlint output:

hylafax.src:66: E: configure-without-libdir-spec

This is bogus.
hylafax.src: W: invalid-license BSD-like

This isn't a permitted license tag; if the license really doesn't correspond to one of the approved ones then talk to spot and have him cook up another tag for you.

hylafax.x86_64: E: executable-marked-as-config-file /etc/cron.hourly/hylafax
hylafax.x86_64: E: executable-marked-as-config-file /etc/cron.daily/hylafax

Not problematic.

hylafax.x86_64: W: incoherent-version-in-changelog 5.1.11-1 5.1.11-2.fc9

Release: is 2 but the last changelog is for release 1. You should changelog each release bump.

hylafax.x86_64: W: file-not-utf8 /usr/share/doc/hylafax-5.1.11/CONTRIBUTORS

This should be passed through iconv.

hylafax.x86_64: W: non-conffile-in-etc /etc/hylafax/faxcover_example_sgi.ps

Not really sure what to do about this one. Maybe it should be installed with the documentation instead.

hylafax.x86_64: E: executable-marked-as-config-file /etc/rc.d/init.d/hylafax

initscripts shouldn't be marked as %config.

hylafax.x86_64: E: explicit-lib-dependency libtiff

You shouldn't have Requires: libtiff; rpm will figure out the library dependency by itself.

hylafax.x86_64: E: script-without-shebang /usr/sbin/faxsetup.linux

If this is a shell script, it needs a #! line. If it's not, it shouldn't be executable and it certainly shouldn't be in /usr/sbin.

Pretty odd. What are executables doing under /var/spool? And regardless of where they are, either they shouldn't be executable or they should have #! lines so that they can actually be executed.

hylafax.src: W: invalid-license BSD-like

Just "BSD" is good enough for me.

hylafax.x86_64: W: incoherent-version-in-changelog 5.1.11-1 5.1.11-2.fc9

Okay... I'll try to remember to add a changelog for each release even though the spec file isn't changing materially.

hylafax.x86_64: W: file-not-utf8 /usr/share/doc/hylafax-5.1.11/CONTRIBUTORS

Fixed upstream.
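For reference, the file-not-utf8 fix is a one-liner at install time; a sketch with a fabricated Latin-1 file (paths and contents invented for the demo):

```shell
workdir=$(mktemp -d)
# 0xE9 is 'é' in ISO-8859-1; on its own it is an invalid UTF-8 byte.
printf 'Contributor: Andr\351 Example\n' > "$workdir/CONTRIBUTORS"

# The usual spec-file recipe: convert via a temp file, then move back.
iconv -f ISO-8859-1 -t UTF-8 "$workdir/CONTRIBUTORS" > "$workdir/CONTRIBUTORS.utf8"
mv "$workdir/CONTRIBUTORS.utf8" "$workdir/CONTRIBUTORS"

converted=$(cat "$workdir/CONTRIBUTORS")
echo "$converted"
rm -rf "$workdir"
```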
hylafax.x86_64: W: non-conffile-in-etc /etc/hylafax/faxcover_example_sgi.ps

HylaFAX uses a configurable cover page template file, and this is an example/sample one. The configurable cover page template file *does* belong in the configuration directory, /etc/hylafax. So I believe that the samples do belong there as well. Can we say that the warning doesn't apply here?

hylafax.x86_64: E: executable-marked-as-config-file /etc/rc.d/init.d/hylafax

Okay, this is changed/fixed now.

hylafax.x86_64: E: explicit-lib-dependency libtiff

Okay, I've (again) removed libtiff. (I've had RPM builders, SuSE I think, who wanted it there.)

hylafax.x86_64: E: script-without-shebang /usr/sbin/faxsetup.linux

This is fixed upstream.

All fixed upstream. Okay, please now see:

SPEC:
SRPM:

I wanted to spend some time taking care of some of these old tickets but now I can't get this to build (in rawhide) at all; the build fails with:

Using /bin/bash to process command scripts.
Missing C++ runtime support for g++ (/usr/lib64/ccache/g++).
Compilation of the following test program failed:
----------------------------------------------------------
#include "iostream.h"
int main(){ cout << "Hello World!" << endl; return 0;}
----------------------------------------------------------
Usually this is because you do not have a standard C++ library installed on your system or you have installed it in a non-standard location. If you do not have a C++ library installed, then you must install it. If it is installed in a non-standard location, then you should configure the compiler so that it will automatically be found. (For recent gcc releases this is libstdc++, for older gcc - libg++)
Unrecoverable error! Once you've corrected the problem rerun this script.

libstdc++-devel is installed, so it's not something as simple as that. Perhaps there's a gcc 4.3 incompatibility? I don't know much at all about C++.

It builds perfectly fine on Fedora 8 (which uses gcc 4.1.2). Is gcc 4.3 only available in rawhide?
Well, it's in Fedora 9 which isn't terribly far away now. BTW, here's the end of the config.log file. Perhaps it will be instructive:

++ cat dummy.C
#include "new.h"
struct foo { int x; foo(); ~foo(); };
foo::foo() {}
foo::~foo() {}
int main() { foo* ptr = 0; foo* a = new(ptr) foo; a->x = 0; delete a; return 0; }
++ /usr/lib64/ccache/g++ -o dummy dummy.C
dummy.C:1:17: error: new.h: No such file or directory
dummy.C: In function 'int main()':
dummy.C:12: error: no matching function for call to 'operator new(long unsigned int, foo*&)'
<built-in>:0: note: candidates are: void* operator new(long unsigned int)
++ /usr/lib64/ccache/g++ -o dummy dummy.C -lg++
dummy.C:1:17: error: new.h: No such file or directory
dummy.C: In function 'int main()':
dummy.C:12: error: no matching function for call to 'operator new(long unsigned int, foo*&)'
<built-in>:0: note: candidates are: void* operator new(long unsigned int)
++ make -f confMakefile t
/usr/lib64/ccache/g++ -D__ANSI_CPP__ -I. -I. -I././regex -I. -I././util -I/usr/include -g -O t.c++
t.c++:1:22: error: iostream.h: No such file or directory
t.c++: In function 'int main()':
t.c++:2: error: 'cout' was not declared in this scope
t.c++:2: error: 'endl' was not declared in this scope
make: *** [t] Error 1

gcc 4.3 dropped the pre-ISO backwards compatibility headers. Fortunately, the only thing that appears to actually need those headers is the configure script.

Created attachment 303735 [details] hack to let hylafax's configure succeed with gcc-4.3.0

Try this patch. It hacks hylafax's configure to use iostream instead of iostream.h and lets this configure script succeed. But... blunt question: does hylafax still have an active upstream? Checking the details of this configure script, I found it to be hardly working at all and to produce questionable/broken results in detail. It's mere luck this package builds at all.

Yes, hello (?), an active upstream is here.
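For reference, the header removal in gcc 4.3 is easy to demonstrate, and the configure hack amounts to rewriting the test program to standard headers before compiling. A sketch (the sed expression mirrors the idea of the attached patch, not its exact contents; file names are invented):

```shell
workdir=$(mktemp -d)
# The kind of pre-ISO test program HylaFAX's configure emits:
cat > "$workdir/t.c++" <<'EOF'
#include <iostream.h>
int main(){ cout << "Hello World!" << endl; return 0;}
EOF

# Rewrite to standard C++: <iostream> plus an explicit namespace,
# which is what gcc 4.3 requires now that iostream.h is gone.
sed -e 's|#include <iostream.h>|#include <iostream>\nusing namespace std;|' \
    "$workdir/t.c++" > "$workdir/t-fixed.c++"

fixed=$(cat "$workdir/t-fixed.c++")
echo "$fixed"
rm -rf "$workdir"
```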
As for the details of this failing on gcc-4.3, admittedly there's not likely been anyone attempt to build HylaFAX with it until now. As for other parts of the configure script that may be hardly working at all, etc., please do elaborate, and I'll see what I can do about getting them working better. That said, it builds just fine on every platform out there that I've tested it on, and I'm pretty sure that's more than mere luck.

(In reply to comment #75)
> As for the details of this failing on gcc-4.3, admittedly there's not likely
> been anyone attempt to build HylaFAX with it until now.

That's apparent :)

> As for other parts of the configure script that may be hardly working at all,
> etc., please do elaborate, and I'll see what I can do about getting them
> working better.

Openly said, if I were upstream, I would ditch this configure script and its Makefile.ins underneath and replace them with something more standardized. My personal choice would be the autotools.

> That said, it builds just fine on every platform out there that I've tested it
> on, and I'm pretty sure that's more than mere luck.

Some details from my build.log:

1. ...
Looks like /usr/lib64/ccache/gcc supports the -g option.
... but not together with the -O option, not using it.
This result is plain wrong - the test trying to check for -O -g must have failed.

2. The configure script doesn't acknowledge CXXFLAGS and CFLAGS.

3. Checking for PAM (Pluggable Authentication Module) support ... not found. Disabling PAM support
This could be a packaging bug inside of the rpm.spec - I don't know.

4. ...
Checking ZLIB support.
Using ZLIB include files from
Using pre-built ZLIB library -lz
Done checking ZLIB support.
...
No idea what this is meant to mean. Could be a parallel build issue, a configure script failure, or something else.

5. ...
Checking JBIG-in-TIFF conversion support.
./configure: line 3274: 28995 Segmentation fault $TIFFBIN/tiffcp -c g4 misc/jbig.tif misc/foo.tif > /dev/null 2>&1
JBIG-in-TIFF conversion support not found.
...
?!? Likely a broken check in configure.

6. -I/usr/include in compiler calls:
...
/usr/lib64/ccache/gcc -D__ANSI_CPP__ -I. -I.. -I.././regex -I.././regex -I.././util -I/usr/include -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector ...
Though it doesn't have an impact in most cases, this is quite a serious bug, because it impacts include file search order.

7. uid/gid processing:
+ make install -e FAXUSER=8690 FAXGROUP=492 SYSUSER=8690 SYSGROUP=492 BIN=/var/tmp/hylafax-5.2.2-1.fc9-root-mockbuild/usr/bin SBIN=/var/tmp/hylafax-5.2.2-1.fc9-root-mockbuild/usr/sbin LIBD
/bin/bash ./port/install.sh -u 8690 -g 492 -m 755 ...
This is all wrong... the uid/gids you see above are my personal ones!

This new package does build for me. I wonder about these bits from the configure output:

Checking for PAM (Pluggable Authentication Module) support ... not found. Disabling PAM support
Checking for LDAP (Lightweight Directory Access Protocol) support ... not found. Disabling LDAP support

Is it worth adding build dependencies on pam-devel and openldap-devel? I can't say what functionality is missing but generally in Fedora we enable pam and ldap support for most programs.

Looks like /usr/bin/gs is the PostScript RIP to use.
WARNING, /usr/bin/gs does not seem to be an executable program; you may need to correct this before starting up the fax server.
ls: cannot access /usr/bin/gs: No such file or directory
./configure: line 4180: /usr/bin/gs: No such file or directory

Does this really matter, since it was explicitly directed to use /usr/bin/gs? Or does a build dependency on ghostscript need to be added? There are similar complaints about sendmail and mgetty.
PAM and LDAP support in HylaFAX enable authentication mechanisms for the hfaxd protocol that are alternatives to the etc/hosts.hfaxd file-based authentication method. Sure, pam-devel and openldap-devel should probably be build dependencies.

The WARNING regarding /usr/bin/gs can be ignored. It's an installation dependency and not a build dependency. Furthermore, the configure-determined value is not built in to the binaries and can be configured by the administrator. Same goes for sendmail as for ghostscript. And mgetty is entirely optional.

SPEC:
SRPM:

Finally getting back to this review; I'll try to get it finished up. Note to FE-Legal folks: this package needs attention for two reasons and perhaps a third as well. Please search for FE-Legal below. Now, to the review.

Was the hylafax/hylafax+ issue ever resolved? Do we need FE-Legal to be involved in that?

Can you explain why the tarball in the src.rpm does not match the tarball fetched from the Source0: URL? There seem to be rather significant source differences. This kind of thing is not permissible; the sources in the src.rpm must be identical to the upstream sources except in specific limited cases where we must remove something. I note that the tarball in the src.rpm seems to be three days newer than the one upstream.

It doesn't particularly bother me, but the guidelines do specify that you not use a specific sourceforge mirror for the source URL (although personally I find I often have to add one just to get things to download, since sourceforge is so incredibly unreliable).

I recommend not using the name of the package in the summary, as it tends to look rather redundant in listings. Still, there is a change of case so I won't block the package if you think it really needs to be there.

Your changelog entries are not in one of the acceptable formats.
These are parsed automatically, so please follow the formats given in the Changelogs section of the guidelines, and please also include a comment every time you change the release.

You need a dependency on the crontabs package if you want to put things in /etc/cron.daily.

You call ldconfig in your scriptlets, but you don't have any dependencies on it. When you use the single-line scriptlets (%post -p /sbin/ldconfig) then you don't need them, but when you use multiline scriptlets you have to specify the dependencies manually.

Finally, I have significant issues with the amount of stuff this package puts under /var/spool. I don't believe any of the files belong there at all. Executables, certainly not. Unless you can illustrate how the FHS allows such things, I cannot approve this package. The modem config files need to be under /etc; the executables probably belong under /usr/libexec if they're not supposed to be run by the end user.

X source files do not match upstream.
* package meets naming and versioning guidelines.
* specfile is properly named, is cleanly written and uses macros consistently.
? summary includes the name of the package.
* description is OK.
* dist tag is present.
* build root is OK.
X license field matches the actual license.
? license is open source-compatible.
* license text included in package.
X changelogs not correctly formatted.
* latest version is being packaged.
* BuildRequires are proper.
* compiler flags are appropriate.
* %clean is present.
* package builds in mock (rawhide, x86_64).
* package installs properly.
* debuginfo package looks complete.
* rpmlint has acceptable complaints.
X final provides and requires:

Provides:
config(hylafax) = 5.2.4-3.fc9
libfaxserver.so.5.2.4()(64bit)
libfaxutil.so.5.2.4()(64bit)
hylafax = 5.2.4-3.fc9

Requires:
/bin/sh
/sbin/chkconfig
/sbin/service
config(hylafax) = 5.2.4-3.fc9
gawk
ghostscript
libfaxserver.so.5.2.4()(64bit)
libfaxutil.so.5.2.4()(64bit)
libgcc_s.so.1()(64bit)
libgcc_s.so.1(GCC_3.0)(64bit)
liblber-2.4.so.2()(64bit)
libldap-2.4.so.2()(64bit)
libpam.so.0()(64bit)
libpam.so.0(LIBPAM_1.0)(64bit)
libstdc++.so.6()(64bit)
libstdc++.so.6(CXXABI_1.3)(64bit)
libstdc++.so.6(GLIBCXX_3.4)(64bit)
libtiff.so.3()(64bit)
libutil.so.1()(64bit)
libz.so.1()(64bit)
mailx
sharutils

X (missing crontabs for /etc/cron.*)
X (missing /sbin/ldconfig dependency for %post and %postun)
* %check is not present; no test suite present. I have no way to test this software.
X shared libraries are present; ldconfig called properly but dependency on it is missing.
X ownership problems for /etc/cron.*
* doesn't own any directories it shouldn't.
* no duplicates in %files.
* file permissions are appropriate.
X scriptlets are OK, but ldconfig dependencies are missing.
* code, not content.
* documentation is small, so no -doc subpackage is necessary.
* %docs are not necessary for the proper functioning of the package.
* no headers.
* no pkgconfig files.
* no static libraries.
* no libtool .la files.

(In reply to comment #81)
>.

Indeed, this falls a little outside the realm of fair use. Please get rid of it.

>.

There's no incompatibility there, but please use: License: libtiff and BSD with advertising.

Jason and Tom, thank you very much for your assistance here. Here are new SPEC and SRPM files:

SPEC:
SRPM:

As for the hylafax/hylafax+ issue: nothing has changed there. In all truth they're pretty much the same thing. They are nearly interchangeable.
I fully expect those at hylafax.org to continue to claim that HylaFAX+ is a "fork" and needs to be clearly distinguished from their releases; however, HylaFAX+ frequently and routinely serves as the upstream for development of their codebase (and vice-versa). There are some differences, but there's truthfully no more difference than exists between, say, Apache HTTPD from Debian and Apache HTTPD from Fedora.

The tarball in the src.rpm did not match the tarball from the Source0 URL because in going to hylafax-5.2.4-3.src.rpm from hylafax-5.2.4-1.src.rpm (in order to address the Fedora 9 issues mentioned above) I made changes to the tarball in the src.rpm without cutting another release of the tarball indicated in the Source0 URL. From now on I'll apply patches instead of changing the tarball on -2, -3, -4, etc. releases.

I removed /etc/hylafax/faxcover_example_sgi.ps from the upstream repository. Thank you for pointing out the lack of fair use.

In future releases I will not point to a specific Sourceforge mirror in Source0 but will instead point to downloads.sourceforge.net. For this release I left it pointed at the specific URL. Thanks for pointing this out.

I've removed the name of the package from the summary.

I've changed License to "libtiff and BSD with advertising".

I've fixed the formatting of the changelog entries (although I neglected to make one for this release).

I've added a dependency on crontabs.

I've added dependencies on /sbin/ldconfig.

As for the stuff going in under /var/spool... I'm afraid that's something that I cannot change. Changing it upstream would be a support nightmare, and so would having the Fedora distribution be so vastly different from the rest of the HylaFAX installations out there. It's been this way for 18 years now, and moving hylafax from /var/spool at this point due to perceived incongruities with the FHS is just something that I'm not anxious to do.
The benefits of having HylaFAX in Fedora would not likely outweigh the costs of dealing with the subsequent support nightmare that would ensue. That said, let me make my argument as to why everything under /var/spool/hylafax should be acceptable as it is.

The purpose of /var is to allow /usr to be read-only. Thus, any files that are subject to change during normal application function are expected to reside in /var. Strictly speaking, /var/spool is reserved for files that are to be processed in the future. Thus, the bona-fide spools are totally legit: /var/spool/hylafax/sendq, hylafax/doneq, hylafax/recvq, hylafax/status, hylafax/log, hylafax/info, etc. In fact, the only directories that are not technically spools are hylafax/etc, hylafax/bin, and hylafax/config.

While the vast majority of HylaFAX binaries and their respective configuration files are installed outside of /var/spool/hylafax, those three directories (etc, bin, and config) hold configuration files and configuration utility scripts, etc., that control how the spools are handled. Due to the way that the HylaFAX daemons operate it would be extremely cumbersome (if not impossible - as in the case of a chroot) for these to be elsewhere. They're very much like the configuration files that LPRng had under /var/spool/lpd.

So, that's my argument. I hope that it's acceptable. But if it's not... unfortunately there's not much that I am willing to do about it. I do thank you and appreciate your attention.

(In reply to comment #83)
>.

We could easily achieve this with a Provides: hylafax.

> As for the stuff going in under /var/spool... I'm afraid that's something that I
> cannot change. ...

One point: I'm pretty sure we will get in trouble with SELinux that way.

The package must not contain /usr/lib/debug/. Those files belong in the -debuginfo package only.

On Fedora 8, faxsetup gives:

Warning: /usr/share/ghostscript/fonts does not exist or is not a directory!
The directory /usr/share/ghostscript/fonts does not exist or this file is not a directory. This is the directory where the HylaFAX client applications expect to locate font metric information to use in formatting ASCII text for submission as facsimile. Without this information HylaFAX may generate illegible facsimile from ASCII text. Hello Michael, The package does not contain /usr/lib/debug:
[root@bilbo i386]# rpm -pql hylafax-5.2.5-1.fc8.i386.rpm | grep debug
[root@bilbo i386]#
I've now added ghostscript-fonts to the Requires. Thanks. You want proof? No problem. ;) The spec file contains %{_libdir}/* which also includes /usr/lib/debug recursively. Instead, you want the more explicit %{_libdir}/libfax* or %{_libdir}/*.so.* That your own build does not include debuginfo files means that you don't have the "redhat-rpm-config" package installed. Packagers should "yum install rpmdevtools".
> ghostscript-fonts
$ rpm -q ghostscript-fonts hylafax
ghostscript-fonts-5.50-18.fc8
hylafax-5.2.5-1.fc8
It was installed already. It doesn't add /usr/share/ghostscript/fonts but /usr/share/fonts/default/ghostscript:
$ rpmls ghostscript-fonts|grep ^d
drwxr-xr-x /etc/X11/fontpath.d
drwxr-xr-x /usr/share/fonts/default
drwxr-xr-x /usr/share/fonts/default/ghostscript
I can confirm that /usr/lib/debug is making it into the package, but not on x86_64 where I usually build. I did an i386 build and everything under /usr/lib/debug is indeed included in the main package. I note that recent rpmlint has grown a complaint about the Summary not being capitalized. Might be good to fix that. I am not at all comfortable approving this package with the FHS violations in /var/spool, and the naming issue troubles me as well. You can say that it's really "hylafax" and not "hylafax+", but the bottom line is that the upstream web site very clearly puts the '+' there and makes the distinction between the plussed and unplussed versions clear.
Also, when I go to google and enter "hylafax" I get pointed to hylafax.org which uses no '+' anywhere and gets me a different piece of software. Honestly, Fedora has no interest in any argument between the upstream developers of the different branches of hylafax, and if you can't work out a consistent naming scheme amongst yourselves which you present to us then I'm quite happy to say that we'll just wait it out. I'm going to start a thread on fedora-devel to try and get some discussion going. > I can confirm that /usr/lib/debug is making it into > the package, but not on x86_64 where I usually build. On x86_64 %_libdir is /usr/lib64, so /usr/lib/debug is not matched by %{_libdir}/* in the %files section. (In reply to comment #88) I've made a change to %{_libdir}/libfax* (In reply to comment #89) This is a bug in the ghostscript package and not HylaFAX. HylaFAX is extracting the path from the 'gs -h' output. HylaFAX's faxsetup will compensate for this ghostscript bug later during setup and configuration. (In reply to comment #90) I've followed your fedora-devel discussion here: There seems to be a resounding consensus with respect to the FHS matter and a very undecided response to the naming matter. Please allow me to restate my positions on these. My positions are entirely based upon anticipation of user expectations, minimizing support cases, and making everything as intuitive as possible. In fact, these concerns are the only reason for my creating this review request in the first place: if there were a hylafax package in Fedora already I wouldn't have bothered - there would have been no reason for this effort. As HylaFAX is approaching its 20th birthday and is almost certainly the most popular open-source fax application for UNIXes, it's silly that Fedora does not have a hylafax package. RedHat 5.2 *did* have hylafax-4.0pl2 (that's the SGI flavor, not from hylafax.org).
RedHat apparently decided to drop the hylafax package due to conflicts with the mgetty-sendfax package. (That is interesting because the mgetty author created the conflict... as his approach at the time was to write his own fax software instead of participating in HylaFAX development. And, in particular, he chose to use "sendfax" for one of the commands... and thus created the conflict. One could argue, and I would agree, that "sendfax" is generic enough to be the presumptive way to send a fax with a command. Nevertheless, this seems to be the reason why hylafax was tossed from RedHat 6.0.) Again, I'm okay with calling the package hylafax+ instead of hylafax. I think that would be a mistake, though. And I think having a "hylafax.org" and a "hylafax+" but no "hylafax" (cutting the baby in half) would be an even worse mistake. Why? Because all HylaFAX users, whether they use SGI's flavor, hylafax.org's flavor, iFax's flavor, or the HylaFAX+ flavor... they all call that software "HylaFAX". And they're all right, too. If hylafax.org really wanted to submit a package request and maintain it, then I would wholeheartedly acquiesce and say to put that in as a "hylafax" package into Fedora. I would never use it, myself, of course... but my whole reason for creating this package review request in the first place has always been to see a HylaFAX package in Fedora for other users. You don't see me trying to bully a hylafax+ package into Debian/Ubuntu for exactly this reason. Both HylaFAXes at hylafax.org and Sourceforge (HylaFAX+) are so similar that it's really inaccurate to describe their differences as a fork. I don't think there's an accurate label, but a certain well-known open-source philosopher once called it a "pseudo-fork" when I brought up this concept to him.
He said that this case is very similar to what happens when a distribution packager adds a few patches here and there to suit the distribution schema or to address bugs or even enhancements that appealed to the packager. He indicated that use and functionality were the criteria by which a fork was to be defined... and not so much on code specifics. In other words, we could create a software package called "hello_world" that was coded in shell this way:
echo "Hello world!"
And then someone came along and rewrote hello_world in this way:
printf "Hello world!\n"
Are they forks? Maybe there is a reason to prefer one over the other, but they're functionally the same, and there's absolutely no sane reason to try to have both of them. Pick one or the other. If a time comes that one is preferred over the other, then simply change, and keep on calling it hello_world. I realize that's an over-simplified example, but I'm trying to illustrate my perspective here... and what I believe is the general user perspective. Users are going to want to do: 'yum install hylafax'. For users who already know that HylaFAX on Fedora is called "hylafax+" or "hylafax.org" then doing 'yum install hylafax+' may be beneficial, but it will create a support matter and a point of confusion for every first-time installer. There seems to be some suspicion on your part, Jason, that I have some nefarious intention "in (what seems to [you] to be) an attempt to gain legitimacy". I think that you've entirely misunderstood what I've done. I coded at hylafax.org for several years. There was an occasional flare-up with respect to opinions on aesthetics of the coding being committed to the repository, and an occasional flare-up with respect to design approaches, but as ugly as those sometimes got the code changes were almost always followed, and they weren't the reason why I moved my coding to Sourceforge. In fact, I set up the Sourceforge site as a download location for HylaFAX...
specifically pre-release versions of HylaFAX. You see, I always had been frustrated with the slow release schedule at hylafax.org. Users were regularly and frequently (multiple times per week) requesting that I send them pre-release tarballs or pre-release RPMs. It became very frustrating to answer mailing-list questions over and over again for weeks and months about bugs that were "fixed in the next release". So I finally made the decision to set up a site where users could just download those pre-releases for themselves. The end result of this action was a complete rejection by other hylafax.org developers (all of whom were/are employed by iFax). They insisted that references inside of the tarball (code, manpages, and text files) should have no reference to hylafax.org or "The HylaFAX Development Team". The only way that I could sanely accommodate that was to set up a separate code repository. Then I embarked upon a number of months where I was trying to maintain two code repositories (and that never works: it's a real pain, as it had been when those same hylafax.org developers asked me to code in a 4.2 CVS branch while expecting me to also maintain the old 4.1 branch that eventually did nothing for open-source HylaFAX users and never had another release). So basically what happened was that iFax made it increasingly impossible for the largest code contributor, me, to continue to operate at hylafax.org. And, yet, they were as eager as always to absorb the developments I would make at the Sourceforge site. So we now have a situation, then, where there are two versions of HylaFAX that are basically following the same development path... one behind the other. There are some differences, but they're nominal for most users, and functionality and use are identical (the differences follow the hello_world example quite literally although not as simplistically).
The HylaFAX+ name came about because some users were quite emphatic that they needed some distinction between "the Sourceforge HylaFAX development project" and hylafax.org. In retrospect, I now tend to believe that those clamors were coming from hylafax.org enthusiasts whose intentions were really to strip legitimacy from my efforts. I hope that clarifies my intentions. If they continue to seem nefarious to you, then I sincerely apologize. I truly believe that my actions and intentions have always been done in good faith and with the users' interests in mind. So, again, I'm happy to change the name of the package to "hylafax+" if that's what's really the hold-up here. I disagree with it, but I'm willing to do it if, indeed, that's the hold-up. Unfortunately, I don't think that using "Provides: hylafax" helps any. In an 'rpm -Uvh' procedure (upgrade) this still happens (in extreme multiplicity):
file /var/spool/hylafax/etc/dialrules from install of hylafax+-5.2.5-1.fc8.i386 conflicts with file from package hylafax-5.2.0-1.fc8.i386
Maybe someone better with spec files can tell me what I didn't do right. But this kind of thing is *exactly* what must be avoided. Users need to be able to transition between hylafax.org and HylaFAX+ (and Fedora) RPMs without these kinds of headaches. And as for the FHS matter... I hope that you can understand that I cannot change how the upstream package is installed or works in such a dramatic way as it would be to move "bin", "etc", and "config" elsewhere. The resulting support issues would simply be too great. Now, we may be able to get away with symlinking as the fedora-devel thread suggests, but it would be something that would be unique to the Fedora package... unless it were so interchangeable that it would work for all other platforms (e.g. Mac OS, Solaris, BSD, AIX, IRIX, SCO, etc.) too. And even then, I would be reluctant to do it.
You see, having it all right there allows the hfaxd client (much like an ftp client) in an administrative mode to get into the configuration files in those directories (yes, "bin" contains executable scripts which are largely considered configurable) and make adjustments that way. So it could be quite useful to have all of those things there. Certainly it's much more convenient to have them there. Thus, it is my perspective that those directories do very much belong under "var", and there really is no more-appropriate place to put them other than "spool"... in my estimation. Maybe that concept is stretching it a bit - nobody probably uses bin, etc, and config files in that way. But it's a design feature that I don't feel necessarily warrants change. However, if this is the only thing holding up getting this package into Fedora, and if we can make adjustments that prove to be transparent to the end-users, then I can work with that (I'll need some detailed explanations of what to do) in a Fedora-specific way as I've stated above. SPEC: SRPM: On the FHS matter... It would be infinitely easier to move all of /var/spool/hylafax into /var/hylafax instead of trying to move the subdirectories "bin", "etc", and "config" into other places. Does this make a resolution any easier? SPEC: SRPM: SPEC: SRPM: The package built fine in dist-f10 for all supported arches. The history of this review is huge; I haven't read it all yet, so these are just a few notes (probably not a complete review). * As the package is already hylafax, you don't need to have Provides: hylafax * BuildRequires: ... gcc, gcc-c++ - These aren't needed as they are implicitly added as build dependencies and shouldn't be mentioned (older Fedora versions will indeed need them). * Conflicts: mgetty-sendfax - We need to find a solution to avoid the conflict and implement a proper alternative, since this can probably not hit F-9/F-10. * - The JBIG library was not found on x86_64: Checking for JBIG library support ... not found.
Disabling JBIG support
Can support for this be enabled? (it needs to be added as a BuildRequires first) *. * rpmlint on the installed package:
[root@kwizatz Téléchargement]# rpmlint: useless-provides hylafax
hylafax.x86_64: W: unused-direct-shlib-dependency /usr/lib64/libfaxutil.so.5.2.8 /lib64/libm.so.6
hylafax.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.2.8 HYLAFAX_VERSION_STRING
hylafax.x86_64: W: unused-direct-shlib-dependency /usr/lib64/libfaxserver.so.5.2.8 /lib64/libm.so.6
1 packages and 0 specfiles checked; 3 errors, 3 warnings.
-> the cron scripts should probably stay as %config files
-> the undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.2.8 HYLAFAX_VERSION_STRING.
About: * Conflicts: mgetty-sendfax I meant that, once the package is fixed, the conflict will remain in version-releases older than the one which got the fix. SPEC: SRPM: > * As the package is already hylafax, You don't need to have > Provides: hylafax Removed. > * BuildRequires: ... gcc, gcc-c++ - Those aren't needed as they are implicitly > added in the BuildDependency and shouldn't be mentioned. (older fedora version > will need this indeed). Left as-is to support older Fedora. > * Conflicts: mgetty-sendfax - We need to find a solution to avoid conflict. > and implement a proper alternative. Since this can probably not hit F-9/F-10 HylaFAX was in RedHat 5.2. mgetty-sendfax chose to develop its own "sendfax" command-line fax tool using the same "sendfax" name as HylaFAX (as well as the same /var/spool/fax spool directory - HylaFAX has since changed to /var/spool/hylafax). Because of this conflict RedHat removed HylaFAX beginning at 6.0. The conflict cannot be resolved because it was the mgetty-sendfax developer's intention to create an alternative for HylaFAX. I do not wish to offend Gert Doering, but it is my recommendation to remove mgetty-sendfax from Fedora. (Mostly because the last official release, 1.0, is dated 1998...
although betas are available from 2007.) However, as I suspect that my recommendation will not be followed, it is therefore my suggestion to implement a "system-switch-fax" similar to what has been done for sendmail/Postfix. *IF* doing that work will finally get this package into Fedora - with no more hold-ups, then I will gladly go through the effort to develop system-switch-fax and make the necessary modifications to both the mgetty-sendfax package and this package. *HOWEVER*, I do not desire to go through that effort only to find that we're yet hung up on something else. Please advise. > * > - JBIG library was not found on x86_64 > Checking for JBIG library support > ... not found. Disabling JBIG support > Can the support for this can be enabled ? (it needs to be added as > BuildRequires first) The JBIG-KIT package is currently not in Fedora. Other distributions (i.e. Gentoo) do include it. However, there may be some patent encumbrances with respect to JBIG technology, and you may want to run this by your legal team before including JBIG-KIT in Fedora. > *. The warnings can be ignored. The Requires: should be sufficient, yes. Those packages are not needed for building this package, but they are needed at runtime. > -> the cron scripts should probably stay as %config files > -> the undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.2.8 > HYLAFAX_VERSION_STRING can probably be fixed. Please advise on how to fix it. Yes, the defaults can be changed at runtime. I'm not sure there is an issue here. There is a new HylaFAX release out. As far as I can tell, that's not even the same hylafax. Which underpins the reason why this package will never be approved (by me, at least) without being renamed to hylafax+ as has been repeatedly requested in this ticket. Since comment #83 indicates that this won't happen, I don't even know why I still have this ticket assigned to myself. So I'm just unassigning myself and returning this to the review queue.
As I do that, I'll make a few notes: The package in comment #99 still builds OK in today's rawhide and rpmlint really doesn't complain about much. In fact, I'll just post it here:
hylafax.x86_64: E: executable-marked-as-config-file /etc/cron.hourly/hylafax
hylafax.x86_64: E: executable-marked-as-config-file /etc/cron.daily/hylafax
hylafax.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.2.9 HYLAFAX_VERSION_STRING
hylafax.x86_64: W: unused-direct-shlib-dependency /usr/lib64/libfaxserver.so.5.2.9 /lib64/libm.so.6
hylafax.x86_64: W: unused-direct-shlib-dependency /usr/lib64/libfaxutil.so.5.2.9 /lib64/libm.so.6
The first two and last two are not problematic; I'm not really sure about the third one. It was indicated that this should be easy to fix, but I can't suggest how to fix it. The Conflicts: with mgetty-sendfax is problematic according to. I'd say that the best way out is to use alternatives as recommended by those guidelines, which requires coordination with the owner of the mgetty package (jskala@redhat.com) who should probably be added as a CC if this starts moving forward again. Jason, This is a very long-winded bug report. So let me be clear about the issues you bring up. I am VERY happy to change the name of the package to "hylafax+" if that will get it into Fedora. (Understand that a package name change will cause me no small amount of support effort for the very large existing installation base. See other notes describing inadequacies within RPM for dealing with it.) So, if I change the name of the package to "hylafax+" will you be able to help it get into Fedora? (I want to ensure that my effort in doing so actually results in something.) As for the alternatives suggestion, I'm completely happy to work with jskala@redhat.com whom I have now CC'ed on this report. SPEC: SRPM: SPEC: SRPM: SRPM: Hi there, I am looking around for an rpm of the latest Hylafax server for Fedora 13 (or 14?) but found hylafax.org only offers a package for Fedora 10.
I am not sure if this is the right place to ask. I have installed a hylafax server on RedHat 9.0 and just recently wanted to install another one on F13. Do you think the rpm for Fedora 10 built by hylafax.org is ok for installation on F13? Where could I find more information about hylafax server installation on Fedora (w/ SELinux)? Thanks in advance. paul Lee - I'm not a sponsor, so I can't do much directly. You've clearly put a lot (!) of work into this and I just wanted to give you an "atta boy!". This is a complicated package, for a lot of reasons. Clearly you're still interested, since you continue to attach version updates, but it looks like Jason stopped having fun a while ago. :) My free, unsolicited advice is to hit [good] or irc://irc.freenode.net/fedora-devel [better] to see if you can get a fresh sponsor or rekindle Jason's interest. Paul from Comment #106 - There is no Fedora-supported answer at this time. I would suggest the following links: I appreciate your situation, but please keep this bug on the topic of Lee's packaging review and not support for the software itself. Paul, You can download a pre-built x64 Fedora 13 RPM for HylaFAX+ here: Dave, Thank you for the advice, I will soon pursue a fresh sponsor on IRC. Thanks, Lee. Thanks Dave & Howard. It seems, Howard, you are maintaining a number of rpms for different systems. Unfortunately, my f-13 system is not 64-bit. Interestingly, I do not see a 5.4.2 version on the Hylafax website? Paul, You can always download the SRPM and 'rpm --rebuild' it. For the website see SPEC: SRPM: SRPM: For what it's worth, I did solicit a sponsor on IRC as suggested. Although some brief discussion ensued nothing became of it other than comments similar to, "Oh, that package... I'm not going to touch it."
I understand the blocks to be as follows: 1) package naming - there is some objection to using "hylafax" as the package name; however, as yet no prospective sponsor has committed to sponsoring the package inclusion if the package name is changed to "hylafax+". 2) conflict with mgetty-sendfax - in order for package inclusion to occur there will necessarily need to be collaboration on the "alternatives" so that users can switch between hylafax and mgetty-sendfax. (It is apparently unacceptable to have conflicting packages in the repository.) But, again, collaboration is apparently not easy to come by. For me the biggest problem is the violation of the FHS, e.g. binaries in %{faxspool}/bin/. Christoph, If I resolve the issue with binaries in %{faxspool}/bin/ are you willing to sponsor the package for inclusion? Thanks, Lee. Lee, this is amazing. 4 years and you did not give up (you still haven't, have you?). I will do this review.
hylafax.i686: W: no-manual-page-for-binary hylafax
hylafax.i686: W: no-manual-page-for-binary faxsetup.linux
hylafax.i686: W: no-manual-page-for-binary faxmsg
hylafax.i686: W: no-manual-page-for-binary ondelay
hylafax.i686: W: no-manual-page-for-binary probemodem
hylafax.i686: W: no-manual-page-for-binary lockname
hylafax.i686: W: no-manual-page-for-binary typetest
I encourage you to write man pages for these executables.
hylafax.i686: W: wrong-file-end-of-line-encoding /usr/share/doc/hylafax-5.5.0/COPYRIGHT
That file probably has DOS/Windows-style end-of-line characters. It is preferred to use %global rather than %define. Is there some reason to specify /usr/bin/tiffcp instead of libtiff-tools? I'm going to talk to jskala (as he is in the same building as me) about the conflicts and virtual provides. I second that changing the name to hylafax+ is probably the best thing. I briefly read all of this BZ (as it is very long) and still do not understand (even after reading #92) why that stuff in /var/spool/ is there.
Can you elaborate why it could not be moved to /etc, /usr/bin, /var/log, etc.? And I'm sorry in advance if you have to repeat yourself because I missed it. Hello Miroslav. Thank you for taking this project. I could write up man pages for those executables, but they'd be completely unused because those executables are not meant to be used except as tools by other executables which do already have man pages. I've now changed the EOL codes on COPYRIGHT in the upstream CVS. I've now changed %define to %global. The reason for specifying /usr/bin/tiffcp is because libtiff-tools is a new package that was broken off from libtiff. Consequently, in order for the SPEC/SRPM to work properly on older Fedora systems before libtiff-tools was broken off from libtiff, I specify /usr/bin/tiffcp. I am happy to change the package name to hylafax+. HylaFAX hfaxd operates chroot-ed to /var/spool/hylafax and needs access to at least some of the scripts in /var/spool/hylafax/bin and /var/spool/hylafax/etc. So moving /var/spool/hylafax/bin and /var/spool/hylafax/etc to somewhere outside the chroot makes things very problematic. > I could write up man pages for those executables, but they'd be completely > unused because those executables are not meant to be used except as tools by > other executables which do already have man pages. I understand that. And this is not a MUST, but only a SHOULD item. But still, having *some* man page is a nice thing. Even if it would be a very short man page with something like: "You should not run XXXX manually. This is called internally by YYYY(8). See also YYYY(8)" ad tiffcp - fair enough ad chroot-ed environment - fair enough, I have no objection to the content. But I do have an objection to the directory where it resides. Why do you use /var/spool/hylafax? Quoting: "/var/spool contains data which is awaiting some kind of later processing.
Data in /var/spool represents work to be done in the future (by a program, user, or administrator); often data is deleted after it has been processed." I would really recommend you to move it to /var/hylafax/chroot (similarly to what e.g. bind-chroot does). Is it viable? While I'm not averse to creating man pages for every executable installed, I remain unconvinced and highly sceptical that doing so for executables not designed to be run by users would have any meaningful value. Even in a minimal install Fedora has numerous executables installed which do not have man pages, and as a Fedora user I have never once wanted to run those executables directly or been disappointed by there not being a man page for them. Not everything that utilizes files and scripts from the traditional /var/spool/hylafax directory operates within the chroot, and so renaming that "spool" directory to /var/hylafax/chroot is misleading. I would be agreeable to moving it to /var/hylafax. I think that this was suggested before in Comment #94. However, both this and your suggestion seem to violate the FHS: "Applications must generally not add directories to the top level of /var. Such directories should only be added if they have some system-wide implication, and in consultation with the FHS mailing list."
For what it may be worth, the contents of the "HylaFAX spool" directory are as follows:
drwx------ 2 uucp uucp  4096 2010-05-02 20:50 archive
drwxr-xr-x 3 root root  4096 2010-07-30 15:41 bin
drwxr-xr-x 2 uucp uucp  4096 2011-10-25 11:08 client
drwxr-xr-x 2 root root  4096 2010-07-30 15:41 config
drwxr-xr-x 2 root root  4096 2010-05-02 20:50 dev
drwx------ 2 uucp uucp  4096 2011-10-25 13:01 docq
drwx------ 2 uucp uucp  4096 2011-10-25 12:01 doneq
drwxr-xr-x 2 uucp uucp  4096 2011-10-25 03:47 etc
drwxr-xr-x 2 uucp uucp  4096 2011-10-25 11:08 info
drwxr-xr-x 2 uucp uucp 77824 2011-10-25 16:55 log
drwx------ 2 uucp uucp  4096 2010-05-02 20:50 pollq
drwxr-xr-x 2 uucp uucp 36864 2011-10-25 16:55 recvq
drwx------ 2 uucp uucp  4096 2011-10-25 11:08 sendq
drwxr-xr-x 2 uucp uucp  4096 2010-05-02 20:50 status
drwx------ 2 uucp uucp  4096 2011-10-25 11:06 tmp
Additionally, there are FIFOs for each modem and one for faxq created there. The directories archive, client, docq, doneq, info, log, pollq, recvq, sendq, status, and tmp are all true spool directories per the FHS definition you cite. The bin, dev, and etc directories are the ones raising concerns (namely bin), and they all are there so that they can be accessible to the hfaxd client from within the chroot. The dev directory is there for access to /dev/null. The etc directory is there so that the administrative hfaxd client can manipulate configuration files. The bin directory is only there for needful operations within the chroot. You can maybe think of it as a type of hybrid between lpd and ftp. Imagine a printing client/driver that could send print jobs to the server but also retrieve copies of previous print jobs and change various types of operations in the print server. Recognize that HylaFAX hails from a time 20 years ago when the FHS didn't exist. So it's not as if HylaFAX was developed in direct violation of the FHS - rather, the FHS was developed without consideration of HylaFAX.
Both Gentoo and Debian have HylaFAX ports, and both have left /var/spool/hylafax there. (However, in attempting to address the FHS concern Debian has done some cumbersome synchronizing work to duplicate files from /var/spool/hylafax/bin and /var/spool/hylafax/etc into /etc/hylafax - or something like that. I'm not sure how this helps alleviate the FHS concern, though.) My perspective on this is that the FHS just does not have an appropriate categorization for service-level applications that allow client access to scripts and configuration files from within a chroot that 99% of the time is used for spool purposes. However, if moving /var/spool/hylafax to /var/hylafax will resolve the concerns, then I am willing to do that. That argument about Gentoo and Debian is valid for me. IMO if other distributions, and especially Debian, accepted some structure, we should not be so über-dogmatic and should follow what users have been used to doing for 20 years. So I accept that /var/spool. ad man page - I won't force you. I'm just saying it would be nice to have them. This is not a requirement to pass this review. I'm just kindly asking. Feel free to ignore me. Hello Lee, I am the new maintainer of mgetty in Fedora. I've checked the conflict with my package. At first glance it seems like there are two paths which are common to hylafax and sendfax: the /usr/bin/faxrm binary and its manpage. I'm afraid that we can't work this out using Alternatives because I believe that each conflicting binary is manipulating a different fax job queue. I will investigate further and I will try to find some solution to this issue. Another problem is the general confusion caused by both packages being installed at the same time. Many binaries have the same names and they differ "only" by install location in the filesystem. From a user's point of view it might be very confusing to see "the same" binary twice, one in /usr/bin and the other located in /usr/sbin. If you already know about some possible solution to these issues, please let me know.
Please feel free to correct me if I've made some mistake. I am not very familiar with your package so far, but I will be happy to see hylafax included in Fedora because, as I read through the comments, it seems that you've put some extraordinary effort into getting hylafax included. Glad to see the interest in hylafax here. I use hylafax on about a dozen servers and it's great. I also use mgetty/vgetty, but not both on the same computer :-) They each have their uses and should both be in Fedora. I'd love to see hylafax be included in the Fedora repos. For the longest time I used the rpm's provided elsewhere linked to by the hylafax project pages. They work well. I just ran into an issue that I want to note here for posterity: hylafax will require a dep on libtiff-tools for pdf-email-attachments to work. PDF attachments magically disappeared from my servers a long while back and I now know why: libtiff-tools was broken out and the fax2ps program was gone. Ping, Michal, did you find some solution? I think the only viable solution here to overcome the conflicts with mgetty-sendfax is to rename the binaries in hylafax. I've looked at the Debian packages and they have conflicts between each other and nobody seems to care, but I don't think this is acceptable for Fedora. SPEC: SRPM: With respect to the conflict with mgetty-sendfax... Please forgive my naivety, but what's wrong with leaving "Conflicts: mgetty-sendfax" as it is? IMO that resolves the problem entirely as it clearly tells the user that they can install hylafax OR mgetty-sendfax but not both. I believe I followed the Packaging Guidelines[1] when I proposed renaming or prefixing binaries. If this is not acceptable for hylafax, then I think the only solution here is approaching the Fedora Packaging Committee and making the case there (my opinion is based on facts stated in [2][3]).
Please note that I personally don't have a problem with Conflicts: mgetty-sendfax in the hylafax spec file, however I want to be sure that we are following the guidelines here. References: [1] [2] [3]: Hello Michal, How do we approach the Fedora Packaging Committee for approval on this conflict? HylaFAX users are quite accustomed to using 'sendfax' and 'faxrm', and there are extremely numerous scripts written by those users referring to 'sendfax' and 'faxrm' such that renaming those binaries is simply unfeasible. There are undoubtedly many thousands of HylaFAX installs out there in production systems sending/receiving millions of faxes. HylaFAX has been around since 1991, and when Gert Doering later wrote the "sendfax" part of mgetty-sendfax he did so in part as an alternative to HylaFAX. In other words, he designed mgetty-sendfax to conflict deliberately. (That's not wrong or shameful at all, but it's significant to recognize that the author's intention was to never have both packages installed simultaneously.) HylaFAX was in RedHat 5.2 and was removed (apparently due to the conflict with mgetty-sendfax - and I can't figure out why they chose mgetty-sendfax over HylaFAX or why the conflict was a problem then), but HylaFAX has substantially more users and is more actively developed/maintained. Consequently, users of both Fedora and HylaFAX must always install HylaFAX independently from Fedora's repositories. For those of us well-acquainted with that procedure, it's not a problem, but new users who have no distribution preference will be tempted to use Debian or Gentoo where they can expect to find HylaFAX without going outside of the distribution repositories and where they can expect updates, etc., without paying much attention to the HylaFAX mailing lists. Okay, so it appears that we have a go-ahead with the Conflicts. Michal, you'll need to add a Conflicts on your side, too. Miroslav, where do we go from here?
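The mutual Conflicts arrangement agreed on above could be expressed with fragments like these (illustrative only; whether mgetty ships its sendfax bits as a subpackage, and its exact name, are assumptions):

```spec
# In the hylafax+ spec, as already discussed in this review:
Conflicts:      mgetty-sendfax

# And the matching declaration on the mgetty side, e.g. on a
# sendfax subpackage (subpackage name assumed for illustration):
%package sendfax
Conflicts:      hylafax+
```

An unversioned Conflicts is the simplest form; if one side later resolves the overlap, a versioned Conflicts (e.g. "Conflicts: mgetty-sendfax < fixed-version") would let newer fixed releases coexist.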
It would seem the next step would be to rename this package HylaFAX+, since this is a fork of HylaFAX and should be clearly labeled as such to avoid confusion with the original HylaFAX found at.

-Darren

I agree with #130. And since the biggest blocker is solved, we can focus on the minor issues.

1) Please convert your init.d script to a systemd service.

2) The buildroot and its cleaning in %install and %clean are not needed unless you target EPEL5.

3) The %files section is too verbose. E.g. this:

%attr(755,root,root) %dir %{_sysconfdir}/hylafax

can be safely written as:

%dir %{_sysconfdir}/hylafax

because it will take the attr from the buildroot, where you already set it up correctly with:

mkdir -p -m 755 $RPM_BUILD_ROOT%{_sysconfdir}/hylafax

And generally, setting attr to 755 or 644 is not needed because this is the default. %defattr(-,root,root,-) is not needed unless you target EPEL5. So your %files section can look like:

%files
%doc CHANGES CONTRIBUTORS COPYRIGHT README TODO VERSION
%{_initrddir}/hylafax
%config(noreplace) %{_sysconfdir}/cron.daily/hylafax
....
%defattr(-,uucp,uucp,-)
%dir %{faxspool}
%dir %{faxspool}/archive
...

SPEC: SRPM:

Miroslav, I'd like the SPEC/SRPM to be buildable on multiple RPM-based distributions... especially as many current and past RedHat, CentOS, and Fedora releases as possible. Can you frame your statements #1, #2 and #3 in Comment #131 accordingly? In other words, can you offer any suggestion on how to make those changes and still have the SRPM build properly on older Fedora releases? Please note the package name change.

Thanks, Lee.

ad 1) You can have SysV and systemd together. On Fedora 17+ you install systemd, everywhere else SysV. You may check for example.

2) Ok, you can leave it there.

3) Still valid. You may leave the initial %defattr for more compatibility, but you can still write it as:

%files
%defattr(-,root,root,-)
%doc CHANGES CONTRIBUTORS COPYRIGHT README TODO VERSION
%{_initrddir}/hylafax
%config(noreplace) %{_sysconfdir}/cron.daily/hylafax
....
%defattr(-,uucp,uucp,-)
%dir %{faxspool}
%dir %{faxspool}/archive

and it will be valid everywhere.

Other issues:

[!]: All build dependencies are listed in BuildRequires, except for any that are listed in the exceptions section of Packaging Guidelines.
Note: These BRs are not needed: gcc gcc-c++ make

hylafax+.src:19: W: unversioned-explicit-provides
hylafax+.x86_64: E: script-without-shebang /var/spool/hylafax/bin/genfontmap.ps
hylafax+.x86_64: E: script-without-shebang /var/spool/hylafax/bin/auto-rotate.ps
hylafax+.x86_64: W: no-manual-page-for-binary hylafax
hylafax+.x86_64: W: no-manual-page-for-binary faxsetup.linux
hylafax+.x86_64: W: no-manual-page-for-binary faxfetch
hylafax+.x86_64: W: no-manual-page-for-binary faxmsg
hylafax+.x86_64: W: no-manual-page-for-binary ondelay
hylafax+.x86_64: W: no-manual-page-for-binary probemodem
hylafax+.x86_64: W: no-manual-page-for-binary lockname
hylafax+.x86_64: W: no-manual-page-for-binary typetest
hylafax+.x86_64: W: unused-direct-shlib-dependency /usr/lib64/libfaxutil.so.5.5.2 /lib64/libm.so.6
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 HYLAFAX_VERSION_STRING
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 jbg_enc_out
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 cmsSetErrorHandler
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 cmsDoTransform
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 cmsSample3DGrid
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 cmsCreateTransform
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 cmsClampLab
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 cmsCloseProfile
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 cmsSetDeviceClass
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 cmsAlloc3DGrid
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 cmsAddTag
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 jbg_enc_options
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 jbg_enc_free
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 jbg_enc_init
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 cmsFreeLUT
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 cmsAllocLUT
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 cmsLabEncoded2Float
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 cmsSetColorSpace
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 cmsFloat2LabEncoded
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 cmsCreate_sRGBProfile
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 cmsSetPCS
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 cmsDeleteTransform
hylafax+.x86_64: W: undefined-non-weak-symbol /usr/lib64/libfaxserver.so.5.5.2 _cmsCreateProfilePlaceholder

If you are unsure what to do with some of these warnings, do not hesitate to ask me.

(In reply to comment #129)
> Michal, you'll need to add a Conflicts on your side, too.

Which releases do you plan to target once HylaFAX is included? The Git repository for a new package has by default only the rawhide branch, and you have to request others if you want them. I need to know so that I can update my branches accordingly.

SPEC: SRPM:

Michal, I expect to target Fedora 18. If it's possible to still get included into Fedora 16 and 17, I would like to do that. However, I would imagine that it's acceptable on your side to simply add the Conflicts for all releases.

Miroslav, this version now switches from SysV to systemd, as requested.
There will likely be some improvements to what I've done, but I think this gets it going. It simplifies the SPEC as suggested. Here is the rpmlint output against the SRPM and the installed package, with explanations...

# rpmlint /root/rpmbuild/SRPMS/hylafax+-5.5.2-3.fc16.src.rpm
hylafax+.src:57: W: configure-without-libdir-spec

This is expected because HylaFAX configure does not support libdir. See an earlier comment for more explanation.

hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/dict/de 0644L /bin/bash
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/dict/en 0644L /bin/bash
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/dict/es 0644L /bin/bash
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/dict/fr 0644L /bin/bash
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/dict/it 0644L /bin/bash
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/dict/nl_BE 0644L /bin/bash
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/dict/pl 0644L /bin/bash
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/dict/pt 0644L /bin/bash
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/dict/pt_BR 0644L /bin/bash
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/dict/ro 0644L /bin/bash
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/dict/sr 0644L /bin/bash
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/dict/tr 0644L /bin/bash

These are all shell "snippets" which are included via "." in other scripts.

hylafax+.x86_64: E: non-readable /var/spool/hylafax/etc/hosts.hfaxd 0600L

This file is supposed to be this way for security purposes.
hylafax+.x86_64: E: non-standard-dir-perm /var/spool/hylafax/archive 0700L
hylafax+.x86_64: E: non-standard-dir-perm /var/spool/hylafax/docq 0700L
hylafax+.x86_64: E: non-standard-dir-perm /var/spool/hylafax/doneq 0700L
hylafax+.x86_64: E: non-standard-dir-perm /var/spool/hylafax/pollq 0700L
hylafax+.x86_64: E: non-standard-dir-perm /var/spool/hylafax/sendq 0700L
hylafax+.x86_64: E: non-standard-dir-perm /var/spool/hylafax/tmp 0700L

These are all as intended, for security purposes.

hylafax+.x86_64: E: script-without-shebang /var/spool/hylafax/bin/auto-rotate.ps
hylafax+.x86_64: E: script-without-shebang /var/spool/hylafax/bin/genfontmap.ps

These are not shell scripts, they are PostScript... and they are correct as-is.

hylafax+.x86_64: W: no-manual-page-for-binary faxfetch
hylafax+.x86_64: W: no-manual-page-for-binary faxmsg
hylafax+.x86_64: W: no-manual-page-for-binary faxsetup.linux
hylafax+.x86_64: W: no-manual-page-for-binary hylafax
hylafax+.x86_64: W: no-manual-page-for-binary lockname
hylafax+.x86_64: W: no-manual-page-for-binary ondelay
hylafax+.x86_64: W: no-manual-page-for-binary probemodem
hylafax+.x86_64: W: no-manual-page-for-binary typetest

This is correct... there are no man pages for these binaries. Someday maybe there will be.

hylafax+.x86_64: W: unused-direct-shlib-dependency /usr/lib64/libfaxserver.so.5.5.2 linux-vdso.so.1
hylafax+.x86_64: W: unused-direct-shlib-dependency /usr/lib64/libfaxutil.so.5.5.2 /lib64/libm.so.6
hylafax+.x86_64: W: unused-direct-shlib-dependency /usr/lib64/libfaxutil.so.5.5.2 linux-vdso.so.1

These are not the fault of this package. The linux-vdso.so.1 unused dependency is the issue in Bug 738082, and the libm.so.6 unused dependency is inherited from libtiff, I suspect.

I asked on the packaging mailing list, and it seems that 50% of developers think it should be config and 50% think it should not be config. I think it should not be config, but I will not argue here, and I will not block the review on this.
> hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/dict/tr 0644L /bin/bash
>
> These are all shell "snippets" which are included via "." in other scripts.

Then you should remove that shebang. If a file is not meant for direct execution, then it should not have #! on the first line. If there is #! on the first line, then it should have the executable flag. That is the rule.

> This is correct... there are no man pages for these binaries. Someday maybe there will be.

Writing a man page is very easy. You can write it in perldoc or in asciidoc, which is more or less just wiki syntax. Here you can see examples for inspiration:

Asciidoc:
Perldoc:

But I will not block the review on this. Please consider it, though; two sentences describing what the command does are enough.

I accept all the other waivers. So the only remaining blocking issue is those files with a shebang but without executable permission.

SPEC: SRPM:

I've added the missing man pages, and I've removed the undesired shebangs.

Key:
[x] = Pass
[!] = Fail
[-] = Not applicable
[?] = Not evaluated
[ ] = Manual review needed

===== MUST items =====
see #82
[x]: Package successfully compiles and builds into binary rpms on at least one supported primary architecture.
[-]: %build honors applicable compiler flags or justifies otherwise.
Note: smp flags break the build of libfaxutil
[x]: All build dependencies are listed in BuildRequires, except for any that are listed in the exceptions section of Packaging Guidelines.
[x]: Package contains no bundled libraries.
Note: I see only regexp, but it seems others (nvi, mysql, php) bundle that as well.
[x]: Changelog in prescribed format.
[-]: Package does not run rm -rf %{buildroot} (or $RPM_BUILD_ROOT) at the beginning of %install.
Note: rm -rf %{buildroot} present but not required; waiving, submitter wants EPEL5
[x]: Sources contain only permissible code or content.
[x]: %config files are marked noreplace or the reason is justified.
[!]: Each %files section contains %defattr if rpm < 4.4
Note: %defattr present but not needed
: "MIT/X11 (BSD like)", "Unknown or generated", "BSD (4 clause)". 3 files have unknown license. see #82
[x]: Package consistently uses macros (instead of hard-coded directory names).
[x]: Package is named using only allowed ASCII characters.
[x]: Package is named according to the Package Naming Guidelines.
[x]: No %config files under /usr.
[x]: Package does not generate any conflict.
Note: Package contains Conflicts: tag(s) needing fix or justification. See from #122 till #129.
[x]: Package contains systemd file(s) if in need.
[x]: File names are valid UTF-8.
[x]: Useful -debuginfo package or justification otherwise.
[-]: Large documentation must go in a -doc subpackage.
Note: Documentation size is 112640 bytes in 6 files.
[x]: Packages must not store files under /srv, /opt or /usr/local

===== SHOULD items =====

Generic:
[x]: Reviewer should test that the package builds in mock.
[x]: Buildroot is not present
Note: Buildroot needed for el5
[x]: Package has no %clean section with rm -rf %{buildroot} (or $RPM_BUILD_ROOT)
Note: %clean needed for el5
[-]: Patches link to upstream bugs/comments/lists or are otherwise justified.
[x]: The placement of pkgconfig(.pc) files is correct.
[!]: Scriptlets must be sane, if used.
Note: systemd scripts are incorrect
[x]: SourceX tarball generation or download is documented.

Nice to fix:
[!]: Each %files section contains %defattr if rpm < 4.4
Note: %defattr present but not needed. I incorrectly said in #131 that it is needed for el5, but el5 has rpm 4.4.2, so you can remove it completely.

Blockers:
[!]: Scriptlets must be sane, if used. The systemd scripts are incorrect; see and . To have the %{_unitdir} macro you have to buildrequire systemd-units; see .

During installation I get:

Updating / installing...
1:hylafax+-5.5.2-4.fc17 ################################# [100%]
warning: /var/spool/hylafax/FIFO created as /var/spool/hylafax/FIFO.rpmnew

Is this expected?

New rpmlint errors:

hylafax+.x86_64: W: non-executable-in-bin /usr/sbin/faxcron 0444L
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/faxrcvd 0444L /usr/bin/bash
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/pcl2fax 0444L /usr/bin/bash
hylafax+.x86_64: W: non-executable-in-bin /usr/sbin/xferfaxstats 0444L
hylafax+.x86_64: E: non-executable-script /usr/sbin/xferfaxstats 0444L /usr/bin/bash
hylafax+.x86_64: W: non-executable-in-bin /usr/sbin/faxsetup.linux 0444L
hylafax+.x86_64: E: non-executable-script /usr/sbin/faxsetup.linux 0444L /usr/bin/bash
hylafax+.x86_64: W: non-executable-in-bin /usr/sbin/hylafax 0444L
hylafax+.x86_64: E: non-executable-script /usr/sbin/hylafax 0444L /usr/bin/bash
hylafax+.x86_64: E: non-executable-script /etc/hylafax/faxmail/application/octet-stream 0444L /usr/bin/bash
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/tiff2fax 0444L /usr/bin/bash
hylafax+.x86_64: W: non-executable-in-bin /usr/sbin/faxsetup 0444L
hylafax+.x86_64: E: non-executable-script /usr/sbin/faxsetup 0444L /usr/bin/bash
hylafax+.x86_64: W: non-executable-in-bin /usr/sbin/probemodem 0444L
hylafax+.x86_64: E: non-executable-script /usr/sbin/probemodem 0444L /usr/bin/bash
hylafax+.x86_64: W: non-executable-in-bin /usr/sbin/faxaddmodem 0444L
hylafax+.x86_64: E: non-executable-script /usr/sbin/faxaddmodem 0444L /usr/bin/bash
hylafax+.x86_64: W: non-executable-in-bin /usr/sbin/edit-faxcover 0444L
hylafax+.x86_64: E: non-executable-script /usr/sbin/edit-faxcover 0444L /usr/bin/bash
hylafax+.x86_64: W: non-executable-in-bin /usr/sbin/recvstats 0444L
hylafax+.x86_64: E: non-executable-script /usr/sbin/recvstats 0444L /usr/bin/bash
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/pollrcvd 0444L /usr/bin/bash
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/tiff2pdf 0444L /usr/bin/bash
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/wedged 0444L /usr/bin/bash
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/common-functions 0444L /usr/bin/bash
hylafax+.x86_64: E: non-executable-script /etc/hylafax/faxmail/application/pdf 0444L /usr/bin/bash
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/qp-encode.awk 0444L /usr/bin/gawk
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/pdf2fax.gs 0444L /usr/bin/bash
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/rfc2047-encode.awk 0444L /usr/bin/gawk
hylafax+.x86_64: E: non-executable-script /etc/hylafax/faxmail/image/tiff 0444L /usr/bin/bash
hylafax+.x86_64: E: non-executable-script /var/spool/hylafax/bin/dictionary 0444L /usr/bin/bash

I guess that those in /usr/sbin should be marked as executable, and those in /var/spool should have the shebang removed (or be marked as executable).

SPEC: SRPM:

I've removed %defattr for root from %files. I've renamed the source and patch files to hylafax+. I've modified the scriptlets for systemd; if they are still incorrect, please be specific about the problems, because I think they are correct according to

I've added a BuildRequires on /bin/systemctl. I did it this way instead of basing it on systemd-units because there is no systemd-units package in Fedora 18.

I don't know why the /var/spool/hylafax/FIFO file was marked as config. I've removed this, as I believe it to be an error.

As for the new non-executable-* rpmlint errors, I cannot reproduce them here after many, many attempts. Please check again and tell me how to reproduce the problem.

This:

%if 0%{?fedora} >= 16
BuildRequires: /bin/systemctl
%endif

is a little bit hackish. We need it for the definition of the macro %{_unitdir}, which is provided by the same package that provides this file.
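The rule applied to these rpmlint errors (a file whose first line is `#!` must carry the executable bit) can be sketched as a small check. This is an illustrative helper only, not rpmlint's actual implementation; the function name and the walk-the-tree approach are assumptions.

```python
import os
import stat

def shebang_without_exec(root):
    """Flag files that start with '#!' but lack the owner-executable bit.

    A sketch of the rpmlint rule discussed above, not its real code:
    sourced snippets should instead drop the shebang, while installed
    scripts (e.g. under /usr/sbin) should get their exec bit back.
    """
    hits = []
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    starts_with_shebang = fh.read(2) == b"#!"
            except OSError:
                continue  # unreadable file: skip, rpmlint reports those separately
            mode = os.stat(path).st_mode
            if starts_with_shebang and not mode & stat.S_IXUSR:
                hits.append(path)
    return hits
```

Under this rule, the dict/ snippets pass once the shebang is dropped, and the /usr/sbin scripts pass once mode 755 is restored.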
I would rather see a comment there, like:

# this is to load the definition of the macro %{_unitdir}

or even better, write it as:

%if 0%{?fedora} == 16 || 0%{?fedora} == 17
BuildRequires: systemd-units
%endif
%if 0%{?fedora} >= 18
BuildRequires: systemd
%endif

And then you will know that you can remove the first if-condition as soon as F17 is EOLed. But that is just my POV.

== systemd scriptlets ==

ad %postun: you have:

if [ "$1" = "1" ]; then

Please note that $1 is set to the number of installed packages, which may in some rare situations be greater than one (e.g. in multilib). You will be safe with the -ge operator:

if [ 0$1 -ge 1 ]; then

The same applies to %post.

ad %post: /bin/systemctl enable and /bin/systemctl start automatically enable and start those services, which is not what we want. Installations can be in changeroots, in an installer context, or in other situations where you don't want the services autostarted. Please use simply:

if [ $1 -eq 1 ] ; then
    # Initial installation
    /bin/systemctl daemon-reload >/dev/null 2>&1 || :
fi

And let the admin do:

chkconfig $daemon on

if he really wants to enable it.

> As for the new non-executable-* rpmlint errors, I cannot reproduce them here after many, many attempts. Please check again and tell me how to reproduce the problem.

Use my koji scratch build. Install it and see:

# ls -l /var/spool/hylafax/bin/pcl2fax
-r--r--r-- 1 root root 6575 Dec 10 11:54 /var/spool/hylafax/bin/pcl2fax
# head -n1 /var/spool/hylafax/bin/pcl2fax
#! /usr/bin/bash

> I don't know why the /var/spool/hylafax/FIFO file was marked as config. I've removed this as I believe it to be an error.

Now it is removed completely -- the whole file, not just the config flag. Are you sure you will not miss it at runtime?

Thank you for being so explicit. I clearly didn't see the -ge in %postun. In my defense with %post, I was merely following what I understood the statement, "If your service should be enabled by default", to be saying.
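The scriptlet argument convention discussed here (RPM passes each scriptlet the number of package instances that remain after the transaction) can be summarized as a small decision sketch. The function and action names below are illustrative only, not part of RPM or of this spec file.

```python
def scriptlet_action(phase, count):
    """Sketch of the $1 logic discussed above.

    RPM sets $1 to the number of package instances left after the
    transaction: %post sees 1 on first install and 2+ on upgrade;
    %preun/%postun see 0 on final erase and 1+ on upgrade (multilib
    can leave more than one copy, hence the >= comparisons).
    """
    if phase == "post" and count == 1:
        return "daemon-reload"       # initial installation only
    if phase == "preun" and count == 0:
        return "stop-and-disable"    # final erase
    if phase == "postun" and count >= 1:
        return "try-restart"         # upgrade: at least one copy remains
    return None                      # nothing to do in the other cases
```

In the spec file this corresponds to tests like `if [ $1 -eq 1 ]` in %post and `if [ 0$1 -ge 1 ]` in %postun.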
Anyway, I've made the changes to %post and %postun as you've suggested, and have made %preun consistent as well. I have followed your advice on the BuildRequires for %{_unitdir}.

The /var/spool/hylafax/FIFO file is created when faxq starts. It should never have been put there by the RPM. It is correct now.

Looking at your koji scratch build, I see an inconsistency. From...

/usr/bin/bash ../port/install.sh -idb hylafax.sw.server -root / -F /builddir/build/BUILDROOT/hylafax+-5.5.2-5.fc18.x86_64/usr/sbin -m 755 -O dialtest typetest
/usr/bin/bash ../port/install.sh -idb hylafax.sw.server -root / -F /builddir/build/BUILDROOT/hylafax+-5.5.2-5.fc18.x86_64/usr/sbin -m 755 -src xferfaxstats.sh -O xferfaxstats

Here the HylaFAX install script installs dialtest, typetest, and then xferfaxstats, all with mode 755. However, it would appear that the install script did not work as expected...

$ rpm -qpl --dump hylafax+-5.5.2-5.fc18.x86_64.rpm | egrep "dialtest|typetest|xferfaxstats" | grep /sbin/
/usr/sbin/dialtest 15672 1355136852 e3512f07f8e6431f999a9574a723570e19af6757bc9c28f72c115cc06d7c1d48 0100755 root root 0 0 0 X
/usr/sbin/typetest 11584 1355136852 fc0f20c2d83afe3a7c289061bca02bfb2f64f35e2739072ef06f6d68b1b88932 0100755 root root 0 0 0 X
/usr/sbin/xferfaxstats 19346 1355136846 2cee8af27a95214848d005e3e36bfe8b5306a137b47dcb793b9cf8c08289de0f 0100444 root root 0 0 0 X

...as we can see, the mode for xferfaxstats is 444 instead of 755, as it should be per the install.sh invocation. This behavior is different from what I am seeing on Fedora 16 and 17. In an attempt to debug this problem I have downloaded the Fedora 18 beta, but it will not install for me (I can never get past the disk setup/partitioning part). Do you have any suggestion on how I can work on Fedora 18 to debug?

1) You can build scratch builds in Koji even when you are not a Fedora packager yet.
2) If you really want to build locally (which is much better for debugging), you can try to build in mock. This installs a minimal installation in a chroot and builds the package in that chrooted directory, so you can build the package for F19, F18, F17, EL6, EL5 or whatever without having it actually installed.

SPEC: SRPM:

Using mock was very helpful, thank you. So, it turns out that because mock runs as an unprivileged user, it cannot do the chmod and chown that the HylaFAX installer expects. This is why the several %defattr and %attr entries in the %files section are necessary.

Very nice. APPROVED

(In reply to comment #144)
> Very nice.
>
> APPROVED

I. JUST. CAN'T. BELIEVE. THIS. It took ~6 years in a row to resolve this. Congrats to all involved!

Lee, what's your FAS account? I'll sponsor you if you're not sponsored yet.

Peter, I'm a sponsor and I'm already communicating off-bz with Lee regarding sponsoring.

(In reply to comment #147)
> Peter, I'm sponsor and I'm already communicating off-bz with Lee regarding sponsoring.

Ok.

I just sponsored Lee. Clearing FE-NEEDSPONSOR.

New Package SCM Request
=======================
Package Name: hylafax+
Short Description: an enterprise-strength fax server
Owners: faxguy
Branches: f16 f17 f18 el5 el6
InitialCC:

Git done (by process-git-requests).

hylafax+-5.5.2-6.fc16 has been submitted as an update for Fedora 16.
hylafax+-5.5.2-6.fc17 has been submitted as an update for Fedora 17.
hylafax+-5.5.2-6.fc18 has been submitted as an update for Fedora 18.
hylafax+-5.5.2-7.el5 has been submitted as an update for Fedora EPEL 5.
hylafax+-5.5.2-7.el6 has been submitted as an update for Fedora EPEL 6.
hylafax+-5.5.2-6.fc18 has been pushed to the Fedora 18 testing repository.
hylafax+-5.5.2-6.fc16 has been pushed to the Fedora 16 stable repository.
hylafax+-5.5.2-6.fc17 has been pushed to the Fedora 17 stable repository.
hylafax+-5.5.2-6.fc18 has been pushed to the Fedora 18 stable repository.
hylafax+-5.5.2-7.el6 has been pushed to the Fedora EPEL 6 stable repository.
hylafax+-5.5.2-7.el5 has been pushed to the Fedora EPEL 5 stable repository.
https://bugzilla.redhat.com/show_bug.cgi?id=188542
Can anyone explain the differences between the px, dip, dp and sp units in Android?

px is one physical pixel on the screen. Density-independent pixels (dip or dp) and scale-independent pixels (sp) are virtual units: dp scales with the screen density (one dp equals one px at 160 dpi), and sp is like dp but additionally scaled by the user's font size preference. You want to use sp for font sizes and dp for everything else.

Press Back when you get the notification and then Next. This time it will find the JDK.

You can force Android to hide the virtual keyboard using the InputMethodManager, calling hideSoftInputFromWindow and passing in the token of the window containing your edit field:

InputMethodManager imm = (InputMethodManager) getSystemService(Context.INPUT_METHOD_SERVICE);
imm.hideSoftInputFromWindow(myEditText.getWindowToken(), 0);

I've been playing around with the Android SDK, and I am a little unclear on saving an application's state. So given this minor re-tooling of the 'Hello, Android' example:

package com.android.hello;

import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

public class HelloAndroid extends Activity {

    private TextView mTextView = null;
}

I thought that might be all one needed to do for the simplest case, but it always gives me the first message, no matter how I navigate away from the app. I'm sure it's probably something simple like overriding onPause or something like that, but I've been poking away in the docs for 30 minutes or so and haven't found anything obvious, so I would appreciate any help. Cue me looking a fool in three, two, one...

You need to override onSaveInstanceState(Bundle savedInstanceState) and write the application state values you want to keep to the Bundle parameter, like:
Go to your project/IDE preferences and set the Java compiler level to 1.6, and also make sure you select JRE 1.6 to execute your program from Eclipse.
http://boso.herokuapp.com/android
Opened 9 years ago
Closed 9 years ago

#15149 closed defect (fixed)

Bug in pickling of toric varieties, II

Description:

This is a follow-up to ticket #15050. Apart from the methods changed in #15050, there are five more methods of ToricVariety_field, namely Todd_class, Chern_class, Chern_character, Kaehler_cone and Mori_cone, which use private variables for caching and produce errors when pickled. This fix follows the same logic as #15050 and rewrites the caching using the decorator cached_method.

Change History (14)

comment:1 Changed 9 years ago by
- Branch set to u/jkeitel/toric_pickling_2/
- Commit set to 15a41647c634421769963f6b6ccabe65c7907789
- Type changed from PLEASE CHANGE to defect

comment:2 Changed 9 years ago by
- Branch changed from u/jkeitel/toric_pickling_2/ to u/jkeitel/toric_pickling_2

comment:3 Changed 9 years ago by

comment:4 follow-up: ↓ 5 Changed 9 years ago by

In theory, I would agree that one needs to implement __reduce__ (basically, for all classes that inherit from UniqueRepresentation). But:

- It is really hard to doctest. Not only must you restore all the data for the hash, but if there is a hash collision then you also need all the data for __cmp__().
- In practice, Python doesn't support circular __reduce__, so we'll end up just triggering that bug.

comment:5 in reply to: ↑ 4 Changed 9 years ago by

> In theory, I would agree that one needs to implement __reduce__ (basically, for all classes that inherit from UniqueRepresentation).

In fact, UniqueRepresentation should be particularly easy to pickle: the weak cache that gets used to see if objects already exist has the parameters used to instantiate them already cached as a key! If the objects in there have sane pickling themselves (and they should, because they're hashable, and hence fairly immutable, so their important properties haven't changed since they were created, so any potentially circular stuff can go into setstate), a reasonable set of pickling parameters is already there!
It may be worthwhile to see if UniqueRepresentation could offer a default reduce that gets a little towards that concept.

EDIT: And in fact, CachedRepresentation already implements this! See sage/structure/unique_representation.py, line 560:

def __classcall__(cls, *args, **options):
    ...
    if instance.__class__.__reduce__ == CachedRepresentation.__reduce__:
        instance._reduction = (cls, args, options)
    ...

def __reduce__(self):
    return (unreduce, self._reduction)

So it's already doing that. It also means that the default pickling is throwing away the dictionary, which may be a bit rough.

By the way, toric varieties don't inherit from UniqueRepresentation. Their cohomology rings do, though, so you're probably running into this issue because toric varieties get hashed there (and in similar constructions).

> - It is really hard to doctest. Not only must you restore all the data for the hash, but if there is a hash collision then you also need all the data for __cmp__().

If our design decisions are being affected by whether something is hard to doctest, then we really have the cart and horse in the wrong order. Let's see. ToricVariety_field has these creation parameters: fan, coordinate_names, coordinate_indices, base_field. You have to be able to decide equality based on these. The last three should definitely not introduce circularities in their construction phases. Let's look at fan: cones, rays, lattice. Again, those should be the only parameters involved in the construction phase of a fan; the rest can go into setstate. I don't know exactly what things you need to define cones, rays and lattice, but it seems like those should also be strictly non-circularly constructible (any caches you might want to pickle can go into setstate).

> - In practice, Python doesn't support circular __reduce__, so we'll end up just triggering that bug.
And every time you're getting that (and we'd hope we run into an error report, because the silent failures are really painful), it's an indication that you got your construction/setstate balance wrong. Given the very nature of Python (and any normal programming language), any circular reference is always inserted by modifying an existing object (*). That's what setstate in pickling is for.

(*) There's of course Simon's example:

class circ(object):
    def __init__(self, i):
        self.i = i
        self.d = {self: 1}

which indeed seems like circularity-upon-construction, but the key here is that no circular input data is needed to complete the construction.

comment:6 Changed 9 years ago by
- Reviewers set to Volker Braun

For the record, with this ticket loads(dumps(variety.Todd_class())) works, and without this ticket it fails miserably. It's really the caching that messes us up, because it puts additional circular references into the reduction without any user control, except for disabling it via the ClearCacheOnPickle mixin class or by overriding __reduce__.

I still think we should make pickling of caches opt-in and not opt-out, because it is very easy to trip over and hard to doctest different combinations of cached outputs. If you want to pickle some result, then just pickle the result, not the object that happens to cache the result. But, in any case, this is not material for this ticket. The solution looks good within the constraints of the current caching system, so positive review.

comment:7 Changed 9 years ago by
- Status changed from new to needs_review

comment:8 Changed 9 years ago by
- Status changed from needs_review to positive_review

comment:9 Changed 9 years ago by

I still think the current patch removes the symptom, but does not solve the underlying reason: toric varieties don't pickle in a way that is robust in the face of circular references. This is because they
aren't sufficiently initialized in the construction phase of their unpickling to make their hash work. I don't know what guarantees pickling/unpickling makes about what happens between objects being constructed and having setstate called on them, but the circular references show that it isn't guaranteed that nothing happens in between. The problem happens to be triggered by circular references, but I don't think we have proof that this is the only case where it happens. Anyway, circular references aren't forbidden, and pickling in general has been designed with the means to deal with them. The next time someone introduces circular references on toric varieties, we'll have the same problem again. I'm sure toric varieties aren't the only data structures with this problem; unless one pays specific attention to it (or inherits from UniqueRepresentation!), one is likely to run into it.

I don't particularly disagree with the fact that caches aren't pickled -- that may well be a good idea. However, I do think that ClearCacheOnPickle is a really bad way of achieving that effect: it actually wipes the caches, rather than removing them from the dictionary that is submitted for pickling. That means that doing dumps(toric_variety) changes the performance of subsequent behaviour!

EDIT: ClearCacheOnPickle actually does something quite reasonable. If super().__getstate__ produces some exceedingly exotic structures to pickle, it could miss CachedFunction instances in them, or not reproduce the containers entirely faithfully. So I withdraw my objection to using ClearCacheOnPickle.

Why not just put a custom __reduce__ or __getnewargs__ on toric varieties and solve the problem permanently? If you don't want to pickle caches, you can easily have that as a corollary.
comment:10 Changed 9 years ago by

- Status changed from positive_review to needs_work

I think this is actually an important issue: pickling is really valuable, especially because of the ever increasing importance of parallel processing, which tends to require interprocess communication. I don't think what we need to do is complicated or a lot of work. It's just that people need to get some experience and examples in what to do. I'd happily write a patch, on this ticket, but the git branch stuff is beyond me.

Probably something like the following on CPRFanoToricVariety_field already does the trick:

    def __getnewargs__(self):
        return (self._Delta_polar, self._fan, self._coordinate_points,
                self.__point_to_ray, self._names,
                <whatever you need to get coordinate indices>, self._base_ring)

and indeed, you're going to need one of those for pretty much every __init__ you write...

In a way, the problem comes from CategoryObject, where __hash__ is defined in terms of repr, so in a way that's where pickling is broken. So perhaps we should have a __getnewargs__(self) there, a la:

    def __getnewargs__(self):
        return self._initargs

in which case of course we also need something along the lines of

    def __init__(self, *args, **kwargs):
        self._initargs = args
        self._initkwargs = kwargs
        ## we may need a custom reduce here: how do we deal with kwargs otherwise?

CategoryObject already has custom __getstate__ and __setstate__. It's just that its pickle process doesn't seem to have been written with the possibility in mind that __hash__ might be needed prior to __setstate__ executing.

Incidentally, CategoryObject caches the hash value, so if we have the following reduce method on CategoryObject we might skirt at least the premature hash problem:

    def __reduce__(self):
        return <constructor>, (type(self.C), ..., self.hash(),), self.__dict__

where <constructor> is some factory that reconstructs the object (essentially via an empty __new__) but then, in addition, sets self._hash.
comment:11 Changed 9 years ago by

- Status changed from needs_work to needs_info

I don't think this ticket itself needs work, but that we need more info on how to solve the problem properly. If we can tackle it on CategoryObject we may not need to do anything on ToricVariety.

comment:12 Changed 9 years ago by

- Status changed from needs_info to positive_review

The only things in ToricVariety that refer back to itself are caches; if it weren't for that, it would be perfectly safe to pickle it by its dict. I don't think it's a good solution to have everybody write a custom pickling just to manually exclude caches. Also, all the changes in the attached branch are desirable anyway, so we should include them. Any further discussion of general frameworks for pickling should go elsewhere (perhaps #15156).

comment:13 Changed 9 years ago by

- Milestone changed from sage-6.0 to sage-6.1

comment:14 Changed 9 years ago by

- Resolution set to fixed
- Status changed from positive_review to closed

So the better solution is: provide CPRFanoToricVariety_field_with_category with a __reduce__ method that ensures that attributes such as _dimension_relative (probably all attributes required for computing a hash) get initialized in the construction phase, and leave the rest of the dictionary to be initialized in the setstate phase. As you can see from these examples, this is required for proper pickling anyway, because you would want the hash to work, and it's not clear to me that clearing caches is going to resolve that issue. Whether you want to strip the dictionary offered by __reduce__ for pickling of caches is a separate question.
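The construction/setstate imbalance discussed in this ticket can be reproduced in plain Python, independent of Sage. A hedged sketch (Broken and Fixed are invented names for illustration, not Sage classes): an object whose __hash__ needs an attribute that only arrives with the pickled state fails to unpickle once it sits inside a circular container, and __getnewargs__ plus a matching __new__ moves the hash-relevant data into the construction phase:

```python
import pickle

class Broken:
    def __init__(self, name):
        self.name = name
        self.d = {}
    def __hash__(self):
        return hash(self.name)      # needs self.name to exist already
    def __eq__(self, other):
        return self is other

b = Broken("x")
b.d[b] = 1                          # circular reference through the dict

failure = None
try:
    pickle.loads(pickle.dumps(b))   # hash is needed before state is applied
except AttributeError as e:
    failure = e
print(failure is not None)          # True: unpickling hit __hash__ too early

class Fixed(Broken):
    def __new__(cls, name):
        obj = super().__new__(cls)
        obj.name = name             # hash-relevant state set at construction
        return obj
    def __getnewargs__(self):
        return (self.name,)         # so pickle passes name to __new__

f = Fixed("x")
f.d[f] = 1
g = pickle.loads(pickle.dumps(f))
print(g.name, g.d[g])               # x 1
```

The Broken round-trip fails because the circular dict is rebuilt (requiring hash(obj)) before the state dict is applied, which is exactly the "hash needed prior to setstate" scenario described above; Fixed survives because __getnewargs__ makes the name available at __new__ time.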
https://trac.sagemath.org/ticket/15149
much disappointed that such a respectable paper as the Economist definitely is, is able to publish such a poor article. First, the patriotic act does not prescribe patriotism, nor any singing, just playing the national anthem at schools once a week. I am not in favor of this but it can hardly be perceived as some hardline nationalism. Second, there is no way of putting Fico among the supporters of Jozef Tiso, and anybody knowing anything about Slovak politics should know that. Fico is on the opposite side and is constantly supporting powers that fought against the regime of 1939-1945. Third, any country derives its history from its ancestors, and the ties of Great Moravia to Slovakia are not any weaker than the ties of, say, the Gauls to present-day France, the Romans to present-day Italy or the Hungarian kingdom from 1000 to contemporary Hungary. Fourth, criticising the manipulated 1947 trial and subsequent execution of Tiso is surely not the same thing as excusing his war regime or the deportations of Jews. - I am far from supporting the present Slovak government, but the complaint of Slovak Hungarians that it is an insult to listen to the Slovak anthem is, hm, strange. I would much appreciate deeper insight into my country next time.

Hello Tarass, I would like to answer your criticism and defend the author of the article. First of all, he/she does not qualify the sort of nationalism that lurks behind the Patriotism Act, so your first comment is to some extent irrelevant. You are obviously right that no anthem singing is prescribed by this law, but you fail to mention, for instance, the requirement of equipping each class with the banner, the national shield and the preamble of the Constitution. I personally find this kind of nationalism hollow and potentially dangerous. The nationalism promoted by this government is creepy - at first, it was the Hlinka Act, then the fuss about the "Old Slovaks", then the Official Language Act, now it is this sham of a legislation...
Each step brings us a little farther from standard democratic practice and we resemble the frog that will eventually boil in a slowly heated casserole. Second, I partially bear you out on Fico's stance on Tiso, but I must remind you of the fact that he has been very ambiguous on this subject lately. The only time I remember him speaking of the Slovak 39-45 State was when he advanced the idea of an "ideological reconciliation" (???) on a national basis. He certainly wants to appeal to the nationalist voter and has been very careful not to alienate him in any way. Remember as well that this particular Act was originally Mr Fico's idea. Pro-Tiso or not, Mr Fico is a vehement nationalist. Third, you are right that many nations cherish their myths, but that does not entail that forging history (and Fico's attempts cannot be described in any other manner) is a good idea. What we may deem natural in popular culture may well be unacceptable in the official interpretation of history. Take notice of the fact that Fico is trying to introduce a motif that is artificial in our self-understanding as a nation. We do NOT naturally derive our identity from the days of Great Moravia: we, as Slovaks, did not really develop its cultural heritage in any substantial way (as southern and eastern Slavs did). Fico is implying that there is something amiss about this self-conception, that if we do not link ourselves to a glorious kingdom in the past, our identity is in some way deficient. What would you think of a person who is obsessed with the importance of his ancestors to the point that he is prepared to lie about them? I would call him pretty full of complexes. Fourth, I agree with you about Tiso's execution: unfortunately, criticism of his trial and sympathy towards his ways as governor very frequently go hand in hand. And especially so in the case of Matica Slovenska, which has published several apologetic books about him.
Fifth, I am ethnically Slovakian, but I do consider the imposition of our anthem on Hungarian pupils an offence. The Slovak anthem, as you know, does not sing of the beauties of our land, but rather describes the struggles of our national movement - a movement our Hungarian minority has no reason to identify itself with. And after the Official Language Act, after Slota's diatribes about "mongoloid Hungarians" and Hungarian "robbers", after Fico's labelling them as a "disloyal minority", they are completely justified in perceiving this legislation as just another attack on them. To sum up, you are right that the author should be more careful about the details, but I am afraid that the gist of his/her article is perfectly pertinent. The situation in Slovakia under this government has taken a nasty turn, whether the children are forced to sing the anthem or not.

Why don't you all just give Slovakia a break. They have only been an independent country for 17 years. Prior to that they were part of Czechoslovakia, an artificial creation that came about at the end of World War I. They have been subjugated by the Soviets and the Nazis in the last 75 years. Prior to World War I they were controlled by the Austro-Hungarian empire. World War I ended that empire. World War I ended 900 years of Hungarian domination over the Slovak people. The Slovaks never had an empire. There were never any Slovak kings or Slovak royal families/dynasties. The Slovaks never dominated, subjugated or practiced genocide on any group of people. When the Nazis controlled the Slovakia area they were forced to adopt the Nazi policy towards Jews. These policies were no different than those France had under Nazi occupation. So why is everyone getting all worked up? Let Slovaks be Slovaks. If they want to sing their national anthems in schools why not let them? They are only trying to define what it means to be a Slovak. Slovakia is just another nation in the EU with open borders.
It seems that most of the whining I hear is just a bunch of sour grapes from Hungarians who are still trying to recapture their lost empire. So leave Slovaks alone and let them define who they are as a people; let them find their own way.

@HungarianJew "Terror and fear is the everyday life in Slovakia." I thought that was in Iraq and Afghanistan, terror and fear. But then, for Hungarians, uttering a couple of Slovak words could be a similarly frightful and terrorsome experience.

Well I'm Slovakian & proud of my history and past; however, I'm not proud of Mr Fico and his attitude. The problem is he's trying to convert a pure feeling into a forced duty, which is totally unacceptable!!!! The majority of Slovakians don't accept this at all!!! IT'S WRONG! I really hope Fico will fall this summer! Slovakia needs to get back on the track of success and European admiration of recent years!

There are a lot of former Great countries in Central Europe and the Balkans, i.e. Great Hungary, Great Romania, Great Bulgaria, Great Slovakia, Great Serbia, etc., but only now did I find, with great surprise, that Great Moravia was in fact a superpower...

Well, Juraj, those achievements are nothing but remarkable and there are reasons to be proud of what you have achieved. Only I fail to see why all this pride has to be at the expense of your biggest minority or your neighbouring country. A Hungarian friend of mine was refused first aid in a Slovakian hospital simply because he made the mistake of exchanging a few words in Hungarian with his wife. They were thrown out. I have a hard time seeing how it fits with being a proud Slovak. We've all heard of sore losers, but this is a unique case. I think we have not yet seen such sore winners.

Is Fico a 21st century version of Reverend Father Tiso, the head of the Fascist Slovak State from 1939-45? I get the impression that the historical precedent is all too clear..............
Ad Confidence 2 part

(6) I regret to tell you that the person of Mr Tiso is only disputed by extreme right-wing historians. The Slovak-state authorities robbed 70 000 people of their property, loaded them in cattle railroad cars, sent them to Germany, and you suppose they did not know that something very bad was going to happen to them there? Give me a break.

(8) I had a good laugh at the idea of SMK leaking our economic secrets to Budapest. Even if there were any, I bet SNS would sell them for half the price! But the most striking sign of your way of thinking is your views on minority schooling for the Hungarian minority. First of all, it is not at all a rare thing in Europe, as you suggest. I recommend you have a look at Spain, Italy, the UK or Switzerland: there are minority universities, minority kindergartens and much more extensive protection of minority rights. Second, your argumentation presupposes that Slovak Hungarians do not pay taxes! Now that is a bit hard to swallow. They pay taxes and they have the right to see the state take their interests into account when spending this money. I doubt that an autonomous university is a good investment, but they seem to be quite happy about it. All in all, your argumentation presents a pattern common in this government: many words and many clever ruses, just to hide your fear of a truly plural democratic society. You want to have a monopoly on the use of notions like "nation" and "patriotism", so that you can fill them with your poisonous hatred. But you will not succeed, at least not in the long run.

This does nothing more than further unveil Slovakia as an insecure nation - with rigid language laws and this type of fervour, one can only be concerned for the minorities that live there. This constant allusion to a proto-Slovak people is highly questionable - it is very clear to all historians that people back then saw no connection to any type of nation state.
It is also known that Bratislava (originally 'Pressburg' or 'Pozsony') was overwhelmingly inhabited by Germans and Hungarians up until the 20th century.

The major problem of the Slovak nationalist government, from Slota to Fico, is the unclear Slovak history. Slovaks were settled in the Hungarian Highlands under the Habsburg Maria Theresa. From the state of Hungary they got land to live on and freedom to use their language. Now from this historical past the nationalist and antisemitic Fico wants to create a "Great Slovakia" myth with nonexistent kings and unknown kingdoms. Fico, similar to Tiso, wants to build many statues of the "Old Slovaks" and to compete with nations with a strong historical past like the Czechs, Hungarians or Poles.

@Econo Guy "This alone would not be a problem but he forgot to give the proper amount of land with the transfer and somehow the assets of those ethnicities stucked to Czechoslovak hands not matter what the German or Hungarian did or where the guy was during the war. Can we call this state organized crime?"

I'm repeating myself, but you still don't want to understand: 1) Most Germans came to the Czechoslovak territory during Habsburg rule (especially after Czech protestant nobles were expelled from Bohemia after 1620). Therefore history only repeated itself and the situation was reverted back to what it was in the past. 2) Nazi Germany exploited the Czech territory during the occupation, including stealing gold reserves, and used it as a supply base of civil production for the whole Reich. Therefore, the confiscated property replaced war reparations and Germans were compensated by the post-war German state. Austrian and Hungarian citizens were compensated by Czechoslovakia during socialist times and all three countries officially consider the issue settled. As for the Hungarian-Slovak issue – Hungarians were not consistently expelled from Slovakia, probably because the situation was more complicated.
I personally feel that Slovaks would have been spared all the everyday conflicts that continue to the present day if a compromise about borders had been made (because they complain that they feel in some villages in southern Slovakia as in a foreign country anyway) and the swap of inhabitants had been consistently made right after WW2. But again – it was a delicate issue and nobody had the courage to come up with any solution. I only feel sorry for the present-day generation that has nothing to do with any past events but has to face the consequences in the form of latent conflicts.

----- "The problem is that those people are trying to preach us about moral, who were much more worse supporter of the Nazis … "

Look, reality is only one. And the fact that interwar Czechoslovakia was a place sought out by refugees from both western and eastern parts of Europe (in particular Germany and the USSR) speaks for itself clearly enough.

---- "I am happy to see this as all countries had this problem. So I would consider Hungary as winner state from now." "So did Horthy, except he said no to Hitler's invitation to attack jointly Poland. (You know we have some history with Poland, common kings etc etc...) Unlike the the Czech and Slovak part of Czechslovakia what attacked previously Poland, Hungary helped the Poles to leave Poland through Hungary."

Had Hungary an exile government that didn't collaborate during the war? Were you engaged in military operations on both fronts? Benes was the first to make decisions, and because he was aware that he represented a substantially smaller nation than colonial powers like France and Britain, he knew that diplomacy and cunning wisdom was the best way to secure a good future for the nation. If neighbouring countries like Hungary or Poland had clearly stated their support for Czechoslovakia, the forces could have been more balanced and they wouldn't have had to face the consequences.
But they did the opposite – after Hitler grabbed a large part of Czechoslovakia in the Munich dictate, both Hungary and Poland were like vultures that wanted their part of the territory too – Hungary grabbed southern Slovakia and Transcarpathian Ukraine, Poland grabbed the remaining part of Silesia. If Czechs were as imperial past-glory-sick as Hungarians, they would for sure cry for Silesia too, because the whole of Silesia was originally part of the Bohemian Crown – most of Silesia was lost by Maria Theresa during the War of the Austrian Succession and grabbed by Prussia. Enjoy at least a nice map ;-)...

But Czechs have a completely different philosophy – a nation can be unique and great even if it is small by inhabitant count and/or territory, as e.g. some Scandinavian countries prove. Therefore they do not live in their past (if it does not threaten the present-day situation) and focus on more important things – science, economy, arts, sports … and also petty domestic current affairs. Your nation obviously has impressive past achievements too, and therefore maybe the best way to honour your ancestors is to continue this past tradition rather than calling for the past territorial size of the country, which will never return (but within the EU it is not much important anyway). It would be beneficial for the whole CE region….

Dear Tarass, I think you don't know Slovakia or you just lie. Many doctors refuse any medical assistance to non-Slovak-speaking people according to the Language Laws of "modern" Slovakia. The president of the Jewish community of Komarno (Mr. Pasternak) has been refused by a Slovak dentist in Bratislava for his Slovak "dialect" and non-Slovak accent. Even in cemeteries the names should be written only in Slovak for all non-Slovak ethnic people. Terror and fear is the everyday life in Slovakia. Dezko XVIII.

vzdelany Why do you measure yourself against us Hungarians in every possible way?
"We built a church before the Hungarians, we had a king before the Hungarians, we had an "empire" before the Hungarians?" Why can't you simply be proud to be Slovak? There were very few Hungarian-Slovak wars but many that we fought side by side.

Absolutely nothing good will ever come out of a nation trying to rewrite its history to suit its nationalistic goals. The rights of minorities often get trampled on in the process. Arnold Toynbee famously said: "Any society that does not know its history will be destined to repeat it."

The article is just a collection of wrong information. The only correct part is the statement that the opposition is weak. I will react both to the article and to some of the above „arguments" of Martin (who introduces even more wrong information). In part this will be a repetition of the above discussion, in part I will add new information:

(1) The schoolchildren are not ordered to sing; schools are ordered to play the anthem once a week in the morning. The anthem is played every day in many countries of the world. As long as states and their state symbols exist, there is nothing „nationalist" about playing the anthem to one's own citizens. Ad „equipping each class with the banner, the national shield and the preamble of the Constitution" – First, there is absolutely nothing wrong about that, and the statement that this can be „potentially dangerous" (sic!) is obviously wrong (the only possible argument, namely that forced things do not „work", is strictly invalid in this context, because then one could equally argue that a school as such – as a forced „thing" – causes people to have no education, which is ridiculous). Secondly, the author probably does not know that it has been a tradition in Slovakia and Czechoslovakia to have a picture of the president and often also the state coat of arms in classes. I do not know whether this has changed over the last 10 years or so, but in any case the current law is nothing new.
I am asking then – were the previous Czechoslovak and other governments and those having pictures of the president etc. in classes also „nationalists"? And those in the USA having the flag in school classes – are they also „nationalists"? – or a better question: if a law prescribes having it then it is nationalism, but when they do it due to tradition or a local order, then it's no „nationalism"?? As an interesting aside, the author does not mention (of course) that the Hungarian (!) anthem is played in churches („ordered" by the priests) in southern Slovakia each time there is a Mass – a unique phenomenon in the world...

(2) The „fuss" about the Old Slovaks. Here, again, the author took somebody's bait. First, the truth is that there are 2 – 4 historians (and one oppositional newspaper) that have a problem with the term (the term, not the reality behind it!), while I know at least 32 historians who support the term. Here the article is blatantly wrong. Next, the Czech and Hungarian tribes before the 10th/11th c. have always (alternatively) been called "old Czechs" and "old Magyars" respectively in Slovak historiography – this is more or less tradition. Old Slovaks is nothing but the equivalent professional term. The term "old Slovaks" (a better translation would be "ancient Slovaks" or "Proto-Slovaks") has existed since the late 19th century (it was virtually "forbidden" in the 1970s and 1980s for purely political and (Prague-)nationalist reasons, therefore some people who do not work with medieval history think it is new); it is not a new term; the term is nothing but another way to say "direct Slavic predecessors of modern Slovaks". Many academic sources all over Europe call them directly Slovaks (not "old Slovaks") and nobody has a problem with it. As you can see, there is absolutely nothing wrong about the term and the term has nothing to do with the prime minister, despite the wrong claims of his dilettante opponents.
And what forging of history are you talking about? Do you seriously believe that connecting Slovaks with Great Moravia is „forging" of history? Every archaeologist will confirm to you that the direct predecessors of Slovaks (it does not matter what you call them) have been living in Slovakia (and Hungary and eastern Moravia) since the 7th century. You confuse a naming issue with historical facts. This is drawing conclusions from completely wrong premises. And as for „popular culture" in history: the truth is that Slovakia has exactly the opposite problem – as part of Czechoslovakia, people – including myself – were wrongly taught for decades the „popular" myth that Slovaks only „arose" in the Modern Times, which is blatantly wrong historically, archaeologically and linguistically. In this context, it should be obvious who „lies" and who is „full of complexes" here.

Martin, I had written a long reply, but I think it would take us far away from the main point. I shall rather say that you, me and the author may find "this nationalism hollow and potentially dangerous", but a paper with such a reputation as the Economist, which is read and taken very seriously by elite businessmen and politicians around the globe, simply cannot afford to build an article around something so weak. Readers need facts that have to be unputdownable, certainly not by some online nobody like me. I agree that the "situation in Slovakia under this government has taken a nasty turn, whether the children are forced to sing the anthem or not", but this does not imply the necessity of unfair criticism.

A few comments from my side. I am also disappointed that a respectable paper such as the Economist could publish such a poor article. The author is responsible for the verification of facts and here he failed. I will try to focus only on facts which are really easy to check. E.g. can the author give a reference for the paragraph "ordering schoolchildren to sing the national anthem"?
(By the way, what is wrong about singing the anthem?) Or

> For example, can today's Slovaks trace their roots to Great Moravia in the 9th century?

I cannot understand why this should be an open question. It is a widely accepted fact. Or

> "to link modern-day Slovakia to an ethnically pure superpower in the dim and distant past."

I have NEVER heard such an opinion ("ethnically pure") in Slovak society. Great Moravia was the state of the predecessors of modern Slovaks, Moravians and Czechs (for a while). Slovaks have really excellent relationships with Czechs and Moravians, so I don't know why we should use words like "ethnically pure". They are our best friends. Or

> But Mr Fico's critics fret that behind his approach lies an attempt to rehabilitate the Nazi-backed Slovak puppet state of 1939-45

The fact is that the direct opposite is true. E.g. Fico and his party have always strongly supported all events related to the Slovak National Uprising against the Nazis, etc. Etc, etc.

One interesting aspect of Hlinka's prison term. The international media were full of the imprisonment of Hlinka and made almost a martyr of him as he was sent to the Csillag prison. (A high security prison.) But what was the truth? The fact was that Hlinka was not kept in the same prison where the regular prisoners were. He was placed in a light security area, which was a separate building. Those people were kept here who were not criminals, but who were to receive some penalty for some anti-state guilt: duellers (who fought sword duels), newswriters, and political prisoners. To understand the cruelness of this prison and the brutality of the Hungarians, it was ordered that the inmates had the right to feed themselves, and they could not keep more than 5 liters of wine in their cells and could not invite more than one woman at a time into the cell. Andrej Jancek, who was sentenced in the same case as Hlinka, wrote of his first impression: "I could not believe my eyes.
Whoever spent half a year in the catacombs of Ruzomberok (present-day Slovakia), when he arrives at such a castle, finds everything strange. The house has a second floor, the corridors are covered with terracotta, the cleanness is remarkable, normal windows, and while every piece of furniture is simple, the bed is clean and the service - like in a restaurant. The cells are similar, 4 by 6 meters, 3 meters tall. Each is furnished simply: table, two chairs, bed, wardrobe, china basin, ink holder and bottle of water and glass, stove, and a chamber-pot, which is useful from seven in the evening until 6 in the morning."

Ad Confidence

No matter how unworthy a cause may be, it will find an intelligent and ardent defender. Thank you for your critique. It is good that The Economist readers can compare both opposing views and draw their own conclusions. Unfortunately, I am in no position to respond to your views adequately, as their exposition is rather extensive. Therefore, I will confine myself to the points where I consider that your otherwise persuasive criticism cannot and must not be upheld.

The reason why people oppose this Act is the formalism about patriotism that it entails. Let us ask ourselves the question why there might be something good about nationalism in the first place. To be a member of an ethnic group or a citizen of a nation state is not a virtue in itself - it is the shared values which warrant cherishment of our national identity. We have seen this government infringe these values on a regular basis. Vulgarity, immense corruption, ceaseless lies and ubiquitous incompetence of this government - all that offends a real Slovak patriot. As the media have reported recently, Jan Slota, the champion of this law, was found to know neither the lyrics of the national anthem nor its author. And now it is these people who want to force it on us because they consider we are not patriotic enough.
It is just a convenient stick which they can use to beat anyone who attempts to point out the execrable impact their governing has on the Slovak society. It is precisely the lack of real content that makes this kind of nationalism dangerous (sic! and I am not going to back down on this point), because it gives the government the possibility to fill it with whatever propaganda they like. This is exactly what happened in the U.S. under Bush, when bogus patriotism was bolstered in order to silence the critics of the Guantanamo prison facility. We want to avoid exactly that. Our society is diverse, and no one may be branded as a "bad Slovakian" again only because he has differing views on any issue. I am as Slovakian as you might be. Our patriotism should be a constitutional, essentially pluralistic one. The answer, then, is "yes", the context and the content of a particular kind of patriotism are crucial to its value. That is why the majority of the Slovak public does not countenance this Act - even Fico, who resorts to nationalism only on condition that it is popular, has recognised that it has gone too far and now wants to withdraw the law.

(2) The "one oppositional newspaper" you speak about is by coincidence the most important and the most read Slovakian broadsheet. It is outspokenly anti-governmental, I admit, but that is true of the second-most-important Pravda as well. There are, after all, no big pro-governmental media in the country, for the simple reason that urban-based people with higher education (such as a typical journalist would have) do not tend to support this government. There are also many more than "2 - 4" historians who defend the position you speak about, and many of them are leading Slovak experts. As far as 'Old Slovaks' are concerned, this term has not been used in any monograph on this part of Slovak history published after 1989.
Of course, the Slavs of that era were our predecessors, as well as being predecessors of the Czechs, the Moravians, the Slovenes etc. If you call these people "Old Slovaks" it implies you want to claim them all for yourself - that is where the confusion stems from and that is why reasonable people resist this term. The 32 historians you mention are of the nationalist vein represented by the well-known Tiso defender Durica. However, they have not been able to warrant this term in a relevant opus on the subject. It is again all about hollow nationalist emotions, not science.

(3) I have nothing to say on the issue of the textbooks, as I am ignorant of the topic.

(4) I am sorry, but your argument is invalid, since what the law literally says in the introduction is that "Special legislation applies to minority languages, unless this law states otherwise" (italics mine). Did you really think you could get away with this witty ruse? This is the link to the entire Official Language Act, if anyone seeks proof for what I am saying:

By the way, how come English, the most successful lingua franca in history, does not need any prescriptive codifying?

(5) SNS (fortunately) does not have enough MPs to pass a law by itself, so it was not only "their action". Hlinka on one occasion said: "I am the Slovakian Hitler!" However, I accept that he was not as openly fascist as his successor Mr Tiso.
http://www.economist.com/node/15671556/comments
Problem with SQLite Database and threads "Database is locked" Hi all, I'm having a problem trying to manage my database in a multi-threaded application. I have 3 different threads that access the same local SQLite database, one of them being the main thread. The main thread creates the connection to the database using:

db = QSqlDatabase::addDatabase("QSQLITE"); // db is a global variable
db.setDatabaseName(str + "/prove.db");

while the other two threads create clones of the db:

db_2 = QSqlDatabase::cloneDatabase(db, "second");
db_3 = QSqlDatabase::cloneDatabase(db, "third");

In chronological order, the first database connection (the one of the main thread) is created first, then the second, and then the third. I access the database from the third thread, and the problem appears when I then try to access it from the second thread (though I don't know whether it happens only with the second thread): I get the error "Unable to fetch row" "database is locked". Am I doing something wrong? The cloneDatabase method should be thread safe, shouldn't it? If you need more information, don't hesitate to ask for it, please. If somebody has any hint it would be appreciated a lot. Thanks in advance - Kent-Dorfman I believe sqlite concurrency only works for pure read operations. I think your clone operations are considered writes, even though they are to different databases. - SGaist Lifetime Qt Champion Hi, How are you passing the main DB connection to your threads ? @davidesalvetti Hi, I don't remember where I found this, but since Qt 5.11, sharing the same connection between threads is not allowed. You have to create a QSqlDatabase for each thread. The easiest is to add the thread id to the connection name to avoid name collisions.
Something like this:

QSqlDatabase MyClass::getDatabase() {
    const QString cnxName = QString("myConnection_%1").arg(quintptr(QThread::currentThreadId()));
    QSqlDatabase cnx = QSqlDatabase::database(cnxName);
    if (!cnx.isValid()) {
        cnx = QSqlDatabase::addDatabase("QSQLITE", cnxName);
        cnx.setDatabaseName(m_dbPath);
        cnx.setConnectOptions("QSQLITE_BUSY_TIMEOUT=1000");
        if (!cnx.isValid() || !cnx.open()) {
            qDebug() << "DB connection creation error!";
        }
    }
    return cnx;
}

@SGaist I have a global file that contains the declaration of the main DB, which is included in all three threads. @Kent-Dorfman so what should I do to avoid this problem? @KroMignon I didn't specify it, but I'm using Qt 5.9.1 with the MinGW 32 bit compiler; do you think it can be something that happens also in previous versions? @davidesalvetti I don't remember exactly what the problem is, but when a QSqlDatabase() is not created in the same thread in which it is used, then something goes wrong internally. To avoid this kind of issue, starting with Qt 5.11, Qt does not allow using a QSqlDatabase() in another thread than its own. So yes, it is bad practice to use the same QSqlDatabase() in multiple threads. Addendum: when cloning a database with QSqlDatabase::cloneDatabase(), don't forget to call open(), as you can see in the documentation: Note: The new connection has not been opened. Before using the new connection, you must call open(). @KroMignon thanks for your answer. Yes, I'm opening the connection, and debugging I can see that it opens it. What I didn't understand quite well is: I can't use the same QSqlDatabase() in different threads, or I can't connect to the same database in different threads? I'm cloning the main QSqlDatabase, so I have different instances of QSqlDatabase, each one on its own thread - is this a problem? Or is using different instances correct? @davidesalvetti said in Problem with SQLite Database and threads "Database is locked": I can't use the same QSqlDatabase() in different threads or I can't connect to the same database in different threads? You can/should only use a QSqlDatabase() in the thread in which it has been created. All QSqlDatabase() created in the main thread must only be used in the main thread.
QSqlDatabase() must be created in the thread in which you want to use it. This is why I've created a little function which creates a new QSqlDatabase() when I need to talk to the database. So I am always sure the QSqlDatabase() I use is the right one. You should never store a QSqlDatabase() instance locally in your class; only create one when you need it, and destroy it after. This is the recommended usage for QSqlDatabase(). Extract from the QSqlDatabase documentation: @KroMignon The problem is that I have created three different QSqlDatabase() in three different threads, and in every thread I use the QSqlDatabase() created in that thread. In this way it should work, but it keeps giving me the problem. But I'll do more tests. Anyway, I found a workaround for my personal application, but maybe other people may be interested in a solution. - SGaist Lifetime Qt Champion What workaround is that ? Can you show how you are creating your database object in these threads ? @SGaist The workaround is good only for a few people. Since I always worked with two different threads (the main thread and another thread) that have access to the database, I'm just telling the second thread to do the things that the third thread should do with the database, but obviously that is just a solution for my case. This is the way I'm creating the database connection:

void T_Analysis::connectDB() {
    db_2 = QSqlDatabase::cloneDatabase(db, "second");
    if (!db_2.open()) {
        qDebug() << "error";
    } else {
        qDebug() << "okdb_2";
    }
}

void T_Usb::connectDB() {
    db_3 = QSqlDatabase::cloneDatabase(db, "third");
    if (!db_3.open()) {
        qDebug() << "error";
    } else {
        qDebug() << "okdb_3";
    }
}

Main thread:

void MainWindow::connect() {
    db = QSqlDatabase::addDatabase("QSQLITE");
    db.setDatabaseName("Prova.db");
    if (!db.open()) {
        //.... some code
    }
}

@davidesalvetti Hmm, I am not very confident in your solution. I would create a helper class to create/use the right connection according to the current thread.
Something like this (it is just a skeleton, not sure it is working as it is):

#include <QDebug>
#include <QSqlDatabase>
#include <QThread>

class MyBDConnection {
    QString m_dbPath;
    QString m_dbName;
    Q_DISABLE_COPY(MyBDConnection)

public:
    explicit MyBDConnection(const QString &sqlitePath, const QString &cnxName) :
        m_dbPath(sqlitePath), m_dbName(cnxName) {}

    QSqlDatabase getConnection() {
        const QString name = QString("%1_%2").arg(m_dbName).arg(quintptr(QThread::currentThreadId()));
        QSqlDatabase cnx = QSqlDatabase::database(name);
        if (!cnx.isValid()) {
            cnx = QSqlDatabase::addDatabase("QSQLITE", name);
            cnx.setDatabaseName(m_dbPath);
            if (!cnx.isValid() || !cnx.open()) {
                qDebug() << "DB connection creation error!";
            }
        }
        return cnx;
    }
};

And then only create one instance of this class and pass the pointer to each class which needs a connection to the DB.
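KroMignon's advice (one connection per thread, never shared between threads) is not Qt-specific. As a rough illustration of the same pattern outside Qt, here is a minimal Python sketch using the standard-library sqlite3 module; the table and file names are invented, and the timeout argument plays the role of QSQLITE_BUSY_TIMEOUT:

```python
import os
import sqlite3
import tempfile
import threading

db_path = os.path.join(tempfile.mkdtemp(), "prove.db")

# Set up the schema once, from the "main thread" connection.
with sqlite3.connect(db_path) as db:
    db.execute("CREATE TABLE samples (thread_name TEXT, value INTEGER)")

def worker(name, values):
    # Each thread creates (and closes) its OWN connection; a connection
    # object is never handed from one thread to another.
    cnx = sqlite3.connect(db_path, timeout=1.0)  # wait up to 1s on a lock
    with cnx:  # commits the transaction on success
        cnx.executemany(
            "INSERT INTO samples VALUES (?, ?)",
            [(name, v) for v in values],
        )
    cnx.close()

threads = [
    threading.Thread(target=worker, args=("second", [1, 2, 3])),
    threading.Thread(target=worker, args=("third", [4, 5])),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

with sqlite3.connect(db_path) as db:
    count = db.execute("SELECT COUNT(*) FROM samples").fetchone()[0]
print(count)  # 5
```

Because writes are short and each thread retries for up to a second on a lock, the concurrent inserts serialize cleanly instead of failing with "database is locked".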
https://forum.qt.io/topic/103626/problem-with-sqlite-database-and-threads-database-is-locked/14
Difference between revisions of "OTDT/JDTCore" Revision as of 09:13, 23 November 2010 As a central component of the OTDT we are maintaining a branch of the org.eclipse.jdt.core plugin. This page summarizes the background, design rationale and consequences of this branch. It was created as input for bug 330534, where the inclusion of Object Teams in the Indigo release train is discussed. Contents - 1 Extending Java - 2 Extensible Java IDE - 3 User Install Experience - counting only changes in existing files, not added files which live in the objectteams namespace. OTDT Development Process Since the JDT/Core and OT/J reside in different source code repositories, the branch is not directly handled using builtin support. Yet, over the last 7 years we have developed a routine of staying up-to-date with the latest in the JDT/Core as such: - All modifications to original jdt.core classes are systematically marked with begin-end comments; the original code remains in place (commented out) for comparison during merging. - Merging: At regular intervals a diff containing all changes in the original CVS is created and applied to the OT/J branch. After skipping 3.1, the migration from 3.0 to 3.2 kept one student busy for almost 6 months. We had several years where all changes of a whole year were merged in one big effort, but meanwhile we're down to synchronizing branches at least once per 6-week milestone. This has become routine. - By also actively following the bugzilla inbox of JDT/Core I'm well informed about pending changes, and in some cases (like bug 330304) observing JDT/Core lets me detect in a very timely fashion where their patches require additional action for OT/J. - Other than that we follow the normal rules of development at Eclipse. OTDT Testing We regularly run a number of test suites: - original JDT/Core tests. - Actually, we maintain branches of org.eclipse.jdt.core.tests.compiler and org.eclipse.jdt.core.tests.model.
These branches contain a few unavoidable changes, such that the modifications actually document the behavioral difference between the original JDT/Core and our branch. More on that later. - org.eclipse.jdt.core.tests.builder runs without modifications. - Selected test suites from JDT/UI. These tests run from the original sources, with a very small number of adaptations applied using OT/Equinox. - A test suite for all language features of OT/J. - Further OT-specific test suites complementing all of the original JDT suites. One test run currently includes over 50000 test cases. All tests pass, and no build is ever published for which this is not true.
http://wiki.eclipse.org/index.php?title=OTDT/JDTCore&diff=229356&oldid=229353
render jsf page in java S K Jan 8, 2011 11:49 AM Hi, I'm not sure whether this is the right forum to ask; I see this as a J2EE design pattern. I wanted to render a JSF (xhtml) page in a Java class. Does anyone know about the API or a sample program? I know there is one available in Seam, but I'm not using Seam in my application. Thanks in advance SK 1. render jsf page in java jaikiran pai Jan 8, 2011 12:24 PM (in response to S K) S K wrote: Hi, I'm not sure whether this is the right forum to ask; I see this as a J2EE design pattern. We have a JSF forum. I've moved this thread there. 2. render jsf page in java Nicklas Karlsson Jan 13, 2011 3:41 AM (in response to S K) Also interested in this. 3. Re: render jsf page in java Stan Silvert Jan 13, 2011 4:35 PM (in response to Nicklas Karlsson) Sorry I missed your post before, Nicklas. JSFUnit could certainly do something like that, but since you are only worried about the client-side HTML it would be simpler to just use plain HtmlUnit. If you have a running JSF application to do the rendering then this is pretty easy with HtmlUnit:

WebClient webClient = new WebClient();
HtmlPage page = (HtmlPage) webClient.getPage("");

If you don't have a running JSF application it gets a little tougher. You could use a mock HttpServletRequest and HttpServletResponse. Then look at the source code for FacesServlet and see how it uses those to create a FacesContext and render the page. FacesServlet.java is pretty short and relatively easy to understand. So you would basically just do what FacesServlet does. Stan 4. render jsf page in java Nicklas Karlsson Jan 14, 2011 10:50 AM (in response to Stan Silvert) Technically I would like to be able to pass an xhtml page to an asynchronous task or such and have it render the template, so we're probably talking mocks. Seam 2 had this construct where it used mocks and swapped out the current FacesContext and replaced the output stream with a collecting BAOS, if I remember correctly. 5.
render jsf page in java Stan Silvert Jan 14, 2011 2:37 PM (in response to Nicklas Karlsson) Yea, it wouldn't be that hard. It would make a nice open source project. Stan 6. render jsf page in java S K Jan 14, 2011 6:53 PM (in response to S K) Actually, my need was not for unit testing but rather to render a JSF page at runtime and send the output as an email. Here I could have used a Seam function, but I didn't use Seam in my project. Meanwhile I used a different approach to render the JSF page using the FacesContext class; the point is that you must run with an active FacesContext instance. You can place the below code anywhere in your file:

public String renderView(String template) {
    FacesContext faces = FacesContext.getCurrentInstance();
    ExternalContext context = faces.getExternalContext();
    HttpServletResponse response = (HttpServletResponse) context.getResponse();
    ResponseCatcher catcher = new ResponseCatcher(response);
    try {
        ViewHandler views = faces.getApplication().getViewHandler();
        // render the message
        context.setResponse(catcher);
        context.getRequestMap().put("emailClient", true);
        views.renderView(faces, views.createView(faces, template));
        context.getRequestMap().remove("emailClient");
        context.setResponse(response);
    } catch (IOException ioe) {
        String msg = "Failed to render email internally";
        faces.addMessage(null, new FacesMessage(FacesMessage.SEVERITY_ERROR, msg, msg));
        return null;
    }
    return catcher.toString();
}

The ResponseCatcher class, which implements the HttpServletResponse interface:

/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package test;

import java.io.CharArrayWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Collection;
import java.util.Locale;
import javax.servlet.ServletOutputStream;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

/**
 * @author SK
 */
public class ResponseCatcher implements HttpServletResponse {

    /** the backing output stream for text content */
    CharArrayWriter output;
    /** a writer for the servlet to use */
    PrintWriter writer;
    /** a real response object to pass tricky methods to */
    HttpServletResponse response;
    private ServletOutputStream soStream;

    /** Create the response wrapper. */
    public ResponseCatcher(HttpServletResponse response) {
        this.response = response;
        output = new CharArrayWriter();
        writer = new PrintWriter(output, true);
    }

    /**
     * Return a print writer so it can be used by the servlet. The print
     * writer is used for text output.
     */
    public PrintWriter getWriter() { return writer; }

    public void flushBuffer() throws IOException { writer.flush(); }
    public boolean isCommitted() { return false; }
    public boolean containsHeader(String arg0) { return false; }

    /* wrapped methods */
    public String encodeURL(String arg0) { return response.encodeURL(arg0); }
    public String encodeRedirectURL(String arg0) { return response.encodeRedirectURL(arg0); }
    public String encodeUrl(String arg0) { return response.encodeUrl(arg0); }
    public String encodeRedirectUrl(String arg0) { return response.encodeRedirectUrl(arg0); }
    public String getCharacterEncoding() { return response.getCharacterEncoding(); }
    public String getContentType() { return response.getContentType(); }
    public int getBufferSize() { return response.getBufferSize(); }
    public Locale getLocale() { return response.getLocale(); }
    public void sendError(int arg0, String arg1) throws IOException { response.sendError(arg0, arg1); }
    public void sendError(int arg0) throws IOException { response.sendError(arg0); }
    public void sendRedirect(String arg0) throws IOException { response.sendRedirect(arg0); }

    /* null ops */
    public void addCookie(Cookie arg0) {}
    public void setDateHeader(String arg0, long arg1) {}
    public void addDateHeader(String arg0, long arg1) {}
    public void setHeader(String arg0, String arg1) {}
    public void addHeader(String arg0, String arg1) {}
    public void setIntHeader(String arg0, int arg1) {}
    public void addIntHeader(String arg0, int arg1) {}
    public void setStatus(int arg0) {}
    public void setStatus(int arg0, String arg1) {}
    public void setCharacterEncoding(String arg0) {}
    public void setContentLength(int arg0) {}
    public void setContentType(String arg0) {}
    public void setBufferSize(int arg0) {}
    public void resetBuffer() {}
    public void reset() {}
    public void setLocale(Locale arg0) {}

    /* unsupported methods */
    public ServletOutputStream getOutputStream() throws IOException { return soStream; }

    /** Return the captured content. */
    @Override
    public String toString() { return output.toString(); }

    public String getHeader(String string) { return null; }
    public Collection<String> getHeaders(String string) { return null; }
    public Collection<String> getHeaderNames() { return null; }
    public int getStatus() { throw new UnsupportedOperationException("Not supported yet."); }
}

I also rendered a JSF page which used a CDI-injected bean in the page. Thanks SK 7. render jsf page in java Stan Silvert Jan 14, 2011 7:13 PM (in response to S K) If you are running in-container, why not use HtmlUnit and just have the two lines of code like I showed earlier? You don't need to be in the context of a unit test to use the HtmlUnit API. HtmlUnit is just a headless browser. Stan 8. render jsf page in java Nicklas Karlsson Jan 15, 2011 2:08 AM (in response to Stan Silvert) So you're saying one could have an application-scoped JSF hidden somewhere, bootstrapped with a ServletContext-overridden class, and then do the virtual-render trick by faking what ServletContext does (with mock requests)? That way, e.g.
MDBs could use that virtual JSF? 9. render jsf page in java Stan Silvert Jan 17, 2011 7:06 AM (in response to Nicklas Karlsson) Yes, it's doable. But again, if there is an app server running somewhere you might as well use HtmlUnit and send real HttpRequests to a real FacesServlet running in a real environment. Stan 10. render jsf page in java Nicklas Karlsson Jan 17, 2011 7:20 AM (in response to Stan Silvert) The advantage of the standalone JSF would perhaps be that it could be separately configurable. And you wouldn't have to trick around with the FacesContext instance. And it could be run in a truly headless mode. I did a quick run in SE and tried to get it running with a mocked ServletContext (ran a pass through the Mojarra 2 ConfigListener ServletContext-initialized event) and then another through the FacesServlet init(), but I must have gotten something wrong since the FactoryFinder wasn't that cooperative. I think you could work against the lifecycle directly like the FacesServlet does, though.
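The Seam trick Nicklas recalls (swap the output stream for a collecting buffer, render, restore) is the same capture pattern ResponseCatcher implements above. As a loose, framework-free analogy in plain Python (nothing JSF-specific here; the template format and function names are invented for illustration), the idea looks like this:

```python
import io
from contextlib import redirect_stdout

def render_view(template, context):
    # Stand-in "renderer": in JSF this would be ViewHandler.renderView(),
    # which writes to the (swapped-out) response object.
    print(template.format(**context), end="")

def render_to_string(template, context):
    # Swap the output stream for a collecting buffer, render, then
    # restore the original stream when the with-block exits.
    catcher = io.StringIO()
    with redirect_stdout(catcher):
        render_view(template, context)
    return catcher.getvalue()

html = render_to_string("<p>Hello {name}</p>", {"name": "SK"})
print(html)  # <p>Hello SK</p>
```

The captured string can then be handed to whatever needs it, such as an email body, exactly as in SK's use case.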
https://community.jboss.org/message/580939
Hey, Pretty much a follow-up to my previous thread on here, but I thought it warranted its own topic. Background: I have 3 models: Collections has_many collections as children, belongs_to collection as parent; Albums belongs_to collection, has_many photos; Photos belongs_to album. Basically I want to write an instance method for a collection to return the total number of photos across ALL child albums, including the albums in child collections of the calling collection. So for example:

Collection 1
  Collection 2
    Album 1 - 5 photos
    Album 2 - 6 photos
  Collection 3
    Album 3 - 10 photos

Calling photo_count on collection 1 would return 21 and calling it on collection 2 would return 11. I have tried to write it as follows, but it's not quite right; I realise I need a recursive call somewhere, but I'm just not sure how to finish it off:

def photo_count
  count = 0
  albums.each do |album|
    count += album.photo_count
  end
end

Could anyone give me a hand here? Neil Hi Neil, I'd be looking for gems to solve this, check out ancestry which will let you do albums.descendants.count But, you can probably do something like this if you do want to code it yourself. The tree gems would work fine, but I only saw those after I had started building it manually, which actually wasn't that difficult in the first place. It's not actually the model that forms the tree I want to query; it's the items at the end of the branches I want information from. In my app, the collections act as a tree and the albums just sit at the ends; each album just has a parent collection id to show what collection it belongs to. Please see the collection/album hierarchy I posted above - sorry, it may have seemed a bit confusing - but I need to be able to find out how many photos are in every collection, I'm just unsure how to put everything together to get the right results. It does seem like one of your models isn't necessary. I would have gone with something like this and the ancestry gem:
Album 1
  Album 2
    Album 3 - 5 photos
    Album 4 - 6 photos
  Album 5
    Album 6 - 10 photos

I'd suggest looking at those gems again and implementing one of their searching algorithms for getting all the descendants. It's the recursive tree part that is tripping you up; once you have that it's easy.

class Collection < ActiveRecord::Base
  def photo_count
    albums = descendants.collect { |x| x.albums }.flatten
    albums.collect { |x| x.photos }.flatten.length
  end
end

Or if you can make collections have many photos through albums you could just use descendants.collect { |x| x.photos }.flatten.length
https://www.sitepoint.com/community/t/counting/17136
CC-MAIN-2015-48
refinedweb
593
59.94
09 February 2011 18:32 [Source: ICIS news] HOUSTON (ICIS)--The USDA's February World Agriculture Supply and Demand Estimate (WASDE) report forecast corn marketing year ending stocks at 675m bushels, down 70m bushels from the January estimate. The 2010-2011 corn marketing year ends on 31 August 2011. At the Chicago Mercantile Exchange, the midday March corn price was $6.91/bushel, up 18 cents/bushel from the prior day's close. The change in corn ending stocks (the supply minus the amount used) resulted from slight increases in the estimates of corn used for ethanol, sweeteners and starch, the USDA said. "These revisions will add fuel to the speculative fire, likely pushing prices for corn and other commodities higher," said Renewable Fuels Association (RFA) spokesman Matt Hartwig. "Many will use strong ethanol demand as the rationale to drive the price of corn futures as high as the market will bear," Hartwig said. "In turn, this will likely cause ill-informed industries and talking heads to pronounce [...]" The RFA noted that [...] In other crops, the USDA said soybean supply and use projections for 2010/2011 are unchanged this month, leaving ending stocks at 140m bushels. Soybean oil used for biodiesel during the first quarter of the marketing year was the lowest in six years, the USDA said. Projected soybean use for biodiesel production is expected to accelerate because of the 2011 mandate and the return of the $1/gal blending [...]
http://www.icis.com/Articles/2011/02/09/9433941/us-corn-prices-jump-on-tighter-supply-estimate-for-the-year.html
Table Of Contents UrlRequest¶ New in version 1.0.8. You can use the UrlRequest to make asynchronous requests on the web and get the result when the request is completed. The spirit is the same as the XHR object in Javascript. The content is also decoded if the Content-Type is application/json and the result automatically passed through json.loads. The syntax to create a request: from kivy.network.urlrequest import UrlRequest req = UrlRequest(url, on_success, on_redirect, on_failure, on_error, on_progress, req_body, req_headers, chunk_size, timeout, method, decode, debug, file_path, ca_file, verify) Only the first argument is mandatory: the rest are optional. By default, a “GET” request will be sent. If the UrlRequest.req_body is not None, a “POST” request will be sent. It’s up to you to adjust UrlRequest.req_headers to suit your requirements and the response to the request will be accessible as the parameter called “result” on the callback function of the on_success event. Example of fetching JSON: def got_json(req, result): for key, value in req.resp_headers.items(): print('{}: {}'.format(key, value)) req = UrlRequest('', got_json) Example of Posting data (adapted from httplib example): import urllib def bug_posted(req, result): print('Our bug is posted!') print(result) params = urllib.urlencode({'@number': 12524, '@type': 'issue', '@action': 'show'}) headers = {'Content-type': 'application/x-www-form-urlencoded', 'Accept': 'text/plain'} req = UrlRequest('bugs.python.org', on_success=bug_posted, req_body=params, req_headers=headers) If you want a synchronous request, you can call the wait() method. - class kivy.network.urlrequest. 
UrlRequest(url, on_success=None, on_redirect=None, on_failure=None, on_error=None, on_progress=None, req_body=None, req_headers=None, chunk_size=8192, timeout=None, method=None, decode=True, debug=False, file_path=None, ca_file=None, verify=True, proxy_host=None, proxy_port=None, proxy_headers=None, user_agent=None, on_cancel=None, cookies=None)[source]¶ Bases: threading.Thread A UrlRequest. See module documentation for usage. Changed in version 1.5.1: Add debug parameter Changed in version 1.0.10: Add method parameter Changed in version 1.8.0: Parameter decode added. Parameter file_path added. Parameter on_redirect added. Parameter on_failure added. Changed in version 1.9.1: Parameter ca_file added. Parameter verify added. Changed in version 1.10.0: Parameters proxy_host, proxy_port and proxy_headers added. Changed in version 1.11.0: Parameters on_cancel added. - Parameters - url: str Complete url string to call. - on_success: callback(request, result) Callback function to call when the result has been fetched. - on_redirect: callback(request, result) Callback function to call if the server returns a Redirect. - on_failure: callback(request, result) Callback function to call if the server returns a Client or Server Error. - on_error: callback(request, error) Callback function to call if an error occurs. - on_progress: callback(request, current_size, total_size) Callback function that will be called to report progression of the download. total_size might be -1 if no Content-Length has been reported in the http response. This callback will be called after each chunk_size is read. - on_cancel: callback(request) Callback function to call if user requested to cancel the download operation via the .cancel() method. - req_body: str, defaults to None Data to sent in the request. If it’s not None, a POST will be done instead of a GET. - req_headers: dict, defaults to None Custom headers to add to the request. 
- chunk_size: int, defaults to 8192 Size of each chunk to read, used only when on_progress callback has been set. If you decrease it too much, a lot of on_progress callbacks will be fired and will slow down your download. If you want to have the maximum download speed, increase the chunk_size or don't use on_progress.
- timeout: int, defaults to None If set, blocking operations will time out after this many seconds.
- method: str, defaults to 'GET' (or 'POST' if body is specified) The HTTP method to use.
- decode: bool, defaults to True If False, skip decoding of the response.
- debug: bool, defaults to False If True, it will use the Logger.debug to print information about url access/progression/errors.
- file_path: str, defaults to None If set, the result of the UrlRequest will be written to this path instead of in memory.
- ca_file: str, defaults to None Indicates a SSL CA certificate file path to validate HTTPS certificates against.
- verify: bool, defaults to True If False, disables SSL CA certificate verification.
- proxy_host: str, defaults to None If set, the proxy host to use for this connection.
- proxy_port: int, defaults to None If set, and proxy_host is also set, the port to use for connecting to the proxy server.
- proxy_headers: dict, defaults to None If set, and proxy_host is also set, the headers to send to the proxy server in the CONNECT request.
get_connection_for_scheme(scheme)[source]¶ Return the Connection class for a particular scheme. This is an internal function that can be expanded to support custom schemes. Actual supported schemes: http, https.
- property resp_headers¶ If the request has been completed, return a dictionary containing the headers of the response. Otherwise, it will return None.
- property resp_status¶ Return the status code of the response if the request is complete, otherwise return None.
- property result¶ Return the result of the request. This value is not determined until the request is finished.
wait(delay=0.5)[source]¶ Wait for the request to finish (until resp_status is not None). Note This method is intended to be used in the main thread, and the callback will be dispatched from the same thread from which you're calling. New in version 1.1.0.
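The automatic JSON decoding described at the top of this page (the result is passed through json.loads when the Content-Type is application/json) can be mimicked in a few lines of plain Python. This is an illustrative re-implementation of that behaviour, not Kivy's actual decode_result code:

```python
import json

def decode_result(result, content_type):
    # Decode only application/json (ignoring charset parameters);
    # everything else is returned untouched, as UrlRequest does.
    if content_type and content_type.split(";")[0].strip() == "application/json":
        try:
            return json.loads(result)
        except ValueError:
            return result  # malformed JSON: fall back to the raw body
    return result

body = '{"status": "success", "value": 42}'
data = decode_result(body, "application/json; charset=utf-8")
print(data["value"])  # 42
print(decode_result("<html></html>", "text/html"))  # unchanged string
```

In the real class the decoded value is what arrives as the `result` argument of the on_success callback, which is why the got_json example above can iterate over it directly.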
https://kivy.org/doc/stable/api-kivy.network.urlrequest.html
, different shapes etc. - Using nested for loops we can access a multidimensional array's elements easily. - Nested loops are used for complex programs. Example:

using System;

namespace csharpBasic {
    // Start class definition / declaration.
    class Program {
        // Static main method with void return type.
        static void Main(string[] args) {
            // Initialize an int variable sum with 0.
            int sum = 0;
            /* The following for loop is called the outer loop; according to
               its condition it executes 3 times. */
            for (int outerNumber = 1; outerNumber <= 3; outerNumber++) {
                // Start of outer loop scope.
                /* The following loop is called the inner loop. It is declared
                   in the outer loop's body, and according to its condition it
                   also executes 3 times on each pass of the outer loop. */
                for (int innerNumber = 1; innerNumber <= 3; innerNumber++) {
                    // Start of inner loop scope.
                    /* On every execution of the inner loop, 1 is added to the
                       sum variable. NOTE: this continues until the inner
                       loop's condition becomes false, and the inner loop runs
                       again on each outer iteration until the outer loop's
                       condition becomes false. */
                    sum += 1;
                    // Print the sum variable.
                    Console.Write(" {0}", sum);
                } // End inner loop scope.
                /* Each time the inner loop finishes, a new line is printed. */
                Console.WriteLine();
            } // End outer loop scope.
            Console.ReadKey();
        } // End of main method definition.
    } // End of class.
    /* The Output will be:
       1 2 3
       4 5 6
       7 8 9
    */
}
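For comparison (an addition, not part of the original C# tutorial), the same nested-loop logic in Python builds the identical 3x3 output:

```python
# A running sum incremented once per inner-loop iteration,
# mirroring the C# example above.
total = 0
lines = []
for outer in range(1, 4):        # outer loop: 3 iterations
    row = []
    for inner in range(1, 4):    # inner loop: 3 iterations per outer pass
        total += 1
        row.append(total)
    # One printed line per outer iteration, like Console.WriteLine().
    lines.append(" ".join(str(n) for n in row))

print("\n".join(lines))
# 1 2 3
# 4 5 6
# 7 8 9
```

The inner loop body runs 3 x 3 = 9 times in total, which is why the final value printed is 9 in both languages.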
https://tutorialstown.com/csharp-nested-for-loops/
Demystifying Hello World in Java! The majority of folks in the programming community start with the plain old "Hello World" when experimenting with a new language. Let's print "Hello World" in PHP.

<?php echo "Hello World"; ?>

Too lazy? Use short tags (not recommended):

<?="Hello World"?>

In JavaScript:

console.log("Hello World");

Now let's move on to Java. This is a simple blueprint of an iPhone 5. It includes a home button, volume rockers and a mute switch. There is a power button on the top and a few sensors on the top portion of the display. The front-facing camera also lies at the top. It has a Multi-Touch display and many more features. The main purpose of a blueprint is to describe the features and various components of anything we wish to develop, inherently abstracting away the complexity involved. We can create many instances of a class, each having a specific state and behaviour. In the above case, we can manufacture multiple iPhone 5's with this blueprint; some may be defective (a faulty camera, button, etc.), implying not all objects will have the same state. Components of an object should not be accessed directly. They should be accessed with the help of methods. Methods are functions that exist inside a class. Switching to our Hello World example: public simply implies that the class is visible/accessible to everyone. DemoClass is the name of the class I created.

// public implies the class is accessible to everyone.
public class DemoClass {

main() is the starting point of an application in Java. Whenever we run a Java program, main() is the first method to be called.

// We want our main method to be public since an external agent (outside the class) will call the main method.
// void simply implies the main method does not return any value.
public static void main(String[] args) {

What is the "static" keyword? Any method or field with the keyword static is, by convention, called via the class name.
So some external agent will make a call to main() method not by creating an instance of the enclosing class rather just by Classname.main(args …); e.g, DemoClass.main(). NOTE: dot(.) operator is used to access/call the components of an object. //static implies method will be called with the help of Classname //args consists of an array of data passed by an external agent public static void main(String[] args){ Any method or field without the static keyword is a non-static method or field. Non-static members need to be called with the instance of the class. println() is a non-static method inside the PrintStream class. Non-static method needs to be called with the instance of the class. The out field inside System class is an instance of the PrintStream class. To call println() on PrintStream instance, we make use of the dot(.) operator. //println() is a non-static method in PrintStream class //We make use of the out instance to call println() out.println("Hello World"); The out field inside System class is an instance of the PrintStream class but it is a static field. As we all know, static fields need to be called with the enclosing classname. Hence, we have: System.out.println() //public implies the class is accessible to everyone public class DemoClass{ //public implies the main method is accessible to everyone //static implies the method can be called with the classname //void indicates the method does not return any value public static void main(String[] args){ //println is a non-static method inside PrintStream class //out is an instance of PrintStream class //out is a static field inside System class System.out.println("Hello World"); }} To summarize, - A class is a blueprint of any object describing it’s state and behaviour. - An object is an instance of a class. - Functions existing inside a class are called as methods. - Variables existing inside a class are called as fields. - Static members need to be called with classname. 
- Non-static members need to be called with the instance of the class.
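All of the rules in the summary can be exercised in one small program. The class and member names below are my own for illustration; they are not from the original post:

```java
public class MemberAccessDemo {
    // static field: one copy, shared by the whole class
    static int instancesCreated = 0;

    // non-static field: every object carries its own copy
    int id;

    MemberAccessDemo() {
        instancesCreated++;
        this.id = instancesCreated;
    }

    // static method: called with the class name, like DemoClass.main()
    static int created() {
        return instancesCreated;
    }

    // non-static method: needs an instance, like out.println()
    int getId() {
        return id;
    }

    public static void main(String[] args) {
        MemberAccessDemo a = new MemberAccessDemo();
        MemberAccessDemo b = new MemberAccessDemo();

        System.out.println(MemberAccessDemo.created()); // 2 -- via the class name
        System.out.println(a.getId());                  // 1 -- via an instance
        System.out.println(b.getId());                  // 2 -- via another instance
    }
}
```

Note that System.out.println() itself combines both rules: out is reached statically through the System class, and println() is then called non-statically on that PrintStream instance with the dot operator.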
https://deveshshetty.medium.com/demystifying-hello-world-in-java-a03397217fb8?responsesOpen=true&source=---------3----------------------------
This is a question about workflow. I like to code in an exploratory, iterative way, but I generally prefer not to use REPL or notebook environments, because I like to keep everything in a state where even if I get interrupted tomorrow and have to come back to it 6 months later, I can get it running again straight away, without having to remember what I did. (This happens to me regularly.)

In Python, I do this by putting everything in a file test.py, and then just calling python3 test.py at a shell prompt. The code in test.py typically imports matplotlib, does some calculations, then pops up a plot window showing the results. When doing things this way, it's absolutely guaranteed that my code is not accidentally referring to any global state and will run in exactly the same way next time, as long as I use the same version of Python.

I'd like to achieve a similar workflow in julia. However, it's made complicated by the fact that starting julia and importing Plots takes a good 30 seconds, and Plots doesn't seem designed to support this kind of workflow. So I am wondering if it's possible to import/run my file in the REPL, in such a way that it's guaranteed to be unable to access any global state, without having to wait for Plots to be imported every time it's run.

I know that I can put my code in a module, and that when I re-include it the module's namespace will be reinitialised. However, this isn't enough, because if I've understood correctly, this only applies to global variables in my module's namespace and not other modules' namespaces. So if some previous iteration of my code has changed some global state stored in Plots or some other module, the current version might not run the same way after the REPL is restarted.

Is there a way to achieve what I'm asking for? Or am I asking the wrong question somehow? How do julia users handle this kind of repeatability issue, in general?
https://discourse.julialang.org/t/workflow-question-how-to-guarantee-no-dependence-on-global-state-without-long-load-times/24613
(1) Chris wrote an excellent blog about named pipe binding.

(2) Named pipe binding security model

When a named pipe channel listener creates a new named pipe, it has to supply a discretionary ACL that describes who can connect to the pipe. Here is how that DACL is constructed:

- An access control entry is added to deny GENERIC_ALL access to the well-known network SID (S-1-5-2).
- Access control entries are added to allow GENERIC_READ and GENERIC_WRITE access to a list of SIDs that is defined on the binding element. The default is to allow the well-known world SID (S-1-1-0). Since this list is an internal setting, you will almost always be using the default.
- An access control entry is added to allow GENERIC_READ and GENERIC_WRITE access to the well-known creator owner SID (S-1-3-0).

And that's how the DACL gets built. There are a few other settings required to create the pipe, if you're interested in their values: the pipe is bidirectional (PIPE_ACCESS_DUPLEX), data is written to the pipe as messages (PIPE_TYPE_MESSAGE), data is read from the pipe as messages (PIPE_READMODE_MESSAGE), we use overlapped IO (FILE_FLAG_OVERLAPPED), and if this is the first pipe created by the listener, then we need to say that more pipes are coming (FILE_FLAG_FIRST_PIPE_INSTANCE).

(3) How can I know which pipes are open on my machine?

Named pipe directory listings: did you know that the device driver that implements named pipes is actually a file system driver? In fact, the driver's name is NPFS.SYS, for "Named Pipe File System". To demonstrate the listing of named pipes I've written a program called PipeList. PipeList displays the named pipes on your system, including the number of maximum instances and active instances for each pipe.

(4) Sample code to create a named pipe

Server:

#include <windows.h>
#include <process.h>
#include <ctype.h>
#include <stdio.h>

HANDLE hPipe;
int Buffer_in;
DWORD count;

int main(int argc, char* argv[])
{
    hPipe = CreateNamedPipe("\\\\.\\pipe\\muller",   // this machine
                            PIPE_ACCESS_INBOUND,
                            PIPE_TYPE_BYTE | PIPE_WAIT,
                            10,
                            0,
                            sizeof(Buffer_in),
                            10000,                   // timeout in milliseconds
                            NULL);                   // security descriptor
    if (INVALID_HANDLE_VALUE == hPipe)
    {
        printf("Server pipe not created\n");
        exit(0);
    }
    else
        printf("Successful in creating server pipe\n");

    // Wait for a connection.
    while (!ConnectNamedPipe(hPipe, (LPOVERLAPPED) NULL));
    printf("Client has connected\n");

    for (int i = 0; i < 10; i++)
    {
        ReadFile(hPipe, (LPVOID) &Buffer_in, (DWORD) sizeof(Buffer_in),
                 &count, (LPOVERLAPPED) NULL);
        printf("received %d\n", Buffer_in);
    }

    printf("press 'c' to quit\n");
    while (toupper(getchar()) != 'C');
    CloseHandle(hPipe);
    return 0;
}

Client (clientpipe.cpp):

#include <windows.h>
#include <process.h>
#include <ctype.h>
#include <stdio.h>

HANDLE hPipe;
const int BUFSIZE = 10;
int Buffer_out;
DWORD count;

int main(int argc, char* argv[])
{
    hPipe = CreateFile("\\\\.\\pipe\\muller",        // this machine
                       GENERIC_WRITE,
                       0,
                       NULL,
                       OPEN_EXISTING,
                       FILE_ATTRIBUTE_NORMAL,
                       (HANDLE) NULL);
    if (INVALID_HANDLE_VALUE == hPipe)
    {
        printf("Server pipe not found\n");
        goto done;
    }
    else
        printf("Successful in finding server pipe\n");

    for (Buffer_out = 0; Buffer_out < BUFSIZE; Buffer_out++)
    {
        printf("sending %d\n", Buffer_out);
        WriteFile(hPipe, &Buffer_out, sizeof(Buffer_out),
                  &count, NULL);
    }

    printf("%d integers written, press 'c' to quit\n", BUFSIZE);
    CloseHandle(hPipe);
done:
    while (toupper(getchar()) != 'C');
    return 0;
}

MORE INFO: If you want to know about SID
https://blogs.msdn.microsoft.com/madhuponduru/2008/07/11/all-about-named-pipe-binding/
To get familiar with this kind of attack, I would recommend viewing the following talk from the 28C3: Denial-of-Service attacks on web applications made easy.

To make a long story short, the core issue is the usage of a non-cryptographic hash function (where finding collisions is easy). The root cause is hidden in the java.lang.String.hashCode() function. The obvious approach would be to patch the java.lang.String class, which is difficult for two reasons:

- it contains native code
- it belongs to the Java core classes, which are delivered with the Java installation and thus out of our control

The first point would force us to patch with architecture- and OS-specific libs, which we should circumvent whenever possible. The second point is true, but it is a little more flexible, as we will see in the following.

Ok, so let's reconsider: patching native code is dirty and we are not eager to go this way – we would have to do work for others (in this case patch SDK libs) who are not willing to fix their code.

An attempt: the classes java.util.Hashtable and java.util.HashMap are affected by the hashing issue and don't use any native code. Patching these classes is much easier, as it is sufficient to provide one compiled class for all architectures and OSs. We could use one of the provided solutions for the bug and adjust (or replace) the original classes with fixed versions.

The difficulty is to patch the VM without touching the core libs – I guess users would be very disappointed if they had to change parts of their JVM installation or, even worse, if our application did this automatically during installation. Further on, introducing new, custom classloaders could be difficult in some cases. What we need is a solution to patch our single application on the fly – replace the buggy classes and don't touch anything else. If we do this transparently, other software parts don't even recognize any changes (in the best case) and keep interfacing the classes without any modifications.
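The collision weakness in String.hashCode() is easy to demonstrate. The hash is the polynomial s[0]*31^(n-1) + ... + s[n-1], so the two-character strings "Aa" and "BB" collide (65*31 + 97 == 66*31 + 66 == 2112), and colliding pairs can be concatenated into 2^n colliding keys of length 2n — exactly what the attack feeds into a Hashtable or HashMap:

```java
public class CollisionDemo {
    public static void main(String[] args) {
        // the classic two-character collision: both hash to 2112
        System.out.println("Aa".hashCode()); // 2112
        System.out.println("BB".hashCode()); // 2112

        // collisions compose: every concatenation of colliding pairs collides too
        System.out.println("AaAa".hashCode() == "AaBB".hashCode()); // true
        System.out.println("BBAa".hashCode() == "BBBB".hashCode()); // true
    }
}
```

Keys built this way land in a single bucket and degrade every HashMap operation to a linear scan, which is why patching the lookup classes on the fly is attractive.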
This could easily be done by abusing the Java Instrumentation API. To quote the JavaDoc: And that is exactly what we need!

Proof of concept

At first we need a sample application to demonstrate the concept:

public class StringChanger {
    public static void main(String[] args) {
        System.out.println(A.shout());
    }
}

public class A {
    public static String shout() {
        return "A";
    }
}

When this class is run it simply outputs:

A

After applying our "patch" we would like to have the following output:

Apatched

The patched code looks like this:

public class A {
    public static String shout() {
        return "Apatched";
    }
}

Further on we need an "Agent" which governs the used classes and patches the right ones:

final public class PatchingAgent implements ClassFileTransformer {

    private static byte[] PATCHED_BYTES;
    private static final String PATH_TO_FILE = "Apatched.class";
    private static final String CLASS_TO_PATCH = "stringchanger/A";

    public PatchingAgent() throws FileNotFoundException, IOException {
        if (PATCHED_BYTES == null) {
            PATCHED_BYTES = readPatchedCode(PATH_TO_FILE);
        }
    }

    public static void premain(String agentArgument, final Instrumentation instrumentation) {
        System.out.println("Initializing hot patcher...");
        PatchingAgent agent = null;
        try {
            agent = new PatchingAgent();
        } catch (Exception e) {
            System.out.println("terrible things happened....");
        }
        instrumentation.addTransformer(agent);
    }

    @Override
    public byte[] transform(final ClassLoader loader, String className,
            final Class classBeingRedefined, final ProtectionDomain protectionDomain,
            final byte[] classfileBuffer) throws IllegalClassFormatException {
        byte[] result = null;
        if (className.equals(CLASS_TO_PATCH)) {
            System.out.println("Patching... " + className);
            result = PATCHED_BYTES;
        }
        return result;
    }

    private byte[] readPatchedCode(final String path) throws FileNotFoundException, IOException {
        ...
    }
}

Don't worry – I'm not going to bother you with implementation details, since this is only PoC code, far from being nice, clever, fast and neat. Apart from the fact that I'm catching Exception just because I'm too lazy at this point, I'm not filtering inputs or building deep copies (defensive programming as a buzzword) – this really shouldn't be taken as production code.

public PatchingAgent() initializes the agent, in this case fetching the bytes of a patched A.class file. The patched class was compiled and is stored somewhere where we can access it.

public static void premain(...) is called after the JVM has initialized and prepares the agent.

public byte[] transform(...) – whenever a class is defined (for example by ClassLoader.defineClass(...)), this function gets invoked and may transform the handled class byte[] (classfileBuffer). As can be seen, we do this for our class A in the stringchanger package. You are not limited in how you transform the class (as long as it remains a valid Java class) – for example, you could utilize bytecode modification frameworks. To keep things simple, we just replace the old byte[] with that of the patched class (by simply buffering the complete patched A.class file into a byte[]).

That's all for the coding part of the patcher. As a final thing we have to build a jar of the agent with a special manifest.mf file which tells the JVM how the agent can be invoked:

Manifest-Version: 1.0
X-COMMENT: Main-Class will be added automatically by build
Premain-Class: stringchanger.PatchingAgent

After building this jar we can try out our PoC application. At first we will call it without the necessary JVM arguments to invoke the agent:

run:
A
BUILD SUCCESSFUL (total time: 0 seconds)

It behaves as expected and prints the output as defined by the unpatched class.
And now we will try it with the magic JVM argument to invoke the agent, -javaagent:StringChanger.jar:

run:
Initializing hot patcher...
Reading patched file.
Patching... stringchanger/A
Apatched
BUILD SUCCESSFUL (total time: 0 seconds)

Voilà, the code was successfully patched on the fly! As we can see, it is possible to hot-patch a JVM dynamically without touching the delivered code. What has to be done is the development of a patching agent and a patched class. At this moment I'm not aware of any performance measurements, so I'm very unsure how practical this solution is for production systems and to what extent it influences application performance.

To make it clear, this is not an elegant solution – at least it is very dirty! The best way would be to patch the root cause, but as long as there is no vendor fix, developers can protect their software by hot-patching without rewriting every single line where the vulnerable classes are used.

Finally, I would kindly ask for comments, improvements or simply better solutions. Many thanks to Juraj Somorovsky, who jointly works on this issue with me.

Reference: Patching Java at runtime from our JCG partner Christopher Meyer at the Java security and related topics blog.

Looks interesting. I'm going to try this to crack a very nasty java exploit that has been breaking my balls lately
http://www.javacodegeeks.com/2012/02/patching-java-at-runtime.html/comment-page-1/