Is it possible to create a table (in my dev db) using a SELECT from a different database? I want something like: ``` create tmp_table as select * from prod_db.prod_schema.table ``` Is there syntax to do this, or do I need to create a database link first?
**You have to create a database link first**. Oracle cannot query other databases unless a DB link is created. If a DB link exists, as you remarked, you have to do: ``` create tmp_table as select * from prod_schema.table@prod_db ```
@Steve is correct that there has to be a DB Link, but the syntax is: ``` create tmp_table as select * from table@dblink ```
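For completeness, a sketch of both steps together. The link name, credentials, TNS alias, and table name below are placeholders, not values from the question:

```sql
-- Hypothetical names; adjust to your environment.
CREATE DATABASE LINK prod_db
  CONNECT TO prod_user IDENTIFIED BY prod_password
  USING 'PRODTNS';

-- Once the link exists, the CTAS works against the remote table:
CREATE TABLE tmp_table AS
  SELECT * FROM prod_schema.some_table@prod_db;
```

Note that the link name (`prod_db`) is what goes after the `@`, not the remote database's own name.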
How can I create a table as a select from another database in Oracle?
[ "sql", "oracle" ]
Is it possible to evaluate the following in C# at runtime? I have a class that contains 3 properties (`Field`, `Operator`, `Value`) ``` rule.Field; rule.Operator; rule.Value; ``` this is my rule class. Now I have a loop ``` foreach(item in items) { // here I want to create a dynamic expression to evaluate at runtime // something like if (item.[rule.field] [rule.operator] [rule.value]) { do work } } ``` I just don't know the syntax, or if it's possible in C#. I know it's possible in JS, but that's not a compiled language. **Update** Essentially I want a way to `eval(stringCode)`, or a better, more supported way.
I'm not entirely sure what you are saying. Can you try clarifying it a bit? Are you wanting to take a string expression and evaluate it at runtime in C#? If so the answer is no. C# does not support such types of dynamic evaluation.
No, C# doesn't support anything like this directly. The closest options are: * Create a full valid C# program and dynamically compile it with [`CSharpCodeProvider`](http://msdn.microsoft.com/en-us/library/microsoft.csharp.csharpcodeprovider.aspx). * Build an [expression tree](http://msdn.microsoft.com/en-us/library/bb397951.aspx), compile and execute it * Perform the evaluation yourself (this may actually be easiest, depending on your operators etc)
eval(string) to C# code
[ "c#", "runtime", "evaluation" ]
I have a database table with a field that I need to read from and write to via Hibernate. It is string field, but the contents are encrypted. And for various reasons (e.g. a need to sort the plain text values), the encrypt/decrypt functions are implemented inside the database, not in Java. The problem I'm struggling with now is finding a way to invoke the encrypt/decrypt functions in Hibernate-generated SQL everywhere that the field is referenced and in a way that's transparent to my application code. Is this possible? I've looked into Hibernate's support for "derived" properties, but unfortunately, that approach doesn't support read-write fields. Any ideas appreciated.
Actually, in the end, I went a different route and submitted a [patch](https://hibernate.atlassian.net/browse/HHH-4440) to Hibernate. It was committed to trunk last week and so I think it will be in the next release following 3.5. Now, in property mappings, you can specify SQL "read" and "write" expressions to call SQL functions or perform some other kind of database-side conversion.
I don't think there's a way to make encryption like you've described it **completely** transparent to your application. The closest thing you can get is to make it transparent outside of entity. In your entity class: ``` @Entity @SQLInsert(sql="INSERT INTO my_table(my_column, id) VALUES(encrypt(?),?)") @SQLUpdate( sql="UPDATE my_table SET my_column = encrypt(?) WHERE id = ?") public class MyEntity { private String myValue; .... @Formula("decrypt(my_column)") public String getValue() { return myValue; } public void setValue(String value) { myValue = value; } @Column (name="my_column") private String getValueCopy() { return myValue; } private void setValueCopy(String value) { } } ``` `value` is mapped as derived property, you should be able to use it in queries. `valueCopy` is private and is used to get around derived property being read-only. `SQLInsert` and `SQLUpdate` is black voodoo magic to force encryption on insert / update. Note that parameter order **IS** important, you need to find out what order Hibernate would generate parameters in without using custom insert / update and then replicate it.
Automatically apply field conversion function in Hibernate
[ "java", "database", "hibernate", "encryption", "jpa" ]
> I am trying to write a C++ implementation of factory design pattern. I would also like to do it using shared objects and dynamic loading. I am implementing a function called new`_`animal() which is passed a string. If the string is "dog", then it needs to see if a *class dog* is registered in shared object and create a dog object. If the string is "cat" it needs to find registered *class cat* and return an object of it. The function new`_`animal() does not know ahead of the time what strings will be passed to it. Thus, it would error out if a string with corresponding unregistered class is passed. Here is the code - creator.hpp - ``` #ifndef CREATOR_HPP #define CREATOR_HPP #include <string> class Animal { public : virtual string operator() (const string &animal_name) const = 0; virtual void eat() const = 0; virtual ~Animal() { } }; class AnimalCreator { public : // virtual Animal *create() const = 0; virtual ~AnimalCreator() { } }; typedef Animal* create_animal_t(); typedef void destroy_animal_t(Animal *); #endif ``` cat.hpp - ``` #ifndef CAT_HPP #define CAT_HPP #include "creator.hpp" #include <iostream> #include <string> class cat : public Animal { public : string operator() (const string &animal_name) const { return "In cat () operator"; } void eat() const { cout << "cat is eating" << endl; } }; class catCreator : public AnimalCreator { public : }theCatCreator; #endif ``` cat.cpp - ``` #include "cat.hpp" #include <iostream> using namespace std; extern "C" Animal *create() { cout << "Creating cat ..." 
<< endl; return new cat; } extern "C" void destroy(Animal* a) { delete a; } ``` dog.hpp - ``` #ifndef DOG_HPP #define DOG_HPP #include <string> #include "creator.hpp" class dog : public Animal { public: string operator() (const string &animal_name) const { return "In dog"; } void eat() const { cout << "Dog is eating" << endl; } }; class dogCreator : public AnimalCreator { public: }theDogCreator; #endif ``` dog.cpp - ``` #include "dog.hpp" #include <iostream> using namespace std; extern "C" Animal *create() { cout << "Creating dog" << endl; return new dog; } extern "C" void destroy(Animal *aa) { delete aa; } ``` main.cpp - ``` #include "creator.hpp" #include "cat.hpp" #include "dog.hpp" #include <iostream> #include <string> #include <map> #include <dlfcn.h> map<string, AnimalCreator *> AnimalMap; void initialize() { AnimalMap["dog"] = &theDogCreator; AnimalMap["cat"] = &theCatCreator; } Animal * new_animal(const string &animal) { static bool isInitialised (false); if (!isInitialised) { initialize(); isInitialised = true; } AnimalCreator *theAnimalCreator = AnimalMap[animal]; if (!theAnimalCreator) { cout << "error: " << animal << " not registered" << endl; exit(1); } Animal *theAnimal = theAnimalCreator->create(); return theAnimal; } int main() { void *animal = dlopen("animal", RTLD_LAZY); if (!animal) { cout << "error is dlopen" << endl; exit(1); } create_animal_t* new_animal = (create_animal_t*) dlsym(animal, "create"); if (!new_animal) { cout << "error is dlsym create" << endl; exit(1); } destroy_animal_t* destroy_animal = (destroy_animal_t*) dlsym(animal, "destroy"); if (!destroy_animal) { cout << "error is dlsym destroy" << endl; exit(1); } Animal *a = new_animal("dog"); Animal *b = new_animal("cat"); a->eat(); b->eat(); destroy_animal(a); destroy_animal(b); dlclose(animal); return 0; } ``` Makefile - ``` # macros CC = g++ CFLAGS = -g -Wall MODFLAGS = -fpic -shared LDFLAGS = -ldl OBJECTS = main.o animal # targets all: foo foo: $(OBJECTS) $(CC) -o foo $(OBJECTS)
$(LDFLAGS) animal: dog.cpp cat.cpp $(CC) $(CFLAGS) $(MODFLAGS) dog.cpp cat.cpp -o animal clean: rm -f foo $(OBJECTS) ``` when I create a shared object using *make animal*, this is what I get - ``` bash-2.05$ make animal g++ -g -Wall -fpic -shared dog.cpp cat.cpp -o animal ld: fatal: symbol `create' is multiply-defined: (file /var/tmp/ccgDUpwo.o type=FUNC; file /var/tmp/ccv0VjHp.o type=FUNC); ld: fatal: symbol `destroy' is multiply-defined: (file /var/tmp/ccgDUpwo.o type=FUNC; file /var/tmp/ccv0VjHp.o type=FUNC); ld: fatal: File processing errors. No output written to animal collect2: ld returned 1 exit status make: *** [animal] Error 1 ``` I understand that there are multiple definitions of method create() and destroy() and hence the error. But at the same time, I could not use any class specific create() method in main.cpp because doing that will not make it generic. I am keeping create() and destroy() functions outside the class definition. I am also using extern "C" to make sure that compiler does not add name mangling, and keeps symbol name in shared libraries same as the function name. Can somebody please give me some hints about how to go about this problem ? Any changes that could be done in the class design ? Thanks for being patient in reading the code above. `-` Onkar Deshpande
The problem is exactly what the linker tells you: it cannot choose a `create` function between the two offered definitions. So make it easier on the linker by using different symbols. You could manually choose names like `dog_create` and `cat_create`, but you could also just write a template function and instantiate it for the `cat` and `dog` types. ``` // an abstract base factory class struct Creator { virtual Animal* create() const = 0; virtual ~Creator() {} }; template<typename tAnimal> struct TCreator : public Creator { tAnimal* create() const { return new tAnimal(); } // note that the return type can be covariant in C++ }; ... map<string, Creator*> AnimalMap; void initialize() { AnimalMap["dog"] = new TCreator<dog>; AnimalMap["cat"] = new TCreator<cat>; } ``` This way, you won't need a `create` method *per animal*.
You mixed two proposed solutions: either use the `extern "C"` linkage for the `create` and `destroy` functions - if you want runtime flexibility (e.g. loading more modules on-the-fly), or use a compiled-in Factory Function, where you don't need to dynamically load symbols at all. Now you ended up with two shared objects that export no creator objects at all: the code ``` struct X { int i; double d; } theX; ``` only *declares* a symbol `theX` to whatever cpp file includes the header containing it. It should also be *defined* somewhere, either in the X.cpp file, or, preferably, using the proposed template class, in the main/creators.cpp file. But my guess is you want the dynamic behaviour, so this is my proposal: 1. go back to the `extern "C" Animal* create()` version 2. keep your makefile to have a "dog" and a "cat" shared object
Multiple Symbol Reference problem in shared library of a factory design pattern
[ "c++", "design-patterns", "oop", "shared-libraries" ]
I would like to be able to allow a user to view the output of a long-running CGI script as it is generated, rather than after the script is complete. However, even when I explicitly flush STDOUT, the server seems to wait for the script to complete before sending the response to the client. This is on a Linux server running Apache 2.2.9. Example Python CGI: ``` #!/usr/bin/python import time import sys print "Content-type: text/plain" print for i in range(1, 10): print i sys.stdout.flush() time.sleep(1) print "Done." ``` Similar example in Perl: ``` #!/usr/bin/perl print "Content-type: text/plain\n\n"; for ($i = 1; $i <= 10 ; $i++) { print "$i\n"; sleep(1); } print "Done."; ``` This link says that as of Apache 1.3, CGI output should be unbuffered (but this might apply only to Apache 1.x): <http://httpd.apache.org/docs/1.3/misc/FAQ-F.html#nph-scripts> Any ideas?
Randal Schwartz's article [Watching long processes through CGI](http://www.stonehenge.com/merlyn/LinuxMag/col39.html) explains a different (and IMHO, better) way of watching a long running process.
Flushing STDOUT can help. For example, the following Perl program should work as intended: ``` #!/usr/bin/perl use strict; use warnings; local $| = 1; print "Content-type: text/plain\n\n"; for ( my $i = 1 ; $i <= 10 ; $i++ ) { print "$i\n"; sleep(1); } print "Done."; ```
How can I serve unbuffered CGI content from Apache 2?
[ "python", "perl", "apache2", "cgi" ]
I overloaded operator << ``` template <Typename T> UIStream& operator<<(const T); UIStream my_stream; my_stream << 10 << " heads"; ``` Works but: ``` my_stream << endl; ``` Gives compilation error: > error C2678: binary '<<' : no operator found which takes a left-hand operand of type 'UIStream' (or there is no acceptable conversion) What is the work around for making `my_stream << endl` work?
`std::endl` is a function and `std::cout` utilizes it by implementing `operator<<` to take a function pointer with the same signature as `std::endl`. In there, it calls the function, and forwards the return value. Here is a code example: ``` #include <iostream> struct MyStream { template <typename T> MyStream& operator<<(const T& x) { std::cout << x; return *this; } // function that takes a custom stream, and returns it typedef MyStream& (*MyStreamManipulator)(MyStream&); // take in a function with the custom signature MyStream& operator<<(MyStreamManipulator manip) { // call the function, and return its value return manip(*this); } // define the custom endl for this stream. // note how it matches the `MyStreamManipulator` // function signature static MyStream& endl(MyStream& stream) { // print a new line std::cout << std::endl; // do other stuff with the stream // std::cout, for example, will flush the stream stream << "Called MyStream::endl!" << std::endl; return stream; } // this is the type of std::cout typedef std::basic_ostream<char, std::char_traits<char> > CoutType; // this is the function signature of std::endl typedef CoutType& (*StandardEndLine)(CoutType&); // define an operator<< to take in std::endl MyStream& operator<<(StandardEndLine manip) { // call the function, but we cannot return its value manip(std::cout); return *this; } }; int main(void) { MyStream stream; stream << 10 << " faces."; stream << MyStream::endl; stream << std::endl; return 0; } ``` Hopefully this gives you a better idea of how these things work.
The problem is that `std::endl` is a function template, just as your operator `<<` is. So when you write: ``` my_stream << endl; ``` you'd like the compiler to deduce the template parameters for the operator as well as for `endl`. This isn't possible. So you have to write additional, non-template overloads of operator `<<` to work with manipulators. Their prototype will look like: ``` UIStream& operator<<(UIStream& os, std::ostream& (*pf)(std::ostream&)); ``` (there are two others, replacing `std::ostream` by `std::basic_ios<char>` and `std::ios_base`, which you also have to provide if you want to allow all manipulators) and their implementation will be very similar to the one of your templates. In fact, so similar that you can use your template for the implementation like this: ``` typedef std::ostream& (*ostream_manipulator)(std::ostream&); UIStream& operator<<(UIStream& os, ostream_manipulator pf) { return operator<< <ostream_manipulator> (os, pf); } ``` A final note: writing a custom `streambuf` is often a better way to achieve what you are trying to do with this technique.
std::endl is of unknown type when overloading operator<<
[ "c++", "stl", "operator-overloading" ]
I'm wondering if someone would help me troubleshoot my test for stream.publish. I thought I had all the right pieces. Here's the code: ``` <?php require_once 'facebook.php'; $appapikey = 'xxxxxxx'; $appsecret = 'xxxxxxx'; $facebook = new Facebook($appapikey, $appsecret); $user_id = $facebook->require_login(); $message = "Will this status show up and allow me to dominate the world?!"; $uid = $user_id; echo $uid; $facebook->api_client->stream_publish($message,$uid); ``` What I'm expecting is my status to change to $message's content. What happens instead is that my UID is echo'd, and then it throws a 500 error. I've allowed `publish_stream` as well as `offline_access` (verified in my app settings, via my profile), and the API key hooks this small bit of code to my app. What other pieces do I need to make this simple example work? I'm finding the FB documentation a little hard to put together. -- The include is the official PHP Facebook library
stream\_publish() takes more than two arguments: ``` stream_publish($message, $attachment = null, $action_links = null, $target_id = null, $uid = null) ``` Where $target\_id is the user or page you're publishing **to** and $uid is the user or page who is doing the publishing - and which defaults to your session id. To be completely explicit about this, I think you need to try ``` <?php require_once 'facebook.php'; $appapikey = 'xxxxxxx'; $appsecret = 'xxxxxxx'; $facebook = new Facebook($appapikey, $appsecret); $user_id = $facebook->require_login(); $message = "Will this status show up and allow me to dominate the world?!"; echo $user_id; $facebook->api_client->stream_publish($message,null,null,$user_id,$user_id); ``` An alternate form might be: ``` $app_id = 'xxxxxxx'; $facebook->api_client->stream_publish($message,null,null,$user_id,$app_id); ```
This one works in 2011! I had the same problem. Most of the tuts seem to be out of date thanks to Facebook changes. I eventually found a way that worked and did a quick blog article about it here: <http://facebookanswers.co.uk/?p=214> There's also a screen shot to show you what the result is. Make sure you also see the blog post about authentication though.
What are the prerequisites for Facebook stream.publish?
[ "php", "api", "facebook" ]
I have heard that PHP6 will natively support Unicode, which will hopefully make multi-language support much easier. However, PHP5 has pretty weak support for Unicode and multi-language (i.e. just a bunch of specialized string functions). I was wondering what your strategies are to enable Unicode and multi-language support in your PHP5 applications? Also, how do you store translations, since PHP5 doesn't have WebResource files like ASP.NET does?
It's not all that hard really, but you may want to make your question a bit more specific. If you're talking to a database, make sure your database stores data in UTF-8 and the **connection to your database is in UTF-8** (a common pitfall). Make sure to run this when establishing a connection: ``` mysql_set_charset('utf8'); ``` For user input, set the `accept-charset` attribute on your forms. ``` <form accept-charset="utf-8"> ``` Serve your sites with an appropriate HTTP header: ``` header('Content-Type: text/html; charset=utf-8'); ``` or at least set appropriate meta tags for your site: ``` <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> ``` Keep your source code files encoded in UTF-8. If you keep everything in UTF-8, you usually don't need to worry about anything. It's only getting problematic once you start mixing encodings throughout your app. If you're starting to talk about string manipulation of course, you'll have to take a little more care. Mostly you'll want to use the [`mb_`](http://php.net/manual/en/ref.mbstring.php) set of string functions, as you pointed out yourself.
For translations, you can either use a framework, or just roll your own library. You can store translations in csv files and use PHP's fgetcsv() to parse it. CSV files can be edited with any spreadsheet app. For an example, you can look at the code of Zend\_Translate (part of Zend Framework). It's easy to follow along.
Strategy for supporting unicode & multi language in PHP5
[ "php", "unicode" ]
My JOptionPane code is as follows: ``` selectedSiteName = JOptionPane.showInputDialog("Enter the name of the new site:"); ``` This renders out an input with a textbox and an OK and Cancel button. I need to detect if Cancel was clicked. Cheers.
Check if `selectedSiteName == null`. This will be the case if the user clicks Cancel or closes the dialog.
Read the JOptionPane API and follow the link to the Swing tutorial on "How to Use Dialogs" for a working example.
Get the return value of JOptionPane
[ "java", "joptionpane" ]
If we have: ``` public interface Foo{} public class Bar implements Foo{...} ``` Is there a difference between: ``` public class BarBar extends Bar implements Foo{..} ``` and ``` public class BarBar extends Bar{..} ``` I see a lot of code like this and it always confuses me. Does `BarBar` need to implement `Foo`? I mean since it extends `Bar` to begin with isn't that already there? I guess my question is, what purpose does implementing `Foo` in `BarBar` serve here?
The main difference is 15 completely unnecessary characters :-) When your parent class implements some interface, all interface methods are either implemented by it or are defined (explicitly or implicitly) as abstract. Either way, your class extending the parent class inherits all those methods and implicitly implements the original interface.
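A minimal sketch (using the class names from the question) showing that the inherited implementation already satisfies the interface, with or without the redundant `implements Foo`:

```java
interface Foo {}

class Bar implements Foo {}

// No "implements Foo" here -- it is inherited through Bar
class BarBar extends Bar {}

class InheritanceDemo {
    public static void main(String[] args) {
        BarBar b = new BarBar();
        // BarBar is a Foo purely by inheritance
        System.out.println(b instanceof Foo); // prints "true"
    }
}
```

Adding `implements Foo` to `BarBar` changes nothing observable; some teams do it purely as documentation.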
There is no difference. The extra `implements` there is harmless, but useless.
Java interface and inheritance
[ "java", "interface" ]
In Java, is it legal to call remove on a collection when iterating through the collection using a foreach loop? For instance: ``` List<String> names = .... for (String name : names) { // Do something names.remove(name); } ``` As an addendum, is it legal to remove items that have not been iterated over yet? For instance, ``` // Assume that the names list has duplicate entries List<String> names = .... for (String name : names) { // Do something while (names.remove(name)); } ```
To safely remove from a collection while iterating over it you should use an Iterator. For example: ``` List<String> names = .... Iterator<String> i = names.iterator(); while (i.hasNext()) { String s = i.next(); // must be called before you can call i.remove() // Do something i.remove(); } ``` From the [Java Documentation](http://docs.oracle.com/javase/7/docs/api/java/util/ArrayList.html) : > The iterators returned by this class's iterator and listIterator > methods are fail-fast: if the list is structurally modified at any > time after the iterator is created, in any way except through the > iterator's own remove or add methods, the iterator will throw a > ConcurrentModificationException. Thus, in the face of concurrent > modification, the iterator fails quickly and cleanly, rather than > risking arbitrary, non-deterministic behavior at an undetermined time > in the future. Perhaps what is unclear to many novices is the fact that iterating over a list using the for/foreach constructs implicitly creates an iterator which is necessarily inaccessible. This info can be found [here](http://docs.oracle.com/javase/1.5.0/docs/guide/language/foreach.html)
You don't want to do that. It can cause undefined behavior depending on the collection. You want to use an [Iterator](http://java.sun.com/javase/6/docs/api/java/util/Iterator.html) directly. Although the for each construct is syntactic sugar and is really using an iterator, it hides it from your code so you can't access it to call [`Iterator.remove`](http://java.sun.com/javase/6/docs/api/java/util/Iterator.html#remove()). > The behavior of an iterator is > unspecified if the underlying > collection is modified while the > iteration is in progress in any way > other than by calling this method. Instead write your code: ``` List<String> names = .... Iterator<String> it = names.iterator(); while (it.hasNext()) { String name = it.next(); // Do something it.remove(); } ``` Note that the code calls `Iterator.remove`, not `List.remove`. **Addendum:** Even if you are removing an element that has not been iterated over yet, you still don't want to modify the collection and then use the `Iterator`. It might modify the collection in a way that is surprising and affects future operations on the `Iterator`.
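If you are on Java 8 or later (newer than this question), `Collection.removeIf` wraps the same iterator-based removal in a single call, which is a sketch of the most common case:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class RemoveIfDemo {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>(Arrays.asList("alice", "bob", "alice"));
        // removeIf iterates internally and removes through the iterator,
        // so no ConcurrentModificationException is thrown
        names.removeIf(name -> name.equals("alice"));
        System.out.println(names); // prints "[bob]"
    }
}
```

Under the hood this does exactly what the explicit `Iterator` loop above does.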
Calling remove in foreach loop in Java
[ "java", "loops", "iterator", "foreach" ]
I have an `IDataRecord reader` that I'm retrieving a decimal from as follows: ``` decimal d = (decimal)reader[0]; ``` For some reason this throws an invalid cast exception saying that the "Specified cast is not valid." When I do `reader[0].GetType()` it tells me that it is an Int32. As far as I know, this shouldn't be a problem.... I've tested this out by this snippet which works just fine. ``` int i = 3750; decimal d = (decimal)i; ``` This has left me scratching my head wondering why it is failing to unbox the int contained in the reader as a decimal. Does anyone know why this might be occurring? Is there something subtle I'm missing?
You can only unbox a value type to its original type (and the nullable version of that type). By the way, this is valid (just a shorthand for your two-line version): ``` object i = 4; decimal d = (decimal)(int)i; // works even w/o decimal as it's a widening conversion ``` For the reason behind this, read [Eric Lippert's blog entry: Representation and Identity](https://ericlippert.com/2009/03/03/representation-and-identity/) Personally, I categorize things done by cast syntax into four different types of operation (they all have different IL instructions): 1. Boxing (`box` IL instruction) and unboxing (`unbox` IL instruction) 2. Casting through the inheritance hierarchy (like `dynamic_cast<Type>` in C++, uses `castclass` IL instruction to verify) 3. Casting between primitive types (like `static_cast<Type>` in C++, there are plenty of IL instructions for different types of casts between primitive types) 4. Calling user defined conversion operators (at the IL level they are just method calls to the appropriate `op_XXX` method).
There is no problem in casting an `int` to `decimal`, but when you are unboxing an object you have to use the exact type that the object contains. To unbox the `int` value into a `decimal` value, you first unbox it as an int, then cast it to decimal: ``` decimal d = (decimal)(int)reader[0]; ``` The IDataRecord interface also has methods for unboxing the value: ``` decimal d = (decimal)reader.GetInt32(0); ```
Why can't I unbox an int as a decimal?
[ "c#", "decimal", "int", "unboxing" ]
I'm fairly new to C++ so this is probably somewhat of a beginner question. It regards the "proper" style for doing something I suspect to be rather common. I'm writing a function that, in performing its duties, allocates memory on the heap for use by the caller. I'm curious about what a good prototype for this function should look like. Right now I've got: ``` int f(char** buffer); ``` To use it, I would write: ``` char* data; int data_length = f(&data); // ... delete[] data; ``` However, the fact that I'm passing a pointer to a pointer tips me off that I'm probably doing this the wrong way. Anyone care to enlighten me?
In C, that would have been more or less legal. In C++, functions typically shouldn't do that. You should try to use [RAII](http://en.wikipedia.org/wiki/RAII) to guarantee memory doesn't get leaked. And now you might say "how would it leak memory, I call `delete[]` just there!", but what if an exception is thrown at the `// ...` lines? Depending on what exactly the functions are meant to do, you have several options to consider. One obvious one is to replace the array with a vector: ``` std::vector<char> f(); std::vector<char> data = f(); int data_length = data.size(); // ... //delete[] data; ``` and now we no longer need to explicitly delete, because the vector is allocated on the stack, and its destructor is called when it goes out of scope. I should mention, in response to comments, that the above implies a *copy* of the vector, which could potentially be expensive. Most compilers will, if the `f` function is not too complex, optimize that copy away so this will be fine. (and if the function isn't called too often, the overhead won't matter *anyway*). But if that doesn't happen, you could instead pass an empty array to the `f` function by reference, and have `f` store its data in that instead of returning a new vector. If the performance of returning a copy is unacceptable, another alternative would be to decouple the choice of container entirely, and use iterators instead: ``` // definition of f template <typename iter> void f(iter out); // use of f std::vector<char> vec; f(std::back_inserter(vec)); ``` Now the usual iterator operations can be used (`*out` to reference or write to the current element, and `++out` to move the iterator forward to the next element) -- and more importantly, all the standard algorithms will now work. You could use `std::copy` to copy the data to the iterator, for example. This is the approach usually chosen by the standard library (ie. it is a good idea;)) when a function has to return a sequence of data. 
Another option would be to make your own object taking responsibility for the allocation/deallocation: ``` struct f { // simplified for the sake of example. In the real world, it should be given a proper copy constructor + assignment operator, or they should be made inaccessible to avoid copying the object f(){ // do whatever the f function was originally meant to do here size = ??? data = new char[size]; } ~f() { delete[] data; } int size; char* data; }; f data; int data_length = data.size; // ... //delete[] data; ``` And again we no longer need to explicitly delete because the allocation is managed by an object on the stack. The latter is obviously more work, and there's more room for errors, so if the standard vector class (or other standard library components) do the job, prefer them. This example is only if you need something customized to your situation. The general rule of thumb in C++ is that "if you're writing a `delete` or `delete[]` outside a RAII object, you're doing it wrong. If you're writing a `new` or `new[]` outside a RAII object, you're doing it wrong, unless the result is immediately passed to a smart pointer"
In 'proper' C++ you would return an object that contains the memory allocation somewhere inside of it. Something like a std::vector.
Prototype for function that allocates memory on the heap (C/C++)
[ "c++", "c", "buffer", "heap-memory" ]
I need to remove the extension ".tex": ``` ./1-aoeeu/1.tex ./2-thst/2.tex ./3-oeu/3.tex ./4-uoueou/4.tex ./5-aaa/5.tex ./6-oeua/6.tex ./7-oue/7.tex ``` Please show how with some of the tools below: 1. Sed and find 2. Ruby 3. Python **My Poor Try:** ``` $find . -maxdepth 2 -name "*.tex" -ok mv `sed 's@.tex@@g' {}` {} + ```
A Python script to do the same: ``` import os.path, shutil def remove_ext(arg, dirname, fnames): argfiles = (os.path.join(dirname, f) for f in fnames if f.endswith(arg)) for f in argfiles: shutil.move(f, f[:-len(arg)]) os.path.walk('/some/path', remove_ext, '.tex') ```
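Note that `os.path.walk` only exists in Python 2; a rough equivalent using `os.walk`, which also works on Python 3, might look like this (the helper name is mine, not from the answer):

```python
import os

def strip_extension(root, ext=".tex"):
    """Rename every file under root ending in ext, dropping the extension."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(ext):
                src = os.path.join(dirpath, name)
                # "./1-aoeeu/1.tex" becomes "./1-aoeeu/1"
                os.rename(src, src[:-len(ext)])
```

`os.rename` suffices here since source and destination stay in the same directory; `shutil.move` would also work.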
One way, not necessarily the fastest (but at least the quickest developed): ``` pax> for i in *.c */*.c */*/*.c ; do ...> j=$(echo "$i" | sed 's/\.c$//') ...> echo mv "$i" "$j" ...> done ``` It's equivalent since your maxdepth is 2. The script is just echoing the `mv` command at the moment (for test purposes) and working on C files (since I had no `tex` files to test with). Or, you can use find with all its power thus: ``` pax> find . -maxdepth 2 -name '*.tex' | while read line ; do ...> j=$(echo "$line" | sed 's/\.tex$//') ...> mv "$line" "$j" ...> done ```
removing extensions in subdirectories
[ "python", "ruby", "sed", "find" ]
I'm trying to automate downloading of some text files from a z/OS PDS, using Python and ftplib. Since the host files are EBCDIC, I can't simply use FTP.retrbinary(). FTP.retrlines(), when used with open(file,w).writelines as its callback, doesn't, of course, provide EOLs. So, for starters, I've come up with this piece of code which "looks OK to me", but as I'm a relative Python noob, can anyone suggest a better approach? Obviously, to keep this question simple, this isn't the final, bells-and-whistles thing. Many thanks. ``` #!python.exe from ftplib import FTP class xfile (file): def writelineswitheol(self, sequence): for s in sequence: self.write(s+"\r\n") sess = FTP("zos.server.to.be", "myid", "mypassword") sess.sendcmd("site sbd=(IBM-1047,ISO8859-1)") sess.cwd("'FOO.BAR.PDS'") a = sess.nlst("RTB*") for i in a: sess.retrlines("RETR "+i, xfile(i, 'w').writelineswitheol) sess.quit() ``` Update: Python 3.0, platform is MinGW under Windows XP. z/OS PDSs have a fixed record structure, rather than relying on line endings as record separators. However, the z/OS FTP server, when transmitting in text mode, provides the record endings, which retrlines() strips off.
**Closing update:** Here's my revised solution, which will be the basis for ongoing development (removing built-in passwords, for example):

```
import ftplib
import os
from sys import exc_info

sess = ftplib.FTP("undisclosed.server.com", "userid", "password")
sess.sendcmd("site sbd=(IBM-1047,ISO8859-1)")
for dir in ["ASM", "ASML", "ASMM", "C", "CPP", "DLLA", "DLLC", "DLMC", "GEN", "HDR", "MAC"]:
    sess.cwd("'ZLTALM.PREP.%s'" % dir)
    try:
        filelist = sess.nlst()
    except ftplib.error_perm as x:
        if (x.args[0][:3] != '550'):
            raise
    else:
        try:
            os.mkdir(dir)
        except:
            continue
        for hostfile in filelist:
            lines = []
            sess.retrlines("RETR "+hostfile, lines.append)
            pcfile = open("%s/%s"% (dir,hostfile), 'w')
            for line in lines:
                pcfile.write(line+"\n")
            pcfile.close()
        print ("Done: " + dir)
sess.quit()
```

My thanks to both John and Vinay
Just came across this question as I was trying to figure out how to recursively download datasets from z/OS. I've been using a simple python script for years now to download ebcdic files from the mainframe. It effectively just does this:

```
def writeline(line):
    file.write(line + "\n")

file = open(filename, "w")
ftp.retrlines("retr " + filename, writeline)
```
You should be able to download the file as a binary (using `retrbinary`) and use the `codecs` module to convert from EBCDIC to whatever output encoding you want. You should know the specific EBCDIC code page being used on the z/OS system (e.g. cp500). If the files are small, you could even do something like (for a conversion to UTF-8):

```
file = open(ebcdic_filename, "rb")
data = file.read()
file.close()
converted = data.decode("cp500").encode("utf8")
file = open(utf8_filename, "wb")
file.write(converted)
file.close()
```

**Update:** If you need to use `retrlines` to get the lines and your lines are coming back in the correct encoding, your approach will not work, because the callback is called once for each line. So in the callback, `sequence` will be the line, and your for loop will write individual characters in the line to the output, *each on its own line*. So you probably want to do `self.write(sequence + "\r\n")` rather than the `for` loop. It still doesn't feel especially right to subclass `file` just to add this utility method, though - it probably needs to be in a different class in your `bells-and-whistles` version.
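A minimal sketch of the per-line callback approach described in that update (the `retrlines` call is commented out, and the session and file names in it are hypothetical):

```python
lines = []

def collect(line):
    # retrlines() calls this once per line, with the record/line ending
    # already stripped, so the callback appends its own newline.
    lines.append(line + "\n")

# sess.retrlines("RETR " + hostfile, collect)  # hypothetical session/file
```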
Downloading text files with Python and ftplib.FTP from z/os
[ "", "python", "ftp", "mainframe", "zos", "" ]
Is the following code valid?

```
var i;
var objs={};

for (i=0; i <10; i++) {
    objs.i=new FooObject();
}

alert(objs.4.someMethod());
```

If not, how should it be rewritten to accomplish what I want?
You should edit your code as follows:

```
var i;
var objs = {};

for (i = 0; i < 10; i++) {
    objs[i] = new FooObject();
}

alert(objs[4].someMethod());
```
```
var i;
var objs = new Array();

for(i = 0; i < 10; i++) {
    objs.push(new FooObject());
}

objs[4].someMethod();
```
Using numbers for names of javascript object elements
[ "", "javascript", "jquery", "oop", "" ]
I wondered whether UpdateModel is considered an "expensive" operation (due to Reflection lookup of the model properties), especially when seen in the context of a larger web application (think StackOverflow)? I don't want to engage in premature optimization but I consider it a design choice to use UpdateModel which is why I'd like to know early whether it is advisable or not to go with it. The other (tedious) choice is writing my own UpdateModel method for various domain objects with fixed properties. Thank you!
You are smart to want to not engage in premature optimization. Especially since this "optimization" would favor the processor's time over yours, which is far more expensive. The primary rule of optimization is to optimize the slow stuff first. So consider how often you actually update a model versus selecting from your database backend. I'm guessing it's 1/10 as often or less. Now consider the cost of selecting from the database backend versus the cost of reflection. The cost of reflection is measured in milliseconds. The cost of selecting from the database backend can be measured in seconds at worst. My experience is that POSTs are rarely very slow, and when they are it's usually the database at fault rather than the reflection. I think you're likely to spend most of your optimization time on GETs.
Compared to network latency, database calls and general IO, the UpdateModel() call is trivial and I wouldn't bother with it.
ASP.NET MVC: Is UpdateModel an "expensive" operation (due to Reflection)?
[ "", "c#", ".net", "asp.net", "asp.net-mvc", "updatemodel", "" ]
given this table:

```
x  y
-- -
10 a
20 b
30 c
```

I want the best way to map values

```
[10,20)  -> a
[20,30)  -> b
[30,inf) -> c
```

Right now I'm using a query like:

```
select y from foo where x=(select max(x) from foo where x<=21);
```

Is there a better way to do this? Is there an analytic function that might help? Here's my test case:

```
create table foo as
select 10 as x ,'a' as y from dual union
select 20,'b' from dual union
select 30,'c' from dual;

-- returns: a,b,b:
select y from foo where x=(select max(x) from foo where x<=19);
select y from foo where x=(select max(x) from foo where x<=20);
select y from foo where x=(select max(x) from foo where x<=21);
```
You can rewrite your query to only access the foo table once instead of twice, by using the MAX-KEEP aggregate function. An example: ``` SQL> var N number SQL> exec :N := 19 PL/SQL-procedure is geslaagd. SQL> select max(y) keep (dense_rank last order by x) y 2 from foo 3 where x <= :N 4 / Y - a 1 rij is geselecteerd. SQL> exec :N := 20 PL/SQL-procedure is geslaagd. SQL> select max(y) keep (dense_rank last order by x) y 2 from foo 3 where x <= :N 4 / Y - b 1 rij is geselecteerd. SQL> exec :N := 21 PL/SQL-procedure is geslaagd. SQL> select max(y) keep (dense_rank last order by x) y 2 from foo 3 where x <= :N 4 / Y - b 1 rij is geselecteerd. ``` Also a,b,b as a result. The query plans: ``` SQL> set serveroutput off SQL> select /*+ gather_plan_statistics */ 2 y 3 from foo 4 where x = (select max(x) from foo where x<=:N) 5 / Y - b 1 rij is geselecteerd. SQL> select * from table(dbms_xplan.display_cursor(null,null,'predicate -note last')) 2 / PLAN_TABLE_OUTPUT ------------------------------------------------------------------------------------------- SQL_ID 3kh85qqnb2phy, child number 0 ------------------------------------- select /*+ gather_plan_statistics */ y from foo where x = (select max(x) from foo where x<=:N) Plan hash value: 763646971 ---------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ---------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | | | 8 (100)| | |* 1 | TABLE ACCESS FULL | FOO | 1 | 16 | 4 (0)| 00:00:01 | | 2 | SORT AGGREGATE | | 1 | 13 | | | |* 3 | TABLE ACCESS FULL| FOO | 2 | 26 | 4 (0)| 00:00:01 | ---------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter("X"=) 3 - filter("X"<=:N) 22 rijen zijn geselecteerd. 
SQL> select max(y) keep (dense_rank last order by x) y 2 from foo 3 where x <= :N 4 / Y - b 1 rij is geselecteerd. SQL> select * from table(dbms_xplan.display_cursor(null,null,'predicate -note last')) 2 / PLAN_TABLE_OUTPUT ------------------------------------------------------------------------------------------- SQL_ID avm2zh62c8cwd, child number 0 ------------------------------------- select max(y) keep (dense_rank last order by x) y from foo where x <= :N Plan hash value: 3274996510 --------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | --------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | | | 4 (100)| | | 1 | SORT AGGREGATE | | 1 | 16 | | | |* 2 | TABLE ACCESS FULL| FOO | 1 | 16 | 4 (0)| 00:00:01 | --------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 2 - filter("X"<=:N) 20 rijen zijn geselecteerd. ``` Two full table scans on foo, against one for the new query. Regards, Rob.
Here's another answer received via usenet. So far this one seems to have the most efficient execution. ``` select max(y) keep (dense_rank last order by x) from foo where x<=21; ```
Is there a better way to search over a range of values in Oracle than testing against a subquery?
[ "", "sql", "oracle", "range", "" ]
I'm trying to use reflection to set properties on some OpenXML types (e.g. Justification). Assigning a value by enumerating all possibilities is straight-forward: ``` // attr is an XmlAttribute, so .Name and .Value are Strings if (attr.Name == "Val") { if (element is Justification) { ((Justification)element).Val = (JustificationValues)Enum.Parse(typeof(JustificationValues), attr.Value); return; } else { // test for dozens of other types, such as TabStop } } ``` What makes this difficult to do via reflection is: 1) The type of the Val property is EnumValue<T>, so I don't know how to extract the type to pass as the first argument to Enum.Parse. 2) There is an implicit conversion from the actual enumeration type to the EnumValue<> type, which I don't know how to invoke with reflection. I would like the code to end up looking something like: ``` PropertyInfo pInfo = element.GetType().GetProperty(attr.Name); Object value = ConvertToPropType(pInfo.PropertyType, attr.Value); /* this would return an instance of EnumValue<JustificationValues> in this case */ pInfo.SetValue(element, value, null); ``` How do I implement ConvertToPropType? Or is there a better solution? Thanks Edit: I got a solution working using Earwicker's suggestion, but it relies on the convenient fact that the enumeration's type name can be derived from the node's type name ("Justification" -> "JustificationValues"). I'm still curious how to solve this in the general case, though. Edit2: GetGenericArguments got me the rest of the way there. Thanks.
If the attribute value is just a string, I assume you already have some way of figuring out that the string identifies a value from a specific enumeration. In your example you have it hard-coded, so I'm not sure if that's what you want or what you want to change. Assuming you know it's an enumeration, and you know which enumeration, you already know how to get an object containing a boxed value of the right `enum` type, as in your snippet. Now if I assume `EnumValue<T>` has a constructor that takes a `T`.

```
Type genericType = typeof(EnumValue<>);
Type concreteType = genericType.MakeGenericType(typeof(JustificationValues));
```

Now `concreteType` is the type `EnumValue<JustificationValues>`. From that you can get a constructor, hopefully one that takes a `JustificationValues` parameter, and `Invoke` it.

**Update**

Ahh, I see what you're doing now. You use the XML attribute name to pick a C# property. You need to be able to detect whether that property is of a type `EnumValue<T>`, and find out what `T` is.

```
PropertyInfo p = // ... get property info
Type t = p.PropertyType; // the type of the property's value, not of the PropertyInfo
if (t.IsGenericType && t.GetGenericTypeDefinition() == typeof(EnumValue<>))
{
    Type e = t.GetGenericArguments()[0]; // get first (and only) type arg
    // e is the enum type...
```

Give that a try.
.Net 4.0 adds support to do a late bound implict or explict conversion. This is simplified in the open source framework [ImpromptuInterface](http://code.google.com/p/impromptu-interface/) with it's static method called [InvokeConvert](http://code.google.com/p/impromptu-interface/wiki/UsageReallyLateBinding#dynamic_InvokeConvert%28object_target,_Type_type,_bool_explict_=fa). In your ideal example it would work like this: ``` PropertyInfo pInfo = element.GetType().GetProperty(attr.Name); Object value = Impromptu.InvokeConvert(attr.Value, pInfo.PropertyType); pInfo.SetValue(element, value, null); ```
Using reflection with generic types and implicit conversions
[ "", "c#", "reflection", "openxml", "" ]
I'd like to have a random number like this (in C#):

```
Random r = new Random();
r.Next(0, 10)
```

BUT it's important that the random number usually be near 8 (or, in general, usually big). I mean, if we use a loop:

```
for (int i = 0; i < ...; i++)
{
    // write: r.Next(0, 10)
}
```

the result should be like this:

```
8 7 6 9 1 0 5 3 2 2 3 8 9 7 7 6 2 3 8 8 9 7 2 8 2 8 4 3
```
You need a distribution function that takes a number between 0 and 1 and converts it to a number in the range you want, with a higher weight on a specific number. You could create such a function with trigonometric functions (sin, cos, ...), exponential, or maybe a polynomial. UPDATE: Have a look at [this page](http://en.wikipedia.org/wiki/Discrete_probability_distribution) for more information on probability distribution
You need to weight your results. You can do that with something like this:

```
private int[] _distribution = new int[] { 0, 1, 2, 3, 4, 5, 6, 6, 7, 7, 8, 8, 8, 9, 9 };
Random _r = new Random();

public int GetWeightedRandom()
{
    return _distribution[_r.Next(0, _distribution.Length)];
}
```

If I knew my range was small and consistent, I'd use the table - it's trivial to make it its own class. For completeness, I'll also add this class in. This class borrows from image processing and uses the gamma correction function: a value between 0 and 1 raised to gamma, which returns a value between 0 and 1 but distributed more to the high end if gamma < 1.0 and more to the low end if gamma > 1.0.

```
public class GammaRandom
{
    double _gamma;
    Random _r;

    public GammaRandom(double gamma)
    {
        if (gamma <= 0)
            throw new ArgumentOutOfRangeException("gamma");
        _gamma = gamma;
        _r = new Random();
    }

    public int Next(int low, int high)
    {
        if (high <= low)
            throw new ArgumentOutOfRangeException("high");
        double rand = _r.NextDouble();
        rand = Math.Pow(rand, _gamma);
        return (int)((high - low) * rand) + low;
    }
}
```

(from comments, moved r out of GetWeightedRandom(). Also added range checking to Next())

OK, let's really go to town here. I'm channeling Jon Skeet for this - it's an abstract class with a template property that returns a transform function that maps the range [0..1) to [0..1) and scales the random number to that range. I also reimplemented gamma in terms of it and implemented sin and cos as well.
```
public abstract class DelegatedRandom
{
    private Random _r = new Random();

    public int Next(int low, int high)
    {
        if (high <= low)
            throw new ArgumentOutOfRangeException("high");
        double rand = _r.NextDouble();
        rand = Transform(rand);
        if (rand >= 1.0 || rand < 0)
            throw new Exception("internal error - expected transform to be between 0 and 1");
        return (int)((high - low) * rand) + low;
    }

    protected abstract Func<double, double> Transform { get; }
}

public class SinRandom : DelegatedRandom
{
    private static double pihalf = Math.PI / 2;
    protected override Func<double, double> Transform
    {
        get { return r => Math.Sin(r * pihalf); }
    }
}

public class CosRandom : DelegatedRandom
{
    private static double pihalf = Math.PI / 2;
    protected override Func<double, double> Transform
    {
        get { return r => Math.Cos(r * pihalf); }
    }
}

public class GammaRandom : DelegatedRandom
{
    private double _gamma;

    public GammaRandom(double gamma)
    {
        if (gamma <= 0)
            throw new ArgumentOutOfRangeException("gamma");
        _gamma = gamma;
    }

    protected override Func<double, double> Transform
    {
        get { return r => Math.Pow(r, _gamma); }
    }
}
```
special random number
[ "", "c#", "random", "numbers", "" ]
In a .NET Windows application I'm trying to modify a remote machine's config file that is used by an ASP.NET application. However, I keep getting the error: `System.IO.IOException: The process cannot access the file '[file name]' because it is being used by another process.` Now, this may not be the problem, but I'm figuring that if I can stop the IIS, then I can modify the machine config file (without getting the exception), and then I can restart the IIS using this code:

```
Process proc = new Process();
proc.EnableRaisingEvents = false;
proc.StartInfo.WindowStyle = ProcessWindowStyle.Hidden;
proc.StartInfo.FileName = "iisreset";
proc.StartInfo.Arguments = serverName;
try
{
    proc.Start();
    proc.WaitForExit();
...
```

1) Is there a way to stop the IIS *without* restarting it, and 2) Does this approach to changing the server.config file even make sense? (note, I am modifying the file with regular expressions search and replace; is this a problem?)
You should be able to do something like this. I don't have windows, so I can't check the exact name of the service, but I think it is "IISADMIN" or "w3svc". Remember this should be the service name and not the display name you see in the service control panel.

```
ServiceController controller = new ServiceController();
controller.MachineName = "."; // or the remote machine name
controller.ServiceName = "IISADMIN"; // or "w3svc"
string status = controller.Status.ToString();

// Stop the service
controller.Stop();

// Start the service
controller.Start();
```

You can also use

> net stop w3svc

or

> net stop IISADMIN

from the commandline or in your process in your code
Strange. A .config file should not be locked exclusively. But to answer your question, you can also use the net command for this: ``` net stop w3svc ``` to stop the www service, and ``` net start w3svc ``` to start it again. You can also do this programmatically as described by @monkeyp **Note** that I would advice against this and first try to determine (and resolve) the cause of the lock as described by @RichardOD.
Can I stop an IIS?
[ "", "c#", ".net", "iis", "configuration", "" ]
Right now I'm playing with [ScrollTable from GWT Incubator](http://code.google.com/docreader/#p=google-web-toolkit-incubator&s=google-web-toolkit-incubator&t=ScrollTable). I'm trying to make functionality when user select one row then click on Edit button and then he will be able to edit that particular Object. Right now I have to check what row has been selected ``` Integer secRowPosition = e.getSelectedRows().iterator().next().getRowIndex(); ``` then query my dataTable for row and column to select unique id and then query my database for that object: ``` myObject = getObjectFromDBbyID(dataTable.getText(secRowPosition, 0)); ``` This method works fine for me, but is it possible to get that object straight from the table and not form database so I can save some time without querying my database. I assume I need to assign each object to the row in ScrollTable in order to do that. Any ideas?
There may be better ways but one approach I have used is to store a reference to the object you wish to edit inside a Button widget. You'd do this by extending Button and adding an instance to every row. The appropriate object is then readily available whenever the corresponding edit button is clicked.
If you're populating your table with a list of objects, simply get the index of the table row and get the corresponding object from the list.
Get Object from GWT ScrollTable
[ "", "java", "gwt", "" ]
I was wondering if there is a class out there that implements both the `Map` and `List` interfaces in Java. I have a data structure that is primarily a `Map`. I map strings (IDs) to `Image`s. But in a specific part of my code, I need to present the user with all the available IDed `Images`. The only way to do that so far is to write this: ``` for (String id : myMap.keySet()) { // get the image like this "myMap.get(id)" } ``` So it would be nice to have a class that implements both `Map` and `List` so I could simply write: ``` for (Image img : myMap) { // the image is img } ``` Does anyone know of such an implementation? **EDIT**: After viewing the answers (which are all correct, voted up), I now realize I would also need the map to be sorted. When I say "sorted", all I mean is that I would like it to have the values in a specific order, one that I would be able to modify. I know this is not the original question, but I just realized that I need that. **EDIT 2**: It seems I am indecisive. What I need is an ordered map, not a sorted one. Sorry for the confusion, people.
If you need your items in a specific order, [LinkedHashMap](http://java.sun.com/javase/6/docs/api/java/util/LinkedHashMap.html) is your friend - it keeps items in insertion order. TreeMap will keep your items in an order defined by either a Comparator you give or a compareTo method of the key.
For an ordered Map, look at the [`LinkedHashMap`](http://java.sun.com/javase/6/docs/api/java/util/LinkedHashMap.html). That will keep your keys in the order of insertion. If you use a [`SortedMap`](http://java.sun.com/javase/6/docs/api/java/util/SortedMap.html) it will keep the keys in sorted order. (The [`TreeMap`](http://java.sun.com/javase/6/docs/api/java/util/TreeMap.html) is the most common implementation.) What you can use is `map.entrySet()`. That will allow you to iterate over the Set of MapEntries. Check out the [javadoc](http://java.sun.com/javase/6/docs/api/java/util/Map.html#entrySet()) for a bit more info.
Ordered List Map implementation in Java
[ "", "java", "list", "dictionary", "" ]
```
var foo = "bar";

new Func<String>(() =>
{
    var foo = ""; // This can't be done in C#. Why is that?

    /* In JavaScript, this is perfectly valid, since this scope
       (the anonymous function) is disconnected from the outer scope,
       and any variable declared within this scope will not affect
       variables in the outer scope */
})()
```
Actually, even in javascript it *isn't entirely* disconnected; javascript allows lexical closures - so without the `var`, the old value of `foo` should still be available. The difference is that javascript **chooses** to allow you to re-declare the name with a different meaning (in the inner scope). C# **chooses** not to. I find the C# version less easy to get confused about! In particular when code (further down in the method) expects to be talking about the "old" variable, and suddenly it starts looking at the "new" one.
C# captures local variables inside the anonymous function. This is actually a very powerful feature that JavaScript also supports but in a slightly different way. This concept is what computer scientists call a [closure](http://en.wikipedia.org/wiki/Closure_(computer_science)). By capturing local variables they can become part of the state of the function itself thus giving you more flexibility.
C# Anonymous function-scope
[ "", "c#", "lambda", "anonymous-function", "" ]
As the title says, in a multiple ethernet interfaces with multiple IP environment, the default Django test server is not attached to the network that I can access from my PC. Is there any way to specify the interface which Django test server should use? -- Added -- The network configuration is here. I'm connecting to the machine via 143.248.x.y address from my PC. (My PC is also in 143.248.a.b network.) But I cannot find this address. Normal apache works very well as well as other custom daemons running on other ports. The one who configured this machine is not me, so I don't know much details of the network... ``` eth0 Link encap:Ethernet HWaddr 00:15:17:88:97:78 inet addr:192.168.6.100 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::215:17ff:fe88:9778/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:441917680 errors:0 dropped:0 overruns:0 frame:0 TX packets:357190979 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:191664873035 (178.5 GB) TX bytes:324846526526 (302.5 GB) eth1 Link encap:Ethernet HWaddr 00:15:17:88:97:79 inet addr:172.10.1.100 Bcast:172.10.1.255 Mask:255.255.255.0 inet6 addr: fe80::215:17ff:fe88:9779/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1113794891 errors:0 dropped:97 overruns:0 frame:0 TX packets:699821135 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:843942929141 (785.9 GB) TX bytes:838436421169 (780.8 GB) Base address:0x2000 Memory:b8800000-b8820000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:1085510396 errors:0 dropped:0 overruns:0 frame:0 TX packets:1085510396 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:422100792153 (393.1 GB) TX bytes:422100792153 (393.1 GB) peth0 Link encap:Ethernet HWaddr 00:15:17:88:97:78 inet6 addr: fe80::215:17ff:fe88:9778/64 Scope:Link UP BROADCAST RUNNING PROMISC 
MULTICAST MTU:1500 Metric:1 RX packets:441918386 errors:0 dropped:742 overruns:0 frame:0 TX packets:515286699 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:199626686230 (185.9 GB) TX bytes:337365591758 (314.1 GB) Base address:0x2020 Memory:b8820000-b8840000 veth0 Link encap:Ethernet HWaddr 00:00:00:00:00:00 BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) veth1 Link encap:Ethernet HWaddr 00:00:00:00:00:00 BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) veth2 Link encap:Ethernet HWaddr 00:00:00:00:00:00 BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) veth3 Link encap:Ethernet HWaddr 00:00:00:00:00:00 BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) vif0.0 Link encap:Ethernet HWaddr fe:ff:ff:ff:ff:ff BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) vif0.1 Link encap:Ethernet HWaddr fe:ff:ff:ff:ff:ff BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) vif0.2 Link encap:Ethernet HWaddr fe:ff:ff:ff:ff:ff BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 
dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) vif0.3 Link encap:Ethernet HWaddr fe:ff:ff:ff:ff:ff BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) ``` -- Added (2) -- Finally I used w3m (a text-mode web browser which runs on terminal) to connect from localhost. :P
I think the OP is referring to having multiple interfaces configured on the test machine. You can specify the IP address that Django will bind to as follows: ``` # python manage.py runserver 0.0.0.0:8000 ``` This would bind Django to all interfaces on port 8000. You can pass any active IP address in place of 0.0.0.0, so simply use the IP address of the interface you want to bind to. Hope this helps.
Yes, if the IP of your interface is for example 192.168.1.2 and you want to run on port 8080, start the development server like this: ``` ./manage.py runserver 192.168.1.2:8080 ```
How to specify which eth interface Django test server should listen on?
[ "", "python", "django", "networking", "ethernet", "" ]
I have an includes.php page that I load at the start of every page of my website. As I develop the website, the number of classes that I am using is growing. So I end up with something like this: ``` $db = new DB($config); $login = new Login($db, $config); $form = new Form($db, $config); ``` And the list goes on and on. I have two questions about this practice: First, Considering I might not be using a class at a certain page (I might not have a $form on every single page), how much does it really matter, performance-wise, to load this class every time any given page is loaded? Second, you may have noticed that I am passing the class instance $db to all the other classes, as well as a variable $config. In the php code of every class, I do something like this: ``` public $db; public $config; public function __construct($db, $config, $smarty){ $this->db = $db; $this->config = $config; } ``` then in the class methods, I address the database and config files with 'this' as such: ``` public function myfunction(){ $this->db; $this->config; } ``` When should I use 'extends' rather than passing $db to the class, assuming every class uses the db? Does passing $db to every class hurt in any way in terms of performance? Thanks!
> *When should I use 'extends' rather than passing $db to the class, assuming every class uses the db?*

When it makes sense -- and only when it does !

You have at least two things to consider :

* "`class A extends B`" kind of means "`class A **is a** B`"
  + more clearly, a `Car` is a `MotorVehicule` ; a `MotorVehicule` is a `Vehicule` ; a `Bus` is a `MotorVehicule` ; a Bicycle is a `Vehicule`
  + however, a `Ball` is not a `Vehicule`
  + In your case, a `Form` is definitely not a `DataBase` ! Nor is a `Login`
* In PHP, a class can only `extend` **one** class
  + You can not have something being both a `Vehicule` and an `Animal`
  + But a `Car` **is a** `MotorVehicule`, which, itself, **is a** `Vehicule` **:-)**

In the case of a Database object *(in your case, it's more a connection to a DB)*, most of your classes will not themselves "be" a database connection. So, they shouldn't extend that class. However, they are using a DB connection *(a `Form` "**has a**" DB connection)* ; so, they should have a property representing that DB connection. That's what you are doing.

Instead of passing `$db` to each constructor, you might use

* either the [Singleton](http://en.wikipedia.org/wiki/Singleton) design pattern
* or the Registry design pattern
* or some kind of global variable, but that's almost the same... just being worse *(not OOP and all that)* !

But passing the `$db` object is great for unit-testing, mock objects, and all that... I think it could be considered as being the Dependency Injection design pattern, btw *(not sure, but looks like it)*

---

About loading lots of classes, other people gave answers :

* Use autoloading if you can
* Use an opcode cache, like APC, if you can

Both of those are great suggestions that you should take into consideration **;-)**

---

One last thing :

> *Does passing $db to every class hurt in any way in terms of performance?*

Maybe it does a **little** bit ; but, honestly, except if you are google and have millions of users... who cares ?
If you are doing a couple of DB queries, those will take LOTS of time, compared to passing one more parameter to even a dozen methods ! So, the small amount of time spent passing parameters can probably be neglected **:-)**
Have you tried something like this? ``` function __autoload($class_name) { require_once("includes/php/class." . $class_name . ".php"); } ``` So it only loads the class name when the class name is encountered. (Change the path to suit your php classes... mine are like class.Object.php, with the class name "Object").
performance of loading php classes and the use of 'extends'
[ "", "php", "oop", "class", "" ]
I want to read a line in a file and insert the new line ("\n") character at position n of the line, so that a 9-character line, for instance, gets converted into three 3-character lines, like this:

```
"123456789"     (before)
"123\n456\n789" (after)
```

I've tried with this:

```
f = open(file, "r+")
f.write("123456789")
f.seek(3, 0)
f.write("\n")
f.seek(0)
f.read()
```

-> '123\n56789'

I want it not to substitute the character in position n, but only to insert another ("\n") char in that position. Any idea about how to do this? Thanks
I don't think there is any way to do that in the way you are trying to: you would have to read everything from the position where you want to insert up to the end of the file, write your new character at that position, then write the saved data back after it. This is the same way things would work in C or any language with a seek() type API. Alternatively, read the file into a string, then use list methods to insert your data. ``` source_file = open("myfile", "r") file_data = list(source_file.read()) source_file.close() file_data.insert(position, data) # join the list of characters back into a string before writing it out open("myfile", "w").write("".join(file_data)) ```
``` with open(file, 'r+') as f: data = f.read() f.seek(0) for i in range(len(data)): # could also use 'for i, chara in enumerate(data):' and then 'f.write(chara)' f.write(data[i]) # insert a newline after every 3rd character (the character itself is kept) if (i + 1) % 3 == 0 and i + 1 < len(data): f.write('\n') ``` I don't think it's all that Pythonic (due to the `range(len(data))`), but it should work, unless your data file is really really large (in which case you'll have to process the data in the file part by part and store the results in another file to prevent overwriting data you haven't processed yet). ([More on the `with` statement.](http://docs.python.org/3.1/reference/compound_stmts.html#the-with-statement))
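The same transformation can also be done in one pass over the whole string (a sketch, not taken from either answer; `insert_newlines` is an illustrative name, and it assumes the file content fits in memory):

```python
def insert_newlines(text, n=3):
    # Join fixed-size slices of the string with newline separators.
    return "\n".join(text[i:i + n] for i in range(0, len(text), n))

print(insert_newlines("123456789"))  # prints 123, 456, 789 on separate lines
```

Read the file into a string, pass it through this helper, and write the result back out.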
Writing in file's actual position in Python
[ "python", "file" ]
This question is in relation to another question asked here: [Sorting 1M records](https://stackoverflow.com/questions/1180240/best-way-to-sort-1m-records-in-python) I have since figured out the problem I was having with sorting. I was sorting items from a dictionary into a list every time I updated the data. I have since realized that a lot of the power of Python's sort resides in the fact that it sorts data more quickly that is already partially sorted. So, here is the question. Suppose I have the following as a sample set: ``` self.sorted_records = [(1, 1234567890), (20, 1245678903), (40, 1256789034), (70, 1278903456)] ``` `t[1]` of each tuple in the list is a unique id. Now I want to update this list with the following: ``` updated_records = {1245678903:45, 1278903456:76} ``` What is the fastest way for me to do so ending up with ``` self.sorted_records = [(1, 1234567890), (45, 1245678903), (40, 1256789034), (76, 1278903456)] ``` Currently I am doing something like this: ``` updated_keys = updated_records.keys() for i, record in enumerate(self.sorted_data): if record[1] in updated_keys: updated_keys.remove(record[1]) self.sorted_data[i] = (updated_records[record[1]], record[1]) ``` But I am sure there is a faster, more elegant solution out there. Any help? **\* edit** It turns out I used bad examples for the ids since they end up in sorted order when I do my update. I am actually interested in t[0] being in sorted order. After I do the update I was intending on resorting with the updated data, but it looks like bisect might be the ticket to insert in sorted order. **end edit \***
Since apparently you don't care about the ending value of `self.sorted_records` actually *being* sorted (you have values in order 1, 45, 20, 76 -- that's NOT sorted!-), AND you only appear to care about IDs in `updated_records` that are also in `self.sorted_data`, a listcomp (with side effects if you want to change the updated\_record on the fly) would serve you well, i.e.: ``` self.sorted_data = [(updated_records.pop(recid, value), recid) for (value, recid) in self.sorted_data] ``` the `.pop` call removes from `updated_records` the keys (and corresponding values) that are ending up in the new `self.sorted_data` (and the "previous value for that `recid`", `value`, is supplied as the 2nd argument to pop to ensure no change where a recid is NOT in `updated_record`); this leaves in `updated_record` the "new" stuff so you can e.g. append it to `self.sorted_data` before re-sorting, i.e. I suspect you want to continue with something like ``` self.sorted_data.extend((value, recid) for recid, value in updated_records.iteritems()) self.sorted_data.sort() ``` though this part DOES go beyond the question you're actually asking (and I'm giving it only because I've seen your *previous* questions;-).
You're scanning through all n records. You could instead do a binary search, which would be O(log(n)) instead of O(n). You can use the [`bisect`](http://docs.python.org/library/bisect.html) module to do this.
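A minimal sketch of that bisect idea (illustrative names; it assumes the list is kept sorted on `t[0]` and that the old tuple is present):

```python
import bisect

def replace_value(sorted_records, old_record, new_value):
    # Remove the old (value, id) tuple via binary search, then
    # bisect-insert the replacement, keeping the list sorted on t[0].
    i = bisect.bisect_left(sorted_records, old_record)
    del sorted_records[i]  # assumes old_record is present at index i
    bisect.insort(sorted_records, (new_value, old_record[1]))

records = [(1, 1234567890), (20, 1245678903), (40, 1256789034)]
replace_value(records, (20, 1245678903), 45)
print(records)  # [(1, 1234567890), (40, 1256789034), (45, 1245678903)]
```

Each replacement costs O(log n) to locate plus O(n) for the list shifts, which still beats re-sorting the whole list for a small number of updates.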
Python: update a list of tuples... fastest method
[ "python" ]
I am writing a web application which will include several parts - interactive calendar, todo list, managing finances,... How should I design my solution? I thought something like this: each part has 3 projects (main project, DAL, BLL). So in my case I would have 9 projects in my solution: * Calendar * CalendarDAL * CalendarBLL * Todo * TodoDAL * TodoBLL * Money * MoneyDAL * MoneyBLL Would this design be OK? Also: where should web.config be? In it I have a connectionString which I would like to call from all DAL projects. Now I had web.config file in Calendar project and when I wanted to create dataAdapter in CalendarDAL with designer, I couldn't use existing connectionString from web.config. Thanks
Unless you need to be able to separate and use the logic of this code in multiple applications, there is really no need to separate it into that many projects. It adds complexity but doesn't really add value. I used to separate the general BL library from the DL library but realized I wasn't really getting anything out of it...and I was making some things more annoying in the process. **What is most important in separating code is the logical separation, not the physical separation into separate dlls**. Also, instead of breaking this up into separate web apps, put them in one. It will be a lot easier to develop and deploy. This allows you to use one web.config. If they are separate websites then create different web projects. If they are not, then don't. **[Edited]** One thing I meant to add, which is important, is this: The question of how you should design this is actually too general to really come up with a real answer. Those thoughts are just my general thoughts on project organization, which is what the question really seemed to revolve around.
Read up on MVC or n-tier programming. Three basic layers: 1. Your view: the aspx web pages 2. A controller: allows the view to interact with the model (kinda like encapsulation); it's just one class that acts as a go-between. 3. A model: in here is your database/XML data and your functionality. This is where the magic happens. Work in increments. First make the most basic of websites, then add functionality (one new feature at a time), test it, then move on.
How to design my solution
[ "c#", "projects-and-solutions" ]
I have this code template in Eclipse **@since ${date}** and when entered I get something like this: **@since 4.8.2009** But when I add the same template (**@since ${date}**) to NetBeans it outputs **@since date** Can someone help? No answer yet? Is this not possible in NetBeans???
Something like the following example should do the job: ``` ${date?date?string("yyyy")}.${date?date?string("MM")}.${date?date?string("dd")} ``` * yyyy => year on 4 digits (ex: 2012) * MM => month on 2 digits (ex: March -> 03) * dd => day of the month on 2 digits (ex: 23) * . => the separator you want between fields (ex: - or / or . or something else) You will have to check the available formats somewhere in the NetBeans help (sorry, I haven't found information about this for now). I see that this is a very old post, but if it may be useful for someone ... regards. ollie314
Not wanting to raise the dead with this post, but I thought it worth mentioning so I signed up to SO specifically to clarify, since Ollie314 saved me a lot of time. The format ollie314 used is correct (for version 7.1+ at least) BUT just to be clear, if it's not displaying correctly it may be due to your system locale settings, if outside of USA. Be sure to include <#setting locale="en\_AU"> (replace en\_AU with your locale id) in the *template* you are editing, prior to the date?date?string cast declaration, or it will not work. If you place it in the user settings it won't cast the date string in the template and you will generate errors in your template output. The documentation from Netbeans isn't particularly clear on that. Still, best IDE ever :)
NetBeans Code Templates ${date}?
[ "php", "eclipse", "netbeans", "code-templates" ]
I have a FileStream that I read with a StreamReader (it is a very large file). How can I set the Seek position of the FileStream to the first occurrence of a certain substring so that I can start reading this large file from a given point? Thanks
What's in the file? Just lines of Unicode text? Then you've got a problem. You will never know the position of the start of a line until you've read all the previous lines at least once. Unless the file is encoded in UTF-32, each character may take a variable number of bytes to represent it. Each line will have a variable length. The best you can do is to scan through the file once and then make note of the positions of the starts of lines, in an index.
`FileStream` cannot do the search for you. You'll have to manually search for it. Probably you'll want to use an efficient string searching algorithm such as [Knuth Morris Pratt](http://en.wikipedia.org/wiki/Knuth-Morris-Pratt_algorithm).
C#: String.IndexOf to FileStream.Seek
[ "c#" ]
I'm starting to learn python and loving it. I work on a Mac mainly as well as Linux. I'm finding that on Linux (Ubuntu 9.04 mostly) when I install a python module using apt-get it works fine. I can import it with no trouble. On the Mac, I'm used to using Macports to install all the Unixy stuff. However, I'm finding that most of the python modules I install with it are not being seen by python. I've spent some time playing around with PATH settings and using python\_select. Nothing has really worked and at this point I'm not really understanding, instead I'm just poking around. I get the impression that Macports isn't universally loved for managing python modules. I'd like to start fresh using a more "accepted" (if that's the right word) approach. **So, I was wondering, what is the method that Mac python developers use to manage their modules?** Bonus questions: Do you use Apple's python, or some other version? Do you compile everything from source or is there a package manager that works well (Fink?).
The most popular way to manage python packages (if you're not using your system package manager) is to use setuptools and easy\_install. It is probably already installed on your system. Use it like this: ``` easy_install django ``` easy\_install uses the [Python Package Index](http://pypi.python.org) which is an amazing resource for python developers. Have a look around to see what packages are available. A better option is [pip](http://pypi.python.org/pypi/pip), which is gaining traction, as it attempts to [fix a lot of the problems](http://www.b-list.org/weblog/2008/dec/14/packaging/) associated with easy\_install. Pip uses the same package repository as easy\_install, it just works better. Really the only time use need to use easy\_install is for this command: ``` easy_install pip ``` After that, use: ``` pip install django ``` At some point you will probably want to learn a bit about [virtualenv](http://pypi.python.org/pypi/virtualenv). If you do a lot of python development on projects with conflicting package requirements, virtualenv is a godsend. It will allow you to have completely different versions of various packages, and switch between them easily depending your needs. Regarding which python to use, sticking with Apple's python will give you the least headaches, but If you need a newer version (Leopard is 2.5.1 I believe), I would go with the [macports](http://trac.macports.org/browser/trunk/dports/lang/python26/Portfile) python 2.6.
Your question is already three years old and there are some details not covered in other answers: Most people I know use [HomeBrew](https://brew.sh/) or [MacPorts](https://trac.macports.org/wiki/Python), I prefer MacPorts because of its clean cut of what is a default Mac OS X environment and my development setup. Just move out your */opt* folder and test your packages with a normal user Python environment MacPorts is only portable within Mac, but with easy\_install or pip you will learn how to setup your environment in any platform (Win/Mac/Linux/Bsd...). Furthermore it will always be more up to date and with more packages I personally let MacPorts handle my Python modules to keep everything updated. Like any other high level package manager (ie: apt-get) it is much better for the heavy lifting of modules with lots of binary dependencies. There is no way I would build my Qt bindings (PySide) with easy\_install or pip. Qt is huge and takes a lot to compile. As soon as you want a Python package that needs a library used by non Python programs, try to avoid easy\_install or pip At some point you will find that there are some packages missing within MacPorts. I do not believe that MacPorts will ever give you the whole [CheeseShop](https://web.archive.org/web/20180415050937/https://pypi.python.org/pypi). For example, recently I needed the [Elixir](https://web.archive.org/web/20150202214141/http://elixir.ematia.de/trac/wiki) module, but MacPorts only offers py25-elixir and py26-elixir, no py27 version. In cases like these you have: > pip-2.7 install --user elixir ( make sure you always type pip-(version) ) That will build an extra Python library in your home dir. Yes, Python will work with more than one library location: one controlled by MacPorts and a user local one for everything missing within MacPorts. Now notice that I favor pip over easy\_install. There is a good reason you should avoid setuptools and easy\_install. 
Here is a [good explanation](https://www.b-list.org/weblog/2008/dec/14/packaging/) and I try to keep away from them. One very useful feature of pip is giving you a list of all the modules (along with their versions) that you installed with MacPorts, easy\_install and pip itself: > pip-2.7 freeze If you already started using easy\_install, don't worry, pip can recognize everything done already by easy\_install and even upgrade the packages installed with it. If you are a developer keep an eye on [virtualenv](https://pypi.org/project/virtualenv/) for controlling different setups and combinations of module versions. Other answers mention it already; what is not mentioned so far is the [Tox](https://pypi.org/project/tox/) module, a tool for testing that your package installs correctly with different Python versions. Although I usually do not have version conflicts, I like to have virtualenv to set up a clean environment and get a clear view of my packages' dependencies. That way I never forget any dependencies in my setup.py. If you go for MacPorts be aware that multiple versions of the same package are not selected anymore like the old Debian style with an extra python\_select package (it is still there for compatibility). Now you have the select command to choose which Python version will be used (you can even select the Apple installed ones): ``` $ port select python Available versions for python: none python25-apple python26-apple python27 (active) python27-apple python32 $ port select python python32 ``` Add tox on top of it and your programs should be really portable
What is the most compatible way to install python modules on a Mac?
[ "python", "macos", "module", "package", "macports" ]
Apologies for the awkward wording in this question; I'm still trying to wrap my head around the beast that is Drupal and haven't quite gotten the vocabulary down yet. I'm looking to access all rows in a view as an array (so I can apply some array sorting and grouping functions before display) in a display output. The best I can tell, you are able to access individual rows as an array using row-style output, but seemingly not in display output. Thanks!
Ultimately, I had to use node\_load on each item and load the results of that into an array. Inefficient, but it worked.
1. You have to change the Row style setting to NODE. 2. Click on Theme Information. 3. Create a file with the name of one you find in the Display output point (I would use the second one, e.g. `views-view--portfolio.tpl.php`) 4. And now you can use your own Node Template and access the $node variable.
Drupal: Accessing all rows in a view as an array
[ "php", "drupal", "drupal-6" ]
I've read and heard from several people that [WPF](http://en.wikipedia.org/wiki/Windows_Presentation_Foundation) has a pretty steep learning curve (depending on how knowledgeable or experienced you are). Seems like most people can get the demo or starter projects to work, and then find themselves getting stuck for lengths of time on miscellaneous problems. I'm curious what specifically is difficult to learn or understand (layers, SDK, [XAML](http://en.wikipedia.org/wiki/Extensible_Application_Markup_Language), data binding, etc.) and if you have any suggestions on how to avoid/alleviate some of those difficult topics?
WPF is different; there is no getting away from that. **My main advice is do not be afraid of XAML; Embrace it, that is where the power is!** Let me explain:- For me to be productive, I prefer to write XAML in the text view as I can create the bare layout of the window in a few keystrokes. If you regularly type code then this is a very quick way to develop windows. You can then use the Visual editors to make it look pretty. If you keep in your mind that each element of the XAML will "`new`" that object and that each attribute of that XAML element is a property of the object, you can think of XAML as object creation and assignments of properties. Very similar to writing code. If you spend too much time in the visual designer then you do not get to appreciate this, and for some this will slow down the learning curve. A recent [Hanselminutes](http://www.hanselminutes.com/default.aspx?showID=184) podcast may interest you. I also advise strongly to learn early the concepts of Views and View-Models, even if you do not subscribe to all that is part of [CompositeWPF](http://msdn.microsoft.com/en-us/library/cc707819.aspx) as this really does help.
There is a nice article from Karsten Januszewski called [Hitting the Curve: On WPF and Productivity](https://blogs.msdn.com/karstenj/archive/2006/04/05/curve.aspx) you might find interesting: > Let's be clear: WPF comes with a > curve. I've now watched a bunch of > developers hit that curve. And the > curve is steep. We're talking between > two weeks to two months of curve > depending on the developer and the > level of experience/intuition the > developer has. There will be moments > of total mystification and plenty of > moments of illumination. It is at once > painful and enjoyable, if the > developer delights in the discovery of > a deep and well thought out UI > platform. It is at once familiar and > alien. There are many similarities to > other UI development paradigms: styles > feel like CSS, well sort of. XAML > code behind feels like ASP.NET, well > sort of. 3D feels like DX or OpenGL, > well sort of. Routed events feel like > .NET events, well sort of. Dependent > properties feel like properties, well > sort of. The list could go on. But > admidst these (sort of) familiar > metaphors there are so many alien > concepts that must be mastered: > control templating, storyboards, > databinding come to mind immediately. > It is not a trivial curve and don't > expect to be productive on day 1 or > even week 1 or even month 1. It's all worth it though! ;)
How bad is the WPF Learning Curve?
[ "c#", "wpf" ]
I'm trying to have python delete some directories and I get access errors on them. I think its that the python user account doesn't have rights? ``` WindowsError: [Error 5] Access is denied: 'path' ``` is what I get when I run the script. I've tried ``` shutil.rmtree os.remove os.rmdir ``` they all return the same error.
We've had issues removing files and directories on Windows, even if we had just copied them, if they were set to 'readonly'. `shutil.rmtree()` lets you supply an error-handling callback for exactly this situation. You call it and provide an error handler like this: ``` import errno, os, stat, shutil def handleRemoveReadonly(func, path, exc): excvalue = exc[1] if func in (os.rmdir, os.remove) and excvalue.errno == errno.EACCES: os.chmod(path, stat.S_IRWXU| stat.S_IRWXG| stat.S_IRWXO) # 0777 func(path) else: raise shutil.rmtree(filename, ignore_errors=False, onerror=handleRemoveReadonly) ``` You might want to try that.
I've never used Python, but I would assume it runs as whatever user executes the script.
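For a single file rather than a whole tree, the read-only case can be handled with a small retry helper (a sketch; `force_remove` is an illustrative name, and it assumes the read-only attribute is what causes "Error 5"):

```python
import os
import stat

def force_remove(path):
    # Try a normal delete first; on failure, clear the read-only bit
    # (a common cause of "Error 5: Access is denied" on Windows) and retry.
    try:
        os.remove(path)
    except OSError:
        os.chmod(path, stat.S_IWRITE)
        os.remove(path)
```

If the retry still fails, the problem is a real permission issue (or a file locked by another process) rather than the read-only attribute.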
What user do python scripts run as in windows?
[ "python", "windows", "file-permissions" ]
Can extension methods be applied to the class? For example, extend DateTime to include a Tomorrow() method that could be invoked like: ``` DateTime.Tomorrow(); ``` I know I can use ``` static DateTime Tomorrow(this DateTime value) { //... } ``` Or ``` public static class MyClass { public static DateTime Tomorrow() { //... } } ``` for a similar result, but how can I extend DateTime so that I could invoke DateTime.Tomorrow?
You cannot add methods to an existing type unless the existing type is marked as partial, you can only add methods that *appear* to be a member of the existing type through extension methods. Since this is the case you cannot add static methods to the type itself since extension methods use instances of that type. There is nothing stopping you from creating your own static helper method like this: ``` static class DateTimeHelper { public static DateTime Tomorrow { get { return DateTime.Now.AddDays(1); } } } ``` Which you would use like this: ``` DateTime tomorrow = DateTimeHelper.Tomorrow; ```
Use an [extension method](http://msdn.microsoft.com/en-us/library/bb383977.aspx). Ex: ``` namespace ExtensionMethods { public static class MyExtensionMethods { public static DateTime Tomorrow(this DateTime date) { return date.AddDays(1); } } } ``` Usage: ``` DateTime.Now.Tomorrow(); ``` or ``` AnyObjectOfTypeDateTime.Tomorrow(); ```
How do I extend a class with c# extension methods?
[ "c#", "extension-methods" ]
I have a table named `user_ips` to keep track of users in case they delete their cookies or change browser. So anyway, the following code is simple. It updates entries in user\_ips that are equal to the user's id and IP. If the query did not update any rows, then it means that IP for that user is not in the table, so it inserts it. ``` $site->query('UPDATE `user_ips` SET `last_time` = UNIX_TIMESTAMP(), `user_agent` = \''.$this->user_agent.'\' WHERE `ip` = '.$this->ip.' AND `userid` = '.$this->id); if(mysql_affected_rows() == 0) { $site->query('INSERT INTO `user_ips` SET `userid` = '.$this->id.', `ip` = '.$this->ip.', `first_time` = UNIX_TIMESTAMP(), `last_time` = UNIX_TIMESTAMP(), `user_agent` = \''.$this->user_agent.'\''); } ``` The problem is mysql\_affected\_rows() sometimes returns 0 even if a row with the user's current ID and IP exists. So then the code adds another row to the table with the same IP. In case you are wondering, $site is mysql class I made for my website and the only query it executes is the one passed to it by query(), and nothing more, so this is not a problem with the class. Oh and the IP is stored as a long IP, so it does not need quotes around it.
I'm directly quoting the PHP documentation here: *When using UPDATE, MySQL will not update columns where the new value is the same as the old value. This creates the possibility that mysql\_affected\_rows() may not actually equal the number of rows matched, only the number of rows that were literally affected by the query.* So in your case, mysql\_affected\_rows() will return 0 when UNIX\_TIMESTAMP() returns the same value (for example, two requests from the same client in the same second).
To build on slipbull's answer, the simplest way to handle this is perhaps to simply perform a SELECT query to evaluate whether or not an INSERT is necessary. Another solution would be to simply INSERT a record at user creation, as this would guarantee a valid record.
Weird behavior with mysql_affected_rows() in PHP
[ "php", "mysql" ]
What are the pluses and minuses to using the vcredist.exe versus the msm files to install the Visual C++ 8.0 runtime libraries?
MSM will give you a better streamlined experience than vcredist: it will integrate with the progress bar and will roll back on error (or cancel). From the developer side you will benefit by seeing the msm log in the main setup log file, and it will execute its actions side by side with the setup actions (with vcredist you will need to sequence it yourself). Because of all of the above reasons I usually choose to use the msm (and it's more or less a one-liner in WiX to use it).
Merge Modules can not be updated (unless they solved that in Windows Installer) once they are installed, so my advice would be to stick with vcredist.exe.
Should I use vcredist.exe or the msm's to install the Visual C++ runtime library
[ "c++", "visual-c++", "deployment" ]
I have a java program that I would like to be able to run from anywhere on my machine. I would like to run it from my Cygwin command prompt. I've made scripts to call the java program. I added the location of the java program to the classpath, and the scripts work when I run them from the java program's directory. However, when I try to run from any other directory, I get: ``` java.lang.NoClassDefFoundError: commandprogram/CommandProgram ``` This is my script: ``` #!/bin/sh CWD=`dirname "$0"` java -cp "$CWD/classes;$CWD/lib/AJarFile.jar" commandprogram/CommandProgram ``` Changing the java line to the following: ``` java -cp "$CWD/classes;$CWD/classes/commandprogram;$CWD/lib/AJarFile.jar" CommandProgram ``` produces the same results.
After trying just about everything I could think of, I echoed out the command and saw that there was mixing of Cygwin paths and Windows paths. The solution was to change the script to: ``` #!/bin/sh CWD=`dirname "$0"` CWD=`cygpath -w "$CWD"` java -cp "$CWD/classes;$CWD/lib/AJarFile.jar" commandprogram/CommandProgram ``` Then CWD changed to "C:\Program Files\..." instead of "/cygdrive/c/Program\ Files/..." I had previously encountered this problem and solved it with the `cygpath -w` solution, but then changed my script slightly and didn't notice that the path problem came back.
Add your directory to the classpath. Example: ``` java -classpath commandprogram CommandProgram ``` or ``` java -classpath directory_to_program Program ```
How do I run a java program from a different directory?
[ "java", "bash", "scripting", "cygwin" ]
``` <tr> <td>#</td> <td>2009</td> <td><a class="delete_this">Click</a></td> </tr> ``` I want to use jquery and get the text of 2nd (second) "td" when clicking the anchor. I want the "td" in the same tr as the anchor... How do I do this? So far I have ``` $(document).ready(function(){ $('.delete_this').click(function(){ var myNumber = $(this).parent()....///And this is where I should write the code to get the text of the second td in the tr the anchor belongs to }) }) ```
Here are a few ways: ``` $(this).parent().siblings("td:eq(1)").text() ``` If you're looking for the cell before, you can do it this way: ``` $(this).parent().prev().text() ```
`$('.delete_this').closest('tr').children(':eq(1)') .text();` 1) Get the .delete\_this A tag 2) Get the parent TR 3) Get the 2nd TD 4) Get the Text of the 2nd TD
How do I get the value of td using hierarchy?
[ "javascript", "jquery" ]
I know that I [shouldn't](http://www.parashift.com/c++-faq-lite/exceptions.html#faq-17.3) throw exceptions from a destructor. If my destructor calls a function that can throw an exception, is it OK if I catch it in the destructor and don't throw it further? Or can it cause abort anyway and I shouldn't call such functions from a destructor at all?
Yes, that's legal. An exception must not *escape* from the destructor, but whatever happens inside the destructor, or in functions it calls, is up to you. (Technically, an exception can escape from a destructor call as well. If that happens during stack unwinding because another exception was thrown, `std::terminate` is called. So it is well-defined by the standard, but it's a *really* bad idea.)
Yes. Look at the std::fstream class in the standard library for an example. * close() could potentially throw an exception. * The destructor can call close() but the destructor does not throw (it will swallow any exceptions). The concept is that if the destructor calls any methods that can throw then these methods should be public. Thus if the user of your object wants to check for exceptions they can use the public methods and handle the exception. If they do not care about the exception then just let the destructor handle the problem. Going back to the std::fstream example. ``` { std::fstream text("Plop"); // Load Text. // I don't care if the close fails. // So let the destructor handle it and discard exceptions } { // If this fails to write I should at least warn the user. // So in this case I will explicitly try and close it. try { std::ofstream password("/etc/password"); // Update the password file. password.close(); } catch(...) { Message.ShowDialog("You failed to update the Password File"); } } ```
Destructor that calls a function that can throw exception in C++
[ "c++", "exception", "destructor", "throw", "try-catch" ]
I have the following code: ``` $data['x'] = $this->x->getResults(); $data['y'] = $data['x']; //some code here to modify $data['y'] //this causes (undesirably) $data['x] to be modified as well ``` I guess since all the elements of $data are themselves references, modifying `$data['y']` also modifies `$data['x']` which is NOT what I want. I want `$data['x']` to remain the same. Is there any way to dereference the elements here so that I can copy the elements by value? Update: `$this->x->getResults();` returns an object array. So I can then do something like: `$data['x'][0]->date_create` ... **Update**: my latest attempt to clone the array looks something like this: ``` $data['x'] = $this->x->getResults(); $data['y'] = $data['y']; foreach($data['x'] as $key=>$row) { $data['y'][$key]->some_attr = clone $row->some_attr; } ``` Am I way off here? I keep getting a "\_\_clone method called on non-object" error. From reading the responses it seems like my best option is to iterate over each element and clone it (which is what I was trying to do with that code..).
You can take advantage of the fact that PHP will dereference the results of a function call. Here's some example code I whipped up: ``` $x = 'x'; $y = 'y'; $arr = array(&$x,&$y); print_r($arr); echo "<br/>"; $arr2 = $arr; $arr2[0] = 'zzz'; print_r($arr); print_r($arr2); echo "<br/>"; $arr2 = array_flip(array_flip($arr)); $arr2[0] = '123'; print_r($arr); print_r($arr2); ``` The results look like this: ``` Array ( [0] => x [1] => y ) Array ( [0] => zzz [1] => y ) Array ( [0] => zzz [1] => y ) Array ( [0] => zzz [1] => y ) Array ( [0] => 123 [1] => y ) ``` You can see that the results of using `array_flip()` during the assigment of `$arr` to `$arr2` results in differences in the subsequent changes to `$arr2`, as the `array_flip()` calls forces a dereference. It doesn't seem terribly efficient, but it might work for you if `$this->x->getResults()` is returning an array: ``` $data['x'] = array_flip(array_flip($this->x->getResults())); $data['y'] = $data['x']; ``` See [this (unanswered) thread](https://stackoverflow.com/questions/894814/deep-copy-of-php-array-of-references) for another example. If everything in your returned array is an object however, then the only way to copy an object is to use `clone()`, and you would have to iterate through `$data['x']` and clone each element into `$data['y']`. Example: ``` $data['x'] = $this->x->getResults(); $data['y'] = array(); foreach($data['x'] as $key => $obj) { $data['y'][$key] = clone $obj; } ```
`array_merge()` can accept any number of parameters, even 1, then produce a new array. So just do following: ``` $new_array = array_merge($existing_array); ```
Make a copy of an object's value without copying the reference
[ "", "php", "object", "reference", "clone", "" ]
I have a link with id "helpTopicAnchorId". I want to change its text in jQuery. How do I do this?
```
$('#helpTopicAnchorId').text('newText');
```

P.S. the [jQuery Docs](http://docs.jquery.com/Manipulation) make great reading.
If you are referring to the title attribute that can be used for tooltips, this works: ``` $('#helpTopicAnchorId').attr('title','new title'); ```
Changing the title of a link in jQuery
[ "", "javascript", "jquery", "" ]
I want to store certain objects in a HashMap. The problem is, usually you just use a single object as a key. (You can, for example, use a String.) What I want to do is to use multiple objects. For example, a Class and a String. Is there a simple and clean way to implement that?
Your key must implement `hashCode` and `equals`. If it is used in a **SortedMap** (such as `TreeMap`), it must also implement the `Comparable` interface.

```
public class MyKey implements Comparable<MyKey> {
    private Integer i;
    private String s;

    public MyKey(Integer i, String s) {
        this.i = i;
        this.s = s;
    }

    public Integer getI() { return i; }
    public String getS() { return s; }

    @Override
    public int hashCode() {
        return i.hashCode() + 31 * s.hashCode();
    }

    @Override
    public boolean equals(Object o) {
        if (o == this) return true;
        if (!(o instanceof MyKey)) return false;
        MyKey cp = (MyKey) o;
        return i.equals(cp.i) && s.equals(cp.s);
    }

    @Override
    public int compareTo(MyKey cp) {
        if (cp == this) return 0;
        int c = i.compareTo(cp.i);
        if (c != 0) return c;
        return s.compareTo(cp.s);
    }

    @Override
    public String toString() {
        return "(" + i + ";" + s + ")";
    }
}

Map<MyKey, String> map = new HashMap<MyKey, String>();
map.put(new MyKey(1, "Hello"), "world");
```
I tend to use a list ``` map.put(Arrays.asList(keyClass, keyString), value) ```
Using two (or more) objects as a HashMap key
[ "", "java", "hash", "dictionary", "" ]
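A lighter-weight alternative, hinted at in the `Arrays.asList` answer above, is to lean on the fact that `List.equals()`/`List.hashCode()` are defined element-wise. The sketch below is illustrative (class and method names are made up), and assumes the key elements themselves implement `equals`/`hashCode` and the key lists are never mutated after insertion:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CompositeKeyDemo {
    // Arrays.asList builds a List whose equals()/hashCode() are computed
    // from the elements, so equal (class, string) pairs hit the same bucket.
    static final Map<List<Object>, String> REGISTRY = new HashMap<List<Object>, String>();

    static void put(Class<?> type, String name, String value) {
        REGISTRY.put(Arrays.<Object>asList(type, name), value);
    }

    static String get(Class<?> type, String name) {
        return REGISTRY.get(Arrays.<Object>asList(type, name));
    }
}
```

This avoids writing a key class per map at the cost of boxing every lookup into a small list.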
Is it possible to create a mock object that implements several interfaces with EasyMock? For example, interface `Foo` and interface `Closeable`? In Rhino Mocks you can provide multiple interfaces when creating a mock object, but EasyMock's `createMock()` method only takes one type. Is it possible to achieve this with EasyMock, without resorting to the fallback of creating a temporary interface that extends both `Foo` and `Closeable`, and then mocking that?
EasyMock doesn't support this, so you're stuck with the fallback of the temporary interface. As an aside, I smell a little bit of a code whiff - should a method really be treating an object as 2 different things, the `Foo` and `Closeable` interface in this case? This implies to me that the method is performing multiple operations, and while I suspect one of those operations is to 'close' the `Closeable`, wouldn't it make more sense for the calling code to decide whether or not the 'close' is required? Structuring the code this way keeps the 'open' and 'close' in the same `try ... finally` block and IMHO makes the code more readable, not to mention the method more general, and allows you to pass objects that only implement `Foo`.
Although I fundamentally agree with Nick Holt's answer, I thought I should point out that [mockito](http://code.google.com/p/mockito) allows you to do what you ask with the following call: ``` Foo mock = Mockito.mock(Foo.class, withSettings().extraInterfaces(Bar.class)); ``` Obviously you'll have to use the cast `(Bar)mock` when you need to use the mock as a `Bar`, but that cast will not throw `ClassCastException`. Here is an example that is a bit more complete, albeit totally absurd: ``` import static org.junit.Assert.fail; import org.junit.Test; import static org.mockito.Mockito.*; import org.mockito.Mockito; import static org.hamcrest.MatcherAssert.assertThat; import static org.hamcrest.Matchers.*; import org.hamcrest.Matchers; import java.util.Iterator; public class NonsensicalTest { @Test public void testRunnableIterator() { // This test passes. final Runnable runnable = mock(Runnable.class, withSettings().extraInterfaces(Iterator.class)); final Iterator iterator = (Iterator) runnable; when(iterator.next()).thenReturn("a", 2); doThrow(new IllegalStateException()).when(runnable).run(); assertThat(iterator.next(), is(Matchers.<Object>equalTo("a"))); try { runnable.run(); fail(); } catch (IllegalStateException e) { } } } ```
Is it possible to create a mock object that implements multiple interfaces with EasyMock?
[ "", "java", "unit-testing", "mocking", "rhino-mocks", "easymock", "" ]
I've got an error while using the find() function. Here is the code: ``` #include <iostream> #include <map> #define N 100000 using namespace std; int main (int argc, char * const argv[]) { map<int,int> m; for (int i=0; i<N; i++) m[i]=i; find(m.begin(), m.end(), 5); return 0; } ``` I'm getting a compiler error: ``` error: no match for 'operator==' in '__first. __gnu_debug::_Safe_iterator<_Iterator, _Sequence>::operator* [with _Iterator = std::_Rb_tree_iterator<std::pair<const int, int> >, _Sequence = __gnu_debug_def::map<int, int, std::less<int>, std::allocator<std::pair<const int, int> > >]() == __val' ``` Including 'algorithm' changes nothing. Compiling in VS2008 shows a similar error. I know about m.find(), but I really need to use find() too. Thanks a lot for your assistance! P.S. Actually, the task is to compare the speed of m.find(5) and find(m.begin(), m.end(), 5), so I need to make both of them work properly.
`begin()` and `end()` on all STL containers provide access to elements of those collections. Type of those elements is known as `value_type` of the container. For `std::map<Key, Value>`, its `value_type` is `std::pair<Key, Value>`. Therefore, your `find` function is trying to find a `pair<int, int>` which is equal to 5. Since there's no `operator==` defined to compare `pair<int, int>` and `int`, you get the error. The correct way to do this (so long as you want to avoid member `find()`) is to use `std::find_if`: ``` template <class First> struct first_equal { const First value; first_equal(const First& value) : value(value) { } template <class Second> bool operator() (const std::pair<First, Second>& pair) const { return pair.first == value; } }; ... find_if(m.begin(), m.end(), first_equal<int>(5)); ``` You could also overload `operator==` for `pair` and `int` to do what you want, but it's a very hackish way (because it will affect all your code, and because such a comparison has no meaning in general).
find() requires a parameter that can be compared to \*iterator. For your map, this will be pair<int,int>. You'll need to create a dummy pair, plus a comparison functor to compare the pairs.
find() problems
[ "", "c++", "search", "" ]
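For reference, with C++11 the helper functor in the accepted answer can be replaced by a lambda. A small self-contained sketch (the function name and fallback parameter are illustrative):

```cpp
#include <algorithm>
#include <map>

// Linear scan over the map's value_type (std::pair<const int, int>).
// Note that m.find(key) does the same lookup in O(log n) via the tree,
// while std::find_if walks elements one by one in O(n) - which is the
// speed difference the question ultimately wants to measure.
int value_for_key(const std::map<int, int>& m, int key, int fallback) {
    auto it = std::find_if(m.begin(), m.end(),
                           [key](const std::pair<const int, int>& p) {
                               return p.first == key;
                           });
    return it == m.end() ? fallback : it->second;
}
```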
On some documents I can't get the height of the document (I need it to position something absolutely at the very bottom). Additionally, a `padding-bottom` on `<body>` seems to do nothing on these pages, but does on the pages where the height is returned. Case(s) in point: <http://fandango.com> <http://paperbackswap.com> On Fandango: jQuery's `$(document).height();` returns the correct value, `document.height` returns 0, `document.body.scrollHeight` returns 0. On Paperback Swap: jQuery's `$(document).height();` throws TypeError: `$(document)` is null, `document.height` returns an incorrect value, `document.body.scrollHeight` returns an incorrect value. Note: I have browser-level permissions, if there is some trick there.
Document sizes are a browser compatibility nightmare because, although all browsers expose clientHeight and scrollHeight properties, they don't all agree how the values are calculated. There used to be a complex best-practice formula around for how you tested for correct height/width. This involved using document.documentElement properties if available or falling back on document properties and so on. The simplest way to get correct height is to get all height values found on document, or documentElement, and use the highest one. This is basically what jQuery does: ``` var body = document.body, html = document.documentElement; var height = Math.max( body.scrollHeight, body.offsetHeight, html.clientHeight, html.scrollHeight, html.offsetHeight ); ``` A quick test with Firebug + [jQuery bookmarklet](http://www.learningjquery.com/2006/12/jquerify-bookmarklet) returns the correct height for both cited pages, and so does the code example. Note that testing the height of the document before the document is ready will always result in a 0. Also, if you load more stuff in, or the user resizes the window, you may need to re-test. Use `onload` or a [document ready](https://stackoverflow.com/search?q=document+ready) event if you need this at load time, otherwise just test whenever you need the number.
This is a really old question, and thus, has many outdated answers. As of 2020 all [major browsers have adhered to the standard](https://caniuse.com/#feat=mdn-api_element_scrollheight). **Answer for 2020:** ``` document.body.scrollHeight ``` Edit: the above doesn't take margins on the `<body>` tag into account. If your body has margins, use: ``` document.documentElement.scrollHeight ```
How to get height of entire document with JavaScript?
[ "", "javascript", "" ]
I am looking for a good config file library for c that is not xml. Optimally I would really like one that also has python bindings. The best option I have come up with is to use a JSON library in both c and python. What would you recommend, or what method of reading/writing configuration settings do you prefer?
[YaML](http://www.yaml.org/) :)
If you're not married to Python, try [Lua](http://www.lua.org/). It was originally designed for configuration.
What is a good configuration file library for c thats not xml (preferably has python bindings)?
[ "", "python", "c", "configuration-management", "" ]
I am using Sphinx to document a webservice that will be deployed in different servers. The documentation is full of URL examples for the user to click and they should just work. My problem is that the host, port and deployment root will vary and the documentation will have to be re-generated for every deployment. I tried defining substitutions like this: ``` |base_url|/path .. |base_url| replace:: http://localhost:8080 ``` But the generated HTML is not what I want (doesn't include "/path" in the generated link): ``` <a href="http://localhost:8080">http://localhost:8080</a>/path ``` Does anybody know how to work around this?
New in Sphinx v1.0: sphinx.ext.extlinks – Markup to shorten external links <https://www.sphinx-doc.org/en/master/usage/extensions/extlinks.html> The extension adds one new config value: **extlinks** This config value must be a dictionary of external sites, mapping unique short alias names to a base URL and a prefix. For example, to create an alias for the above mentioned issues, you would add ``` extlinks = {'issue': ('http://bitbucket.org/birkenfeld/sphinx/issue/%s', 'issue ')} ``` Now, you can use the alias name as a new role, e.g. `` :issue:`123` ``. This then inserts a link to <http://bitbucket.org/birkenfeld/sphinx/issue/123>. As you can see, the target given in the role is substituted in the base URL in the place of `%s`. The link caption depends on the second item in the tuple, the prefix: If the prefix is None, the link caption is the full URL. If the prefix is the empty string, the link caption is the partial URL given in the role content (123 in this case.) If the prefix is a non-empty string, the link caption is the partial URL, prepended by the prefix – in the above example, the link caption would be issue 123. You can also use the usual “explicit title” syntax supported by other roles that generate links, i.e. `` :issue:`this issue <123>` ``. In this case, the prefix is not relevant.
I had a similar problem where I needed to substitute also URLs in image targets. The `extlinks` do not expand when used as a value of image `:target:` attribute. Eventually I wrote a custom sphinx transformation that rewrites URLs that start with a given prefix, in my case, `http://mybase/`. Here is a relevant code for conf.py: ``` from sphinx.transforms import SphinxTransform class ReplaceMyBase(SphinxTransform): default_priority = 750 prefix = 'http://mybase/' def apply(self): from docutils.nodes import reference, Text baseref = lambda o: ( isinstance(o, reference) and o.get('refuri', '').startswith(self.prefix)) basetext = lambda o: ( isinstance(o, Text) and o.startswith(self.prefix)) base = self.config.mybase.rstrip('/') + '/' for node in self.document.traverse(baseref): target = node['refuri'].replace(self.prefix, base, 1) node.replace_attr('refuri', target) for t in node.traverse(basetext): t1 = Text(t.replace(self.prefix, base, 1), t.rawsource) t.parent.replace(t, t1) return # end of class def setup(app): app.add_config_value('mybase', 'https://en.wikipedia.org/wiki', 'env') app.add_transform(ReplaceMyBase) return ``` This expands the following rst source to point to English wikipedia. When conf.py sets `mybase="https://es.wikipedia.org/wiki"` the links would point to the Spanish wiki. ``` * inline link http://mybase/Helianthus * `link with text <http://mybase/Helianthus>`_ * `link with separate definition`_ * image link |flowerimage| .. _link with separate definition: http://mybase/Helianthus .. |flowerimage| image:: https://upload.wikimedia.org/wikipedia/commons/f/f1/Tournesol.png :target: http://mybase/Helianthus ```
Substitutions inside links in reST / Sphinx
[ "", "python", "python-sphinx", "substitution", "restructuredtext", "" ]
What runs faster? Setting a default value and changing it- ``` $foo = ""; if($bar) { $foo = "someValue"; } ``` Or- ``` if($bar) { $foo = "someValue"; } else { $foo = ""; } ```
You absolutely should not be bothered with performance of a single `if` statement. Also, consider: ``` $foo = $bar ? "" : "someValue" ```
At a guess, the 2nd one "potentially". The first one you're potentially setting 2 values; the second one you're definitely only setting one. It's also a question of code clarity, though. If you've got 2 alternatives (e.g. turn left or right) then use the "else" statement. If you've got a 'normal' value vs a flag, e.g. http or https, then use the first one.
Most efficient php if structure
[ "", "php", "performance", "" ]
This PHP code below will take an image and add the word **\_thumb** in between the filename and file extension. This file is called up to 50 times on some pages to show user photos. Please note I only linked to a full-path image on another server for this example; the file path is usually pulled from a MySQL DB that stores the path to a user's photo. I am currently restructuring a lot of things on my site though, and I am wanting to know if this is a bad way. I store a thumbnail image like this: **filename\_thumb.jpg** So on a page to view a thumbnail, I only have the path to the full-size image and I must add the \_thumb part into the filename before displaying the image to screen. Would it be better performance to do something like **t\_**filename.jpg? The data that comes from the database is like this: user/photos/1/23/45/34/filename.jpg So if I changed where I add the thumb part to the front of the name, would it still have to run all the code below to get a result? Please notice how the path I get from the DB is not JUST a filename; it has the folder structure saved into it as well, and the folders are always different. ``` <?php $file = 'http://file-managers.net/skins/blue/images/actions/view_tree.png'; $fil_ext1 = pathinfo($file); $fil_ext = $fil_ext1['extension']; $fil_explode = '.' . $fil_ext; $arr = explode($fil_explode, '/skins/blue/images/actions/view_tree.png'); $pic1 = $arr[0] . "_thumb" . $fil_explode; // will show: // /skins/blue/images/actions/view_tree_thumb.png echo $pic1; ?> ```
I'd recommend a folder structure like this: ``` images/ originals/ small/ middle/ big/ ``` Every image gets uploaded into `images/originals/`, smaller versions will be created in the respective folders (add as needed). In the database you only store the original\* filename. \*) *Note that you should create a new random filename, don't use the actual **original user-supplied** filename.* When including images you simply do ``` echo "images/small/".$filename; // (pseudocode) ``` If you ever want to change your design and need to resize images, it's as simple as cleaning out the `small/`, `middle/` etc folders and re-creating them with a small script, or even on-the-fly with something like: ``` // pseudocode if (!file_exists('images/small/'.$filename)) { create_resized_image($filename, 'small'); // looks for "images/originals/$filename" } echo "images/small/".$filename; ``` You'll need to see for yourself if/how you can map this to your "user/photos/1/23/45/34/filename.jpg" scheme.
If you are worried about processing time why not store two different paths in the database or store the file extension and file path without the extension in the database. This way you can do the processing once when adding to the database. Personally I like the second method.
Best way to name thumbnails and look them up in PHP
[ "", "php", "" ]
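For comparison, the core rename operation here (insert a suffix before the extension while preserving the folder structure) is only a couple of lines in any language. A Python sketch, with an illustrative function name:

```python
from pathlib import PurePosixPath

def thumb_name(path, suffix="_thumb"):
    # stem is the filename without its extension; p.suffix is the extension.
    # with_name replaces only the last path component, so parent
    # directories like user/photos/1/23/45/34 are preserved untouched.
    p = PurePosixPath(path)
    return str(p.with_name(p.stem + suffix + p.suffix))
```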
I have a bunch of URLs stored in a table waiting to be scraped by a script. However, many of those URLs are from the same site. I would like to return those URLs in a "site-friendly" order (that is, try to avoid two URLs from the same site in a row) so I won't be accidentally blocked by making too many HTTP requests in a short time. The database layout is something like this: ``` create table urls ( site varchar, -- holds e.g. www.example.com or stackoverflow.com url varchar unique ); ``` Example result: ``` SELECT url FROM urls ORDER BY mysterious_round_robin_function(site); http://www.example.com/some/file http://stackoverflow.com/questions/ask http://use.perl.org/ http://www.example.com/some/other/file http://stackoverflow.com/tags ``` I thought of something like "`ORDER BY site <> @last_site DESC`" but I have no idea how to go about writing something like that.
See this article in my blog for more detailed explanations on how it works: * [**PostgreSQL: round-robin order**](http://explainextended.com/2009/07/21/postgresql-round-robin-records/) With new `PostgreSQL 8.4`: ``` SELECT * FROM ( SELECT site, url, ROW_NUMBER() OVER (PARTITION BY site ORDER BY url) AS rn FROM urls ) ORDER BY rn, site ``` With elder versions: ``` SELECT site, ( SELECT url FROM urls ui WHERE ui.site = sites.site ORDER BY url OFFSET total LIMIT 1 ) AS url FROM ( SELECT site, generate_series(0, cnt - 1) AS total FROM ( SELECT site, COUNT(*) AS cnt FROM urls GROUP BY site ) s ) sites ORDER BY total, site ``` , though it can be less efficient.
I think you're overcomplicating this. Why not just use ORDER BY NewID()
SQL return rows in a "round-robin" order
[ "", "sql", "postgresql", "" ]
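The rank-then-interleave idea behind the `ROW_NUMBER()` query can also be sketched outside the database, which makes the ordering easy to see. A Python illustration (function name is made up):

```python
from collections import defaultdict
from itertools import chain, zip_longest

def round_robin(rows):
    """Order (site, url) rows so the same site repeats as rarely as possible.

    Equivalent to numbering each URL within its site, i.e.
    ROW_NUMBER() OVER (PARTITION BY site ORDER BY url),
    and then sorting by that rank first.
    """
    by_site = defaultdict(list)
    for site, url in sorted(rows):
        by_site[site].append(url)
    # First pass takes the 1st URL of every site, second pass the 2nd, ...
    passes = zip_longest(*(by_site[s] for s in sorted(by_site)))
    return [url for url in chain.from_iterable(passes) if url is not None]
```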
**I will start by explaining my scenario in code:** ``` public class A { } public class B : A { } public class C : B { } public class D { } public class Test { private A a = new A ( ) ; private B b = new B ( ) ; private C c = new C ( ) ; private D d = new D ( ) ; public Test ( ) { // Evaluates to "false" if ( a.GetType() == typeof(B) ) { } //TODO: Add Logic // Evaluates to "true" if ( b.GetType() == typeof(B) ) { } //TODO: Add Logic // I WANT this to evaluate to "true" if ( c.GetType() == typeof(B) ) { } //TODO: Add Logic // Evaluates to "false" if ( d.GetType() == typeof(B) ) { } //TODO: Add Logic } } ``` **The important line to take notice of here is:** ``` if ( c.GetType() == typeof(B) ) { } ``` I believe that this will in fact evaluate to "false", since typeof(C) and typeof(B) are not equal. (C is a B, but B is not necessarily a C.) But what I need is some kind of condition that will take this into account. How can I tell if an object is a B or anything derived from it? I don't care if it is an object DERIVED from B, so long as the base B class is there. And I can't anticipate what derived class might show up in my application. I just have to assume that unknown derived classes may exist in the future - and therefore I can only focus on making sure that the base class is what I am expecting. I need a condition that will perform this check for me. How can this be accomplished?
You can just use `is`: ``` if (c is B) // Will be true if (d is B) // Will be false ```
Edit: this answers the question in the thread title. [cdm9002](https://stackoverflow.com/questions/1218178/c-net-how-can-i-get-typeof-to-work-with-inheritance/1218194#1218194) has the better answer to the problem as described in the full post. ``` typeof(B).IsAssignableFrom(c.GetType()) ```
C#.NET - How can I get typeof() to work with inheritance?
[ "", "c#", ".net", "inheritance", "typeof", "" ]
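The same distinction exists in Java, which may help make the two checks concrete. C#'s `c is B` corresponds to `instanceof`, and `typeof(B).IsAssignableFrom(t)` corresponds to `Class.isAssignableFrom`; both accept subclasses, while an exact `getClass()` comparison (like `GetType() == typeof(B)`) does not. A runnable sketch with illustrative names:

```java
public class TypeChecks {
    static class A {}
    static class B extends A {}
    static class C extends B {}

    // Accepts B and anything derived from it - the behaviour the question wants.
    static boolean isB(Object o) {
        return o instanceof B;
    }

    // The too-strict check from the question: exact runtime type only.
    static boolean exactlyB(Object o) {
        return o.getClass() == B.class;
    }
}
```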
I have two forms in a page. Have decided to update my post with more realistic code from the page itself... ``` <form action="test.php" method="POST"> <strong>Details of work carried out</strong> <textarea name="detailsOfWorkCarriedOut"></textarea> <strong>Materials used</strong> <textarea name="materialsUsed"></textarea> <input type="hidden" name="submitted" value="true"> <input type="submit" value="Save"> <form/> <br /> <form action="test.php" method="POST"> <strong>Details of work not carried out</strong> <textarea name="detailsOfWorkNotCarriedOut"></textarea> <input type="hidden" name="submitted" value="true"> <input type="submit" value="Save"> </form> ``` "test.php" simply contains: ``` <?php print_r($_POST) ?> ``` No matter which form I post, I always get the same array returned: ``` Array ( [detailsOfWorkCarriedOut] => [materialsUsed] => [submitted] => true [detailsOfWorkNotCarriedOut] => ) ``` Why is this?
Your problem is the ``` <form/> ``` at line 8: replace it with ``` </form> ``` :)
Yes, browsers submit only the fields in the form in which the submit button is nested. You could use JavaScript to monitor form submissions and include values from the other form in the submission, but you're seeing the expected behavior right now.
Should posting one of two forms in a page post all fields in both forms, or have I got a bug in my code
[ "", "php", "forms", "post", "" ]
I am trying to learn Windows.Forms, and while I have Visual Studio (edit, my mistake obviously), I feel that I learn much more effectively by doing everything in Notepad. I have searched everywhere for a tutorial that takes this approach. I finally got started with <http://www.geekpedia.com/tutorial33_Windows-forms-programming.html>, but after the first example, it too begins working with multiple files, somehow generated with Visual Studio. How can I do this in Notepad? Can anyone point me somewhere helpful? Thanks! *While the overwhelming response seems strongly against this and I started to be convinced, I saw SmokingRope talking about it being as simple as writing all classes into one file. This is what I have been doing up till now for sizable, just non-Windows.Forms projects. Can you maybe help explain how those necessary files can be included using this method?*
Seriously... I admire your fire, but you are out of your mind! What you can possibly hope to learn about .NET with NotePad that you couldn't learn a zillion times faster with Visual Studio? It's like asking, "I want to learn to drive a car. I have put wheels on my couch. What should I do next?" I mean no disrespect, but make friends with Visual Studio and get busy learning. You're wasting your time otherwise.
It is actually quite easy to code *C#* outside of the IDE. It basically comes down to using the *csc.exe* compiler from the command-line prompt. The hardest part of this process is making sure the environment variables are correctly configured, and Microsoft provides the *vsvars32.bat* batch file to do exactly that. Here is a short tutorial on how to use this batch file and the compiler from the command line: <http://www.zeusedit.com/forum/viewtopic.php?t=1235> An even better, but slightly more complicated, approach is to use the *MSBuild* utility to manage the building of the project: <http://www.zeusedit.com/forum/viewtopic.php?t=2518> Finally, you can even use the external *C# Debugger* to debug your programs: <http://www.zeusedit.com/forum/viewtopic.php?t=2507>
Windows.Form c# without visual studio
[ "", "c#", "winforms", "notepad", "" ]
For the sake of argument, consider a UI thread as a thread that has had a call to `Application.Run()` or one of it's overloads called on it and has an active message loop running. Is there a way of detecting if we're currently executing on such a thread? The reason I want this is because I have a class with a private function that is long-running. The class itself is already multithreaded, and the usage of this class is such that it might be used from either the UI or from background threads doing processing. This function also falls into this net. But I don't want it to block up the UI thread. So I want to detect if I am running on a UI thread and if so, fork the function call into a background thread (probably `ThreadPool`, but that's a non-issue for this discussion). This is entirely well-behaved, but the background threads are probably relying on the output of the function, so blocking for them is better, whereas the UI thread is accessing it in a more "set-and-forget" manner.
I would suggest that it's the kind of decision the caller should make. You could always write wrapper methods to make it easier - but it means that you won't have problems with the caller being in an "odd" situation (e.g. a UI framework you don't know about, or something else with an event loop) and you making the wrong decision for them. If the method ever needs to provide feedback in the right thread, I'd pass in an `ISynchronizeInvoke` (implemented by `Control`) to do that in a UI-agnostic way.
If you have access to a `Form` or a `Control`, you can check the [`InvokeRequired`](http://msdn.microsoft.com/en-us/library/system.windows.forms.control.invokerequired.aspx) property; this will return `false` if you are on the UI thread and `true` if you are not.. If it happens in a context where you cannot check against a `Control`, you could easily set up a static property in your program that you could check against. Store a reference to `Thread.CurrentThread` at startup, and compare `Thread.CurrentThread` to that reference when you need to know: ``` static class Program { private static Thread _startupThread = null; [STAThread] static void Main() { _startupThread = Thread.CurrentThread; Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Application.Run(new Form1()); } public static bool IsRunningOnStartupThread() { return Thread.CurrentThread == _startupThread; } } ``` By calling `Program.IsRunningOnStartupThread` you will get a `bool` saying if you are or not.
How to detect if we're on a UI thread?
[ "", "c#", "winforms", "multithreading", "ui-thread", "" ]
This is an easy one, I am sure, but I can't seem to find it on google. If I have a function: ``` foo(int[] a) {} ``` and I want to call it with a subarray of an array: ``` for instance: myNums[3]+ (i.e. an array {4,5,6}), given myNums: int[] myNums = { 1,2,3,4,5,6}; ``` Of course, I don't want to create a new array of just those sub items. I certainly could just pass the array and the index, but then I would have to write all the function code with offsets...ugly. So, how does one do this in C# (.NET 2.0)? `foo(&myNums[3])` is reporting unsafe. `foo(myNums[3])` won't work (it is passing just an int). Do I need to map this to some kind of collection?
If you have a function exactly as defined (i.e. it takes an array), there's no way you can do this. If you'd have a function that would take some collection interface instead, e.g.: ``` void foo(IList<int> a) { ... } ``` then you could pass an array to it, and you could also write your own `IList<T>` implementation that wraps a subrange of any other `IList<T>` (but there isn't a stock solution for this in FCL).
Passing the address of an array to a function is something not supported by C# out-of-the-box. Either you have to do it using an **unsafe** method, in which case the code will be C-like (though you need to compile with the unsafe option), or you have to code it the way managed code ought to be written, in this case by passing the **offset**. ``` public Form1() { InitializeComponent(); int[] a = { 4, 5, 6, 7, 8 }; unsafe { fixed (int* c = a) { SubArrayPointer(c + 3); } } SubArray(a, 3); } unsafe void SubArrayPointer(int* d) { MessageBox.Show(string.Format("Using pointer, outputs 7 --> {0}", *d)); } void SubArray(int[] d, int offset) { MessageBox.Show(string.Format("Using offset, outputs 7 --> {0}", d[offset])); } ```
Passing a sub array as an array in C#
[ "", "c#", "arrays", "" ]
I'm writing a Vector3D class that calls a static method on a VectorMath class to perform a calculation. When I compile, I get this: ``` bash-3.1$ g++ VectorMath.cpp Vector3D.cpp /tmp/cc5cAPia.o: In function `main': Vector3D.cpp:(.text+0x4f7): undefined reference to 'VectorMath::norm(Vector3D*)' collect2: ld returned 1 exit status ``` The code: **VectorMath.h:** ``` #ifndef VECTOR3D_H #include "Vector3D.h" #endif class VectorMath { public: static Vector3D* calculatePerpendicularVector(Vector3D*, Vector3D*); static Vector3D* norm(Vector3D*); static double length(Vector3D*); }; ``` **VectorMath.cpp** ``` #include "VectorMath.h" Vector3D* norm(Vector3D* vector) { // can't be found by linker // do vector calculations return new Vector3D(xHead, yHead, zHead, xTail, yTail, zTail); } // other methods ``` **Vector3D.cpp** ``` #include "Vector3D.h" #include "VectorMath.h" // ... // vector implementation // ... int main(void) { Vector3D* v = new Vector3D(x, y, z); Vector3D* normVector = VectorMath::norm(v); // error here } ``` Why can't the linker find the `VectorMath::norm` method? At first glance I'd think that I'd need to declare norm like this: ``` Vector3D* VectorMath::norm(Vector3D* vector) { ``` but that doesn't help either...
You're missing this: ``` //VectorMath.cpp #include "VectorMath.h" | V - here Vector3D* VectorMath::norm(Vector3D* vector) { ... } ``` The `norm` function is part of `VectorMath::`. Without that, you just have a free function. --- This is more about your design, but why are you using pointers to everything? This is much cleaner: ``` class VectorMath { public: static Vector3D norm(const Vector3D&); }; ``` Take references, you're in C++ so don't write C code. What happens when I call this? ``` VectorMath::norm(0); // null ``` It will either crash, you have to put in a check, in which case, what should it return? This is all cleaned up by using references. Also, why not just make these members of the `Vector3D` class? ``` Vector3D* v = new Vector3D(x, y, z); v->norm(); // normalize would be better, in my opinion ``` Lastly, stack-allocate things. Your code right now has a memory leak: ``` int main(void) { Vector3D* v = new Vector3D(x, y, z); Vector3D* normVector = VectorMath::norm(v); // delete v; // ^ you're not deleting it! } ``` Change it to this, and use [RAII](http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization) concepts: ``` int main(void) { Vector3D v(x, y, z); Vector3D* normVector = VectorMath::norm(v); // delete v; // ^ you're not deleting it! } ``` And by making `norm` a member function you end up with the very clean code: ``` int main(void) { Vector3D v(x, y, z); Vector3D normVector(v.norm()); } ``` No pointers, no leaks, all sexy.
You haven't defined `Vector3D::norm` method in `VectorMath.cpp`. Instead you've defined a global function named `norm`. What you need to do is qualify the method name in the definition: ``` Vector3D* Vector3D::norm(Vector3D* vector) ```
C++ linker problems with static method
[ "", "c++", "linker", "static-methods", "" ]
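The fix in a nutshell, as a compile-checkable fragment (names shortened for illustration):

```cpp
struct VectorMath {
    static int twice(int x);
};

// The out-of-class definition must repeat the class qualifier. Without
// "VectorMath::" this would define an unrelated free function, and the
// linker would report an undefined reference for VectorMath::twice.
int VectorMath::twice(int x) {
    return 2 * x;
}
```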
I'm refactoring a function that, given a series of endpoints that implicitly define intervals, checks whether a number is included in one of the intervals, and then returns a corresponding value (one not related to the interval in any computable way). The code that is now handling the work is: ``` if p <= 100: return 0 elif p > 100 and p <= 300: return 1 elif p > 300 and p <= 500: return 2 elif p > 500 and p <= 800: return 3 elif p > 800 and p <= 1000: return 4 elif p > 1000: return 5 ``` which is IMO quite horrible, and lacking in that both the intervals and the return values are hardcoded. Any use of any data structure is of course possible.
```
import bisect
bisect.bisect_left([100, 300, 500, 800, 1000], p)
```

Here are the docs: [bisect](https://docs.python.org/3.7/library/bisect.html)
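A minimal, runnable sketch of the `bisect` approach above, wired back to the question's boundaries (the function name is just for illustration):

```python
import bisect

# Interval endpoints from the question. bisect_left returns how many
# endpoints are strictly below p, which reproduces the original
# `p <= bound` chain exactly.
ENDPOINTS = [100, 300, 500, 800, 1000]

def category(p):
    return bisect.bisect_left(ENDPOINTS, p)

print(category(100))   # 0
print(category(101))   # 1
print(category(1001))  # 5
```

Adding or moving a boundary is now a one-line change to `ENDPOINTS`.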
You could try a take on this:

```
def check_mapping(p):
    mapping = [(100, 0), (300, 1), (500, 2)]  # Add all your values and returns here
    for check, value in mapping:
        if p <= check:
            return value

print check_mapping(12)
print check_mapping(101)
print check_mapping(303)
```

produces:

```
0
1
2
```

As always in Python, there will be many better ways to do it.
Python: Mapping from intervals to values
[ "", "python", "range", "intervals", "" ]
I'm new to encryption. I need to implement an asymmetric encryption algorithm, which I think uses a private/public key pair. I started using a sample of RSACryptoServiceProvider. It was OK with small data to encrypt. But when using it on relatively larger data, "2 lines", I get the exception CryptographicException "Bad Length"!

```
//Create a new instance of RSACryptoServiceProvider.
using (RSACryptoServiceProvider RSA = new RSACryptoServiceProvider())
{
    //Import the RSA Key information. This only needs
    //to include the public key information.
    //RSA.ImportParameters(RSAKeyInfo);
    byte[] keyValue = Convert.FromBase64String(publicKey);
    RSA.ImportCspBlob(keyValue);

    //Encrypt the passed byte array and specify OAEP padding.
    //OAEP padding is only available on Microsoft Windows XP or
    //later.
    encryptedData = RSA.Encrypt(DataToEncrypt, DoOAEPPadding);
}
```

Then I found some samples of encrypting large data (or files) by using CryptoStream, but they only use symmetric algorithms like DES or 3DES, which have the function CreateEncryptor to return an ICryptoTransform as one of the inputs to the constructor of CryptoStream!!!

```
CryptoStream cStream = new CryptoStream(fStream,
    new TripleDESCryptoServiceProvider().CreateEncryptor(Key, IV),
    CryptoStreamMode.Write);
```

What is the way to encrypt files using RSA?
As mentioned in other answers, asymmetric encryption is only designed for encrypting data smaller than its key size.

One option that I have implemented when needing to transfer large amounts of encrypted data between two systems is to have an RSA keypair whose public key is known to both the sender and the receiver. When data needs to be sent, the receiver generates a new RSA keypair, encrypts the public key of that keypair with the common public key and sends the encrypted public key to the sender. The sender decrypts the receiver's public key using its private key (which the receiver does not need to know, just as the sender does not need to know the receiver's generated private key), generates a symmetric encryption key, encrypts the data with the symmetric key and then encrypts the symmetric key using the public key received from the receiver. Both the encrypted symmetric key and the encrypted data are then sent to the receiver, which uses its generated private key to decrypt the symmetric key and then decrypts the data.

You can use the `RSACryptoServiceProvider.ToXmlString()` and `RSACryptoServiceProvider.FromXmlString()` methods to store the common public key as an XML string literal in the receiver application.

Don't forget, when you generate the symmetric encryption key, to use `RNGCryptoServiceProvider` to generate the key, as it is a much more secure method of generating (pseudo) random numbers.

Also, I strongly recommend against using 3DES as your symmetric encryption algorithm; it is old and starting to show its age. Use AES symmetric encryption with either the `AesCryptoServiceProvider` or `RijndaelManaged` classes.
RSA can only encrypt data blocks that are shorter than the key length so what you normally do is 1. Generate a random key of the correct length required for AES (or similar). 2. Encrypt your data using AES or similar using that key 3. Encrypt the random key using your RSA key Then you publish both the outputs from 2 and 3 To decrypt 1. Decrypt the AES key using your RSA key. 2. Decrypt the data using that AES key
how to use RSA to encrypt files (huge data) in C#
[ "", "c#", "cryptography", "rsa", "encryption-asymmetric", "3des", "" ]
I realize the code below is not the most efficient way of grabbing elements, but for the sake of an example... ``` $('.myFirstClass').each(function(i){ // Here is the first 'THIS' occurrence $(this).find('.mySecondClass').each(function(j){ // Here is the second 'THIS' occurrence // How do i access the first occurrence from here? }); }); ```
No need to store variables. jQuery already does this in the second parameter... ``` $(".myFirstClass").each(function(i, j){ // I am represented as this or j $(j).find(".mySecondClass").each(function(a, b){ // I am represented as this or b // I can communicate with j }); }); ```
Something like this, ``` $('.myFirstClass').each(function(i){ var firstClassThis = this; $(this).find('.mySecondClass').each(function(j){ // Here is the second 'THIS' occurrence // How do i access the first occurrence from here? //You can use firstClassThis here due to closure. }); }); ```
How do I access different levels of "THIS" in jquery?
[ "", "javascript", "jquery", "oop", "" ]
I like how [Facebook](http://en.wikipedia.org/wiki/Facebook) keeps that toolbar on the bottom of the page. Does that require cross-browser ninja skills? Their [JavaScript](http://en.wikipedia.org/wiki/JavaScript)/[CSS](http://en.wikipedia.org/wiki/Cascading_Style_Sheets) files are huge so I am having a hard time narrowing down the implementation (for learning purposes).
You can achieve the effect [with CSS](http://limpid.nl/lab/css/fixed/footer).
Here's a basic example. No, it doesn't require cross-browser ninja skillz. =) ``` <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <html> <head> <title>Facebook Bar</title> <style type="text/css"> body { margin: 0px; padding: 0px; overflow: hidden; } #page { margin: 10px; overflow: auto; height: 93%; } #bottom { width: 100%; background: #18f8f8; text-align: center; } </style> </head> <body> <div id="page"> Other stuff on page </div> <div id="bottom">Bottom stuff goes here</div> </body> </html> ```
How does Facebook keep that toolbar on the bottom of the page?
[ "", "javascript", "css", "ajax", "toolbar", "" ]
I'm beginning to program in C# 2.0, so I have never used lambda expressions, but why so much fuss about them? Are they just syntactic sugar around anonymous delegates, or is there something more which I can't see?
Well, lambda expressions have two main things over anonymous methods: * They're more concise than anonymous methods * They can be converted to [expression trees](http://msdn.microsoft.com/en-us/library/bb397951.aspx) as well as delegates Unless you're using expression trees, they're extremely similar to anonymous methods though. The difference is that often you can write several lambda expressions in one statement (chaining method calls together) without losing readability, but anonymous methods are just a bit too wordy. By the way, it's not so much that lambda expressions are "just syntactic sugar around anonymous delegates" as that both lambda expressions and anonymous methods are "just syntactic sugar around creating delegates (and expression trees)." Don't discount syntactic sugar though - the benefits of anonymous functions acting as closures is massive, along with the ability to have the code right where you want it, instead of in a separate method.
They can easily be used as just syntax sugar around a delegate but the big thing about lambdas is that the compiler has the ability turn them into [*expression trees*](http://blogs.msdn.com/charlie/archive/2008/01/31/expression-tree-basics.aspx) which open up many possibilities (not the least of which being LINQ).
Why such hype with C# lambda functions?
[ "", "c#", "lambda", "" ]
I am using a LinkedHashMap to guarantee order when someone tries to access it. However, when it comes time to iterate over it, does using entrySet() to return key/value pairs guarantee order as well? No changes will be made while iterating.

**EDIT:** Also, are there any adverse effects from iterating through the map by iterating through its keys and calling get?
According to the [Javadocs](http://java.sun.com/javase/6/docs/api/java/util/LinkedHashMap.html), yes. > This implementation differs from `HashMap` in that it maintains a doubly-linked list running through all of its entries. This linked list defines the iteration ordering, which is normally the order in which keys were inserted into the map (*insertion-order*). As for the edit, no, it should work just fine. But the entry set is somewhat faster since it avoids the overhead of looking up every key in the map during iteration.
If you're sure no changes will be made during the iteration, then proper ordering with `entrySet()` is guaranteed, as stated in the [API](http://java.sun.com/javase/6/docs/api/java/util/HashMap.html#entrySet%28%29).
Does entrySet() in a LinkedHashMap also guarantee order?
[ "", "java", "linkedhashmap", "" ]
I have a page with a listview control and a datapager control. The listview's datasource is set programmatically using this code:

```
Dim dal as new dalDataContext
Dim bookmarks = From data In dal.getData(userid)
listview1.DataSource = bookmarks
listview1.DataBind()
```

When I test this page in a browser it comes up with the error: 'ListView with id 'listview1' must have a data source that either implements ICollection or can perform data source paging if AllowPaging is true.' How can I implement paging in this scenario? Thanks
Try ``` listview1.DataSource = bookmarks.ToArray() ``` I had the same problem this week.
An answer to the click-twice problem that the OP subsequently encountered - move the Databind to the OnPreRender event handler: ``` protected void Page_PreRender(object sender, EventArgs e) { listview1.DataBind(); } ```
LINQ and paging with a listview
[ "", "asp.net", "sql", "linq", "listview", "paging", "" ]
I am using PHP to pass some information in a text file back to the user, which is then used as input for an app. I'm using the method shown in the following snippet to serve the file to the user.

```
header('Content-type: text/csv');
header('Content-Disposition: attachment; filename=filename.dat');
echo $data;
exit();
```

I get the save as dialogue fine and the data is present; the only problem is that there is a line feed character at the start of the output file which I can't seem to get rid of. It just so happens that the app that uses the data is sensitive to white space, and although it would be easy to fix this, forcing the users to update is not a route I want to go down for various reasons. Does anyone know what is causing this and, more importantly, any workarounds?
As I already said in the comments to the question:

> Either your `$data` contains that line feed or there is something before that snippet that does this. Maybe a line feed before you opened the PHP block.
Probably $data contains the line feed. Look for includes too
How to stop PHP prefixing LF to start of file
[ "", "php", "header", "" ]
I am working on a standalone JavaScript development runtime environment for using JavaScript as a general-purpose scripting language. Currently I support the following libraries: zlib, SQLite, FastCGI, NSPR (Netscape Portable Runtime), ODE (Open Dynamics Engine), libpng, libjpeg, librsvg, freetype, SDL, libiconv, OpenGL, OpenAL, ogg vorbis, libTomCrypt, libffi (Foreign function interface). Now I am looking for other useful libraries to bind to my project.
Graphical: wxWidgets, Qt

Text Manipulation: GNU Aspell

Just google for popular libraries. Believe me, you'll find plenty of them ;)
Here are some libraries I'd like to see (there may be some overlap):

* Cryptography: mcrypt, OpenSSL
* Compression: zlib, zip, tar, bzip
* Database: mysql, postgresql, sqlite, oracle, mssql, couch
* Text: ICU (i18n), unicode, aspell
* Image: libgd, libexif, freetype
* Mail: cclient or anything really
* Math: libbcmath, dcdflib + randlib (stats)
* Binary output: pslib, pdflib, libswf
* Misc: memcached, cURL, ftp, OpenLDAP, libsvn, GeoIP
Which popular libraries for a standalone JavaScript interpreter (jslibs)?
[ "", "javascript", "runtime", "" ]
Do any of you know which would be better: a. get a string `s`, convert it to a `char` array and loop over it, or b. get a string `s`, loop over substrings of it (`s.Substring(i, 1)`)? Any tips much appreciated.
Option **b)**, looping over substrings, is very inefficient. The fastest method would be **c)** loop over the string chars directly, using the read-only indexer property: ``` for (int i = 0; i < s.Length; i++) { char c = s[i]; ... } ``` or, based on the `IEnumerable<char>` interface: ``` foreach(char c in s) { ... } ```
(A) is much more efficient. But why not just do: ``` foreach (char c in s) ``` Some quick benchmarks indicate that this is a tiny bit (<2% difference) less efficient than ``` for(int i = 0; i < s.Length; ++i) char c = s[i]; ``` But my opinion is that the readability of the former makes it superior for anything but the most time-critical code.
looping over char[] or substring(): Efficiency in C#?
[ "", "c#", "performance", "" ]
I am writing an algorithm to generate combinations of items from a database. They need to be unique combinations (i.e. 145, 156 == 156, 145). The problem I am running into is how to keep track of previous combinations so that I do not end up with 145, 156 and 156, 145. Currently I am adding them to an array with an index of id1\_id2... (sorted so ids are always lowest to highest) and setting the value equal to 1 when a combo is generated, so that I can check if $combos[$index] exists or not. If it does not exist, create it. (There are other criteria to weed out EVERY permutation, but they are irrelevant.)

Once these combinations are generated, they are stored in a table in MySQL. The problem I am running into is that with the test items I'm using (about 85) I cannot generate combinations with more than 3 items (id1\_id2\_id3) without running out of memory, as the number of combinations is MASSIVE and the $combos array takes up more than the 64M I am allotted in PHP memory.

Is there a way that I can do this a) without keeping track of previous combos, or b) skipping the $combos array route and only adding a unique row to MySQL, letting MySQL handle the duplicate checking?

Here is some pseudo code for reference:

```
$items = array(/* 85 items */);

foreach ($items as $item1) {
    generate(array($item1));
    foreach ($items as $item2) {
        generate(array($item1, $item2));
    }
}

function generate($items_array) {
    $temp_array = array();
    foreach ($items_array as $item) {
        $temp_array[] = $item['id'];
    }
    sort($temp_array);
    $index = implode("_", $temp_array);

    if (!$combos[$index]) {
        $combos[$index] = 1;
        /* some code to generate query to store to db */
    }
}
```

The query ends up looking like this (the database is truncated at the beginning of the script):

```
INSERT INTO `combos` (combo_id, more_info) VALUES ('id1_id2', 'Item Name');
```

In the process of writing this question, I thought of a possible solution: making sure id3 > id2 > id1. Would this be a viable solution to remove the need for $combos?
The reason I asked about the data structure before is because you could do something like this:

```
$sql = "SELECT id FROM test_a";
$result = mysql_query($sql);
while ($row = mysql_fetch_array($result)) {
    $item1 = $row['id'];

    $sql2 = "SELECT id FROM test_a";
    $result2 = mysql_query($sql2);
    while ($row2 = mysql_fetch_array($result2)) {
        $item2 = $row2['id'];

        $combo1 = $item1 . "_" . $item2;
        $combo2 = $item2 . "_" . $item1;

        $sql3 = "SELECT * FROM combos WHERE combo_id = '$combo1' OR combo_id = '$combo2'";
        $result3 = mysql_query($sql3);
        if (mysql_num_rows($result3) == 0) {
            $sql4 = "INSERT INTO combos (combo_id, more_info) VALUES ('$combo1','Item Name')";
            $result4 = mysql_query($sql4);
        }
    }
}
```

When table test\_a has the values 1, 2, 3, and 4 this script inserts:

1\_1 1\_2 1\_3 1\_4 2\_2 2\_3 2\_4 3\_3 3\_4 4\_4

This shouldn't have any memory problems, although if you have a huge database you may run into an issue with PHP's time limit.
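The "id3 > id2 > id1" idea from the end of the question is enough on its own: if you only ever emit tuples whose ids are strictly increasing, every combination appears exactly once and no `$combos` bookkeeping is needed. Sketched here in Python rather than the asker's PHP — `itertools.combinations` enforces exactly that ordering:

```python
from itertools import combinations

# Stand-ins for the ~85 item ids pulled from the database.
ids = [7, 1, 4, 2]

# Each subset is produced exactly once, with members in sorted order,
# so 145,156 and 156,145 can never both appear.
pairs = [c for c in combinations(sorted(ids), 2)]
triples = [c for c in combinations(sorted(ids), 3)]
```

The same loop structure translates directly to PHP: start the inner `$item2` loop at the position after `$item1` instead of at the beginning.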
Here is the same concept as my other answer but in an all SQL format. ``` INSERT INTO combos (combo_id, more_info) SELECT CONCAT_WS("_",t1.id,t2.id), "item_name" FROM test_a t1, test_a t2 WHERE NOT EXISTS (SELECT * FROM combos WHERE combo_id = CONCAT_WS("_",t1.id,t2.id)) AND NOT EXISTS (SELECT * FROM combos WHERE combo_id = CONCAT_WS("_",t2.id,t1.id)) ``` Assuming you can get item\_name from the db somewhere, this will probably be your fastest and least memory intensive solution. I am running a test on around 1000 ids at the moment. I'll update this when it finishes.
generating unique combinations without running out of memory in php
[ "", "php", "mysql", "algorithm", "" ]
We get into unnecessary coding arguments at my work all the time. Today I asked if conditional AND (&&) or OR (||) had higher precedence. One of my coworkers insisted that they had the same precedence, I had doubts, so I looked it up. According to MSDN, AND (&&) has higher precedence than OR (||). But can you prove it to a skeptical coworker?

<http://msdn.microsoft.com/en-us/library/aa691323(VS.71).aspx>

```
bool result = false || true && false;   // --> false
// is the same result as
bool result = (false || true) && false; // --> false
// even though I know that the first statement is evaluated as
bool result = false || (true && false); // --> false
```

So my question is how do you prove with code that AND (&&) has a higher precedence than OR (||)? If your answer is it doesn't matter, then why is it built that way in the language?
Change the first false to true. I know it seems stupid to have (true || true) but it proves your point.

```
bool result = true || true && false; // --> true
result = (true || true) && false;    // --> false
result = true || (true && false);    // --> true
```
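The same precedence ordering — logical AND binding tighter than logical OR — also holds in Python, so the demonstration above is easy to reproduce outside C#:

```python
# `and` binds tighter than `or`, mirroring && over || in C#.
a = True or True and False    # parsed as: True or (True and False)
b = (True or True) and False  # forcing the other grouping
print(a, b)  # True False
```

If the two operators had equal precedence with left-to-right grouping, `a` and `b` would be equal; they are not.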
If you really want to freak him out try: ``` bool result = True() | False() && False(); Console.WriteLine("-----"); Console.WriteLine(result); static bool True() { Console.WriteLine(true); return true; } static bool False() { Console.WriteLine(false); return false; } ``` This will print: ``` True False False ----- False ``` ### Edit: In response to the comment: In C#, `|` is a logical operator that performs the same boolean logic as `||`, but does not short-circuit. Also in C#, the `|` operator has a higher precedence than both `||` and `&&`. By printing out the values, you can see that if I used the typical `||` operator, only the first `True` would be printed - followed by the result of the expression which would have been `True` also. But because of the higher precedence of `|`, the `true | false` is evaluated first (resulting in `true`) and *then* that result is `&&`ed with `false` to yield `false`. I wasn't trying to show the order of evaluation, just the fact that the right half of the `|` was evaluated period when it normally wouldn't be :)
C# conditional AND (&&) OR (||) precedence
[ "", "c#", "conditional-statements", "conditional-operator", "operator-precedence", "associativity", "" ]
I'm a Java programmer who is working on an iPhone application. I'd like it to use Push Notification services. I originally thought I could use Google App Engine to provide the payloads to Apple, but I see now that it won't be possible because App Engine doesn't support the low-level socket programming that is needed to communicate with Apple.

Are there any alternatives to Google App Engine that let me use Java? In brief, I'm wondering if there is a free hosting platform that supports Java and socket programming.

UPDATE - since writing this question I've written an app that uses Urban Airship to act as a middleman between GAE and APNS. It works just swell. There is an app engine issue on this - <http://code.google.com/p/googleappengine/issues/detail?id=1164> if you'd like to follow, but personally I've decided that this isn't really something that GAE should be doing. Just use Urban Airship.
[Urban Airship](http://urbanairship.com/push/) provide RESTful web services for sending iPhone push notifications. [This cookbook example](http://appengine-cookbook.appspot.com/recipe/googleappengine-python-with-urbanairship-to-send-out-iphone-os30-push-notifications/?id=ahJhcHBlbmdpbmUtY29va2Jvb2tyvAELEgtSZWNpcGVJbmRleCJPYWhKaGNIQmxibWRwYm1VdFkyOXZhMkp2YjJ0eUpRc1NDRU5oZEdWbmIzSjVJaGROWVhOb2RYQnpJSGRwZEdnZ1FYQndJRVZ1WjJsdVpRdwwLEgZSZWNpcGUiUGFoSmhjSEJsYm1kcGJtVXRZMjl2YTJKdmIydHlKUXNTQ0VOaGRHVm5iM0o1SWhkTllYTm9kWEJ6SUhkcGRHZ2dRWEJ3SUVWdVoybHVaUXc2DA) shows how to use it from GAE using Python; I assume this can be done in Java also.
Like Justin said (thanks random evangelist), [AppNotify](http://appnotify.com) is launching nearer to the end of this month. We're just finishing up a few admin screens and doing some final tests. The pricing will be better structured than Urban Airship, and with a much better interface. If you want something in particular or more info, send me an email personally at adam.m@selectstartstudios.com We're using it to develop our own products, but if we've missed a corner case I'd like to know about it. Good luck
What's a Java alternative to Google App Engine for developing iPhone Push Notification services?
[ "", "java", "iphone", "google-app-engine", "hosting", "push-notification", "" ]
I have a datagridview in C# that fills with customers based upon search criteria when a person clicks "search". The problem I'm having is that when the person clicks on the search button more than once to search for customers, or if the user wants to do a different search, all of the details from the last search are still in the datagridview. The major problem right now is not that the columns are not being removed or extras are being added, but that the column index of the button within the datagridview becomes "0" after searching more than once.

Now that I have figured out the problem, I think I'll keep the full details up here in case anyone would like to look at the details in future. The problem was, it seems, that when I removed the `custDataGridView.DataSource = null;` part, everything seemed to work.

As there seemed to be some ambiguity, here is the code in full:

```
DataGridViewButtonColumn custAddJobSelect = new DataGridViewButtonColumn();
DataTable dt = new DataTable();

// ---> This was the culprit --->
custDataGridView.DataSource = null;

dt.Rows.Clear();

#region SQL connection String
...//CustQuery is the SQL stuff, conn is connection to DB.
#endregion

da = new System.Data.SqlClient.SqlDataAdapter(CustQuery, conn);
da.Fill(dt);
conn.Close();

custDataGridView.DataSource = dt;

if (AddJobGone == false)
{
    AddJobGone = true;
    custAddJobSelect.DisplayIndex = 0;
    custDataGridView.Columns.Add(custAddJobSelect);
}

private void custDataGridView_CellContentClick(object sender, DataGridViewCellEventArgs e)
{
    if (e.ColumnIndex == this.custDataGridView.Columns.Count - 1)
    {
        string addJobsCustNo = custDataGridView.Rows[e.RowIndex].Cells[0].FormattedValue.ToString();
        txtAddJobsCustNo.Text = addJobsCustNo;
        pnlAddJobSearchCustomer.Visible = false;
    }
}
```

Thanks for your patience, hope this helps someone else!
Question now answered, seemed to be a problem with:

```
custDataGridView.DataSource = null;
```

Take a look at the question for the full code; if you can make any other suggestions please do :)
Setting the DataSource to null should do the trick, make sure that your particular set of code is being hit when your try a new search. For instance, plug it into its own procedure and call the procedure at the beginning of the search to ensure that the object is cleared. Truthfully, unless you have a reason for not clearing the DataSource each time your search button is pressed, then there is probably no point in having conditional logic surrounding that section of code.
Empty a DataGridView
[ "", "c#", "winforms", "datagridview", "" ]
Does the C++ Standard say I should be able to compare two default-constructed STL iterators for equality? Are default-constructed iterators equality-comparable? I want the following, using std::list for example: ``` void foo(const std::list<int>::iterator iter) { if (iter == std::list<int>::iterator()) { // Something } } std::list<int>::iterator i; foo(i); ``` What I want here is something like a NULL value for iterators, but I'm not sure if it's legal. In the STL implementation included with Visual Studio 2008, they include assertions in std::list's operator==() that preclude this usage. (They check that each iterator is "owned" by the same container and default-constructed iterators have no container.) This would hint that it's not legal, or perhaps that they're being over-zealous.
OK, I'll take a stab. The C++ Standard, Section 24.1/5:

> Iterators can also have singular values that are not associated with any container. [Example: After the declaration of an uninitialized pointer x (as with int\* x;), x must always be assumed to have a singular value of a pointer. ] Results of most expressions are undefined for singular values; the only exception is an assignment of a non-singular value to an iterator that holds a singular value.

So, no, they can't be compared.
This is going to change in C++14. [forward.iterators] 24.2.5p2 of N3936 says > However, value-initialized iterators may be compared and shall compare > equal to other value-initialized iterators of the same type.
Comparing default-constructed iterators with operator==
[ "", "c++", "visual-studio-2008", "stl", "iterator", "" ]
Can I set up JPA/Hibernate to persist `Boolean` types as `Y/N` in the database? The column is defined as `varchar2(1)`. It currently stores them as `0/1`. The database is Oracle.
The only way I've figured out how to do this is to have two properties for my class. One as the boolean for the programming API which is not included in the mapping. Its getter and setter reference a private char variable which is Y/N. I then have another protected property which is included in the hibernate mapping, and its getters and setters reference the private char variable directly.

EDIT: As has been pointed out, there are other solutions that are directly built into Hibernate. I'm leaving this answer because it can work in situations where you're working with a legacy field that doesn't play nice with the built-in options. On top of that there are no serious negative consequences to this approach.
Hibernate has a built-in "yes\_no" type that would do what you want. It maps to a CHAR(1) column in the database. Basic mapping: `<property name="some_flag" type="yes_no"/>` Annotation mapping (Hibernate extensions): ``` @Type(type="yes_no") public boolean getFlag(); ```
Configure hibernate (using JPA) to store Y/N for type Boolean instead of 0/1
[ "", "java", "hibernate", "jpa", "" ]
Ok, I'm stuck, need some help from here on... If I've got a main list of dictionaries like this:

```
data = [
    {"key1": "value1", "key2": "value2", "key3": "value3"},
    {"key1": "value4", "key2": "value5", "key3": "value6"},
    {"key1": "value1", "key2": "value8", "key3": "value9"}
]
```

Now, I need to go through that list already to format some of the data, ie:

```
for datadict in data:
    for key, value in datadict.items():
        ...filter the data...
```

Now, how would I in that same loop somehow (if possible... if not, suggest alternatives please) check for values of certain keys, and if those values match my presets, then I would add that whole dictionary to another list, thus effectively creating smaller lists as I go along out of this main list based on certain keys and values? So, let's say I want to create a sub-list with all the dictionaries in which key1 has the value "value1", which for the above data would give me something like this:

```
subdata = [
    {"key1": "value1", "key2": "value2", "key3": "value3"},
    {"key1": "value1", "key2": "value8", "key3": "value9"}
]
```
Here is a not so pretty way of doing it. The result is a generator, but if you really want a list you can surround it with a call to `list()`. Mostly it doesn't matter. The predicate is a function which decides for each key/value pair if a dictionary in the list is going to cut it. The default one accepts all. If no k/v-pair in the dictionary matches it is rejected.

```
def filter_data(data, predicate=lambda k, v: True):
    for d in data:
        for k, v in d.items():
            if predicate(k, v):
                yield d

test_data = [{"key1": "value1", "key2": "value2"},
             {"key1": "blabla"},
             {"key1": "value1", "eh": "uh"}]

list(filter_data(test_data, lambda k, v: k == "key1" and v == "value1"))
# [{'key2': 'value2', 'key1': 'value1'}, {'key1': 'value1', 'eh': 'uh'}]
```
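If the filter is just "this key must equal this value", a plain list comprehension over the list of dicts avoids the generator machinery entirely (a sketch to complement the answer above, not code from it):

```python
test_data = [
    {"key1": "value1", "key2": "value2"},
    {"key1": "blabla"},
    {"key1": "value1", "eh": "uh"},
]

# Keep only the dicts whose "key1" entry equals "value1".
# dict.get avoids a KeyError for dicts missing the key.
subdata = [d for d in test_data if d.get("key1") == "value1"]
```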
It's an old question, but for some reason there is no one-liner syntax answer:

```
{ k: v for k, v in <SOURCE_DICTIONARY>.iteritems() if <CONDITION> }
```

For example:

```
src_dict = { 1: 'a', 2: 'b', 3: 'c', 4: 'd' }
predicate = lambda k, v: k % 2 == 0

filtered_dict = { k: v for k, v in src_dict.iteritems() if predicate(k, v) }

print "Source dictionary:", src_dict
print "Filtered dictionary:", filtered_dict
```

Will produce the following output:

```
Source dictionary: {1: 'a', 2: 'b', 3: 'c', 4: 'd'}
Filtered dictionary: {2: 'b', 4: 'd'}
```
Filtering dictionaries and creating sub-dictionaries based on keys/values in Python?
[ "", "python", "list", "dictionary", "filter", "" ]
I would like to build an Appender (or something similar) that inspects Events and on certain conditions creates and logs new Events. An example would be an Escalating Appender that checks if a certain amount of identical Events get logged, and if so logs the Event with a higher loglevel. So you could define something like: if you get more than 10 identical Warnings on this logger, make it an Error.

So my questions are:

1. Does something like this already exist?
2. Is an Appender the right class to implement this behavior?
3. Are there any traps you can think of that I should look out for?

Clarification: I am fine with the algorithm of gathering and analysing the events. I'll do that with a collection inside the appender. Persistence is not necessary for my purpose. My question #2 is: is an appender the right place to do this? After all, it is not normal behaviour for an appender to create logging entries.
Logback (log4j's successor) will allow you to enable logging for any event via [TurboFilters](http://logback.qos.ch/manual/filters.html#TurboFilter). For example, assuming the same event occurs N or more times in a given timeframe, you could force the event to be accepted (regardless of its level). See also [DuplicateMessageFilter](http://logback.qos.ch/manual/filters.html#DuplicateMessageFilter) which does the inverse (denying re-occurring events). However, even logback will not allow the level of the logging event to be incremented. Log4j will not either. Neither framework is designed for this and I would discourage you from attempting to increment the level *on the fly* and within the same thread. On the other hand, incrementing the level during post processing is a different matter altogether. Signaling another thread to generate a *new* logging event with a higher level is an additional possibility. (Have your turbo-filter signal another thread to generate a *new* logging event with a higher level.) It was not clear from your question why you wished the level to be incremented. Was the increment of the level a reason in itself or was it a means to a goal, that is having the event logged regardless of its level. If the latter, then logback's TurboFilters are the way to go. HTH,
You can create your own appender by implementing the `Appender` interface provided by log4j. <http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Appender.html> That would be one approach. Another would be to use an existing appender and then write some code that monitors the log. For example, you could log to the database and then write a process that monitors the log entries in the database and creates meta-events based on what it sees. It depends most on what you're comfortable with. One question you'll have to deal with is how to look back in the log to create your meta-events. Either you'll have to accumulate events in your appender or persist them somewhere that you can query to construct your meta-events. The problem with accumulating them is that if you stop and start your process, you'll either have to dump them somewhere so they get picked back up or start over whenever the process restarts. For example, let's say that I want to create a log entry every 10th time a `NullPointerException` is thrown. If I have the log entries in a database of some kind, every time an NPE is thrown I run a query to see how many NPEs have been thrown since the last time I created a log entry for them. If I just count them in memory every time one is thrown, if I restart the application after 5 are thrown, if I don't persist that number I'll lose count.
Log4J rerouting of Log Events
[ "", "java", "logging", "log4j", "appender", "" ]
The two tables below can both hold the same data - a full year, including some arbitrary info about each month

```
table1 (one row = one month)
------
id
month
year
info
```

```
table2 (one row = one year)
------
id
year
jan_info
feb_info
mar_info
apr_info
may_info
jun_info
jul_info
aug_info
sep_info
oct_info
nov_info
dec_info
```

table1

* Seems more intuitive because the month is numeric
* 10x more rows for a full year of data
* Rows are smaller (fewer columns)

table2

* 10x fewer rows for a full year of data
* Single rows are much larger
* Possibly more difficult to add more arbitrary info for a month

In a real world test scenario I set up, there were 12,000 rows in table1 for 10 years of data, whereas table2 had 150. I realize less is better, generally speaking, but ALWAYS? I'm afraid that I'm overlooking some caveat that I'll find later if I commit to one way. I haven't even considered disk usage or which query might be faster. What does MySQL prefer? Is there a "correct" way? Or is there a "better" way? Thanks for your input!
Don't think about how to store it, think about how you use it. And also think about how it might change in the future. The storage structure should reflect use. The first option is more normalized than the second, so I would tend to prefer it. It has the benefit of being easy to change, for example if every month suddenly needed a second piece of information stored about it. Usually this kind of structure is easier to populate, but not always. Think about where the data is coming from. If you're only using this data for reports and you don't need to aggregate data across months, use the second option. It really depends on what the data is for and where it comes from. Generally, though, the first option is better.
12,000 rows for 10 years of data? I'd say that scales pretty well, since 12,000 rows is next to nothing for a decent DBMS. How are you using the database? Are you sure you really need to worry about optimizations? If you need to store data that is specific to a month then you should absolutely store a row for each month. It's a much cleaner approach compared to the one with a column for each month.
Same data, two different ways to store it
[ "", "sql", "mysql", "database-design", "" ]
Currently when the user logs in, I create 2 session variables.

```
$_SESSION['logged_in'] = 1;
$_SESSION['username'] = $username; // user's name
```

So on pages which require login, I just do this:

```
if(isset($_SESSION['logged_in'])){
    // Do whatever I want
}
```

Are there any security loopholes? I mean, is it easy to hack my session? How do people hack sessions? And how do I prevent it?

EDIT: Just found this: <http://www.xrvel.com/post/353/programming/make-a-secure-session-login-script> <http://net.tutsplus.com/tutorials/php/secure-your-forms-with-form-keys/> Just found the links, are those methods good enough? Please give your opinions. I still have not got the best answer yet.
# Terminology * **User:** A visitor. * **Client:** A particular web-capable software installed on a particular machine. --- # Understanding Sessions In order to understand how to make your session secure, you must first understand how sessions work. Let's see this piece of code: ``` session_start(); ``` As soon as you call that, PHP will look for a cookie called `PHPSESSID` (by default). If it is not found, it will create one: ``` PHPSESSID=h8p6eoh3djplmnum2f696e4vq3 ``` If it is found, it takes the value of `PHPSESSID` and then loads the corresponding session. That value is called a `session_id`. That is the only thing the client will know. Whatever you add into the session variable stays on the server, and is never transfered to the client. That variable doesn't change if you change the content of `$_SESSION`. It always stays the same until you destroy it or it times out. Therefore, it is useless to try to obfuscate the contents of `$_SESSION` by hashing it or by other means as the client never receives or sends that information. Then, in the case of a new session, you will set the variables: ``` $_SESSION['user'] = 'someuser'; ``` The client will never see that information. --- # The Problem A security issue may arise when a malicious user steals the `session_id` of an other user. Without some kind of check, he will then be free to impersonate that user. We need to find a way to uniquely identify the client (not the user). One strategy (the most effective) involves checking if the IP of the client who started the session is the same as the IP of the person using the session. ``` if(logging_in()) { $_SESSION['user'] = 'someuser'; $_SESSION['ip'] = $_SERVER['REMOTE_ADDR']; } // The Check on subsequent load if($_SESSION['ip'] != $_SERVER['REMOTE_ADDR']) { die('Session MAY have been hijacked'); } ``` The problem with that strategy is that if a client uses a load-balancer, or (on long duration session) the user has a dynamic IP, it will trigger a false alert. 
Another strategy involves checking the user-agent of the client:

```
if(logging_in()) {
    $_SESSION['user'] = 'someuser';
    $_SESSION['agent'] = $_SERVER['HTTP_USER_AGENT'];
}

// The Check on subsequent load
if($_SESSION['agent'] != $_SERVER['HTTP_USER_AGENT']) {
    die('Session MAY have been hijacked');
}
```

The downside of that strategy is that if the client upgrades its browser or installs an addon (some add to the user-agent string), the user-agent string will change and it will trigger a false alert.

Another strategy is to rotate the `session_id` every 5 requests. That way, the `session_id` theoretically doesn't stay around long enough to be hijacked.

```
if(logging_in()) {
    $_SESSION['user'] = 'someuser';
    $_SESSION['count'] = 5;
}

// The Check on subsequent load
if(($_SESSION['count'] -= 1) == 0) {
    session_regenerate_id();
    $_SESSION['count'] = 5;
}
```

You may combine each of these strategies as you wish, but you will also combine the downsides. Unfortunately, no solution is fool-proof. If your `session_id` is compromised, you are pretty much done for. The above strategies are just stop-gap measures.
This is ridiculous. Session hijacking occurs when (usually through a cross site scripting attack) someone intercepts your session ID (which is a cookie automatically sent to the web server by a browser). Someone has posted this for example:

> So when the user logs in:
>
> ```
> // not the most secure hash!
> $_SESSION['checksum'] = md5($_SESSION['username'].$salt);
> ```
>
> And before entering a sensitive area:
>
> ```
> if (md5($_SESSION['username'].$salt) != $_SESSION['checksum']) {
>     handleSessionError();
> }
> ```

Let's go through what is wrong with this:

1. Salts - not wrong, but pointless. No one is cracking your damn md5; who cares if it is salted?
2. Comparing the md5 of a SESSION variable with the md5 of the same variable stored in the SESSION - you're comparing session to session. If the session is hijacked this will do nothing.

> ```
> $_SESSION['logged_in'] = 1;
> $_SESSION['username'] = $username; // user's name
> $_SESSION['hash'] = md5($YOUR_SALT.$username.$_SERVER['HTTP_USER_AGENT']); // user's name hashed to avoid manipulation
> ```

Avoid manipulation by whom? Magical session faeries? Your session variables will not be modified unless your server is compromised. The hash is only really there to nicely condense your string into a 32 character string (user agents can get a bit long). At least now we're checking some client data instead of checking SESSION data against SESSION data: they've checked the HTTP_USER_AGENT (which is a string identifying the browser). This will probably be more than enough to protect you, but you have to realise that if the person has already taken your session ID in some way, chances are you've also sent a request to the bad guy's server and given him your user agent, so a smart hacker would spoof your user agent and defeat this protection. Which is where you get to the sad truth: as soon as your session ID is compromised, you're gone.
You can check the remote address of the request and make sure that it stays the same in all requests (as I did), and that'll work perfectly for 99% of your client base. Then one day you'll get a call from a user who uses a network with load balanced proxy servers; requests will be coming out from there through a group of different IPs (sometimes even on the wrong network), and he'll be losing his session left, right and centre.
What do I need to store in the php session when the user is logged in?
[ "", "php", "security", "session", "" ]
I'm building a GridView control which encapsulates a child gridview. The child gridview holds a div tag which is displayed when the user selects a row in the parent gridview. However, even though the content (the div tag) is hidden, an extra column is added - how do I get rid of the extra column? In the tutorial it states that by adding a `</td></tr>` and starting a new row `<tr>` this shouldn't happen, but it does (I also noticed that the author turned off gridlines, so my assumption is that he in fact has this problem also). Here is the gridview; oh, and I set the visible state of the `itemtemplate` to `'true'`, but then the javascript could not find it. ``` <asp:GridView ID="GridView1" runat="server" AllowPaging="True" AllowSorting="True" AutoGenerateColumns="False" DataKeyNames="PublicationID" DataSourceID="ObjectDataSource1" Width="467px" OnRowDataBound="GridView1_RowDataBound" Font-Names="Verdana" Font-Size="Small"> <Columns> <asp:TemplateField> <ItemTemplate> <asp:CheckBox ID="PublicationSelector" runat="server" /> </ItemTemplate> </asp:TemplateField> <asp:BoundField DataField="NameAbbrev" HeaderText="Publication Name" SortExpression="NameAbbrev" /> <asp:BoundField DataField="City" HeaderText="City" SortExpression="City" /> <asp:BoundField DataField="State" HeaderText="State" SortExpression="State" /> <asp:TemplateField HeaderText="Owners"> <ItemTemplate> <asp:Label ID="Owners" runat="server"></asp:Label> </ItemTemplate> <ItemStyle HorizontalAlign="Center" /> </asp:TemplateField> <asp:BoundField DataField="Type" HeaderText="Type" SortExpression="Type" /> <asp:TemplateField> <ItemTemplate> </td></tr> <tr> <td colspan="7"> <div id="<%# Eval("PublicationID") %>" style="display: none; position: relative"> <asp:GridView ID="GridView2_ABPubs" runat="server" AutoGenerateColumns="false" Width="100%" Font-Names="Verdana" Font-Size="small"> <Columns> <asp:BoundField DataField="NameAbbrev" HeaderText="Publication Name" SortExpression="NameAbbrev" /> </Columns>
</asp:GridView> </div> </td> </tr> </ItemTemplate> </asp:TemplateField> </Columns> </asp:GridView> ``` Apart from the extra column in the master gridview it works fine. Just for completeness, here is a link to the [original article](http://www.aspboy.com/Categories/GridArticles/Hierarchical_GridView_With_Clickable_Rows.aspx) (for some reason it didn't like my `<a href>` tag so it's copy and paste).
To get rid of the extra column, just set its css style to `display: none`. You can do this by applying a CssClass to the `TemplateField` containing the nested grid: ``` <asp:TemplateField HeaderStyle-CssClass="hidden-column" ItemStyle-CssClass="hidden-column" FooterStyle-CssClass="hidden-column"> ``` Here is the definition of the CssClass I used: ``` <style type="text/css"> .hidden-column { display: none; } </style> ``` Note: the markup will still be in the html but at least it won't be visible. Tested under IE 8.0, Google Chrome 2.0 and Opera 10.0 Update: To eliminate the double border, just put the id and the style on the `<tr>` instead of the `<div>`: ``` <tr id="<%# Eval("PublicationID") %>" style="display: none; position: relative"> <td colspan="7"> <div> ... ``` ... and change the display in the javascript from `block` to `table-row`: ``` div.style.display = "table-row"; // not a div anymore!! ```
Looks like you have unbalanced tags in your `<ItemTemplate>`: ``` <ItemTemplate > </td></tr> <<---- These look unbalanced <tr> <td colspan="7"> <div id="<%# Eval("PublicationID") %>" style="display: none; position: relative"> <asp:GridView ID="GridView2_ABPubs" runat="server" AutoGenerateColumns="false" Width="100%" Font-Names="Verdana" Font-Size="small"> <Columns> <asp:BoundField DataField="NameAbbrev" HeaderText="Publication Name" SortExpression="NameAbbrev" /> </Columns> </asp:GridView> </div> </td> </tr> </ItemTemplate> ```
GridView, Child GridView, extra column won't disappear?
[ "", "c#", ".net", "gridview", "" ]
I have a CSS file that is embedded in my assembly. I need to set a background image for certain elements using this CSS file, and the image needs to be an embedded resource also. Is this possible? Is there any way I can reliably do this? I ran into the problem when putting an existing stylesheet into this dll then realized images weren't showing up. I don't know of any way to make it work though because I would need to know the URL to the embedded image. Has anyone done anything like this?
``` <%= WebResource("image1.jpg") %> ``` You can use the above statement inside your CSS file, and when you register your CSS with the WebResourceAttribute, you can set "PerformSubstitution" to true:

```
Default.css

body {
    background: url('<%= WebResource("xyz.jpg") %>');
}
```

```
[assembly: WebResource("Default.css", "text/css", PerformSubstitution = true)]
[assembly: WebResource("xyz.jpg", "image/jpg")]
```
Just follow these steps to reference a web resource as a background image in CSS: 1. Reference the image URL as `background: url('<%= WebResource("xyz.jpg") %>');` in the following manner:

```
Default.css

body {
    background: url('<%= WebResource("xyz.jpg") %>');
}
```

2. In the AssemblyInfo.cs file, register the CSS file with the `PerformSubstitution=true` attribute:

```
[assembly: WebResource("Default.css", "text/css", PerformSubstitution = true)]
```

3. Again in the AssemblyInfo.cs file, register the image file:

```
[assembly: WebResource("xyz.jpg", "image/jpg")]
```

4. Right-click the image file (xyz.jpg) and the CSS file (Default.css), click Properties, and set the "Build Action" property to "Embedded Resource". And it's done. Happy Coding !!!
How to reference embedded images from CSS?
[ "", "c#", ".net", "css", "embedded-resource", "" ]
I have a `TextBlock` in WPF. I write many lines to it, far exceeding its vertical height. I expected a vertical scroll bar to appear automatically when that happens, but it didn't. I tried to look for a scroll bar property in the Properties pane, but could not find one. How can I make vertical scroll bar created automatically for my `TextBlock` once its contents exceed its height? Clarification: I would rather do it from the designer and not by directly writing to the XAML.
Wrap it in a scroll viewer: ``` <ScrollViewer> <TextBlock /> </ScrollViewer> ``` **NOTE** this answer applies to a `TextBlock` (a read-only text element) as asked for in the original question. If you want to show scroll bars in a `TextBox` (an editable text element) then use the `ScrollViewer` attached properties: ``` <TextBox ScrollViewer.HorizontalScrollBarVisibility="Disabled" ScrollViewer.VerticalScrollBarVisibility="Auto" /> ``` Valid values for these two properties are `Disabled`, `Auto`, `Hidden` and `Visible`.
You can use the following now:

```
<TextBox Name="myTextBox"
         ScrollViewer.HorizontalScrollBarVisibility="Auto"
         ScrollViewer.VerticalScrollBarVisibility="Auto"
         ScrollViewer.CanContentScroll="True">SOME TEXT
</TextBox>
```
Automatic vertical scroll bar in WPF TextBlock?
[ "", "c#", ".net", "wpf", "scrollbar", "textblock", "" ]
My question as title above. For example ``` IEnumerable<T> items = new T[]{new T("msg")}; items.ToList().Add(new T("msg2")); ``` but after all it only has 1 item inside. Can we have a method like `items.Add(item)` like the `List<T>`?
You cannot, because `IEnumerable<T>` does not necessarily represent a collection to which items can be added. In fact, it does not necessarily represent a collection at all! For example: ``` IEnumerable<string> ReadLines() { string s; do { s = Console.ReadLine(); yield return s; } while (!string.IsNullOrEmpty(s)); } IEnumerable<string> lines = ReadLines(); lines.Add("foo") // so what is this supposed to do?? ``` What you can do, however, is create a *new* `IEnumerable` object (of unspecified type), which, when enumerated, will provide all items of the old one, plus some of your own. You use `Enumerable.Concat` for that: ``` items = items.Concat(new[] { "foo" }); ``` This *will not change the array object* (you cannot insert items into to arrays, anyway). But it will create a new object that will list all items in the array, and then "Foo". Furthermore, that new object will *keep track of changes in the array* (i.e. whenever you enumerate it, you'll see the current values of items).
The type `IEnumerable<T>` does not support such operations. The purpose of the `IEnumerable<T>` interface is to allow a consumer to view the contents of a collection. Not to modify the values. When you do operations like .ToList().Add() you are creating a new `List<T>` and adding a value to that list. It has no connection to the original list. What you can do is use the Add extension method to create a new `IEnumerable<T>` with the added value. ``` items = items.Add("msg2"); ``` Even in this case it won't modify the original `IEnumerable<T>` object. This can be verified by holding a reference to it. For example ``` var items = new string[]{"foo"}; var temp = items; items = items.Add("bar"); ``` After this set of operations the variable temp will still only reference an enumerable with a single element "foo" in the set of values while items will reference a different enumerable with values "foo" and "bar". **EDIT** I constantly forget that Add is not a typical extension method on `IEnumerable<T>` because it's one of the first ones that I end up defining. Here it is: ``` public static IEnumerable<T> Add<T>(this IEnumerable<T> e, T value) { foreach ( var cur in e) { yield return cur; } yield return value; } ```
How can I add an item to an IEnumerable<T> collection?
[ "", "c#", "list", "ienumerable", "" ]
Mobile safari supports an attribute on input elements called [`autocapitalize`](https://developer.apple.com/documentation/webkitjs/htmlelement/2871133-autocapitalize) [[documented here](https://developer.apple.com/library/archive/documentation/AppleApplications/Reference/SafariWebContent/DesigningForms/DesigningForms.html)], which when set to 'off' will stop the iPhone capitalizing the text input into that field, which is useful for url or email fields. ``` <input type="text" class="email" autocapitalize="off" /> ``` But this attribute is not valid in html 5 (or another spec as far as I know) so including it in the html will produce an invalid html page, what I would like to do is be able to add this attribute to particular fields onload with javascript with something like this: ``` $(document).ready(function(){ jQuery('input.email, input.url').attr('autocapitalize', 'off'); }); ``` which adds the correct attribute in firefox and desktop safari, but doesn't seem to do anything in mobile safari, why?
This should be fixed in iPhone OS 3.0. What version of iPhone OS are you trying this on? ``` Email: <input id="email" type="text"><br> URL: <input id="url" type="text"><br> <script> //document.getElementById("email").autocapitalize = 'off'; //document.getElementById("url").autocapitalize = 'on'; document.getElementById("email").setAttribute('autocapitalize', 'off'); document.getElementById("url").setAttribute('autocapitalize', 'on'); alert(document.body.innerHTML); </script> ```
Side note. You can improve the user experience on iOS even more by specifying the type of the input to be "email" to automatically bring up the "email" keyboard (slightly better characters for typing an email). ``` <input type="email" class="email" autocapitalize="off" /> ``` [Here is some documentation](https://developer.apple.com/library/archive/documentation/StringsTextFonts/Conceptual/TextAndWebiPhoneOS/KeyboardManagement/KeyboardManagement.html) on how input types can control the iOS keyboard.
Can autocapitalize be turned off with javascript in mobile safari?
[ "", "javascript", "html", "iphone", "mobile-safari", "" ]
What is the command line format to display function call graph for a method in templated class with gprof? For simple C method you would specify it like: ``` gprof -f foo myprogram > gprof.output ``` How do you specify method `parse` from the following: ``` template <typename T> class A { public: template <typename X> bool parse(X& x, char*buf) { ... lots of code here ...; } }; ```
I was after the actual format to be used on the command line. I can see the compiled symbols by looking at the generated files, but I'm not sure what format to use on the command line. Thanks anyway for all the answers.
Here is python script that can parse this: [gprof2dot](http://code.google.com/p/jrfonseca/wiki/Gprof2Dot). The page has further references too. Personally, I like the [Google Performance Tools](http://code.google.com/p/google-perftools/wiki/GooglePerformanceTools) which can, among other things, also directly generate call graphs (via graphviz / dot).
Format of parameter to display call graph for templated method with gprof?
[ "", "c++", "linux", "gnu", "gprof", "" ]
on this website: <http://www.eco-environments.co.uk/> if you scroll down to "What we do" and rollover the links you get a bubble popup display, can anyone tell me how this is created please? Thanks
You can use a jQuery plugin like [jquery tooltips](http://bassistance.de/jquery-plugins/jquery-plugin-tooltip/) for this kind of effect. Check out the [demos](http://jquery.bassistance.de/tooltip/demo/).
This is actually an effect that is pretty simple to create. First, there is a hidden div in the HTML for the Tooltip. ``` <div class="popup" style="display:none;">Hidden Content Here</div> ``` Then javascript can be used to show the hidden content when something is hovered over. For instance, here is how it would work using jQuery (and using the example page as an example). ``` $(".tooltip").hover(function(){ $(this).children(".popup").fadeIn(); // Other effects can be used to show the Tooltip }, function() { $(this).children(".popup").fadeOut(); }); ``` The rest is a matter of using CSS to make sure that the Tooltip is positioned correctly and making it look nice (in most cases absolute positioning would be used). There are a lot of different techniques to making nice CSS for Tooltips, so as always [Google is your friend](http://www.google.com/search?hl=en&hs=4BR&q=tooltip+css&btnG=Search).
CSS / Jquery Effect - What is this?
[ "", "javascript", "jquery", "css", "" ]
Is there any way to hide a C# program from the Windows Task Manager? EDIT: Thanks for the overwhelming response! Well I didn't intend to do something spooky. Just wanted to win a bet with my friend that I can do it without him noticing. And I'm not a geek myself to be able to write a rootkit, as someone suggested though I'd love to know how to do it.
Not that I'm aware of - and there shouldn't be. The point of the task manager is to allow users to examine processes etc. If the user should be able to do that, they should be able to find your program. If they shouldn't be poking around in Task Manager, group policy should prevent that - not your program.
Don't mean to zombie this, but I thought I could contribute some useful information. If you want to hide an application there are two methods (that I can think of atm). They both have their ups and downs. [1] **SSDT table hooking** - basically you have to set the MDL of the table to writeable, overwrite the address of `NtQuerySystemInformation` (iirc) with the address of your function, and have it call the original function after filtering the results. This method doesn't suit your needs very well because the hooking function would always need to be in memory and would involve writing a kernel mode driver. It's a fun thing to do, but debugging is a pain because an exception means a BSOD. [2] **Direct Kernel Object Manipulation (DKOM)** - the list of processes is a doubly linked list; with a kernel mode driver you can alter the pointers of the records above and below your process to point around yours. This still requires the use of a kernel mode driver, but there are rootkits such as FU that can be easily downloaded that contain an exe and the service. The exe could be called from inside your application as a child process (in the released version of FU, at least the one I found, there was a bug which I had to fix where if the hidden application exited the computer would BSOD; it was a trivial fix). This will thankfully be caught by almost any decent antivirus, so if you are trying to do something sneaky you'll have to learn to get around that (hint: they use a binary signature). I have never used method 1, but method 2 has worked for me from a VB.Net application. A third possible option is to just create the application as a Windows service. This will show up in task manager by default, but I'm willing to bet that there is a way to tell it not to show up there, since there are plenty of other services which don't show up in task manager. Hope I helped a little. My advice: if you are interested in this kind of stuff, learn C++.
Hide a C# program from the task manager?
[ "", "c#", "" ]
I'm looking for a clean way to handle forms with PHP that does validation and caching of data, but doesn't require mixing PHP with the form HTML. Basically, I want to be able to write a form in pure HTML and intercept that form on its way to its destination, process it, and either let the data go happily on its way, or return to the form, sending validation errors and the cached results back with it. This is all part of a little framework I'm developing as an aid to learning. My initial plan was to use the Routing object to sniff out GET and POST data, but that data doesn't always indicate the presence of a form. I've been looking through [this list of HTTP Headers](http://en.wikipedia.org/wiki/List_of_HTTP_headers) but I'm not seeing anything that could help. Although I've never bothered looking at HTTP headers before, so I could easily be missing something. Or is there some other reliable way of detecting form submissions?
There is no generic way to distinguish between form submissions and simple links when it comes to GET requests. With plain HTML, only forms can generate POST requests, so filtering on REQUEST\_METHOD might be a viable solution. (However, this obviously won't catch GET requests, and AJAX can also generate POST requests, so...) If you control the form, I'd suggest adding something like ``` <input type="hidden" name="this_is_a_form" /> ``` to every form. (For bonus points, you can intercept the form HTML and add the above line on the fly.)
You pretty much have two options: 1. Use ajax. 2. Submit to a middleman.php and redirect to step 2 of the process, or back to the form. I would recommend ajax to avoid unnecessary page loads. You could do something like: (assuming use of jQuery here because it is what I know) ``` $.post("middleman.php", {option1: options}, function(data){ if (data == 1) window.location = 'step2.html'; else $("#notificationArea").text("sorry, form incorrect")}, "html"); ``` Then your middleman.php would echo either 1 for a correctly submitted form, or anything else for incorrect. Also, this javascript notifies the user of incorrectness. You could also echo specific error messages for specific parts of the form that are incorrect. (This also assumes you have a div "notificationArea" that can hold the notification.)
Detecting whether a form has been submitted with PHP
[ "", "php", "validation", "forms", "" ]
I want to find the column name of the cell in the below event of a datagridview. ``` protected void grvDetailedStatus_ItemDataBound(object sender, DataGridItemEventArgs e) { for (int i = 0; i <= e.Item.Cells.Count - 1; i++) { System.DateTime cellDate = default(System.DateTime); if (System.DateTime.TryParse(e.Item.Cells[i].Text, out cellDate)) { e.Item.Cells[i].Text = string.Format("{0:d}", cellDate); } } } ``` Is there any way to find the column name of the cell I am manipulating? EDIT: Sorry for not giving a clear explanation. Let me explain it more clearly. I want to do the below formatting only for particular column values: ``` protected void grvDetailedStatus_ItemDataBound(object sender, DataGridItemEventArgs e) { for (int i = 0; i <= e.Item.Cells.Count - 1; i++) { System.DateTime cellDate = default(System.DateTime); if (System.DateTime.TryParse(e.Item.Cells[i].Text, out cellDate)) { e.Item.Cells[i].Text = string.Format("{0:d}", cellDate); } } } ``` Say, for example, I have to change the format of the date only for the "column1" and "column5". So now I want to know the column name and with that I want to format that column alone and leave the rest. ``` protected void grvDetailedStatus_ItemDataBound(object sender, DataGridItemEventArgs e) { if ( columnName == "Column1") { for (int i = 0; i <= e.Item.Cells.Count - 1; i++) { System.DateTime cellDate = default(System.DateTime); if (System.DateTime.TryParse(e.Item.Cells[i].Text, out cellDate)) { e.Item.Cells[i].Text = string.Format("{0:d}", cellDate); } } } } ``` .
I'll assume you're using a DataGrid. GridView is a similar control, but a slightly different flavor. I'll also assume you're using autogenerated columns, in which case the `DataGrid.Columns` collection won't help you. Instead of checking the column names each time, it's better to store the indexes of the columns you're interested in once. Like this: ``` private List<int> _myColumns; protected void grvDetailedStatus_ItemDataBound(object sender, DataGridItemEventArgs e) { if (e.Item.ItemType == ListItemType.Header) { _myColumns = new List<int>(); for (int i = 0; i < e.Item.Cells.Count; i++) { switch (e.Item.Cells[i].Text) { case "column1": case "column5": // Interesting column, store index _myColumns.Add(i); break; } } } else if (e.Item.ItemType == ListItemType.Item || e.Item.ItemType == ListItemType.AlternatingItem) { foreach (int i in _myColumns) { // Your original code: System.DateTime cellDate = default(System.DateTime); if (System.DateTime.TryParse(e.Item.Cells[i].Text, out cellDate)) { e.Item.Cells[i].Text = string.Format("{0:d}", cellDate); } } } } ``` If you really wanted to store all the column names, it would be fairly easy to adapt this code (or look at an earlier version of this post).
If you're running through the individual cells during the ItemDataBound event and need to get the column name of a cell then you can use the following: ``` protected void myDataGrid_ItemDataBound(object sender, DataGridItemEventArgs e) { int i = 0; string colName; foreach (TableCell cell in e.Item.Cells) { //string cellTxt = e.Item.Cells[i].Text; DataRowView dv = (DataRowView)e.Item.DataItem; colName = dv.DataView.Table.Columns[i].ColumnName; i++; } } ```
is it possible to read the column name of a cell in gridview?
[ "", "c#", "asp.net", "data-binding", "datagrid", "" ]
Most programming books will tell you to use Console.ReadKey() to pause the console, but is there a better alternative? ``` Console.WriteLine("Press any key to continue."); Console.ReadKey(); ```
You haven't actually told us what you wish to achieve. If you wish to **stop the output until the user chooses to continue**, then you're not really going to get much better than just waiting for a key to be pressed using `Console.ReadKey`. If you just want to **pause the output for a certain amount of time**, you can use the `Thread.Sleep` method, which doesn't require any human intervention.
How about Console.ReadLine() :)
Is there a better alternative to Console.ReadKey()?
[ "", "c#", ".net", "console", "console-application", "" ]
Given a Java Object, how can I get a list of Objects that referring to it? There must be extension mechanisms in the GC for doing this kind of thing, just can't seem to find them.
I'm not sure if exactly what you're after is simply accessible. The JPDA (Java Platform Debugger Architecture) enables construction of debuggers, so is a good starting point if you want to delve into the internals. There's a [blog on the JPDA](http://preetampalwe.blogspot.com/2006/12/jpda-tutorial.html) that you may also find useful. Check out the [Sun Developer Network JPDA page](http://java.sun.com/javase/technologies/core/toolsapis/jpda/) for links to documentation, FAQs, sample code and forums. Two interfaces that may be good starting points are: * com.sun.jdi.ObjectReference: An instance of java.lang.Class from the target VM * com.sun.jdi.VirtualMachine: A virtual machine targeted for debugging
If you're looking for a memory leak, I find analyzing heap dumps with [Eclipse MAT](http://www.eclipse.org/mat/ "Eclipse MAT") to be very helpful. You can select an object and ask for paths to "GC roots", i.e. show me all chains of references that are keeping this object from being garbage collected.
Getting list of objects referring to an Object
[ "", "java", "garbage-collection", "" ]
I have a simple one-text-input form that, when submitted, needs to fetch a php file (passing the input to the file) and then take the result (just a line of text), place it in a `div`, and fade that `div` into view. Here is what I have now:

```
<form id="create" method="POST" action="create.php">
  <input type="text" name="url">
  <input type="submit" value="Create" />
  <div id="created"></div>
</form>
```

What I need is the results of `create.php?url=INPUT` to be dynamically loaded into the `div` called `created`. I have the jquery form script, but I haven't been able to get it to work right. But I do have the library loaded (the file).
This code should do it. You don't need the Form plugin for something as simple as this:

```
$('#create').submit(function() {          // catch the form's submit event
    $.ajax({                              // create an AJAX call...
        data: $(this).serialize(),        // get the form data
        type: $(this).attr('method'),     // GET or POST
        url: $(this).attr('action'),      // the file to call
        success: function(response) {     // on success..
            $('#created').html(response); // update the DIV
        }
    });
    return false; // cancel original event to prevent form submitting
});
```
This also works for file uploads:

```
$(document).on("submit", "form", function(event) {
    event.preventDefault();
    var url = $(this).attr("action");
    $.ajax({
        url: url,
        type: 'POST',
        dataType: "JSON",
        data: new FormData(this),
        processData: false,
        contentType: false,
        success: function (data, status) {
            $('#created').html(data); // content loads here
        },
        error: function (xhr, desc, err) {
            console.log("error");
        }
    });
});
```
jquery submit form and then show results in an existing div
[ "", "javascript", "jquery", "html", "ajax", "" ]
I'm a newcomer to Java EE; I have studied core Java, servlets, and JSP. Could anyone give me some suggestions (books, forums, etc.) on how to improve my Java EE skills? Thanks a lot in advance.
One good place to start is Sun's [Java EE 6 Tutorial](http://java.sun.com/javaee/6/docs/tutorial/doc/index.html) on the Sun web site.
I would recommend [Server-Based Java Programming](https://rads.stackoverflow.com/amzn/click/com/1884777716). This isn't a Java EE book per se, but it explains *what* a Java-based server needs to do and how to do it, with good example code. It will give you the foundation to understand what Java EE is trying to accomplish and why things are the way they are. In the same vein, I would recommend [Expert One-on-One J2EE Development without EJB](https://rads.stackoverflow.com/amzn/click/com/0764558315). This book is written by the founder of the Spring project and provides insight into the problems with Java EE that Spring is trying to solve. Note this was written before Spring was open sourced, so it's more a 'this is how a server framework that's *not* Java EE could work' book, not a 'how to use Spring' book. Even if you are using straight Java EE, it helps to know what issues you could run into (with J2EE) or what the motivations were for Java EE 5 (based on Spring & Hibernate philosophies). I would not recommend the actual specifications from Sun. They are dense, technical, and better used as a reference.
study roadmap for newcomer into java ee
[ "", "java", "jakarta-ee", "" ]
I'm using TortoiseSVN as my Subversion client on a Windows Server 2008 box, and I've got a folder with code checked out into it. When I go to open the solution file that's under source control, Visual Studio 2008 starts and, from what I can tell, crashes before it can even finish loading the solution. I'm trying to open a solution that has VB code in it. It gives no error messages or warnings; it's just gone. I have checked the files and they all seem fine. The solution file seems fine when I look at it with a text editor. This is also Visual Studio 2008 SP1, and I've got all the latest .NET service packs installed. Has anyone else seen this before, and do you know how to fix it? **Edit:** I just did an SVN export to a new directory and it still crashes in the exported directory, where there is no longer any SVN attached to it. Additionally, it crashes EVERY time I try to open the project that came from SVN.
I had a similar problem and fixed it, and afterwards I wasn't quite sure how I had managed it. It basically involved going to the Tools/Options menu and setting the source control plugin to none. However, I obviously had to have had the solution open in Visual Studio for doing this to fix the solution, and yet I thought the problem was that I could not open the solution. The only possible scenario would be if I was able to open the solution but not open any of the projects inside it, hence being able to change the source control settings immediately after opening the solution. Does that make sense?
You should be looking at the solution file with an [XML editor](http://www.microsoft.com/downloads/details.aspx?familyid=72d6aa49-787d-4118-ba5f-4f30fe913628&displaylang=en); at least then you will get some help with subtle flaws in the formatting or something like that. You can also submit feedback to Microsoft on the [Visual Studio Connect site](https://connect.microsoft.com/VisualStudio) if the bug turns out to be real. Some common-sense things to do, however: go to your Visual Studio command prompt and start off with "devenv /ResetSettings"; that often helps isolate any weirdo add-on or something like that. Also, try a clean build with msbuild or vcbuild, then build fully with either one (i.e. if vcbuild cannot build your solution, use msbuild). That can help by laying out the symbols and such, and maybe clear out some corrupted file or something. You may also have .suo files from your Subversion checkout; those are binary files that do contain some settings. It's common for people to accidentally check them in, but they are usually better off being kept on a per-developer basis (not in the source tree). The /ResetSettings switch will likely clear these out also, but you may want to make sure. You can also double-check the paths to all of the assemblies referenced, that you're not going from a 32/64-bit host, that the CLR DLLs aren't in different paths now, etc. One last thing: if you're really stuck, you can get a stack trace and debug the crash a bit ;), see where the fault is occurring, and search for that module online; you'll often find that somebody has a specific solution. Oh yeah, also, shell hooks can be tricky. Don't trust them for a minute. Make sure you set TSVN's "only load in Windows Explorer" option and configure it specifically for which folders on your system have local SVN working directories; this will greatly reduce the working set of its shell extension.
On most any system, over time, one program or another (apple irw.exe or adobe pdf-preloader.exe sorts of ad-ware) will try to work its way into your shell. You should try to make sure your dev box is rather clean of any hooks, similar to what VladV was saying...
Visual Studio 2008 crashes when opening solutions in a TortoiseSVN directory? How do I fix this?
[ "", "c#", "vb.net", "visual-studio-2008", "" ]
I have a calendar which passes selected dates as strings into a method. Inside this method, I want to generate a list of all the dates starting from the selected start date and ending with the selected end date, obviously including all of the dates in between, regardless of how many days lie between the selected start and end dates. Below I have the beginning of the method, which takes the date strings and converts them into DateTime variables so that I can make use of the DateTime calculation functions. However, I cannot seem to work out how to calculate all of the dates between the start and end date. Obviously the first stage is to subtract the start date from the end date, but I cannot work out the rest of the steps. Help appreciated greatly, kind regards.

```
public void DTCalculations()
{
    List<string> calculatedDates = new List<string>();
    string startDate = "2009-07-27";
    string endDate = "2009-07-29";

    //Convert to DateTime variables
    DateTime start = DateTime.Parse(startDate);
    DateTime end = DateTime.Parse(endDate);

    //Calculate difference between start and end date.
    TimeSpan difference = end.Subtract(start);

    //Generate list of dates beginning at start date and ending at end date.
    //ToDo:
}
```
```
static IEnumerable<DateTime> AllDatesBetween(DateTime start, DateTime end)
{
    for (var day = start.Date; day <= end; day = day.AddDays(1))
        yield return day;
}
```

Edit: Added code to solve your particular example and to demonstrate usage:

```
var calculatedDates = new List<string>
(
    AllDatesBetween
    (
        DateTime.Parse("2009-07-27"),
        DateTime.Parse("2009-07-29")
    ).Select(d => d.ToString("yyyy-MM-dd"))
);
```
You just need to iterate from start to end; you can do this in a for loop:

```
DateTime start = DateTime.Parse(startDate);
DateTime end = DateTime.Parse(endDate);

for (DateTime counter = start; counter <= end; counter = counter.AddDays(1))
{
    // calculatedDates is a List<string>, so format each date before adding it
    calculatedDates.Add(counter.ToString("yyyy-MM-dd"));
}
```
How to loop between two dates
[ "", "c#", "datetime", "" ]
Why are abstract classes or interfaces created, and when should we use them?
An interface is used when you only want to declare which methods and members a class MUST have. Anyone implementing the interface will have to declare and implement the methods listed by the interface. If you also want to have a default implementation, use an abstract class. Any class extending the abstract class will have to implement only its abstract methods and members, and will have some default implementation of the other methods of the abstract class, which you may override or not. --EDIT (forgot to mention; Earwicker reminded me)-- Finally, you can implement as many interfaces as you want, but only extend one class (be it abstract or not). Keep that in mind before choosing.
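To make the split concrete, here is a minimal Java sketch along the lines described above (the `Shape` names are invented for the example):

```java
// Contract only: any implementer MUST provide getArea()
interface Shape {
    double getArea();
}

// Partial implementation: describe() is supplied for everyone,
// while getArea() is deliberately left abstract for subclasses
abstract class AbstractShape implements Shape {
    public String describe() {
        return "shape with area " + getArea();
    }
}

// Concrete class: only the one remaining abstract method needs filling in
class Square extends AbstractShape {
    private final double side;

    Square(double side) { this.side = side; }

    public double getArea() { return side * side; }
}
```

`Square` can extend only the one abstract class, but it could implement any number of additional interfaces on top of `Shape`.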
The key difference is that you can `implement` multiple interfaces in a class, but only `extend` a single abstract class. This is because an abstract class can also define fields that store data, whereas an interface cannot.
When to use abstract class or interface?
[ "", "java", "interface", "abstract-class", "" ]
Can someone give me an example of how to save a key/value in app.config using `C#` and WinForms?
In *ASP.NET*:

```
Configuration config = WebConfigurationManager.OpenWebConfiguration(null);
AppSettingsSection app = config.AppSettings;
app.Settings.Add("x", "this is X");
config.Save(ConfigurationSaveMode.Modified);
```

In *WinForms*:

```
Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
AppSettingsSection app = config.AppSettings;
app.Settings.Add("x", "this is X");
config.Save(ConfigurationSaveMode.Modified);
```
I know you specifically asked for a WinForms solution, but this might help some others. For a .NET 4.0 console application, none of these worked for me, so I used the following and it worked:

```
private static void UpdateSetting(string key, string value)
{
    Configuration configuration = ConfigurationManager.OpenExeConfiguration(Assembly.GetExecutingAssembly().Location);
    configuration.AppSettings.Settings[key].Value = value;
    configuration.Save();

    ConfigurationManager.RefreshSection("appSettings");
}
```
How to Save Configuration in app.config in C# Winforms
[ "", "c#", "configuration", "app-config", "" ]
> **Possible Duplicate:**
> [Why are references not reseatable in C++](https://stackoverflow.com/questions/728233/why-are-references-not-reseatable-in-c)

I am trying to more or less swap two reference variables (as practice, I could have swapped the actual variables). I tried doing this by making a temporary variable and making one of the references equal the other, but this got shot down by the compiler. Here is an example:

```
void Foo()
{
    //code
    int& ref1 = a;
    int& ref2 = b;

    int temp;
    temp = ref1;
    ref1 = ref2;
    ref2 = temp;

    //or, better yet
    std::swap(ref1, ref2);
}
```

I got an error, and looked on the faq lite. It details that they cannot be reseated, but does not explain why. Why? [Here](https://isocpp.org/wiki/faq/references#reseating-refs) is a link to the Faq Lite for reference (<---, get it?).
Because there is no syntax to do it:

```
int x = 0;
int y = 1;
int & r = x;
```

Now if I say:

```
r = y;
```

I assign the value of y to x. If I wanted to reseat I would need some special syntax:

```
r @= y; // maybe?
```

As the main reason for using references is as parameters and return types of functions, where this is not an issue, it didn't seem to C++'s designers that this was a path worth going down.
A reference is an alias to an object. An alias cannot be changed (we are not spies ;-)
Why is it illegal/immoral to reseat a reference?
[ "", "c++", "reference", "" ]
For arrays and lists in Python and NumPy, are the following lines equivalent:

```
itemlist = []
for j in range(len(myarray)):
    item = myarray[j]
    itemlist.append(item)
```

and:

```
itemlist = []
for item in myarray:
    itemlist.append(item)
```

I'm interested in the *order* of itemlist. In a few examples that I have tried they are identical, but is it *guaranteed*? For example, I know that the `foreach` statement in C# doesn't guarantee order, and that I should be careful with it.
It is guaranteed for lists. I think the more relevant Python parallel to your C# example would be to iterate over the keys in a dictionary, which is NOT guaranteed to be in any order.

```
# Always prints 0-9 in order
a_list = [0,1,2,3,4,5,6,7,8,9]
for x in a_list:
    print x

# May or may not print 0-9 in order. Implementation dependent.
a_dict = {'0':0,'1':1,'2':2,'3':3,'4':4,'5':5,'6':6,'7':7,'8':8,'9':9}
for x in a_dict:
    print x
```

The `for <element> in <iterable>` structure only worries that the `iterable` supplies a `next()` function which returns something. There is no general guarantee that these elements get returned in any order over the domain of the `for..in` statement; lists are a special case.
Yes, it's entirely guaranteed. `for item in myarray` (where `myarray` is a *sequence*, which includes numpy's arrays, builtin lists, Python's array.arrays, etc etc) is in fact equivalent in Python to:

```
_aux = 0
while _aux < len(myarray):
    item = myarray[_aux]
    ...etc...
    _aux += 1
```

for some phantom variable `_aux`;-). Btw, both of your constructs are also equivalent to

```
itemlist = list(myarray)
```
Does 'for x in array' always result in sorted x? [Python/NumPy]
[ "", "python", "arrays", "list", "numpy", "" ]
I have this:

```
List<object> nodes = new List<object>();
nodes.Add(new { Checked = false, depth = 1, id = "div_" + d.Id });
```

... and I'm wondering if I can then grab the "Checked" property of the anonymous object. I'm not sure if this is even possible. Tried doing this:

`if (nodes.Any(n => n["Checked"] == false))`

... but it doesn't work. Thanks
If you want a strongly typed list of anonymous types, you'll need to make the list an anonymous type too. The easiest way to do this is to project a sequence such as an array into a list, e.g.

```
var nodes = (new[] { new { Checked = false, /* etc */ } }).ToList();
```

Then you'll be able to access it like:

```
nodes.Any(n => n.Checked);
```

Because of the way the compiler works, the following then should also work once you have created the list, because the anonymous types have the same structure so they are also the same type. I don't have a compiler to hand to verify this though.

```
nodes.Add(new { Checked = false, /* etc */ });
```
If you're storing the object as type `object`, you need to use reflection. This is true of any object type, anonymous or otherwise. On an object o, you can get its type:

```
Type t = o.GetType();
```

Then from that you look up a property:

```
PropertyInfo p = t.GetProperty("Foo");
```

Then from that you can get a value:

```
object v = p.GetValue(o, null);
```

This answer is long overdue for an update for C# 4:

```
dynamic d = o;
object v = d.Foo;
```

And now another alternative in C# 6:

```
object v = o?.GetType().GetProperty("Foo")?.GetValue(o, null);
```

Note that by using `?.` we cause the resulting `v` to be `null` in three different situations!

1. `o` is `null`, so there is no object at all
2. `o` is non-`null` but doesn't have a property `Foo`
3. `o` has a property `Foo` but its real value happens to be `null`.

So this is not equivalent to the earlier examples, but may make sense if you want to treat all three cases the same. To use *dynamic* to read properties of anonymous types in your unit tests, you need to tell your project's compiler services to make the assembly visible internally to your test project. You can add the following into your project (.proj) file. Refer to [this link](https://blog.sanderaernouts.com/make-internals-visible-with-new-csproj-format) for more information.

```
<ItemGroup>
    <AssemblyAttribute Include="System.Runtime.CompilerServices.InternalsVisibleTo">
        <_Parameter1>Name of your test project</_Parameter1>
    </AssemblyAttribute>
</ItemGroup>
```
How to access property of anonymous type in C#?
[ "", "c#", ".net", "object", "properties", "anonymous-types", "" ]
From [Dr. Dobbs](http://www.ddj.com/cpp/218600111):

> Concepts were to have been the central new feature in C++0x
>
> Even after cutting "concepts," the next C++ standard may be delayed. Sadly, there will be no C++0x (unless you count the minor corrections in C++03). We must wait for C++1x, and hope that 'x' will be a low digit. There is hope because C++1x is now feature complete (excepting the possibility of some national standards bodies effectively insisting on some feature present in the formal proposal for the standard). "All" that is left is the massive work of resolving outstanding technical issues and comments.

I was on the bleeding edge of MT- and MP-safe C++ programming circa 1997 - 2000. We had to do many things ourselves. It's a bit shocking that the standard has not addressed concurrency in the 9 years since. So what's the big deal?
Stroustrup was one of those who finally voted to remove Concepts. I don't see C++ *collapsing*; instead, I see the C++ committee doing its job. Half-baked features are not the solution for a robust language like C++. A look at what is going to be in C++0x tells you the opposite of what you are saying. Finally, I don't mind waiting to get *something good forever*, instead of *something good for a while* :)
No. I'm not sure what makes you think it is. The Dr. Dobbs article doesn't imply that it's the case. It is a big update, which means a lot of work polishing up the language spec and fixing errors. That's neither new nor surprising. And the ISO standardization process takes time. That's not new either. The article you posted says just that: there's work to be done, but the sky is not falling; it's pretty basic and low-risk work they'll be doing from now on. There are a couple of reasons why it's taken so long. The obvious one is that they're making a lot of changes, and a few features turned out bigger than expected and had to be cut. That much goes without saying and is responsible for the delays. The less obvious, but just as important, factor is that they *wanted* a long time to pass since C++98. They wanted to give the language time to stabilize and mature, get lots of use experience with *current* language features, and give compilers time to catch up. Until a few years ago, C++ just wasn't ready to be updated. Big commercial compilers were still a mess, and too many people still weren't comfortable with modern C++ design. That's why things like multithreading have not been addressed until now. It didn't make it into C++98, and they didn't want to make changes too soon after that. I don't know which year they originally hoped to target, but I doubt it was earlier than 2007 or so. So yes, the new standard has been delayed a bit, but not because the language is "collapsing".
Is C++0x collapsing under the weight of new features and the standardization process?
[ "", "c++", "standards", "c++11", "" ]