| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
This doesn't work:
```
string temp;
cout << "Press Enter to Continue";
cin >> temp;
``` | ```
cout << "Press Enter to Continue";
cin.ignore();
```
or, better:
```
#include <limits>
cout << "Press Enter to Continue";
cin.ignore(std::numeric_limits<streamsize>::max(),'\n');
``` | Try:
```
char temp;
cin.get(temp);
```
or, better yet:
```
char temp = 'x';
while (temp != '\n')
cin.get(temp);
```
I think the string input will wait until you enter real characters, not just a newline. | Press Enter to Continue | [
"c++",
"string",
"newline",
"cin"
] |
Why use precompiled headers?
---
Reading the responses, I suspect what I've been doing with them is kind of stupid:
```
#pragma once
// Defines used for production versions
#ifndef PRODUCTION
#define eMsg(x) (x) // Show error messages
#define eAsciiMsg(x) (x)
#else
#define eMsg(x) (L"") // Don't show error messages
#define eAsciiMsg(x) ("")
#endif // PRODUCTION
#include "targetver.h"
#include "version.h"
// Enable "unsafe", but much faster string functions
#define _CRT_SECURE_NO_WARNINGS
#define _SCL_SECURE_NO_WARNINGS
// Standard includes
#include <stdio.h>
#include <tchar.h>
#include <iostream>
#include <direct.h>
#include <cstring>
#ifdef _DEBUG
#include <cstdlib>
#endif
// Standard Template Library
#include <bitset>
#include <vector>
#include <list>
#include <algorithm>
#include <iterator>
#include <string>
#include <numeric>
// Boost libraries
#include <boost/algorithm/string.hpp>
#include <boost/lexical_cast.hpp>
#include <boost/scoped_array.hpp>
//Windows includes
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include "FILETIME_Comparisons.h"
#include <shlwapi.h>
#include <Shellapi.h>
#include <psapi.h>
#include <imagehlp.h>
#include <mscat.h>
#include <Softpub.h>
#include <sfc.h>
#pragma comment(lib, "wintrust.lib")
#pragma comment(lib,"kernel32.lib")
#pragma comment(lib,"Psapi.lib")
#pragma comment(lib,"shlwapi.lib")
#pragma comment(lib,"imagehlp.lib")
#pragma comment(lib,"Advapi32.lib")
#pragma comment(lib,"Shell32.lib")
#pragma comment(lib,"Sfc.lib")
#pragma comment(lib,"Version.lib")
// Crypto ++ libraries
#ifdef _DEBUG
#pragma comment(lib,"cryptlibd.lib")
#else
#pragma comment(lib,"cryptlib.lib")
#endif
#define CRYPTOPP_ENABLE_NAMESPACE_WEAK 1
#include <md5.h>
#include <sha.h>
// String libraries
#include "stringUnicodeConversions.h"
#include "expandEnvStrings.h"
#include "randomString.h"
#include "getShortPathName.h"
// Regular Expression Libraries
#include "fpattern.h"
// File Result Record
#include "unixTimeToFileTime.h"
#include "fileData.h"
// Writer
#include "writeFileData.h"
// Criteria Structure System
#include "priorities.h"
#include "criterion.H"
#include "OPSTRUCT.H"
#include "regexClass.H"
#include "FILTER.h"
// Sub Programs Root Class
#include "subProgramClass.h"
// Global data
#include "globalOptions.h"
// Logger
#include "logger.h"
// Console parser
#include "consoleParser.h"
// Timeout handler
#include "timeoutThread.h"
// Zip library
#include "zip.h"
#include "unzip.h"
#include "zipIt.h"
// Scanner
#include "mainScanner.h"
#include "filesScanner.h"
// Sub Programs
#include "volumeEnumerate.h"
#include "clsidCompressor.h"
#include "times.h"
#include "exec.h"
#include "uZip.h"
// 64 bit support
#include "disable64.h"
``` | It compiles a *lot* quicker. C++ compilation takes ages without them. Try comparing build times in a large project! | In C/C++, the #include mechanism is a textual copy of the specified file into the current file. Headers include other headers (which include yet other headers), so a single `#include` can add tens of thousands of lines of C++ to each cpp file (or cxx, c, whatever), all of which need to be compiled every time. This can be a severe bottleneck for large projects.
Precompiled headers speed this up by compiling each header once, then including that compiled state into the cpp they are included in. | Why use precompiled headers (C/C++)? | [
"c++",
"precompiled-headers"
] |
Is there anyway to do something like this?
```
(correct pointer datatype) returnPointer(void* ptr, int depth)
{
if(depth == 8)
return (uint8*)ptr;
else if (depth == 16)
return (uint16*)ptr;
else
return (uint32*)ptr;
}
```
Thanks | No. The return type of a C++ function can only vary based on explicit template parameters or the *types* of its arguments. It cannot vary based on the *value* of its arguments.
However, you can use various techniques to create a type that is the union of several other types. Unfortunately this won't necessarily help you here, as one such technique is void \* itself, and getting back to the original type will be a pain.
However, by turning the problem inside out you may get what you want. I imagine you'd want to use the code you posted as something like, for example:
```
void bitmap_operation(void *data, int depth, int width, int height) {
some_magical_type p_pixels = returnPointer(data, depth);
for (int x = 0; x < width; x++)
for (int y = 0; y < height; y++)
p_pixels[y*width+x] = some_operation(p_pixels[y*width+x]);
}
```
Because C++ needs to know the type of p\_pixels at compile time, this won't work as-is. But what we can do is make bitmap\_operation itself be a template, then wrap it with a switch based on the depth:
```
template<typename PixelType>
void bitmap_operation_impl(void *data, int width, int height) {
PixelType *p_pixels = (PixelType *)data;
for (int x = 0; x < width; x++)
for (int y = 0; y < height; y++)
p_pixels[y*width+x] = some_operation(p_pixels[y*width+x]);
}
void bitmap_operation(void *data, int depth, int width, int height) {
if (depth == 8)
bitmap_operation_impl<uint8_t>(data, width, height);
else if (depth == 16)
bitmap_operation_impl<uint16_t>(data, width, height);
else if (depth == 32)
bitmap_operation_impl<uint32_t>(data, width, height);
else assert(!"Impossible depth!");
}
```
Now the compiler will automatically generate three implementations for bitmap\_operation\_impl for you. | If you can use a template argument instead of a normal parameter, you can create a templated function that returns the correct type for each `depth` value. First there needs to be some definition of the correct type according to `depth`. You can define a template with specializations for the different bit sizes:
```
// template declaration
template<int depth>
struct uint_tmpl;
// specializations for certain types
template<> struct uint_tmpl<8> { typedef uint8_t type; };
template<> struct uint_tmpl<16> { typedef uint16_t type; };
template<> struct uint_tmpl<32> { typedef uint32_t type; };
```
This definition can then be used to declare a templated function that returns the correct type for every bit value:
```
// generic declaration
template<int depth>
typename uint_tmpl<depth>::type* returnPointer(void* ptr);
// specializations for different depths
template<> uint8_t* returnPointer<8>(void* ptr) { return (uint8_t*)ptr; }
template<> uint16_t* returnPointer<16>(void* ptr) { return (uint16_t*)ptr; }
template<> uint32_t* returnPointer<32>(void* ptr) { return (uint32_t*)ptr; }
``` | Returning different data type depending on the data (C++) | [
"c++",
"return-value"
] |
I'm doing some screen scraping using WATIJ, but it can't read HTML tables (throws NullPointerExceptions or UnknownObjectExceptions). To overcome this I read the HTML and run it through JTidy to get well-formed XML.
I want to parse it with XPath, but it can't find a `<table ...>` by `id` even though the table is there in the XML plain as day. Here is my code:
```
XPathFactory factory=XPathFactory.newInstance();
XPath xPath=factory.newXPath();
InputSource inputSource = new InputSource(new StringReader(tidyHtml));
XPathExpression xPathExpression=xPath.compile("//table[@id='searchResult']");
String expression = "//table[@id='searchResult']";
String table = xPath.evaluate(expression, inputSource);
System.out.println("table = " + table);
```
The table is an empty String.
The table is in the XML, however. If I print the `tidyHtml` String it shows
```
<table
class="ApptableDisplayTag"
id="searchResult"
style="WIDTH: 99%">
```
I haven't used XPath before so maybe I'm missing something.
Can anyone set me straight? Thanks. | The solution was to drop WATIJ and switch to Google WebDriver. WebDriver documents how different browsers handle case in xpath statements. | I don't know anything about JTidy, but for WATIJ, I believe the reason you are getting the NullPointer and UnknownObject Exceptions is because your XPATH is using lower-case node names. So say you are using "//table[@id='searchResult']" as the xpath to look up the table in WATIJ. That won't actually work because "table" is in lower case. For WATIJ, you need to have all the node names in upper case, eg: "//TABLE[@id='searchResult']". As an example, say you want to print the number of rows of that table using WATIJ, you'd do the following:
```
import watij.runtime.ie.IE;
import static watij.finders.SymbolFactory.*;
public class Example {
public static void main(String[] args) {
IE ie = new IE();
ie.start("your_url_goes_here");
System.out.println(ie.table(xpath, "//TABLE[@id='searchResult']").rowCount());
ie.close();
}
}
```
This answer may not be right, since I've only started using WATIJ today, but I did run into this same exact problem with xpaths. It took me a couple of hours of searching/testing before I noticed how all the xpaths were cased on this page: [WATIJ User Guide](http://watij.com/wiki:user_guide). Once I changed the casing in my xpaths, WATIJ was able to locate the objects, so this should work for you as well. | XPath can't find a table by id | [
"java",
"xpath",
"watij"
] |
**Background:**
This is really a general best-practices question, but some background about the specific situation might be helpful:
We are developing a "connected" application for the iPhone. It will communicate with the backend application via REST services. In order to not have to prompt the user for a username and password every time they launch the application, we will expose a "Login" service that validates their username and password on initial launch and returns an authentication token that can be used for future web service requests for real data. The token may have an expiration time after which we'll ask them to re-authenticate with their username/password.
**The Question:**
What are the best practices for generating this sort of token to be used for authentication?
For example, we could...
* Hash (SHA-256, etc) a random string and store it in the database for the given user along with an expiration date. Do a simple lookup of the token on subsequent requests to make sure it matches.
* Encrypt the user id and some additional information (timestamp, etc) with a secret key. Decrypt the token on subsequent requests to make sure it was issued by us.
This feels like it must be a solved problem. | Based on the feedback from the other answers to this question, additional research, and offline discussions, here is what we ended up doing...
It was pointed out pretty quickly that the interaction model here is essentially exactly the same as the model used by Forms Authentication in ASP.NET when a "remember me" checkbox is checked. It's just not a web browser making the HTTP requests. Our "ticket" is equivalent to the cookie that Forms Authentication sets. Forms Authentication uses essentially an "encrypt some data with a secret key" approach by default.
In our login web service, we use this code to create a ticket:
```
string[] userData = new string[4];
// fill the userData array with the information we need for subsequent requests
userData[0] = ...; // data we need
userData[1] = ...; // other data, etc
// create a Forms Auth ticket with the username and the user data.
FormsAuthenticationTicket formsTicket = new FormsAuthenticationTicket(
1,
username,
DateTime.Now,
DateTime.Now.AddMinutes(DefaultTimeout),
true,
string.Join(UserDataDelimiter, userData)
);
// encrypt the ticket
string encryptedTicket = FormsAuthentication.Encrypt(formsTicket);
```
Then we have an operation behavior attribute for the WCF services that adds an IParameterInspector that checks for a valid ticket in the HTTP headers for the request. Developers put this operation behavior attribute on operations that require authentication. Here is how that code parses the ticket:
```
// get the Forms Auth ticket object back from the encrypted Ticket
FormsAuthenticationTicket formsTicket = FormsAuthentication.Decrypt(encryptedTicket);
// split the user data back apart
string[] userData = formsTicket.UserData.Split(new string[] { UserDataDelimiter }, StringSplitOptions.None);
// verify that the username in the ticket matches the username that was sent with the request
if (formsTicket.Name == expectedUsername)
{
// ticket is valid
...
}
``` | Building your own authentication system is always a "worst practice". That's the kind of thing best left to professionals who specialize in authentication systems.
If you're bent on building your own "expiring ticket from a login service" architecture rather than re-using an existing one, it's probably a good idea to at least familiarize yourself with the issues that drove the design of similar systems, like Kerberos. A gentle introduction is here:
<http://web.mit.edu/kerberos/dialogue.html>
It would also be a good idea to take a look at what security holes have been found in Kerberos (and similar systems) over the last 20 years and make sure you don't replicate them. Kerberos was built by security experts and carefully reviewed for decades, and still serious algorithmic flaws are being found in it, like this one:
<http://web.mit.edu/kerberos/www/advisories/MITKRB5-SA-2003-004-krb4.txt>
It's a lot better to learn from their mistakes than your own. | Generating cryptographically secure authentication tokens | [
"c#",
"iphone",
"wcf",
"web-services",
"security"
] |
I'm thinking about the tokenizer here.
Each token calls a different function inside the parser.
What is more efficient:
* A map of std::functions/boost::functions
* A switch case | STL Map that comes with visual studio 2008 will give you O(log(n)) for each function call since it hides a tree structure beneath.
With modern compiler (depending on implementation) , A switch statement will give you O(1) , the compiler translates it to some kind of lookup table.
So in general , switch is faster.
**However** , consider the following facts:
The difference between map and switch is that : Map can be built dynamically while switch can't. Map can contain any arbitrary type as a key while switch is very limited to c++ Primitive types (char , int , enum , etc...).
By the way , you can use a hash map to achieve nearly O(1) dispatching (though , depending on the hash table implementation , it can sometimes be O(n) at worst case). Even though , switch will still be faster.
**Edit**
*I am writing the following only for fun and for the matter of the discussion*
I can suggest a nice optimization for you, but it depends on the nature of your language and whether you can predict how it will be used.
When you write the code:
You divide your tokens into two groups: one of very frequently used tokens and one of rarely used tokens. You also sort the frequently used tokens by frequency.
For the frequent tokens you write an if-else series, with the most frequent coming first. For the rare ones, you write a switch statement.
The idea is to use the CPU's branch prediction to avoid another level of indirection (assuming the condition checking in the if statements is nearly costless).
In most cases the CPU will pick the correct branch without any level of indirection. There will be a few cases, however, where the branch goes to the wrong place.
Depending on the nature of your language, statistically this may give better performance.
**Edit**: Due to some comments below, I changed the sentence claiming that compilers will always translate a switch to a lookup table. | I would suggest reading [switch() vs. lookup table?](http://discuss.joelonsoftware.com/default.asp?joel.3.21194.19) from Joel on Software. Particularly, this response is interesting:
> " Prime example of people wasting time
> trying to optimize the least
> significant thing."
>
> Yes and no. In a VM, you typically
> call tiny functions that each do very
> little. It's the not the call/return
> that hurts you as much as the preamble
> and clean-up routine for each function
> often being a significant percentage
> of the execution time. This has been
> researched to death, especially by
> people who've implemented threaded
> interpreters.
In virtual machines, lookup tables storing computed addresses to call are usually preferred to switches ("direct threading", or "labels as values", directly calls the label address stored in the lookup table). That's because it allows, under certain conditions, to reduce [branch misprediction](http://en.wikipedia.org/wiki/Branch_misprediction), which is extremely expensive in long-pipelined CPUs (it forces a pipeline flush). It, however, makes the code less portable.
This issue has been discussed extensively in the VM community, I would suggest you to look for scholar papers in this field if you want to read more about it. Ertl & Gregg wrote a great article on this topic in 2001, [The Behavior of *Efficient* Virtual Machine Interpreters on Modern Architectures](http://ctho.ath.cx/toread/interpreters/ertl-europar01.pdf)
But as mentioned, I'm pretty sure these details are not relevant for *your* code. These are small details, and you should not focus too much on them. The Python interpreter uses switches, because they think it makes the code more readable. Why don't you pick the approach you're most comfortable with? The performance impact will be rather small; you'd better focus on code readability for now ;)
**Edit**: If it matters, using a hash table will *always* be slower than a lookup table. For a lookup table, you use enum types for your "keys", and the value is retrieved using a single indirect jump. This is a single assembly operation. O(1). A hash table lookup first requires to calculate a hash, then to retrieve the value, which is way more expensive.
Using an array where the function addresses are stored, accessed using values of an enum, is good. But using a hash table to do the same adds significant overhead.
To sum up, we have:
* cost(Hash\_table) >> cost(direct\_lookup\_table)
* cost(direct\_lookup\_table) ~= cost(switch) if your compiler translates switches into lookup tables.
* cost(switch) >> cost(direct\_lookup\_table) (O(N) vs O(1)) if your compiler does not translate switches and use conditionals, but I can't think of any compiler doing this.
* But inlined direct threading makes the code less readable. | What is more efficient a switch case or an std::map | [
"c++",
"parsing",
"tokenize"
] |
Is there any way I can iterate backwards (in reverse) through a SortedDictionary in c#?
Or is there a way to define the SortedDictionary in descending order to begin with? | The SortedDictionary itself doesn't support backward iteration, but you have several possibilities to achieve the same effect.
1. Use the `.Reverse()` extension method (LINQ). (This has to pre-compute the whole dictionary output, but it is the simplest solution.)
```
var Rand = new Random();
var Dict = new SortedDictionary<int, string>();
for (int i = 1; i <= 10; ++i) {
var newItem = Rand.Next(1, 100);
Dict.Add(newItem, (newItem * newItem).ToString());
}
foreach (var x in Dict.Reverse()) {
Console.WriteLine("{0} -> {1}", x.Key, x.Value);
}
```
2. Make the dictionary sort in descending order.
```
class DescendingComparer<T> : IComparer<T> where T : IComparable<T> {
public int Compare(T x, T y) {
return y.CompareTo(x);
}
}
// ...
var Dict = new SortedDictionary<int, string>(new DescendingComparer<int>());
```
3. Use `SortedList<TKey, TValue>` instead. The performance is not as good as the dictionary's (O(n) instead of O(log n) for insertion), but you get random access to the elements by index, as with arrays. If you use the generic IDictionary interface, you won't have to change the rest of your code.
Edit: Iterating over SortedLists
You just access the elements by index!
```
var Rand = new Random();
var Dict = new SortedList<int, string>();
for (int i = 1; i <= 10; ++i) {
var newItem = Rand.Next(1, 100);
Dict.Add(newItem, (newItem * newItem).ToString());
}
// Reverse for loop (forr + tab)
for (int i = Dict.Count - 1; i >= 0; --i) {
Console.WriteLine("{0} -> {1}", Dict.Keys[i], Dict.Values[i]);
}
``` | The easiest way to define the SortedDictionary in the reverse order to start with is to provide it with an `IComparer<TKey>` which sorts in the reverse order to normal.
Here's some code from [MiscUtil](http://pobox.com/~skeet/csharp/miscutil) which might make that easier for you:
```
using System;
using System.Collections.Generic;
namespace MiscUtil.Collections
{
/// <summary>
/// Implementation of IComparer{T} based on another one;
/// this simply reverses the original comparison.
/// </summary>
/// <typeparam name="T"></typeparam>
public sealed class ReverseComparer<T> : IComparer<T>
{
readonly IComparer<T> originalComparer;
/// <summary>
/// Returns the original comparer; this can be useful
/// to avoid multiple reversals.
/// </summary>
public IComparer<T> OriginalComparer
{
get { return originalComparer; }
}
/// <summary>
/// Creates a new reversing comparer.
/// </summary>
/// <param name="original">The original comparer to
/// use for comparisons.</param>
public ReverseComparer(IComparer<T> original)
{
if (original == null)
{
throw new ArgumentNullException("original");
}
this.originalComparer = original;
}
/// <summary>
/// Returns the result of comparing the specified
/// values using the original
/// comparer, but reversing the order of comparison.
/// </summary>
public int Compare(T x, T y)
{
return originalComparer.Compare(y, x);
}
}
}
```
You'd then use:
```
var dict = new SortedDictionary<string, int>
(new ReverseComparer<string>(StringComparer.InvariantCulture));
```
(or whatever type you were using).
If you only ever want to iterate in one direction, this will be more efficient than reversing the ordering afterwards. | Reverse Sorted Dictionary in .NET | [
"c#",
".net",
"dictionary",
"iteration",
"reverse"
] |
The question below is from Java SCJP5 book by Kathy Sierra and Bert Bates.
Given a method declared as:
```
public static <E extends Number> List<E> process(List<E> nums)
```
A programmer wants to use the method like this:
```
// INSERT DECLARATIONS HERE
output = process(input);
```
Which pair of declarations could be placed at // INSERT DECLARATIONS HERE to allow the code to compile? (Choose all that apply.)
A.
```
ArrayList<Integer> input = null;
ArrayList<Integer> output = null;
```
B.
```
ArrayList<Integer> input = null;
List<Integer> output = null;
```
C.
```
ArrayList<Integer> input = null;
List<Number> output = null;
```
D.
```
List<Number> input = null;
ArrayList<Integer> output = null;
```
E.
```
List<Number> input = null;
List<Number> output = null;
```
F.
```
List<Integer> input = null;
List<Integer> output = null;
```
G. None of the above.
Correct Answers given are: B, E, F and the explanation in the book states:
"The return type is definitely declared as List, NOT ArrayList so A,D are wrong. ......"
This is what I don't get...why it is that the return type MUST be List only and not ArrayList?? Just like the argument can be ArrayList then why cant return type also be arrayList?
Thanks | This is actually not specific to generics, but deals with types.
Easy way to think of it is: an `ArrayList` is a `List`, but a `List` is not necessarily an `ArrayList`.
`ArrayList` implements the `List` interface, so it can be treated as a `List`. However, just because something implements `List`, that does not make it an `ArrayList`. For example, `LinkedList` implements `List` but is not an `ArrayList`.
For example the following are allowed:
```
List arrayList = new ArrayList();
List linkedList = new LinkedList();
```
This is because both `ArrayList` and `LinkedList` both implement the `List` interface, so they both can be handled as `List`s.
However, the following is not allowed:
```
ArrayList arrayList = new LinkedList();
```
Although both `ArrayList` and `LinkedList` implement `List`, they are not the same class. They may have similarities by implementing the methods of `List`, but they are completely separate classes. | Because ArrayList is a subclass of List, so the List returned by process is not guaranteed to be an ArrayList. For example, it could be a LinkedList instead. | Java polymorphism confusion | [
"java",
"types",
"polymorphism"
] |
I am trying to query the Google datastore for something like (with pm --> persistenceManager):
```
String filters = "( field == 'value' || field == 'anotherValue' )";
Query query = pm.newQuery(myType.class, filters);
```
When I execute - I am getting back: **App Engine datastore does not support operator OR**.
What's the best approach, in people's experience, for this kind of query?
Any help appreciated! | Perform multiple queries. The Datastore, like all other databases, isn't able to efficiently execute disjunctions. Unlike other databases, it exposes this difficulty to the user, to make it clear that what you're doing isn't efficient. Your only solution is to execute multiple queries - one for each OR branch - and combine them. | I don't know if GAE's JDO and JPA implementations support this, but using the low-level API, you can use the operator IN for this, in one query.
```
Query query = new Query("Issue");
List<String> list = Arrays.asList("NEW", "OPEN", "ACCEPTED");
query.addFilter("status", FilterOperator.IN, list);
DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
PreparedQuery preparedQuery = datastore.prepare(query);
for (Entity entity : preparedQuery.asIterable()) {
// should iterate over 'NEW', 'OPEN' and 'ACCEPTED' issues
}
``` | App Engine datastore does not support operator OR | [
"java",
"google-app-engine",
"google-cloud-datastore",
"jdoql"
] |
Found an interesting problem while testing our web application.
I have the application on localhost (Windows) and on an online testing server (Linux). Both are connected to the same DB (on the Linux server). When I try to edit one text field through a form in the application located on the Linux server, it crops the diacritics from the result and saves the text to the DB without them. But when I try the same action, with the same code, on localhost (Windows), it saves the whole text with diacritics exactly as I wrote it.
I've tried to check the PHP configuration, but I have the exact same configuration on both machines.
Does anybody have an idea where I should look to find what could cause this? | Found the problem: a filter on that field acts differently on different systems. | Sounds like one or more of the character settings on the MySQL instance on your Windows machine is not set to UTF8; try executing this query:
```
show variables like '%character%'
```
Your output will be the character\_encoding related server variables, executing that on my database outputs:
```
character_set_client utf8
character_set_connection utf8
character_set_database utf8
character_set_filesystem binary
character_set_results utf8
character_set_server utf8
character_set_system utf8
character_sets_dir /usr/share/mysql/charsets/
```
My best guess is that one or more of those is set to **latin1**
Also, you might want to check the collation, i.e. execute this
```
show variables like '%collation%'
```
And you will get something like:
```
collation_connection utf8_general_ci
collation_database utf8_general_ci
collation_server utf8_general_ci
``` | diacritics problem in project made with Zend Framework | [
"php",
"linux",
"apache",
"zend-framework",
"diacritics"
] |
Why are sealed types faster?
I am wondering about the deeper details about why this is true. | At the lowest level, the compiler can make a micro-optimization when you have sealed classes.
If you're calling a method on a sealed class, and the variable's compile-time type is that sealed class, the compiler can implement the method call (in most cases) using the `call` IL instruction instead of the `callvirt` IL instruction. This is because the method target cannot be overridden. `call` skips the null check and the vtable lookup that `callvirt` performs.
This can be a very, very slight improvement to performance.
That being said, I would completely ignore that when deciding whether to seal a class. Marking a type sealed really should be a design decision, not a performance decision. Do you want people (including yourself) to potentially subclass from your class, now or in the future? If so, do not seal. If not, seal. That really should be the deciding factor. | Essentially, it's got to do with the fact that they don't need to have to worry about extensions to a virtual function table; the sealed types can't be extended, and therefore, the runtime doesn't need to be concerned about how they may be polymorphic. | Why are sealed types faster? | [
"c#",
".net",
"performance",
"clr"
] |
I want to create a card-playing game. I use the MouseMove event to drag cards around the window. The problem is that if I move the mouse over another card, the drag gets stuck, because the card underneath the mouse cursor gets the mouse events, so the MouseMove event of the window isn't fired.
This is what I do:
```
private void RommeeGUI_MouseMove(object sender, MouseEventArgs e)
{
if (handkarte != null)
{
handkarte.Location = this.PointToClient(Cursor.Position);
}
}
```
I tried the following, but there was no difference:
```
SetStyle(ControlStyles.UserMouse,true);
SetStyle(ControlStyles.EnableNotifyMessage, true);
```
I am looking for a way to implement an application-global event handler, or a way to implement so-called event bubbling. At the very least, I want to make the mouse ignore certain controls. | In order to do this you will need to keep track of a few things in your code:
1. Note which card the mouse is pointing at when the mouse button is pressed; this is the card that you want to move (use the MouseDown event)
2. Move the card when the mouse is moved (use the MouseMove event)
3. Stop moving the card when the mouse button is released (use the MouseUp event)
In order to just move around controls, there is no need to actually capture the mouse.
A quick example (using Panel controls as "cards"):
```
Panel _currentlyMovingCard = null;
Point _moveOrigin = Point.Empty;
private void Card_MouseDown(object sender, MouseEventArgs e)
{
if (e.Button == MouseButtons.Left)
{
_currentlyMovingCard = (Panel)sender;
_moveOrigin = e.Location;
}
}
private void Card_MouseMove(object sender, MouseEventArgs e)
{
if (e.Button == MouseButtons.Left && _currentlyMovingCard != null)
{
// move the _currentlyMovingCard control
_currentlyMovingCard.Location = new Point(
_currentlyMovingCard.Left - _moveOrigin.X + e.X,
_currentlyMovingCard.Top - _moveOrigin.Y + e.Y);
}
}
private void Card_MouseUp(object sender, MouseEventArgs e)
{
if (e.Button == MouseButtons.Left && _currentlyMovingCard != null)
{
_currentlyMovingCard = null;
}
}
``` | What you could do is forward the MouseDown event to the event handler you want to call.
Say you have a "Label" on top of a "Card", but you want the click to "go through" it:
```
private void Label_MouseDown(object sender, MouseEventArgs e)
{
// Send this event down the line!
Card_MouseDown(sender, e); // Call the card's MouseDown event function
}
```
Now the appropriate event-function is called, even though the bothersome label was clicked. | How to make a control "transparent" for the mouse or Route MouseMove event to parent? | [
"c#",
"winforms",
"controls",
"mouse"
] |
I am about to begin reading tons of binary files, each with 1000 or more records. New files are added constantly so I'm writing a Windows service to monitor the directories and process new files as they are received. The files were created with a c++ program. I've recreated the struct definitions in c# and can read the data fine, but I'm concerned that the way I'm doing it will eventually kill my application.
```
using (BinaryReader br = new BinaryReader(File.Open("myfile.bin", FileMode.Open)))
{
long pos = 0L;
long length = br.BaseStream.Length;
CPP_STRUCT_DEF record;
byte[] buffer = new byte[Marshal.SizeOf(typeof(CPP_STRUCT_DEF))];
GCHandle pin;
while (pos < length)
{
buffer = br.ReadBytes(buffer.Length);
pin = GCHandle.Alloc(buffer, GCHandleType.Pinned);
record = (CPP_STRUCT_DEF)Marshal.PtrToStructure(pin.AddrOfPinnedObject(), typeof(CPP_STRUCT_DEF));
pin.Free();
pos += buffer.Length;
/* Do stuff with my record */
}
}
```
I don't think I need to use GCHandle because I'm not actually communicating with the C++ app, everything is being done from managed code, but I don't know of an alternative method. | For your particular application, only one thing will give you the definitive answer: Profile it.
That being said, here are the lessons I've learned while working with large PInvoke solutions. The most effective way to marshal data is to marshal fields which are blittable, meaning the CLR can simply do what amounts to a memcpy to move data between native and managed code. In simple terms, get all of the non-inline arrays and strings out of your structures. If they are present in the native structure, represent them with an IntPtr and marshal the values on demand into managed code.
I haven't ever profiled the difference between using Marshal.PtrToStructure vs. having a native API dereference the value. This is probably something you should invest in should PtrToStructure be revealed as a bottleneck via profiling.
For large hierarchies marshal on demand vs. pulling an entire structure into managed code at a single time. I've run into this issue the most when dealing with large tree structures. Marshalling an individual node is very fast if it's blittable and performance wise it works out to only marshal what you need at that moment. | Using `Marshal.PtrToStructure` is rather slow. I found the following article on CodeProject which is comparing (and benchmarking) different ways of reading binary data very helpful:
> [Fast Binary File Reading with C#](http://www.codeproject.com/KB/files/fastbinaryfileinput.aspx) | What's the most efficient way to marshal C++ structs to C#? | [
"",
"c#",
"performance",
"pinvoke",
"marshalling",
""
] |
Have this method call:
```
->
simpleJdbcTemplate.queryForInt(SQL,null);
->
```
The `queryForInt()` method in Spring's `SimpleJdbcTemplate` throws a `DataAccessException`, which is a runtime exception. I want to propagate exceptions to the view tier of the application, but since the Spring Framework wraps checked exceptions inside `RuntimeException`s, I'm stuck here.
How do I do this?
Explanation 1:
This is the value-add provided by the Spring Framework's JDBC abstraction framework: they say the Spring Framework takes care of all of the steps below except 3 and 6, which need to be coded by an application developer.
1. Define connection parameters
2. Open the connection
3. Specify the statement
4. Prepare and execute the statement
5. Set up the loop to iterate through the results (if any)
6. Do the work for each iteration
7. Process any exception
8. Handle transactions
9. Close the connection
But suppose the connection to the database is lost some time after the program starts. Then a runtime exception will be thrown when the above method is called, and since I don't handle the exception I cannot inform the user interface (view). | It depends if your view tier catches checked exceptions (any subclass of Throwable that does not subclass RuntimeException or Error, and is not an instance of RuntimeException or Error directly) or unchecked exceptions (RuntimeException or Error, or subclasses of these).
Generally, you'll either have something like this:
```
try {
//... processing
} catch(Exception/RuntimeException e) {
// propagate the exception to the view in a meaningful manner
}
```
If this is the case, for a runtime exception, you don't have to do anything - the block will catch the runtime exception.
If you want to convert it to checked, assuming you're using a version of Java that supports wrapped exceptions, all you have to do is:
```
try {
//...spring code
} catch(DataAccessException e) {
throw new Exception(e);
}
```
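A self-contained sketch of the same wrap-and-rethrow pattern (hypothetical names throughout; a plain `RuntimeException` stands in for Spring's `DataAccessException` so the mechanics can be shown without Spring on the classpath):

```java
public class Main {
    // Stand-in for simpleJdbcTemplate.queryForInt(...) throwing an unchecked exception.
    static int queryForInt() {
        throw new RuntimeException("database connection lost");
    }

    // Convert the unchecked exception into a checked one for the layers above.
    static int safeQueryForInt() throws Exception {
        try {
            return queryForInt();
        } catch (RuntimeException e) {
            throw new Exception(e); // getCause() still returns the original exception
        }
    }

    public static void main(String[] args) {
        try {
            safeQueryForInt();
        } catch (Exception e) {
            System.out.println("caught checked wrapper, cause: " + e.getCause().getMessage());
        }
    }
}
```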
Then, your layer above this processing will catch it as a checked exception. | Just because Spring throws a runtime exception doesn't mean you cannot catch it. If you want to do something special for DataAccessExceptions, you can certainly do that:
```
try {
// query logic
} catch (DataAccessException ex) {
// handle the exception
}
```
If you're using Spring's MVC framework, it may be worth looking into the ExceptionResolver interface. It's a mechanism for deciding how to handle all those uncaught exceptions thrown by the lower layers of the application. It gives you one last chance to display a different view based on exceptions that are thrown. | Spring Frame work Wraps Checked Exceptions inside RuntimeExceptions | [
"",
"java",
"spring",
"exception",
"runtime",
""
] |
Is there a difference between `assertEquals` and `assertEqual` in the python `unittest.TestCase`?
And if there is not, why are there two functions? Only for convenience? | Actually, in Python 2.6, both `assertEqual` and `assertEquals` are convenience aliases to `failUnlessEqual`. The source declares them thus:
```
# Synonyms for assertion methods
assertEqual = assertEquals = failUnlessEqual
```
In [Python 3](http://docs.python.org/3/library/unittest.html#deprecated-aliases), to your point, `failUnlessEqual` is explicitly deprecated. `assertEquals` carries this comment :-)
```
# Synonyms for assertion methods
# The plurals are undocumented. Keep them that way to discourage use.
# Do not add more. Do not remove.
# Going through a deprecation cycle on these would annoy many people.
```
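As a quick, runnable sanity check of the canonical spelling (standard library only; the test-case name here is made up for illustration):

```python
import unittest

class AliasDemo(unittest.TestCase):
    def test_equal(self):
        # assertEqual is the canonical name on both Python 2.x and 3.x
        self.assertEqual("blah", "bl" + "ah")

# Run the case programmatically rather than via unittest.main()
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AliasDemo)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```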
So, the upshot appears to be that you should use whatever you like for Python 2.x, but tend toward `assertEqual` for Python 3. | A 3.3 update: From [26.3.7.1.1. Deprecated aliases](http://docs.python.org/3.3/library/unittest.html#deprecated-aliases) :
> For historical reasons, some of the TestCase methods had one or more aliases that are now deprecated. The following table lists the correct names along with their deprecated aliases:
```
Method Name | Deprecated alias | Deprecated alias
--------------+------------------+-----------------
assertEqual() | failUnlessEqual | assertEquals
...
``` | assertEquals vs. assertEqual in python | [
"",
"python",
"unit-testing",
""
] |
I am building an iPhone app which uses C# web services. My C# web service takes user details, validates them against my DB and returns XML files.
So now the issue is how to encrypt user details (username and password are 10 chars each) in Objective-C and decrypt them in C#.
I am very new to cryptography. Which method is best? Will it be possible to encrypt in Objective-C and decrypt in C#?
thanks.. | Thanks for the rapid replies. I appreciate your help. I found a blog which explains my problem. Here is the link for it.
<http://dotmac.rationalmind.net/2009/02/aes-interoperability-between-net-and-iphone/>
I am implementing it now. I will let you know the status soon.
Thanks a lot..
Happy coding.. | On the assumption that you're encrypting this information in order to protect it over the network, the best solution is to connect over SSL. This will address the problem without creating new complexities in the code. SSL handling is generally available in both .NET and Cocoa.
Is there some other reason that you're trying to encrypt this data? | String encryption in C# and Objective c | [
"",
"c#",
"objective-c",
"string",
"cryptography",
""
] |
In Windows, using C#, how can I get the installation path of a piece of software (for example NUnit, or any other software like MS Word, etc.) **from my project**? Also, how do I set the path variables that we set in the environment variables, so that the application can be run just by typing its name at the command prompt?
For example, if I install NUnit in "C:\Program Files" I can run it by typing 'NUnit' at the command prompt, but if I install it in a different location I can't do the same.
**I need to get the location or path of NUnit or any other software installed in my system (having Windows XP) from my project.**
EDIT:
For instance, I can get the path of an installed program from the registry:
HKEY\_CURRENT\_USER->SOFTWARE | Use the system and application classes. This will give you all sorts of information.
E.g.: Application.ExecutablePath
It also provides methods to do what you want to.
Edit: Also see registry read/write instructions [here](http://www.c-sharpcorner.com/UploadFile/sushmita_kumari/RegistryKeys102082006061720AM/RegistryKeys1.aspx?ArticleID=0ce07333-c9ab-4a6a-bc5d-44ea2523e232). | ```
Application.ExecutablePath (includes filename)
Application.StartupPath (does not include filename)
```
This will give you the path where the application started. Hopefully it will be the installation path. | How to get installation path of an application? | [
"",
"c#",
"windows",
""
] |
I have a date such as `2009-06-30` (30th june, 2009). I want to calculate a date which appears 2 months, and 3 days before or after the first date. Or, 3 months, 6 days, before or after, etc. How can I do this? Is there an easy way using `DATE_SUB()` or `DATE_ADD()` | `DATE_ADD(whatever, INTERVAL 2 MONTH) + INTERVAL 3 DAY` for example (just to show both of the syntax variants you can use in MySQL for this task) will give a date that's 2 months and 3 days after `whatever`. | You should be able to use [DateTime:sub](http://ca.php.net/manual/en/datetime.sub.php). To get the date 3 months and six days you should be able to use:
```
date_sub($myDate,"P3M6D");
``` | Tricky PHP/Mysql date calculation | [
"",
"php",
"mysql",
""
] |
OK, here's what I'm trying to do... I know that an itemgetter() sort can do alphabetical sorting easily, but if I have something like this:
> [{'Name':'TOTAL', 'Rank':100},
> {'Name':'Woo Company', 'Rank':15},
> {'Name':'ABC Company', 'Rank':20}]
And I want it sorted alphabetically (by Name) + include the condition that the one with Name:'TOTAL' should be listed last in the sequence, like this:
> [{'Name':'ABC Company', 'Rank':20},
> {'Name':'Woo Company', 'Rank':15},
> {'Name':'TOTAL', 'Rank':100}]
How would I do that? | The best approach here is to decorate the sort key... Python will sort a tuple by the tuple components in order, so build a tuple key with your sorting criteria:
```
sorted(list_of_dicts, key=lambda d: (d['Name'] == 'TOTAL', d['Name'].lower()))
```
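For instance, applied to the sample data from the question (a quick sanity check, not part of the original answer):

```python
people = [
    {'Name': 'TOTAL', 'Rank': 100},
    {'Name': 'Woo Company', 'Rank': 15},
    {'Name': 'ABC Company', 'Rank': 20},
]
# Tuple keys sort False before True, so 'TOTAL' is pushed to the end.
ordered = sorted(people, key=lambda d: (d['Name'] == 'TOTAL', d['Name'].lower()))
print([d['Name'] for d in ordered])  # ['ABC Company', 'Woo Company', 'TOTAL']
```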
This results in a sort key of:
* (True, 'total') for {'Name': 'TOTAL', 'Rank': 100}
* (False, 'woo company') for {'Name': 'Woo Company', 'Rank': 15}
* (False, 'abc company') for {'Name': 'ABC Company', 'Rank': 20}
Since False sorts earlier than True, the ones whose names aren't TOTAL will end up together, then be sorted alphabetically, and TOTAL will end up at the end. | ```
>>> lst = [{'Name':'TOTAL', 'Rank':100}, {'Name':'Woo Company', 'Rank':15}, {'Name':'ABC Company', 'Rank':20}]
>>> lst.sort(key=lambda d: (d['Name']=='TOTAL',d['Name'].lower()))
>>> print lst
[{'Name': 'ABC Company', 'Rank': 20}, {'Name': 'Woo Company', 'Rank': 15}, {'Name': 'TOTAL', 'Rank': 100}]
``` | In Python how do I sort a list of dictionaries by a certain value of the dictionary + alphabetically? | [
"",
"python",
"list",
"sorting",
"dictionary",
""
] |
I have a string that has numbers
```
string sNumbers = "1,2,3,4,5";
```
I can split it then convert it to `List<int>`
```
sNumbers.Split( new[] { ',' } ).ToList<int>();
```
How can I convert string array to integer list?
So that I'll be able to convert `string[]` to `IEnumerable` | ```
var numbers = sNumbers?.Split(',')?.Select(Int32.Parse)?.ToList();
```
Recent versions of C# (v6+) allow you to do null checks in-line using the [null-conditional operator](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/operators/member-access-operators#null-conditional-operators--and-) | Better use `int.TryParse` to avoid exceptions;
```
var numbers = sNumbers
.Split(',')
.Where(x => int.TryParse(x, out _))
.Select(int.Parse)
.ToList();
``` | Split string, convert ToList<int>() in one line | [
"",
"c#",
"list",
"split",
""
] |
I have written a Java Axis2 1.4.1 web service and a .NET 3.5 WCF client, and I am trying to catch the WSDL faults thrown.
Unlike .NET 2.0, .NET 3.5 claims to support `wsdl:fault`, and the service reference wizard does generate all the correct fault classes in the client proxy. But when I try to catch a fault it doesn't seem to serialise correctly, so that I can only `catch (FaultException ex)` and not the type I actually threw using `FaultException<T>`
I had a look inside my reference.cs and I can see the wizard has added the correct `FaultContract` to my operation.
```
[System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "3.0.0.0")]
[System.ServiceModel.ServiceContractAttribute(Namespace="http://www.mycomp.com/wsdl/Foo", ConfigurationName="FooServiceProxy.Foo")]
public interface Foo {
[System.ServiceModel.OperationContractAttribute(Action="http://www.mycomp.com/Foo/list", ReplyAction="*")]
[System.ServiceModel.FaultContractAttribute(typeof(TestWsdlFaultsApp.FooServiceProxy.SimpleFault), Action="http://www.mycomp.com/Foo/list", Name="simpleFault")]
[System.ServiceModel.XmlSerializerFormatAttribute()]
TestWsdlFaultsApp.FooServiceProxy.listResponse list(TestWsdlFaultsApp.FooServiceProxy.listRequest request);
}
```
Is there something else I need to do in .NET to get this to work? Or does WCF only support custom wsdl faults from a .NET web service?
Here's my wsdl:
```
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="wsdl-viewer.xsl"?>
<wsdl:definitions name="FooImplDefinitions"
targetNamespace="http://www.mycomp.com/wsdl/Foo"
xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
xmlns:tns="http://www.mycomp.com/wsdl/Foo"
xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
xmlns:xs="http://www.w3.org/2001/XMLSchema">
<!-- TYPES -->
<wsdl:types>
<xs:schema targetNamespace="http://www.mycomp.com/wsdl/Foo"
elementFormDefault="qualified" attributeFormDefault="unqualified"
xmlns:security="http://www.mycomp.com/xsd/types/Security">
<!-- IMPORTS -->
<xs:import namespace="http://www.mycomp.com/xsd/types/Foo"
schemaLocation="Foo.xsd" />
<xs:import namespace="http://www.mycomp.com/xsd/types/Security"
schemaLocation="Security.xsd" />
<!-- HEADER ELEMENTS -->
<xs:element name="identity" type="security:TrustedIdentity" />
<!-- REQUEST/RESPONSE ELEMENTS -->
<xs:element name="listRequest">
<xs:complexType>
<xs:sequence>
<xs:element name="action" type="xs:string" />
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="listResponse">
<xs:complexType>
<xs:sequence>
<xs:element name="stuff" type="xs:string" />
</xs:sequence>
</xs:complexType>
</xs:element>
<!-- FAULT TYPES -->
<xs:complexType name="SimpleFault">
<xs:sequence>
<xs:element name="reason" type="xs:string"/>
</xs:sequence>
</xs:complexType>
<!-- FAULT ELEMENTS -->
<xs:element name="simpleFault" type="tns:SimpleFault"/>
</xs:schema>
</wsdl:types>
<!-- MESSAGES -->
<wsdl:message name="listRequest">
<wsdl:part element="tns:listRequest" name="parameters" />
<wsdl:part element="tns:identity" name="header" />
</wsdl:message>
<wsdl:message name="listResponse">
<wsdl:part element="tns:listResponse" name="return" />
</wsdl:message>
<wsdl:message name="simpleException">
<wsdl:part element="tns:simpleFault" name="fault"/>
</wsdl:message>
<!-- PORT TYPES -->
<wsdl:portType name="Foo">
<wsdl:operation name="list">
<wsdl:input message="tns:listRequest" />
<wsdl:output message="tns:listResponse" />
<wsdl:fault name="simpleFault" message="tns:simpleException" />
</wsdl:operation>
</wsdl:portType>
<!-- BINDINGS -->
<wsdl:binding name="FooBinding" type="tns:Foo">
<soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http" />
<wsdl:operation name="list">
<soap:operation soapAction="http://www.mycomp.com/Foo/list" />
<wsdl:input>
<soap:header message="tns:listRequest" part="header" use="literal" />
<soap:body parts="parameters" use="literal" />
</wsdl:input>
<wsdl:output>
<soap:body use="literal" />
</wsdl:output>
<wsdl:fault name="simpleFault">
<soap:fault name="simpleFault" use="literal" />
</wsdl:fault>
</wsdl:operation>
</wsdl:binding>
<!-- SERVICES -->
<wsdl:service name="FooServiceImpl">
<wsdl:port name="FooPort" binding="tns:FooBinding">
<soap:address
location="http://localhost:9001/Foo/FooServiceImpl" />
</wsdl:port>
</wsdl:service>
</wsdl:definitions>
``` | Thanks John, you set me on the right path. The problem was obvious: I was not setting the fault detail when I threw the fault in Java (Axis2).
DODGY CODE:
```
throw new SimpleException("SimpleFault thrown");
```
WORKING CODE:
```
SimpleFault fault = new SimpleFault();
fault.setReason("SimpleFault reason");
SimpleFaultE faultMessage = new SimpleFaultE();
faultMessage.setSimpleFault(fault);
SimpleException ex = new SimpleException("SimpleFault thrown");
ex.setFaultMessage(faultMessage);
throw ex;
```
So AXIS2 -> WCF wsdl:fault interop works just fine... | WCF should work with axis2 exceptions. I had it working, but I don't remember all the details.
When you use SOAP monitor or something like that, what do you see in the fault message body? | Does WCF FaultException<T> support interop with a Java web service Fault | [
"",
"java",
".net",
"wcf",
"interop",
"apache-axis",
""
] |
I have two functions like this:
```
function mysql_safe_query($format) {
$args = array_slice(func_get_args(),1);
$args = array_map('mysql_safe_string',$args);
$query = vsprintf($format,$args);
$result = mysql_query($query);
if($result === false) echo '<div class="mysql-error">',mysql_error(),'<br/>',$query,'</div>';
return $result;
}
function mysql_row_exists() {
$result = mysql_safe_query(func_get_args());
return mysql_num_rows($result) > 0;
}
```
The problem is that the second function won't work because it passes the args to the first one as an array, when it expects them as different parameters. Is there any way to get around this, preferably without modifying `mysql_safe_query`? | N.B. In PHP 5.6 you can now do this:
```
function mysql_row_exists(...$args) {
$result = mysql_safe_query(...$args);
return mysql_num_rows($result) > 0;
}
```
Also, for future readers, mysql\_\* is deprecated -- don't use those functions. | How about using:
```
$args = func_get_args();
call_user_func_array('mysql_safe_query', $args);
``` | Pass all arguments to another function | [
"",
"php",
"syntax",
""
] |
I am building a website in CakePHP that processes files uploaded though an XML-RPC API and though a web frontend. Files need to be scanned by ClamAV, thumbnails need to be generated, et cetera. All resource intensive work that takes some time for which the user should not have to wait. So, I am looking into asynchronous processing with PHP in general and CakePHP in particular.
I came across the [MultiTask plugin](http://blogs.bigfish.tv/adam/2009/02/16/new-cakephp-multitask-plugin/) for CakePHP that looks promising. I also came across various message queue implementations such as [dropr](https://www.dropr.org/) and [beanstalkd](http://xph.us/software/beanstalkd/). Of course, I will also need some kind of background process, probably implemented using a Cake Shell of some kind. I saw MultiTask using [PHP\_Fork](http://pear.php.net/package/PHP_Fork/docs) to implement a multithreaded PHP daemon.
I need some advice on how to fit all these pieces together in the best way.
* Is it a good idea to have a long-running daemon written in PHP? What should I watch out for?
* What are the advantages of external message queue implementations? The MultiTask plugin does not use an external message queue; it rolls its own using a MySQL table to store tasks.
* What message queue should I use? dropr? beanstalkd? Something else?
* How should I implement the backend processor? Is a forking PHP daemon a good idea or just asking for trouble?
My current plan is either to use the MultiTask plugin or to edit it to use beanstalkd instead of its own MySQL table implementation. Jobs in the queue can simply consist of a task name and an array of parameters. The PHP daemon would watch for incoming jobs and pass them out to one of its child threads, which would simply execute the CakePHP Task with the given parameters.
Any opinion, advice, comments, gotchas or flames on this? | I've had excellent results with [BeanstalkD](http://kr.github.com/beanstalkd/) and a back-end written in PHP to retrieve jobs and then act on them. I wrapped the actual job-running in a bash script to keep it running even if it exited (unless I do an '`exit(UNIQNUM);`', which the script checks for and will then actually exit). That way, the restarted PHP script clears down any memory that may have been used, and can start afresh every 25/50/100 jobs it runs.
A couple of the advantages of using it are that you can set priorities and delays on a BeanstalkD job - "run this at a lower priority, but don't start for 10 seconds". I've also queued a number of jobs up at the same time (run this now, in 5 seconds and again after 30 secs).
With the appropriate network configuration (and running it on an IP address accessible to the rest of your network), you can also run a beanstalkd daemon on one server and have it polled from a number of other machines, so if there are a large number of tasks being generated, the work can be split between servers. If a particular set of tasks needs to be run on a particular machine, I've created a 'tube' named after that machine's hostname, which should be unique within our cluster, if not globally (useful for file uploads). I found it worked perfectly for image resizing, often returning the finished smaller images to the file system before the web page that would refer to them had even requested the URL.
I'm actually about to start writing a series of articles on this very subject for my blog (including some techniques for code that I've already pushed several million live requests through) - My URL is linked from my [user profile](https://stackoverflow.com/users/6216/topbit) here, on Stackoverflow.
(I've written a [series of articles](http://www.phpscaling.com/tag/beanstalkd/) on the subject of Beanstalkd and queuing of jobs) | If you use a message queue like beanstalkd, you can start as many processes as you'd like (even on the same server). Each worker process will take one job from the queue and process it. You can add more workers and more servers if you need more capacity.
The nice thing about using a single threaded worker is that you don't have to deal with synchronization inside a process. The jobqueue will make sure no job will be handled twice. | Asynchronous processing or message queues in PHP (CakePHP) | [
"",
"php",
"multithreading",
"cakephp",
"asynchronous",
"message-queue",
""
] |
I am learning about C# refs right now.
Is it safe to assume that all variables that are assigned with a new are references and not values?
Example:
```
SomeType variable = new SomeType()
``` | No:
* Instances of classes are references, but instances of structs are values.
* Classes and structs can both be constructed using new
For example, `System.Drawing.Point` is a `struct` not a `class`:
```
Point point = new Point(1, 1); //point is a value, not a reference
``` | C# has two types of objects - Reference Types (defined with a class keyword) and Value Types (defined with a struct keyword).
* RTs are created on the managed heap. When you do a new() on a RT, you get back a reference to an object on the heap. Two RT variables can point to the same object.
* VTs are created on the stack (unless they are a field within a reference type). When you do a new() on a VT, you get a variable that is pointing to the beginning of the object on the stack. Two value type variables can't point to the same object - they're always two distinct objects on the stack. [This however is an implementation detail; the important thing to remember is that VTs have value type semantics.](http://blogs.msdn.com/ericlippert/archive/2009/04/27/the-stack-is-an-implementation-detail.aspx)
Pick up a good book on C# for more details.. Its an important concept. | Safe to assume all C# variables initialized by new are references? | [
"",
"c#",
"reference",
"ref",
""
] |
At the risk of being flamed... what advantage does enforcing calls to methods rather than functions have in a context where the context is implicit?
Considering that PHP's syntax is so ugly for calling methods why would PHPUnit's creators have enforced its usage?
If the framework had set a global "currentTestCase" object and then transparently associated failed asserts with that object we could be writing:
```
assertEquals("blah", $text);
```
as opposed to the equivalent, but verbose:
```
$this->assertEquals("blah", $text);
```
What exactly do we get by using OO in this context?
Please enlighten me. | Because PHPUnit is derived from xUnit and that's how [xUnit](https://secure.wikimedia.org/wikipedia/en/wiki/XUnit) does it.
Why does xUnit do it that way? I'm glad you asked. The original reason, as Robert points out, is that xUnit comes from Smalltalk and was popularized by JUnit in Java. Both are OO-or-nothing languages so they had no choice.
This is not to say there are not other advantages. OO tests can be inherited. This means if you want to test a subclass you can run all the parent's tests and just override the handful of test methods for the behaviors you've changed. This gives you excellent coverage of subclasses without having to duplicate test code.
It's easy to add and override assert methods in PHPUnit. Just subclass `PHPUnit_Framework_TestCase`, write your own `assert` methods and have your test classes inherit from your new subclass. You can also write default `setup` and `teardown` methods.
Finally, it guarantees that the test framework's methods aren't going to clash with the thing that they're testing. If the test framework just dumped its functions into the test and you wanted to test something that had a `setup` method... well you're in trouble.
That said, I hear your pain. A big test framework can be annoying and cumbersome and brittle. Perl doesn't use an xUnit style, it uses a procedural style with short test function names. See [Test::More](http://search.cpan.org/perldoc?Test::More) for an example. Behind the scenes it does just what you suggested, there's a singleton test instance object which all the functions use. There's also a hybrid procedural assert functions with OO test methods module called [Test::Class](http://search.cpan.org/perldoc?Test::Class) which does the best of both worlds.
> Considering that PHP's syntax is so ugly for calling methods
I guess you don't like the `->`. I suggest you learn to live with it. OO PHP is so much nicer than the alternative. | One good reason is that `assertXXX` as a method name has a high risk for naming clash.
Another one is that it is derived from the *xUnit* family, which typically deals with object-oriented languages - Smalltalk initially. This makes it easier to related yourself to your "siblings" from e.g. Java and Ruby. | Why does PHPUnit insist on doing things the OO way? | [
"",
"php",
"oop",
"phpunit",
""
] |
How do you resize a black and white image without any smoothing effect? I have a barcode image that is too big. I need to resize the image, but with one caveat: the resulting image needs to stay proportional, and the black and white bars must not turn into black, grey and white bars (which I believe is due to some smoothing occurring). Any example of how to do this in C#? | @arul is correct. To be specific,
```
graphics.SmoothingMode = SmoothingMode.None;
graphics.DrawImage( barCodeImage, new Point( 0, 0 ) );
```
You might also want to check out the InterpolationMode, to see whether changing this value gives you results closer to what you want. | You can specify [the smoothing mode](http://msdn.microsoft.com/en-us/library/system.drawing.graphics.smoothingmode(VS.71).aspx) in the underlying [Graphics object](http://msdn.microsoft.com/en-us/library/system.drawing.graphics(VS.71).aspx). | How to resize image in C# without smoothing | [
"",
"c#",
".net",
"asp.net",
"optimization",
""
] |
This is how I register a `DependencyProperty`:
```
public static readonly DependencyProperty UserProperty =
DependencyProperty.Register("User", typeof (User),
typeof (NewOnlineUserNotifier));
public User User
{
get
{
return (User)GetValue(UserProperty);
}
set
{
SetValue(UserProperty, value);
}
}
```
The third parameter of the `DependencyProperty.Register` method requires you to specify the type of the Control where the Dependency Property resides in (in this case, my User Control is called `NewOnlineUserNotifier`).
My question is, **why do you actually specify the type of the owner, and what happens if you specify a different type than the actual owner's?** | Calling the Register method from a type does not in itself make that type the owner of the property; the owner is whatever type you pass in. So you can't really specify "a different type than the actual owner": the type you specify *is* the actual owner.
An example where this may be useful is when you create a custom control that contains other controls. Previously with WinForms if you had some extra information that was only useful to that container, but semantically belonged to the child, then the best you could do was place that information in the hold-all "Tag" property. This both removed type safety and you were never sure that another class wouldn't try and store something else in the tag. Now with WPF dependency properties allow you to tie values to objects without the object itself needing to hold the value. A trivial example:
```
public class ButtonContainer : Control
{
public Button ChildButton { get; set; }
public static readonly DependencyProperty FirstOwnerProperty =
DependencyProperty.Register("FirstOwner", typeof(ButtonContainer),
typeof(Button));
public ButtonContainer()
{
ChildButton = new Button();
ChildButton.SetValue(FirstOwnerProperty, this);
}
}
```
Now the button has an extra property that only makes sense within the context of the ButtonContainer and can only be accessed within the context of the ButtonContainer - like a typesafe, encapsulated Tag.
Using the new class as follows:
```
ButtonContainer container1 = new ButtonContainer();
ButtonContainer container2 = new ButtonContainer();
container2.ChildButton = container1.ChildButton;
```
As the ChildButton is moved from one container to another the value of its FirstOwnerProperty travels with it as though it was a real member of the Button class. Container2 can call ChildButton.GetValue(FirstOwnerProperty) and find out which ButtonContainer originally created the button (why it might want to do this is left as an exercise for the reader...). All of this is possible without the need to subclass the button to a narrow speciality. | In a nutshell, when you're registering a DP, you're adding an object(DP) to a list attached to a class(owner). This operation only "lives" in the class where it is declared and often is not related to it. | WPF Dependency Properties: Why do I need to specify an Owner Type? | [
"",
"c#",
"wpf",
"dependency-properties",
""
] |
Assume I have a list of words, and I want to find the number of times each word appears in that list.
An obvious way to do this is:
```
words = "apple banana apple strawberry banana lemon"
uniques = set(words.split())
freqs = [(item, words.split().count(item)) for item in uniques]
print(freqs)
```
But I find this code not very good, because the program runs through the word list twice, once to build the set, and a second time to count the number of appearances.
Of course, I could write a function to run through the list and do the counting, but that wouldn't be so Pythonic. So, is there a more efficient and Pythonic way? | The [`Counter` class](http://docs.python.org/library/collections.html#collections.Counter) in the `collections` module is purpose built to solve this type of problem:
```
from collections import Counter
words = "apple banana apple strawberry banana lemon"
Counter(words.split())
# Counter({'apple': 2, 'banana': 2, 'strawberry': 1, 'lemon': 1})
``` | [defaultdict](http://docs.python.org/library/collections.html#defaultdict-objects) to the rescue!
```
from collections import defaultdict
words = "apple banana apple strawberry banana lemon"
d = defaultdict(int)
for word in words.split():
d[word] += 1
```
This runs in O(n). | Item frequency count in Python | [
"",
"python",
"count",
"frequency",
"counting",
""
] |
I have used a WPF RichTextBox to save a FlowDocument from it as a byte[] in a database. Now I need to retrieve this data and display it in a report RichTextBox as RTF.
When I try to convert the byte[] using TextRange or XamlReader, I get a FlowDocument back, but how do I convert it to an RTF string, since the report RichTextBox only takes RTF?
Thanks
Arvind | You should not persist the FlowDocument directly as it should be considered the runtime representation of the document, not the actual document content. Instead, use [the TextRange class](http://msdn.microsoft.com/en-us/library/ms598701.aspx) to Save and Load to various formats including [Rtf](http://msdn.microsoft.com/en-us/library/system.windows.dataformats.rtf.aspx).
A quick sample on how to create a selection and save to a stream:
```
var content = new TextRange(doc.ContentStart, doc.ContentEnd);
if (content.CanSave(DataFormats.Rtf))
{
using (var stream = new MemoryStream())
{
content.Save(stream, DataFormats.Rtf);
}
}
```
To load content into a selection would be similar:
```
var content = new TextRange(doc.ContentStart, doc.ContentEnd);
if (content.CanLoad(DataFormats.Rtf))
{
content.Load(stream, DataFormats.Rtf);
}
``` | This works like a charm for me. Displays the result in an RTF box without difficulties.
```
public static string getDocumentAsXaml(IDocumentPaginatorSource flowDocument)
{
return XamlWriter.Save(flowDocument);
}
``` | How to convert FlowDocument to rtf | [
"",
"c#",
"rtf",
"flowdocument",
""
] |
Which is faster,
1. Using an XMLHTTP GET request and then using eval() on the responseText
2. Using the JSONP technique for loading a script and then eval()'ing the innerHTML of the script.
3. Dynamically adding a script tag to the head of a document where the javascript object is being assigned to a variable?
My gut tells me this should be option 3, but I am not sure how eval compares with using a script tag. Any thoughts? | I agree JSON2 library is probably the way to go for eval()'ing. But for transport, it looks like the JSONP mechanism is faster than XMLHTTPRequest per some analysis that John Resig has done.
<http://ejohn.org/blog/native-json-support-is-required/> | It could vary based on browser and some other factors so I think if you really want the best performance you should do some tests and profile them. For one, eval() performance can vary tremendously depending on what you are eval()'ing | performance questions on dynamically loading Javascript Objects | [
"",
"javascript",
"json",
"scripting",
"loading",
"dynamic",
""
] |
Consider the following situation: -
I am using Linux.
I suspect that my application has crashed.
I had not enabled core dumps.
There is no information in the log.
How can I be sure that after the system restart my app was started, but is now not running because it has crashed?
My app is configured as a service, written in C/C++.
In a way: **how can I get all the process/service names that have executed since the system start?** Is it even possible?
I know, I can enable logging and start the process again to get the crash. | Standard practice is to have a pid file for your daemon (/var/run/$NAME.pid), in which you can find its process id without having to parse the process tree manually. You can then either check the state of that process, or make your daemon respond to a signal (usually SIGHUP), and report its status. It's a good idea to make sure that this pid still belongs to your process too, and the easiest way is to check /proc/$PID/cmdline.
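A minimal shell sketch of that check (the daemon name `mydaemon` is hypothetical; substitute your service's name and pid file path):
```
# Report whether the daemon whose pid file we keep is still alive.
check_daemon() {
    name="$1"
    pidfile="/var/run/$name.pid"
    # The pid file must be readable, and the recorded pid must still be
    # running *our* binary (guards against pid reuse after a crash).
    if [ -r "$pidfile" ] && pid=$(cat "$pidfile") \
       && grep -q "$name" "/proc/$pid/cmdline" 2>/dev/null; then
        echo "running (pid $pid)"
    else
        echo "not running (crashed, or never started)"
    fi
}

check_daemon mydaemon
```
The same test works from a cron job or an init script, so a crashed service can be detected (or restarted) automatically.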
Addendum:
If you're only using newer fedora or ubuntu, your init system is [upstart](http://upstart.ubuntu.com/), which has monitoring and triggering capabilities built in.
As @emg-2 noted, BSD process accounting is available, but I don't think it's the correct approach for this situation. | This feature is included in Linux Kernel. It's called: [BSD process accounting.](http://www.linuxjournal.com/article/6144) | How to know if a process had been started but crashed in Linux | [
"",
"c++",
"c",
"linux",
"debugging",
"monitoring",
""
] |
I was wondering, is it possible to import data from a live stream from a web site and perform computation on the data in real time? If this is possible, what is the most efficient (computationally fast) way of doing it? Thank you for any help or comments. | You can use [cURL](http://curl.haxx.se/). A curl handle can have a function called each time new data comes in. | What type of data are you talking about streaming? Perhaps [socket programming](http://en.wikipedia.org/wiki/Berkeley_sockets) is what you need. | import data from web stream in c++ | [
"",
"c++",
"stream",
"live",
""
] |
I have something like this:
```
# a.py
import os
class A:
...
# b.py
import a
class B(A):
...
```
In class B (b.py) I'd like to be able to use the modules imported in a.py (os in this case). Is it possible to achieve this behavior in Python or should I import the modules in both files?
Edit: I'm not worried about the import times, my problem is the visual clutter that the block of imports puts on the files. I end up having stuff like this in every controller (RequestHandler):
```
from django.utils import simplejson
from google.appengine.ext import webapp
from google.appengine.ext.webapp import template
from google.appengine.ext import db
```
That's what I'd like to avoid. | Yes, you can use the imports from the other file by going through `a.os`.
However, the pythonic way is to just import the exact modules you need without making a chain out of it (which can lead to circular references).
When you import a module, the code is compiled and inserted into a dictionary of names -> module objects. The dictionary is located at sys.modules.
```
import sys
sys.modules
>>> pprint.pprint(sys.modules)
{'UserDict': <module 'UserDict' from 'C:\python26\lib\UserDict.pyc'>,
'__builtin__': <module '__builtin__' (built-in)>,
'__main__': <module '__main__' (built-in)>,
'_abcoll': <module '_abcoll' from 'C:\python26\lib\_abcoll.pyc'>,
# the rest omitted for brevity
```
When you try to import the module again, Python will check the dictionary to see if it's already there. If it is, it will return the already compiled module object to you. Otherwise, it will compile the code and insert it in sys.modules.
Since dictionaries are implemented as hash tables, this lookup is very quick and takes up negligible time compared to the risk of creating circular references.
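You can see the cache in action with a quick check (illustrative sketch):
```
import sys

import math
first = sys.modules["math"]   # the compiled module object is cached here

import math                   # re-import: just a dictionary lookup, no recompile
print(math is first)          # True: the same object is returned both times
```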
> Edit: I'm not worried about the import
> times, my problem is the visual
> clutter that the block of imports puts
> on the files.
If you only have about 4 or 5 imports like that, it's not too cluttered. Remember, "explicit is better than implicit". However, if it really bothers you that much, do this:
```
<importheaders.py>
from django.utils import simplejson
from google.appengine.ext import webapp
from google.appengine.ext.webapp import template
from google.appengine.ext import db
<mycontroller.py>
from importheaders import *
``` | Just import the modules again.
Importing a module in python is a very lightweight operation. The first time you import a module, python will load the module and execute the code in it. On any subsequent imports, you will just get a reference to the already-imported module.
You can verify this yourself, if you like:
```
# module_a.py
class A(object):
pass
print 'A imported'
# module_b.py
import module_a
class B(object):
pass
print 'B imported'
# at the interactive prompt
>>> import module_a
A imported
>>> import module_a # notice nothing prints out this time
>>> import module_b # notice we get the print from B, but not from A
B imported
>>>
``` | Can a Python module use the imports from another file? | [
"",
"python",
""
] |
After having read and used `Struts1` (made a web application using `Struts/Hibernate` last semester), I want to step forward in learning a better `MVC framework`. I have been wondering if it would be more prudent to learn `Struts2` now and Spring later, or skip `Struts2` for `Spring` directly? | I would suggest not treating the two as mutually exclusive. Struts2 competes with Spring MVC, but Struts2 uses Spring's injection container and AOP capabilities (Spring core and AOP).
Struts2 and Spring MVC are both working to achieve the same result: a clean MVC web framework. Deciding between the two will be to some extent personal preference. I personally did not like the "feel" of working with Spring MVC, while Struts2 and its interceptor stack felt much more comfortable to work with.
I would suggest that you work through a few tutorial "hello world" type apps in each framework and see which feels more intuitive for you to work with. Both frameworks have strengths and weaknesses, but all of them can be overcome. | Adding to Rich Kroll's points, Struts 2 provides rich tag libraries and the framework is very well organized. | Suggest a progression path from struts1 → (struts2, spring) | [
"",
"java",
"spring",
"struts2",
"struts",
""
] |
I read that they are conceptually equal. In practice, is there any occasion that
```
foo(T t)
```
is preferred over
```
foo(const T& t)
```
? And why?
---
Thanks for the answers so far, please note I am not asking the difference between by-ref and by-val.
Actually I was interested in the difference between **by-const-ref** and **by-val**.
I used to hold the opinion that by-const-ref can replace by-value in all cases, since even Herb Sutter and Bjarne said they are conceptually equal, and "by-ref" (be it const) implies being faster. Until recently, I read somewhere that by-val may be better optimized in some cases.
Then when and how? | Built-in types and small objects (such as STL iterators) should normally be passed by value.
This is partly to increase the compiler's opportunities for optimisation. It's surprisingly hard for the compiler to know if a reference parameter is aliasing another parameter or global - it may have to reread the state of the object from memory a number of times through the function, to be sure the value hasn't changed.
This is the reason for C99's `restrict` keyword (the same issue but with pointers). | If you want to locally modify `t` (without affecting the original) in the body of your method (say in the process of calculating something), the first method would be preferential. | is there any specific case where pass-by-value is preferred over pass-by-const-reference in C++? | [
"",
"c++",
""
] |
I have a huge text file with 25k lines. Inside that text file each line starts with "1 \t (linenumber)"
Example:
```
1 1 ITEM_ETC_GOLD_01 골드(소) xxx xxx xxx_TT_DESC 0 0 3 3 5 0 180000 3 0 1 0 0 255 1 1 0 0 0 0 0 0 0 0 0 0 -1 0 -1 0 -1 0 -1 0 -1 0 0 0 0 0 0 0 100 0 0 0 xxx item\etc\drop_ch_money_small.bsr xxx xxx xxx 0 2 0 0 1 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0 0 0 0 0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 표현할 골드의 양(param1이상) -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx 0 0
1 2 ITEM_ETC_GOLD_02 골드(중) xxx xxx xxx_TT_DESC 0 0 3 3 5 0 180000 3 0 1 0 0 255 1 1 0 0 0 0 0 0 0 0 0 0 -1 0 -1 0 -1 0 -1 0 -1 0 0 0 0 0 0 0 100 0 0 0 xxx item\etc\drop_ch_money_normal.bsr xxx xxx xxx 0 2 0 0 1 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0 0 0 0 0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1000 표현할 골드의 양(param1이상) -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx 0 0
1 3 ITEM_ETC_GOLD_03 골드(대) xxx xxx xxx_TT_DESC 0 0 3 3 5 0 180000 3 0 1 0 0 255 1 1 0 0 0 0 0 0 0 0 0 0 -1 0 -1 0 -1 0 -1 0 -1 0 0 0 0 0 0 0 100 0 0 0 xxx item\etc\drop_ch_money_large.bsr xxx xxx xxx 0 2 0 0 1 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0 0 0 0 0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 10000 표현할 골드의 양(param1이상) -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx 0 0
1 4 ITEM_ETC_HP_POTION_01 HP 회복 약초 xxx SN_ITEM_ETC_HP_POTION_01 SN_ITEM_ETC_HP_POTION_01_TT_DESC 0 0 3 3 1 1 180000 3 0 1 1 1 255 3 1 0 0 1 0 60 0 0 0 1 21 -1 0 -1 0 -1 0 -1 0 -1 0 0 0 0 0 0 0 100 0 0 0 xxx item\etc\drop_ch_bag.bsr item\etc\hp_potion_01.ddj xxx xxx 50 2 0 0 1 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0 0 0 0 0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 120 HP회복양 0 HP회복양(%) 0 MP회복양 0 MP회복양(%) -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx 0 0
1 5 ITEM_ETC_HP_POTION_02 HP 회복약 (소) xxx SN_ITEM_ETC_HP_POTION_02 SN_ITEM_ETC_HP_POTION_02_TT_DESC 0 0 3 3 1 1 180000 3 0 1 1 1 255 3 1 0 0 1 0 110 0 0 0 2 39 -1 0 -1 0 -1 0 -1 0 -1 0 0 0 0 0 0 0 100 0 0 0 xxx item\etc\drop_ch_bag.bsr item\etc\hp_potion_02.ddj xxx xxx 50 2 0 0 2 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0 0 0 0 0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 220 HP회복양 0 HP회복양(%) 0 MP회복양 0 MP회복양(%) -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx 0 0
1 6 ITEM_ETC_HP_POTION_03 HP 회복약 (중) xxx SN_ITEM_ETC_HP_POTION_03 SN_ITEM_ETC_HP_POTION_03_TT_DESC 0 0 3 3 1 1 180000 3 0 1 1 1 255 3 1 0 0 1 0 200 0 0 0 4 70 -1 0 -1 0 -1 0 -1 0 -1 0 0 0 0 0 0 0 100 0 0 0 xxx item\etc\drop_ch_bag.bsr item\etc\hp_potion_03.ddj xxx xxx 50 2 0 0 3 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0 0 0 0 0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 370 HP회복양 0 HP회복양(%) 0 MP회복양 0 MP회복양(%) -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx 0 0
1 7 ITEM_ETC_HP_POTION_04 HP 회복약 (대) xxx SN_ITEM_ETC_HP_POTION_04 SN_ITEM_ETC_HP_POTION_04_TT_DESC 0 0 3 3 1 1 180000 3 0 1 1 1 255 3 1 0 0 1 0 400 0 0 0 7 140 -1 0 -1 0 -1 0 -1 0 -1 0 0 0 0 0 0 0 100 0 0 0 xxx item\etc\drop_ch_bag.bsr item\etc\hp_potion_04.ddj xxx xxx 50 2 0 0 4 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0 0 0 0 0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 570 HP회복양 0 HP회복양(%) 0 MP회복양 0 MP회복양(%) -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx -1 xxx 0 0
```
Question: How do I directly read, for example, line 5? | You can use my `LineReader` class (either the one in [MiscUtil](http://pobox.com/~skeet/csharp/miscutil) or a [simple version here](http://csharpindepth.com/ViewNote.aspx?NoteID=113)) to implement `IEnumerable<string>` and then use LINQ:
```
string line5 = new LineReader(file).Skip(4).First();
```
This assumes .NET 3.5, admittedly. Otherwise, open a `TextReader` (e.g. with `File.OpenText`) and just call `ReadLine()` four times to skip the lines you don't want, and then once more to read the fifth line.
There's no way of "shortcutting" this unless you know exactly how many bytes are in each line. | If you are dealing with a fixed-width data format (ie. you know all the lines to be the same length), you can multiply the length with your desired line number and use Stream.Seek to find the start point of the nth line.
If the lines are not fixed length, you need to find the right number of line breaks until you are at the beginning of the line you want. That is most easily done with StreamReader.ReadLine. (You can make an extension method to make the file an IEnumerable<string> as Jon Skeet suggests; this would get you nicer syntax, but under the hood you will be using ReadLine.)
If performance is an issue, it might be (a little bit) more efficient to scan for <CR><LF> byte sequences in the file manually using the Stream.Read method. I haven't tested that; but the StreamReader obviously needs to do some work to construct a string out of the byte sequence. If you don't care about the first lines, this work can be saved, so theoretically you should be able to make a scanning method that performs better. This would be a lot more work for you, however.
"",
"c#",
".net",
"text",
""
] |
If I have code like this:
```
<script>
function determine()
{
// ????
}
</script>
<a href="blah1" onclick="determine()">blah1</a>
<a href="blah2" onclick="determine()">blah2</a>
```
Is there a way in `determine()` to see which link was clicked?
(Yes, I know, the easy and correct thing to do would be to pass `this` to `determine()`, but in this case that's not going to be easy to do because of legacy code issues.)
**EDIT:** I probably should have mentioned this at the beginning...our site is not currently using (and cannot use, for the time being) jQuery, so jQuery answers (while valuable in general for this type of question) won't actually help me. | If you cannot change the onclick="determine()" in your HTML, but you can change the determine() function, then I think your best bet is to:
Leave the determine() function blank so it doesn't do anything.
Use JavaScript (as described by other answers) to add a real click handler to each link, and use the event to determine which link was clicked, then execute the desired code. | Check out this [link](http://www.quirksmode.org/js/events_properties.html) from quirksmode. You can get the event target.
```
function doSomething(e) {
var targ;
if (!e) var e = window.event;
if (e.target) targ = e.target;
else if (e.srcElement) targ = e.srcElement;
if (targ.nodeType == 3) // defeat Safari bug
targ = targ.parentNode;
}
``` | Can I tell anything about a hyperlink that was clicked in JavaScript? | [
"",
"javascript",
"html",
"dom-events",
""
] |
I want to echo an image after every 3 posts via XML. Here is my code:
```
<?php
// URL of the XML feed.
$feed = 'test.xml';
// How many items do we want to display?
//$display = 3;
// Check our XML file exists
if(!file_exists($feed)) {
die('The XML file could not be found!');
}
// First, open the XML file.
$xml = simplexml_load_file($feed);
// Set the counter for counting how many items we've displayed.
$counter = 0;
// Start the loop to display each item.
foreach($xml->post as $post) {
echo '
<div style="float:left; width: 180px; margin-top:20px; margin-bottom:10px;">
image file</a> <div class="design-sample-txt">'. $post->author.'</div></div>
';
// Increase the counter by one.
$counter++;
// Check to display all the items we want to.
if($counter >= 3) {
echo 'image file';
}
//if($counter == $display) {
// Yes. End the loop.
// break;
//}
// No. Continue.
}
?>
```
Here is a sample; the first 3 are correct, but after that it doesn't loop: idgc.ca/web-design-samples-testing.php | The easiest way is to use the modulus division operator.
```
if ($counter % 3 == 0) {
echo 'image file';
}
```
How this works:
Modulus division returns the remainder. The remainder is always equal to 0 when you are at an even multiple.
There is one catch: `0 % 3` is equal to 0. This could result in unexpected results if your counter starts at 0. | Going off of @Powerlord's answer,
> "There is one catch: 0 % 3 is equal to 0. This could result in
> unexpected results if your counter starts at 0."
You can still start your counter at 0 (arrays, queries), but offset it:
```
if (($counter + 1) % 3 == 0) {
echo 'image file';
}
``` | PHP: How do you determine every Nth iteration of a loop? | [
"",
"php",
"html",
"loops",
""
] |
I work daily with Python 2.4 at my company. I used the versatile logarithm function 'log' from the standard math library, and when I entered log(2\*\*31, 2) it returned 31.000000000000004, which struck me as a bit odd.
I did the same thing with other powers of 2, and it worked perfectly. I ran 'log10(2\*\*31) / log10(2)' and I got a round 31.0
I tried running the same original function in Python 3.0.1, assuming that it was fixed in a more advanced version.
Why does this happen? Is it possible that there are some inaccuracies in mathematical functions in Python? | This is to be expected with computer arithmetic. It is following particular rules, such as [IEEE 754](http://en.wikipedia.org/wiki/IEEE_754), that probably don't match the math you learned in school.
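A short sketch of what is going on (the exact digits can vary by platform): the two-argument `math.log(x, 2)` is evaluated as a quotient of two already-rounded logarithms, while `math.log2` (available since Python 3.3) computes the base-2 logarithm directly and is exact for powers of two:
```
import math

x = math.log(2**31, 2)   # evaluated as log(2**31) / log(2): two roundings
print(x)                 # often something like 31.000000000000004

y = math.log2(2**31)     # dedicated base-2 logarithm
print(y)                 # 31.0
```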
If this *actually* matters, use Python's [decimal type](http://docs.python.org/library/decimal.html).
Example:
```
from decimal import Decimal, Context
ctx = Context(prec=20)
two = Decimal(2)
ctx.divide(ctx.power(two, Decimal(31)).ln(ctx), two.ln(ctx))
``` | You should read "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
<http://docs.sun.com/source/806-3568/ncg_goldberg.html> | Inaccurate Logarithm in Python | [
"",
"python",
"math",
"floating-point",
""
] |
How do I get the maximum daily value of a numerical field over a year in MS-SQL? | This would query the daily maximum of value over 2008:
```
select
datepart(dayofyear,datecolumn)
, max(value)
from yourtable
where '2008-01-01' <= datecolumn and datecolumn < '2009-01-01'
group by datepart(dayofyear,datecolumn)
```
Or the daily maximum over each year:
```
select
datepart(year,datecolumn)
, datepart(dayofyear,datecolumn)
, max(value)
from yourtable
group by datepart(year,datecolumn), datepart(dayofyear,datecolumn)
```
Or the day(s) with the highest value in a year:
```
select
Year = datepart(year,datecolumn)
, DayOfYear = datepart(dayofyear,datecolumn)
, MaxValue = max(MaxValue)
from yourtable
inner join (
select
Year = datepart(year,datecolumn)
, MaxValue = max(value)
from yourtable
group by datepart(year,datecolumn)
) sub on
sub.Year = datepart(year, yourtable.datecolumn)
and sub.MaxValue = yourtable.value
group by
datepart(year,datecolumn),
datepart(dayofyear,datecolumn)
``` | You didn't mention which RDBMS or SQL dialect you're using. The following will work with T-SQL (MS SQL Server). It may require some modifications for other dialects since date functions tend to change a lot between them.
```
SELECT
DATEPART(dy, my_date),
MAX(my_number)
FROM
My_Table
WHERE
my_date >= '2008-01-01' AND
my_date < '2009-01-01'
GROUP BY
DATEPART(dy, my_date)
```
The DAY function could be any function or combination of functions which gives you the days in the format that you're looking to get.
Also, if there are days with no rows at all then they will not be returned. If you need those days as well, with a NULL or the highest value from the previous day, then the query would need to be altered a bit. | How do I get the maximum daily value of a numerical field over a year in SQL | [
"",
"sql",
"sql-server",
""
] |
I'm trying to find all occurrences of items in an HTML page that are between `<nobr>` and `</nobr>` tags.
EDIT: (nobr is an example. I need to find content between arbitrary strings, not always tags.)
I tried this:
```
var match = /<nobr>(.*?)<\/nobr>/img.exec(document.documentElement.innerHTML);
alert (match);
```
But it gives only one occurrence. Also, it appears twice: once with the `<nobr></nobr>` tags and once without them. I need only the version without the tags. | You need to do it in a loop:
```
var match, re = /<nobr>(.*?)<\/nobr>/img;
while((match = re.exec(document.documentElement.innerHTML)) !== null){
alert(match[1]);
}
``` | Use the DOM:
```
var nobrs = document.getElementsByTagName("nobr")
```
and you can then loop through all nobrs and extract the innerHTML or apply any other action on them. | regular expression (javascript) How to match anything between two tags any number of times | [
"",
"javascript",
"regex",
""
] |
I'm having a problem running a SQL query in MS Access. I'm using this code:
```
SELECT readings_miu_id, ReadDate, ReadTime, RSSI, Firmware, Active, OriginCol, ColID, Ownage, SiteID, PremID, prem_group1, prem_group2
INTO analyzedCopy2
FROM analyzedCopy AS A
WHERE ReadTime = (SELECT TOP 1 analyzedCopy.ReadTime FROM analyzedCopy WHERE analyzedCopy.readings_miu_id = A.readings_miu_id AND analyzedCopy.ReadDate = A.ReadDate ORDER BY analyzedCopy.readings_miu_id, analyzedCopy.ReadDate, analyzedCopy.ReadTime)
ORDER BY A.readings_miu_id, A.ReadDate ;
```
Before this I'm filling in the analyzedCopy table from other tables given certain criteria. For one set of criteria this code works just fine, but for others it keeps giving me runtime error '3354'. The only difference I can see is that with the criteria that works, the table is around 4,145 records long, whereas with the criteria that doesn't work the table I'm using this code on is over 9,000 records long. Any suggestions?
Is there any way to tell it to only pull half of the information, then run the same select on the other half of the table I'm pulling from, and add those results to the previous results from the first half?
The full text for run-time error '3354' is "At most one record can be returned by this subquery."
I just tried to run this query on the first 4000 records and it failed again with the same error code, so it can't be the amount of records, I would think. | See this:
<http://allenbrowne.com/subquery-02.html#AtMostOneRecord>
What is happening is that your subquery is returning two identical records (based on the ORDER BY), and the TOP 1 actually returns two records (yes, that's how Access does the TOP statement). You need to add fields to the ORDER BY to make it unique - preferably a unique ID (you do have a unique PK, don't you?)
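For example (a sketch; `ID` here stands for whatever unique key the table actually has), a tie-breaking column makes the TOP 1 unambiguous:
```
SELECT TOP 1 analyzedCopy.ReadTime
FROM analyzedCopy
WHERE analyzedCopy.readings_miu_id = A.readings_miu_id
  AND analyzedCopy.ReadDate = A.ReadDate
ORDER BY analyzedCopy.ReadTime, analyzedCopy.ID
```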
As Andomar stated below, DISTINCT TOP 1 will work as well. | What does MS Access return when you run the subquery?
```
SELECT TOP 1 analyzedCopy.ReadTime
FROM analyzedCopy
WHERE analyzedCopy.readings_miu_id = A.readings_miu_id
AND analyzedCopy.ReadDate = A.ReadDate
ORDER BY analyzedCopy.readings_miu_id, analyzedCopy.ReadDate,
analyzedCopy.ReadTime
```
If it returns multiple rows, maybe it can be fixed with DISTINCT:
```
SELECT DISTINCT TOP 1 analyzedCopy.ReadTime
FROM ... rest of query ...
``` | ms-access: runtime error 3354 | [
"",
"sql",
"ms-access",
""
] |
I've noticed many sites use this, without closing the tag.
```
<script type="text/javascript" src="editor.js">
```
This style is also [recommended](http://www.w3schools.com/js/js_whereto.asp) but is longer:
```
<script type="text/javascript" src="editor.js"></script>
```
Can I write it like this? Is it valid or is there a better way?
```
<script type="text/javascript" src="editor.js" />
``` | You always want to use
```
<script type="text/javascript" src="editor.js"></script>
```
Some browsers do not allow self-closing script tags.
For more information, see [Why don't self-closing script tags work?](https://stackoverflow.com/questions/69913/why-dont-self-closing-script-tags-work) | Use the second option. [Not all browsers](http://www.phpied.com/ie-script-tag-problem/) support the self-closing style. | Best way to link to external scripts in Document HEAD | [
"",
"javascript",
"external-script",
"html-head",
""
] |
I'm debugging part of a large project in Visual Studio 2005, and stepping through the code line by line.
```
int speed = this->values.speed;
int ref = this->values.ref_speed;
```
After stepping past the first line, values.speed has a value of 61, but for some reason, speed is getting assigned the value 58. After the second line, values.ref\_speed has a value of 58, but ref gets assigned the value 30.
When paused, you can see that the original values are in fact 61 and 58 respectively, but the values getting stored are different.
What is causing this behaviour? | This could happen if the definition of the values structure got changed in a header file and not all the object files got recompiled. Then the "map" of the structure your code in this file is using might not match the rest of the code's. That could explain why one of the variables appears to have the other's value.
Or the Visual Studio .pdb file didn't get updated for some reason, and Visual Studio is looking in the old place for the variable. | Crazy. Any chance you have a local variable of the same type as this->values whose name is also values (which would explain why you are referencing a member via this->)? | Assignments failing | [
"",
"c++",
"debugging",
"visual-studio-2005",
"variable-assignment",
""
] |
Here is the code I've written to create a scenario:
```
USE tempdb
GO
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'dbo.Emp') AND type in (N'U'))
DROP TABLE Emp
GO
CREATE TABLE Emp(
EmpID Int Identity(10,1) Primary Key,
EmpGroupID Int)
GO
INSERT INTO Emp(EmpGroupID) VALUES(1000)
INSERT INTO Emp(EmpGroupID) VALUES(1000)
INSERT INTO Emp(EmpGroupID) VALUES(1000)
INSERT INTO Emp(EmpGroupID) VALUES(2000)
INSERT INTO Emp(EmpGroupID) VALUES(2000)
INSERT INTO Emp(EmpGroupID) VALUES(2000)
INSERT INTO Emp(EmpGroupID) VALUES(3000)
GO
SELECT * FROM Emp
ORDER BY EmpGroupID,EmpID
```
What I need is for each group to have a counter variable, incrementing by 1, such that all the rows for Group 1000 have counter=1, groupid=2000 has counter=2, groupid=3000 has counter=3.
```
SELECT ?,EmpID,EmpGroupID
FROM Emp
ORDER BY EmpGroupID,EmpID
-- The result I'm looking for is:
1,10,1000
1,11,1000
1,12,1000
2,13,2000
2,14,2000
2,15,2000
3,16,3000
``` | You're describing a dense ranking of groups:
```
SELECT
DENSE_RANK() OVER (ORDER BY EmpGroupID) as Counter,
EmpID,
EmpGroupID
FROM Emp
ORDER BY EmpGroupID,EmpID
```
And here's some reference material: <http://msdn.microsoft.com/en-us/library/ms189798.aspx> | You mean, you need a query that produces textual output with the commas as shown?
Try:
```
SELECT Counter + ',' + EmpGroupID + ',' + EmpID
FROM Table
ORDER BY EmpGroupID
``` | SQL Select Counter by Group | [
"",
"sql",
"count",
""
] |
I have this query:
```
select distinct id,name from table1
```
For a given ID, the name will always be the same. Both fields are indexed. There's no separate table that maps the id to the name. The table is very large (10 of millions of rows), so the query could take some time.
This query is very fast, since it's indexed:
```
select distinct name from table1
```
Likewise for this query:
```
select distinct id from table1
```
Assuming I can't get the database structure changed (a very safe assumption) what's a better way to structure the first query for performance?
Edit to add a sanitized desc of the table:
```
Name Null Type
------------------------------ -------- ----------------------------
KEY NOT NULL NUMBER
COL1 NOT NULL NUMBER
COL2 NOT NULL VARCHAR2(4000 CHAR)
COL3 VARCHAR2(1000 CHAR)
COL4 VARCHAR2(4000 CHAR)
COL5 VARCHAR2(60 CHAR)
COL6 VARCHAR2(150 CHAR)
COL7 VARCHAR2(50 CHAR)
COL8 VARCHAR2(3 CHAR)
COL9 VARCHAR2(3 CHAR)
COLA VARCHAR2(50 CHAR)
COLB NOT NULL DATE
COLC NOT NULL DATE
COLD NOT NULL VARCHAR2(1 CHAR)
COLE NOT NULL NUMBER
COLF NOT NULL NUMBER
COLG VARCHAR2(600 CHAR)
ID NUMBER
NAME VARCHAR2(50 CHAR)
COLH VARCHAR2(3 CHAR)
20 rows selected
``` | [LATEST EDIT]
My **ORIGINAL ANSWER** regarding creating the appropriate index on (name,id) to replace the index on (name) is below. (That wasn't an answer to the original question, which disallowed any database changes.)
Here are statements that I have *not* yet tested. There's probably some obvious reason these won't work. I'd never actually *suggest* writing statements like this (at the risk of being drummed thoroughly for such ridiculous suggestion.)
If these queries even return result sets, the result set will only resemble the result set from the OP query, almost **by accident**, taking advantage of a **quirky guarantee** about the data that Don has provided us. These statements are NOT equivalent to the original SQL; they are designed for the **special case** as described by Don.
```
select m1.id
, m2.name
from (select min(t1.rowid) as min_rowid
, t1.id
from table1 t1
where t1.id is not null
group by t1.id
) m1
, (select min(t2.rowid) as min_rowid
, t2.name from table1 t2
where t2.name is not null
group by t2.name
) m2
where m1.min_rowid = m2.min_rowid
order
by m1.id
```
Let's unpack that:
* **m1** is an inline view that gets us a list of distinct id values.
* **m2** is an inline view that gets us a list of distinct name values.
* materialize the views **m1** and **m2**
* match the ROWID from **m1** and **m2** to match `id` with `name`
Someone else suggested the idea of an index merge. I had previously dismissed that idea: an optimizer plan that has to match 10s of millions of rowids without eliminating any of them.
With sufficiently low cardinality for id and name, and with the right optimizer plan:
```
select m1.id
, ( select m2.name
from table1 m2
where m2.id = m1.id
and rownum = 1
) as name
from (select t1.id
from table1 t1
where t1.id is not null
group by t1.id
) m1
order
by m1.id
```
Let's unpack that
* **m1** is an inline view that gets us a list of distinct id values.
* materialize the view **m1**
* for each row in **m1**, query table1 to get the name value from a single row (stopkey)
**IMPORTANT NOTE**
These statements are FUNDAMENTALLY different than the OP query. They are designed to return a DIFFERENT result set than the OP query. They *happen* to return the desired result set because of a quirky guarantee about the data. Don has told us that a `name` is determined by `id`. (Is the converse true? Is `id` determined by `name`? Do we have a STATED GUARANTEE, not necessarily enforced by the database, but a guarantee that we can take advantage of?) For any `ID` value, every row with that `ID` value will have the same `NAME` value. (And we are also guaranteed the converse is true, that for any `NAME` value, every row with that `NAME` value will have the same `ID` value?)
If so, maybe we can make use of that information. If `ID` and `NAME` appear in distinct pairs, we only need to find one particular row. The "pair" is going to have a matching ROWID, which conveniently happens to be available from each of the existing indexes. What if we get the minimum ROWID for each `ID`, and get the minimum ROWID for each `NAME`. Couldn't we then match the `ID` to the `NAME` based on the ROWID that contains the pair? I think it might work, given a low enough cardinality. (That is, if we're dealing with only hundreds of ROWIDs rather than 10s of millions.)
[/LATEST EDIT]
[EDIT]
The question is now updated with information concerning the table, it shows that the `ID` column and the `NAME` column both allow for NULL values. If Don can live without any NULLs returned in the result set, then adding the IS NOT NULL predicate on both of those columns may enable an index to be used. (NOTE: in an Oracle (B-Tree) index, NULL values do NOT appear in the index.)
[/EDIT]
**ORIGINAL ANSWER:**
create an appropriate index
```
create index table1_ix3 on table1 (name,id) ... ;
```
Okay, that's **not** the answer to the **question you asked**, but it's the right answer to fixing the performance problem. (You specified no changes to the database, but in this case, changing the database is the right answer.)
Note that if you have an index defined on `(name,id)`, then you (very likely) don't need an index on `(name)`, since the optimizer will consider the leading `name` column in the other index.
(UPDATE: as someone more astute than I pointed out, I hadn't even considered the possibility that the existing indexes were bitmap indexes and not B-tree indexes...)
---
Re-evaluate your need for the result set... do you need to return `id`, or would returning `name` be sufficient?
```
select distinct name from table1 order by name;
```
For a particular name, you could submit a second query to get the associated `id`, if and when you needed it...
```
select id from table1 where name = :b1 and rownum = 1;
```
---
If you really *need* the specified result set, you can try some alternatives to see if the performance is any better. I don't hold out much hope for any of these:
```
select /*+ FIRST_ROWS */ DISTINCT id, name from table1 order by id;
```
or
```
select /*+ FIRST_ROWS */ id, name from table1 group by id, name order by name;
```
or
```
select /*+ INDEX(table1) */ id, min(name) from table1 group by id order by id;
```
UPDATE: as others have astutely pointed out, with this approach we're testing and comparing performance of alternative queries, which is a sort of hit or miss approach. (I don't agree that it's random, but I would agree that it's hit or miss.)
UPDATE: tom suggests the ALL\_ROWS hint. I hadn't considered that, because I was really focused on getting a query plan using an INDEX. I suspect the OP query is doing a full table scan, and it's probably not the scan that's taking the time, it's the sort unique operation (<10g) or hash operation (10gR2+) that takes the time. (Absent timed statistics and event 10046 trace, I'm just guessing here.) But then again, maybe it is the scan, who knows, the high water mark on the table could be way out in a vast expanse of empty blocks.
It almost goes without saying that the statistics on the table should be up-to-date, and we should be using SQL\*Plus AUTOTRACE, or at least EXPLAIN PLAN to look at the query plans.
But none of the suggested alternative queries really address the performance issue.
It's possible that hints will influence the optimizer to choose a different plan, basically satisfying the ORDER BY from an index, but I'm not holding out much hope for that. (I don't think the FIRST\_ROWS hint works with GROUP BY, the INDEX hint may.) I can see the potential for such an approach in a scenario where there's gobs of data blocks that are empty and sparsely populated, and by accessing the data blocks via an index, it could actually be significantly fewer data blocks pulled into memory... but that scenario would be the exception rather than the norm.
---
UPDATE: As Rob van Wijk points out, making use of the Oracle trace facility is the most effective approach to identifying and resolving performance issues.
Without the output of an EXPLAIN PLAN or SQL\*Plus AUTOTRACE output, I'm just guessing here.
I suspect the performance problem you have right now is that the table data blocks have to be referenced to get the specified result set.
There's no getting around it, the query can not be satisfied from just an index, since there isn't an index that contains both the `NAME` and `ID` columns, with either the `ID` or `NAME` column as the leading column. The other two "fast" OP queries can be satisfied from an index without needing to reference the rows (data blocks).
Even if the optimizer plan for the query was to use one of the indexes, it still has to retrieve the associated row from the data block, in order to get the value for the other column. And with no predicate (no WHERE clause), the optimizer is likely opting for a full table scan, and likely doing a sort operation (<10g). (Again, an EXPLAIN PLAN would show the optimizer plan, as would AUTOTRACE.)
I'm also assuming here (big assumption) that both columns are defined as NOT NULL.
You might also consider defining the table as an index organized table (IOT), especially if these are the only two columns in the table. (An IOT isn't a panacea, it comes with its own set of performance issues.)
---
You can try re-writing the query (unless that's a database change that is also verboten). In our database environments, we consider a query to be as much a part of the database as the tables and indexes.
Again, without a predicate, the optimizer will likely not use an index. There's a chance you could get the query plan to use one of the existing indexes to get the first rows returned quickly, by adding a hint; test a combination of:
```
select /*+ INDEX(table1) */ ...
select /*+ FIRST_ROWS */ ...
select /*+ ALL_ROWS */ ...
distinct id, name from table1;
distinct id, name from table1 order by id;
distinct id, name from table1 order by name;
id, name from table1 group by id, name order by id;
id, min(name) from table1 group by id order by id;
min(id), name from table1 group by name order by name;
```
With a hint, you may be able to influence the optimizer to use an index, and that may avoid the sort operation, but overall, it may take more time to return the entire result set.
(UPDATE: someone else pointed out that the optimizer might choose to merge two indexes based on ROWID. That's a possibility, but without a predicate to eliminate some rows, that's likely going to be a much more expensive approach (matching 10s of millions of ROWIDs) from two indexes, especially when none of the rows are going to be excluded on the basis of the match.)
But all that theorizing doesn't amount to squat without some performance statistics.
---
Absent altering anything else in the database, the only other hope (I can think of) of you speeding up the query is to make sure the sort operation is tuned so that the (required) sort operation can be performed in memory, rather than on disk. But that's not really the right answer. The optimizer may not be doing a sort operation at all, it may be doing a hash operation (10gR2+) instead, in which case, that should be tuned. (The sort operation is just a guess on my part, based on past experience with Oracle 7.3, 8, 8i, 9i.)
A serious DBA is going to have more issue with you futzing with the `SORT_AREA_SIZE` and/or `HASH_AREA_SIZE` parameters for your session(s) than he will in creating the correct indexes. (And those session parameters are "old school" for versions prior to 10g automatic memory management magic.)
**Show your DBA the specification for the result set, let the DBA tune it.** | A query cannot be tuned by looking at it, or randomly suggesting some equivalent queries, regardless of how well meant they are.
You, we, or the optimizer need to know statistics about your data. And then you can measure with tools like EXPLAIN PLAN or SQL\*Trace/tkprof or even the simple autotrace tool from SQL\*Plus.
Can you show us the output of this:
```
set serveroutput off
select /*+ gather_plan_statistics */ distinct id,name from table1;
select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
```
And what does your entire table1 look like? Please show a DESCRIBE output.
Regards,
Rob. | Faster 'select distinct thing_id,thing_name from table1' in oracle | [
"",
"sql",
"performance",
"oracle",
"select",
""
] |
Basically I would like to have some dictionary that is an abstraction over legacy `#define` directives.
I have an old header file that contains 6000+ defines that are used as flag parameters to some function, and these defines denote one type of entity, a `parameter`.
In C I have
```
GetParameter(... , T_CTITLE, ...);
```
In Java I would like to call
```
Connector.getParameter(Parameter.CTITLE, ...);
```
And Parameter would encapsulate all logic that is associated with getting the parameter from library.
Parameter instances are automatically extracted from header and converted to java code, but the problem is that Parameter class gets too big - i.e. i get `code too large` compile error (let me underline: there are more that 6000 parameters).
And I would be thrilled to do this abstraction in a way that enables IDE o use autocompletion, and would wery much dislike idea of storing Parameter objects in say `HashMap`.
EDIT: Parameter Class is defined in following way:
```
public Parameter{
/** logic **/
public static final Parameter<T> parameter1 = new Parameter<T>("NAME", "T", 0xAAB);
...
public static final Parameter<T> parameter6000 = new Parameter<T>("FOO", "T", 0xZZZ);
}
``` | An obvious hack would be to either partition into a big inheritance chain, or better partition into interfaces (no need for the `public static final` noise) and one interface to inherit them all.
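A rough sketch of that partitioning (the chunk names and the minimal `Parameter` stub below are hypothetical, just to show the shape):

```java
// Split the ~6000 constants across several interfaces so no single class
// hits the class-file limits; fields declared in an interface are
// implicitly public static final.
class Parameter {
    final String name;
    final int code;
    Parameter(String name, int code) { this.name = name; this.code = code; }
}

interface ParametersA {
    Parameter CTITLE = new Parameter("CTITLE", 0xAAB);
    // ... first chunk of constants ...
}

interface ParametersB {
    Parameter FOO = new Parameter("FOO", 0x1FC);
    // ... next chunk ...
}

// One interface to inherit them all; IDE autocompletion still works
// against the single Parameters namespace.
interface Parameters extends ParametersA, ParametersB { }
```

Callers then write `Parameters.CTITLE` exactly as before, while each partition stays under the per-class limits.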
You could save space by making the creation code smaller. Instead of:
```
new Parameter<T>("NAME", "T", 0xAAB)
```
A minimalist approach would be:
```
parameter("NAME T AAB")
```
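One possible shape for that `parameter(...)` helper (the parsing details are my guess at the intent: space-separated fields, with the code read as hex):

```java
// Parses a compact "NAME TYPE HEXCODE" spec string into a Parameter,
// shrinking each constant's initializer to one short string literal.
class Parameter {
    final String name;
    final String type;
    final int code;

    Parameter(String name, String type, int code) {
        this.name = name;
        this.type = type;
        this.code = code;
    }

    static Parameter parameter(String spec) {
        String[] parts = spec.split(" ");
        return new Parameter(parts[0], parts[1], Integer.parseInt(parts[2], 16));
    }
}
```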
For details of limitations, see [section 4.10](http://java.sun.com/docs/books/jvms/second_edition/html/ClassFile.doc.html#88659) of the JVM Spec (2nd Ed). To see what your compiled code is like, use [`javap`](http://java.sun.com/javase/6/docs/technotes/tools/solaris/javap.html) `-c`. | Maybe I'm not understanding what you want to do correctly, but this looks like a perfect use for an Enum to me. Since you can add functions to Enums they should be able to do what you want, as long as your version of Java is recent enough (1.5+). They serialize too!
And yes, it works with autocomplete, although a list of 6000 is big.
I don't know if there is a limit to the size of an Enum, but you could find out.
Ex:
```
public enum Parameter {
NAME("Pending", 'T', 0xAAB), FOO("Foo", 'T', 0x1FC);
private final String displayValue;
private final char myChar;
private final int someNum;
private Parameter(String display, char c, int num) {
this.displayValue = display;
this.myChar = c;
this.someNum = num;
}
public String getDisplayValue() {
return displayValue;
}
public char getMyChar() {
return myChar;
}
public int getSomeNum() {
return someNum;
}
}
```
Now this lets you do the kind of thing you want. Ex:
```
System.out.println("Hi, the value is " + Parameter.NAME.getSomeNum());
```
Since they don't change during run-time (after all, #DEFINEs can't), an ENUM should fit the bill.
As for the pure size, it might behoove you to try to categorize them slightly, and put them in a couple of Enum groups.
This gives you the ability to associate the meta-data (the numbers), do auto complete, ==, and such. | Elegant way of holding large static typesafe dictionary in java - or avoiding code too large | [
"",
"java",
"c",
"compiler-construction",
"abstraction",
""
] |
I know it's possible to get a part of a .txt, then convert it to an integer, then store it in a variable, but is it possible to do that in a single declaration? (The variable needs to be global.)
Ie:
```
[data.txt]
1020
[convert_data.cpp]
#include<fstream>
#include<cstdlib> // needed for atoi
using namespace std;
fstream convert("data.txt");
//way to declare something equal to A PARTICULAR POINT in data.txt
int main()
{
//how would I take this block of code and simplify it to two DECLARATIONS (not
//function calls), or, if that's not possible or not practical, how would I make
//n and m (or var1 and var2) global AND CONSTANT?
char var1[5];
convert.getline(var1,2);
char var2[5];
convert.getline(var2,2);
const int n=atoi(var1);
const int m=atoi(var2);
return 0;
}
``` | No, you can't do it in a single declaration. You can write a function to do that, and then you can initialize local variables with that function, but you can't initialize global variables with it. If you want to initialize global variables, you'll need to explicitly do so during your program's initialization routine, or you could do it in the constructor of a global object (not recommended). | From the point of view of the compiler, a *constant* is something that has a fixed value at compile time. Since you're reading your data from a text file (which can be changed after the program has been compiled), the numbers you read from the text file cannot be constants in your program.
Having said that, you *could* do something like this:
```
class ConstInteger {
public:
ConstInteger(fstream &f) {
char s[5];
f.getline(s,2);
value = atoi(s);
}
operator int() const { return value; }
private:
int value;
};
fstream convert("data.txt");
ConstInteger n(convert);
```
The above relies on the initialisation the C++ compiler does for global classes. As a consequence of this, you are largely at the mercy of the C++ compiler with respect to class initialisation order. Also, error handling in this arrangement may be troublesome. (For example, what happens if `data.txt` does not exist?) | Is there a function that returns the character/string at a point in a .txt? | [
"",
"c++",
"text",
"int",
"constants",
"global",
""
] |
I want to use some C++ STL collections in my Objective-C iPhone app. Apparently this is possible by giving files the extension ".mm". What are some of the quirks/surprises associated with this?
I want to use the basic containers that I'm familiar with (vector, queue, set, ...)
Cheers! | See [Using C++ With Objective-C](http://developer.apple.com/documentation/Cocoa/Conceptual/ObjectiveC/Articles/ocCPlusPlus.html) for a detailed list of what you can and can't do. You can do most things that you would expect. You just can't do things like have a C++ class inherit from an Objective-C class or vice-versa, you can't mix C++ exceptions with Objective-C exceptions, and C++ introduces several new keywords not present in Objective-C. | The major quirk of Objective-C++ is that if you don't pass -fobjc-call-cxx-cdtors to g++, it won't call the constructors and destructors of C++ instance variables in ObjC objects. So remember to turn that option on and you should be good. | What Are Some Quirks/Surprises with Using .mm Files in Objective-C? | [
"",
"c++",
"objective-c",
"objective-c++",
""
] |
If you are a Windows programmer and you want to program something new where you
are going to use some new API with which you are not that familiar, you can type MSDN into your web browser and immediately get what you need: nicely grouped API functions where you can see what to include and what to link.
I am looking for something similar in the Linux world. I want my function to sleep for some milliseconds, and when I type "man sleep" I get the explanation of the shell command "sleep". But I don't want that. I am programming and I just want to see the programmatic usage of that function.
So the question is: **Is there a central, clickable and browsable documentation of C, C++ standard libraries AND linux system calls which are not part of the C/C++ standard but quite often used in linux programming ?**
Thanks in advance,
G. | Man is broken down into sections. If you type "man man" you can see them.
```
1 Executable programs or shell commands
2 System calls (functions provided by the kernel)
3 Library calls (functions within program libraries)
4 Special files (usually found in /dev)
5 File formats and conventions eg /etc/passwd
6 Games
7 Miscellaneous (including macro packages and conventions), e.g. man(7), groff(7)
8 System administration commands (usually only for root)
9 Kernel routines [Non standard]
```
So since you want the library call version of sleep() you would write "man 3 sleep". Also "info" is another way to access the same information.
You can also do a search with "man -k sleep", which will list everything matching sleep.
There are hyperlinked man pages scattered around the internet if you want to bookmark them.
For C++ APIs there are some good sites that many people have bookmarked and open a good portion of the time.
The important thing to remember is that unlike Windows, no one really owns or controls Linux. You can build any kind of distribution you want with many different kernel options. It makes things less tidy in some ways but far more flexible in others. | Well in your case you could have typed "man 3 sleep"...
Konqueror (the KDE web/file browser) lets you type "#XXX" in the bar to look up the man page for XXX, and "##XXX" to look up the info page for XXX. Unlike man, it gives you the choice between which man page you want to choose if there is more than one. They are interlinked together, so looking up "sleep", you will see in the "SEE ALSO" section, signal, and you can click it to go to its man page.
I don't know of anything like this for C++, but there are several good websites with documentation:
<http://www.cplusplus.com/reference/>
<http://www.sgi.com/tech/stl/>
(just to name a few that I use regularly) | Central Clickable MSDN like Linux System/C/C++ Standard Library Documentation | [
"",
"c++",
"c",
"linux",
""
] |
I wrote some Java code to generate pseudo-random numbers but I'm running into what I think are rounding problems getting it to match up with the same code in Ruby. Here's what I have in Java:
```
public class randTest {
private final static Double A = Math.pow(5,13);
private final static Integer S = 314159265;
private final static Double minVal = Math.pow(2, -46);
private final static Double maxVal = Math.pow(2, 46);
private final static Double newMax = 10.0;
private static Double r(Integer k) {
Double powS = Math.pow(A, k) * S;
Double xk = powS % maxVal.intValue();
Double result = minVal * xk;
System.out.println("k = " + k + ", pows = " + powS + ", xk = " + xk + ", result = " + result);
return result;
}
}
```
And the same thing in Ruby (my working reference implementation):
```
A = 5 ** 13
S = 314159265
MIN_VAL = 2 ** -46
MAX_VAL = 2 ** 46
NEW_MAX = 10
def r(k) # was generate_random
powS = (A ** k) * S
xk = powS % MAX_VAL
result = MIN_VAL * xk
puts "k = #{k}, pows = #{powS}, xk = #{xk}, result = #{result}"
return result
end
```
For some reason, the output is vastly different (but the Ruby one is correct). Here's me calling r(4):
Java:
```
k = 4, pows = 6.9757369880463215E44, xk = 1.512592341E9, result = 2.1495230001278287E-5
```
Ruby:
```
k = 4, pows = 697573698804632158498861826956272125244140625, xk = 55279057169489, result = 0.785562650228954
```
Any idea as to why `powS` would be computed correctly in both but not `xk`? Note that on the Java version I needed to use `maxVal.intValue()` instead of `maxVal`, otherwise it returns zero. I've also tried replacing the `Double`s with `BigDecimal`s to no avail as well. | The problem is that Double cannot hold the value you are trying to put into it, so it's getting truncated.
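To see why: a `double` has a 53-bit mantissa, so integers above 2^53 can no longer be represented exactly, and your `powS` (about 6.9e44) is far beyond that, so the modulo runs on an already-truncated value. A quick illustration (the helper class here is mine, not from your code):

```java
// Shows where double loses integer exactness: above 2^53, adjacent
// integers collapse onto the same representable value.
class DoubleTruncationDemo {
    static boolean losesPrecision(double d) {
        return d == d + 1.0; // true once d and d + 1 round to the same double
    }

    public static void main(String[] args) {
        System.out.println(losesPrecision(Math.pow(2, 52))); // still exact
        System.out.println(losesPrecision(Math.pow(2, 53))); // truncation begins
    }
}
```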
For large values like this, you need to use [java.math.BigDecimal](https://docs.oracle.com/javase/8/docs/api/java/math/BigDecimal.html), which allows for arbitrary precision for decimal values.
Here's your Java sample redone using BigDecimal:
```
public class RandTest {
private final static BigDecimal A = new BigDecimal(5).pow(13);
private final static BigDecimal S = new BigDecimal(314159265);
private final static BigDecimal minVal =
new BigDecimal(2).pow(-46, new MathContext(100));
private final static BigDecimal maxVal = new BigDecimal(2).pow(46);
private final static BigDecimal newMax = new BigDecimal(10.0);
private static BigDecimal r(Integer k) {
BigDecimal powS = A.pow(k).multiply(S);
BigDecimal xk = powS.remainder(maxVal);
BigDecimal result = minVal.multiply(xk);
System.out.println("k = " + k + ", pows = " + powS + ", xk = " + xk
+ ", result = " + result);
return result;
}
}
```
This version properly returns the correct result you're looking for. | You're getting truncation errors when you call `maxVal.intValue()`
Take a look at BigDecimal and BigInteger to achieve the same as your ruby snippet.
BTW: If you use [groovy](http://groovy.codehaus.org/) which sits on top of Java, then this uses BigDecimal out of the box.
Example code:
```
public class Rounding {
private final static BigDecimal A = BigDecimal.valueOf(Math.pow(5, 13));
private final static int S = 314159265;
private final static BigDecimal minVal = BigDecimal.valueOf(Math
.pow(2, -46));
private final static BigDecimal maxVal = BigDecimal
.valueOf(Math.pow(2, 46));
private final static BigDecimal newMax = BigDecimal.valueOf(10);
public static void main(final String[] args) {
r(4);
}
private static void r(final int k) {
final BigDecimal powS = A.pow(k).multiply(BigDecimal.valueOf(S));
final BigDecimal xk = powS.remainder(new BigDecimal(maxVal
.toBigInteger()));
final BigDecimal result = minVal.multiply(xk);
System.out.println("k = " + k + ", pows = " + powS + ", xk = " + xk
+ ", result = " + result);
}
}
```
Produces:
```
k = 4, pows = 697573698804632158498861826956272125244140625, xk = 55279057169489, result = 0.785562650228953900455100455956
``` | Java - Problems Rounding Numbers | [
"",
"java",
"ruby",
""
] |
I have a simple mysql DB and use this PHP code to update it.
```
mysql_query("REPLACE INTO `$db_table` (username, live, datetime, ip)
VALUES ('$username', '1', '$timeofentry', '$ip')");
```
I use REPLACE INTO along with a primary key on "username" to let users bump themselves to the top of the most recent list...
I would like to add a bump count. The number of times an entry has been updated (or "replaced into").
How would I go about doing this?
Thanks a lot! | You can use [INSERT ... ON DUPLICATE KEY UPDATE](http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html) which performs an actual update of existing rows.
```
$mysql = mysql_connect(..
...
$username = mysql_real_escape_string(...
$ip = mysql_real_escape_string(...
...
$query = "
INSERT INTO
`$db_table`
(username, live, datetime, ip)
VALUES
(
'$username',
'1',
'$timeofentry',
'$ip'
)
ON DUPLICATE KEY UPDATE
ip = '$ip',
bumpCount = bumpCount + 1
";
$result = mysql_query($query, $mysql);
``` | First, you need to add another column to your table to keep the count.
Second, you should probably use the UPDATE statement instead of REPLACE.
REPLACE will actually delete the row, then INSERT a new one which isn't very efficient.
```
UPDATE `$db_table` SET datetime = NOW(), ip = '$IP',
bumpCount = bumpCount + 1 WHERE username = '$username' LIMIT 1;
``` | counting the number of updates to a row in my mysql table | [
"",
"php",
"mysql",
""
] |
Hi, I have an Ajax script which validates the user's information put into textboxes. This is working fine in Internet Explorer, but in Firefox I get an error when getting the gender from a listbox. I have the gender placed into a hidden textbox for easier processing. The error I get in FF is:
```
dd is null
[Break on this error] theindex = dd.options[dd.selectedIndex].value;
```
My function in JavaScript is below; it is loaded on body load or once the selected gender is changed:
```
function get_gender()
{
    var dd = document.getElementById("gender_select");
    theindex = dd.options[dd.selectedIndex].value;
    thevalue = dd.options[dd.selectedIndex].text;
    document.getElementById("gender_text").value = thevalue;
}
```
One other problem I am having is hiding a div box; this works fine in every other browser but not IE. It should only show the div box once an error is given, but in IE the div box is always shown.
I am using this line to do this:
```
document.getElementById("username_div").style.visibility = "hidden";
```
Rather than pasting all my code, the live page can be viewed at
<http://elliottstocks.com/assignment/sign_up/>
Ignore the login box, this works fine.
Any comments/help will be appreciated. Thanks a lot =) | `getElementById` requires the HTML element to have an ID; just a name isn't good enough.
```
<select name="gender_select" id="gender_select" onChange="get_gender()">
<option>Male</option> <option>Female</option>
</select>
``` | For the null error change:
```
<select onchange="get_gender()" name="gender_select">
```
to
```
<select onchange="get_gender()" name="gender_select" id="gender_select">
```
`document.getElementById` is looking for an element in the DOM that has a given `id` attribute. The reason it works in IE is because it allows selecting by `name` attribute as well. | Ajax problems in IE/Firefox | [
"",
"javascript",
"ajax",
""
] |
I need to trim the last octet from an IP address using PHP. Basically I'm trying to just remove any digits after the third dot. I'm wondering if there is an out-of-the-box solution for this, as my regex abilities are basic at best. Many thanks. | ```
$trimmed = implode(".", array_slice(explode(".", $ip), 0, 3));
```
or
```
$trimmed = substr($ip, 0, strrpos($ip, "."));
```
or possibly
```
$trimmed = preg_replace("/(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})/", '$1.$2.$3', $ip);
```
A more mathematical approach that doesn't remove the last octet but rather replaces it with a 0:
```
$newIp = long2ip(ip2long("192.168.0.10") & 0xFFFFFF00);
``` | This will remove the last digits and the dot.
```
$trimmed = preg_replace('/\.\d{1,3}$/', '', $ip);
``` | trim ip address octet | [
"",
"php",
""
] |
If I have a private variable that I want to have some internal validation on, and I want to keep that validation in one place, I put it behind a getter/setter and only access it through that getter/setter. That's useful when dealing with public properties, because the other code cannot access the private variable, but when I'm dealing with the object inside the class itself, is there any way to enforce the getter/setter?
```
private int _eyeOrientation;
private int eyeOrientation
{
get
{
return _eyeOrientation;
}
set
{
if (value < 0)
{
_eyeOrientation = 0;
}
else
{
_eyeOrientation = value % 360;
}
}
}
```
The issue here being that the other functions in the class may accidentally modify
`_eyeOrientation = -1;`
which would throw the program into a tizzy. Is there any way to get that to throw a compiler error? | Sounds like you need an angle type.
```
// Non-mutable Angle struct with a normalized integer angle value
public struct Angle
{
    public Angle(int value)
    {
        angle = Normalize(value);
    }
    private int angle;
    public int Value
    {
        get { return angle; }
        private set { angle = Normalize(value); }
    }
    public static int Normalize(int value)
    {
        // wraps any input (including negatives) into [0, 360)
        return ((value % 360) + 360) % 360;
    }
}
public class SomeClass
{
public Angle EyeOrientation { get; set; }
}
```
If you have a certain kind of value, like angles, money, weight or whatever, it is always good practice to make it its own type, even if the value itself is stored in an int, decimal, etc. This type makes your interfaces clearer and typesafe. It is not the same if you expect an Angle or an integer value as the argument of some method. | In general, you shouldn't worry about this. Class members can still use the properties, if you don't want to put the checking in the class itself.
If your class is getting so large that you no longer trust methods inside the class, I'd think that it's time to start refactoring and breaking this into smaller classes that are more easily manageable. | force get/set access of private variables for private properties | [
"",
"c#",
""
] |
I have a grid view that has a check box column, and I want to trigger a drawing event as soon as the value of the cell is toggled. I tried the ValueChanged and the CellEndEdit and BeginEdit, and chose the selection mode as CellSelect. As for the first 2 events, the event was triggered upon the finishing of the edit mode, like moving out of the current cell, or going back and forth. It's just a weird behavior.
Is there anything that triggers the event on the grid view as soon as the cell value is changed? | A colleague of mine recommends trapping the CurrentCellDirtyStateChanged event. See <http://msdn.microsoft.com/en-us/library/system.windows.forms.datagridview.currentcelldirtystatechanged.aspx>. | I use the CellContentClick event, which makes sure the user clicked the checkbox. It DOES fire multiple times even if the user stays in the same cell. The one issue is that the Value does not get updated, and always returns "false" for unchecked. The trick is to use the .EditedFormattedValue property of the cell instead of the Value property. The EditedFormattedValue will track with the check mark and is what one wishes the Value had in it when the CellContentClick is fired.
No need for a timer, no need for any fancy stuff, just use CellContentClick event and inspect the EditedFormattedValue to tell what state the checkbox is going into / just went into. If EditedFormattedValue = true, the checkbox is getting checked. | Triggering a checkbox value changed event in DataGridView | [
"",
"c#",
"winforms",
"events",
"datagridview",
"checkbox",
""
] |
Can I code once in J2ME and run it on any mobile phone just like I can using the .Net compact framework? | The J2ME code that you write needs a certain environment on the phone in terms of the runtime and the classes that are accessible to it. Depending on what you use in your application, it will run on some phones and not work on others. For example, say you have a J2ME application that uses MIDP 2.0 and CLDC 1.1. Now, your app will only work on phones that support these profiles.
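For instance, a MIDlet suite's .jad descriptor declares the profile and configuration it requires (the attribute values below are illustrative, not from any specific app):

```
MIDlet-Name: MyApp
MicroEdition-Profile: MIDP-2.0
MicroEdition-Configuration: CLDC-1.1
```

A device whose installed profile/configuration doesn't cover these will typically refuse to install the suite.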
You normally target a certain segment of mobile phones when writing mobile phone applications. You don't expect an accelerometer application to work on phones that don't have an accelerometer! (I don't think J2ME has any support for accelerometers though ;) )
"",
"java",
"java-me",
""
] |
I'm playing around with Box2D for fun right now, and after getting the hang of some of the concepts I decided to make my own test for the test bed (Box2D comes with a set of examples and has a simple extendable Test class for making your own tests). I started by grabbing one of the other tests, ripping out everything but the function signatures, and inserting some of my own code.
However, there are no #includes to any of Box2D's headers, so my file doesn't compile (only my test file produces errors; remove my test file and everything compiles fine). I figured I must have accidentally deleted them when I was moving stuff around, but upon checking the other test files there are no #includes anywhere to be seen. Each one of the files uses data structures and functions that are declared in various Box2D header files. How does this compile at all?
For example, this is one of the prepackaged tests stripped of the constructor body and some comments at the top:
```
#ifndef CHAIN_H
#define CHAIN_H
class Chain : public Test
{
public:
Chain()
{
// Since b2BodyDef isn't defined in this file, and no
// other files are included how does this even compile?
b2BodyDef bd;
// rest of constructor...
}
static Test* Create()
{
return new Chain;
}
};
#endif
Each cpp file gets compiled. Before it is compiled though, the preprocessor runs. The preprocessor deals with all of the keywords starting with #, like #include. The preprocessor takes the text of any #include'd files and replaces the #include statement with all the text in the file it includes. If the #include'd file includes other files, their text is fetched too.
After the preprocessor has run, you end up with a great big text file called a translation unit. This is what gets compiled.
So, probably as other people have said: the cpp file somewhere includes the Box2D stuff before it includes chain.h, so everything works. There is often an option on the compiler or the project settings that will get the preprocessor to create a file with all of the text in the translation unit so you can see it. This is sometimes useful to track down errors with #includes or macros. | Perhaps the header that does define b2BodyDef is #included in the .cpp before this header? There are obviously other headers involved, or you would not be able to refer to class Test. | Making references to classes you haven't #include'd (C++) | [
"",
"c++",
"include",
"header-files",
""
] |
I think the general idea of PHP having native 64-bit integers (as opposed to using math packages) is to use 64-bit hardware and 64-bit PHP. Does someone know the specifics? For example, won't a Core2Duo machine be able to support it? What about 32-bit versions of an OS, like Vista or OS X, can they support it too?
e.g., my dev box is centos, and I installed php-\*.x86\_64 packages.
When I run:
```
$ php -r 'echo PHP_INT_MAX;'
```
I get:
```
9223372036854775807
```
If 64 bit binaries aren't available for your platform, apparently there's only one configure option you need to remember while compiling: [--with-libdir=/lib64](http://www.blackbeagle.com/web-hosting/compiling-php-on-a-64-bit-system/)
If you're using windows, there are [plenty of resources out there](http://www.google.com/#q=php+64+bit+compile+windows) re: 64 bit PHP on Windows. | * A 32bit OS can't support 64bit software.
* Core2Duo is 64bit (and can also run in 32bit mode)
* There is a [PHPx64 Project](http://www.fusionxlan.com/PHPx64.php) for Windows x64, but I'm not sure if it will give you 64-bit integers.
"",
"php",
"64-bit",
""
] |
How do I create a `div` element in **jQuery**? | You can use `append` (to add at last position of parent) or `prepend` (to add at fist position of parent):
```
$('#parent').append('<div>hello</div>');
// or
$('<div>hello</div>').appendTo('#parent');
```
Alternatively, you can use the `.html()` or `.add()` as mentioned in a [different answer](https://stackoverflow.com/a/867941/59087). | As of jQuery 1.4 you can pass attributes to a self-closed element like so:
```
jQuery('<div>', {
id: 'some-id',
class: 'some-class some-other-class',
title: 'now this div has a title!'
}).appendTo('#mySelector');
```
Here it is in the *[Docs](http://api.jquery.com/jQuery/#jQuery2)*
Examples can be found at *[jQuery 1.4 Released: The 15 New Features you Must Know](http://net.tutsplus.com/tutorials/javascript-ajax/jquery-1-4-released-the-15-new-features-you-must-know/)* . | Creating a div element in jQuery | [
"",
"javascript",
"jquery",
"html",
"append",
"jquery-append",
""
] |
For my first table I have questions like this:
```
qid | question | date
1 blah 22-05-2009
```
and then i have the table comments
```
cid | qid
1 1
2 1
3 1
```
so then in my questions table I could have an added column, total\_comments, which would be three.
I've tried using this code:
```
SELECT
questions.qid,
questions.question,
questions.date,
sum(comments.qid) AS total_money
FROM
questions
INNER JOIN comments ON comments.qid = questions.qid
ORDER BY questions.date
LIMIT 1
```
but it errors and only grabs the first row even when there is a row with a greater date. Thanks in advance | If I understand correctly, I think you want:
```
SELECT questions.qid, question, date, COUNT(comments.cid) AS total_comments
FROM Questions
LEFT OUTER JOIN Comments ON Questions.qid = Comments.qid
GROUP BY Questions.qid, Question, Date
ORDER BY Questions.Date
``` | Try:
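As a quick sanity check of this join shape (sketched here with SQLite, which is close enough to MySQL for this query; note COUNT is used to get the "three" the question asks for, rather than summing qids):

```python
import sqlite3

# Sanity check of the LEFT JOIN + aggregate approach.
# Table/column names and sample rows follow the question.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE questions (qid INTEGER, question TEXT, date TEXT);
    CREATE TABLE comments  (cid INTEGER, qid INTEGER);
    INSERT INTO questions VALUES (1, 'blah', '2009-05-22');
    INSERT INTO comments  VALUES (1, 1), (2, 1), (3, 1);
""")
row = con.execute("""
    SELECT q.qid, q.question, q.date, COUNT(c.cid) AS total_comments
    FROM questions q
    LEFT JOIN comments c ON c.qid = q.qid
    GROUP BY q.qid, q.question, q.date
    ORDER BY q.date
""").fetchone()
print(row)  # (1, 'blah', '2009-05-22', 3)
```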
```
;WITH comment_summary AS (
SELECT comments.qid
,COUNT(*) AS comment_count
FROM comments
GROUP BY comments.qid
)
SELECT questions.qid
,questions.question
,questions.date
,ISNULL(comment_summary.comment_count, 0) AS comment_count
FROM questions
LEFT JOIN comment_summary
ON comment_summary.qid = questions.qid
ORDER BY questions.date
```
Or, if your SQL dialect doesn't support CTEs:
```
SELECT questions.qid
,questions.question
,questions.date
,ISNULL(comment_summary.comment_count, 0) AS comment_count
FROM questions
LEFT JOIN (
SELECT comments.qid
,COUNT(*) AS comment_count
FROM comments
GROUP BY comments.qid
) AS comment_summary
ON comment_summary.qid = questions.qid
ORDER BY questions.date
``` | Get total amount of rows from different table with matching ID | [
"",
"sql",
"join",
""
] |
I am writing a multi-threaded solution in Java to connect two systems, A & B. System A is completely serial, not threaded, and will supply data to send to system B. System B accepts data from multiple sources asynchronously, all at the same time.
I am using ThreadPoolExecutor to manage the threads. I am using a static singleton instance of a class, TP, that wraps around ThreadPoolExecutor (which is also static) because system A cannot create new objects but it can make calls to static objects.
Here is where I am stuck. I am doing some very basic testing of the setup before I go all out. I created two classes for testing, T1 & T2. Each of these classes imports the class TP (where the static singleton is created). T1 adds some objects to the TP queue and then T2 adds some more.
Even though the TP object is declared as static, it looks like there are two versions running in parallel. The objects submitted to the queue by T2 are being executed before the objects submitted by T1 have all been executed. Also, since neither T1 nor T2 calls shutdown() on the ThreadPoolExecutor, they both hang and never terminate.
How can I create a daemon static instance of a thread that basically wakes up whenever I send something to be processed, even from different Java executables? | If you're running two separate processes, then you've got two separate types and two separate instances, regardless of whether it's a singleton.
If you want two different processes to talk to each other, you'll need to address that issue entirely separately. There are plenty of different IPC mechanisms available - networking, named pipes (tricky from Java IIRC), memory mapped files, a simple shared directory where one process places tasks for the other to process etc.
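On the daemon part of the question, and only within a single process: worker threads created by Executors are non-daemon by default, which is why a pool keeps the JVM alive when nobody calls shutdown(). A minimal sketch (class and method names here are illustrative, not from the question's code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class DaemonPool {
    // Static singleton pool whose workers are daemon threads,
    // so they do not keep the JVM alive once main() returns.
    private static final ExecutorService POOL =
            Executors.newFixedThreadPool(2, runnable -> {
                Thread t = new Thread(runnable);
                t.setDaemon(true);
                return t;
            });

    public static Future<?> submit(Runnable task) {
        return POOL.submit(task);
    }

    public static void main(String[] args) throws Exception {
        Future<?> f = submit(() -> System.out.println("task ran"));
        f.get();  // wait for the task; no shutdown() needed to exit
    }
}
```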
It's also not clear exactly what's hanging, or how your thread-pool is configured. If the problem really is the threading side (rather than the IPC side) then please post a short but *complete* program which demonstrates the problem. | If the size of the thread pool is greater than 1 then there is no guarantee that all the T1 objects will be processed first. | Running a threaded class as a daemon | [
"",
"java",
"multithreading",
"daemon",
""
] |
I want to develop a small utility for windows and I prefer doing that in c# because it is easier (I'm a java developer).
The utility will be available for download by many people and I assume some of them will not have the .net framework installed (is this assumption correct, say I target win xp and above?)
My question is: can a c# application be compiled in a way that it will not require the .net framework installed? | Normally, you will need the .NET Framework being installed on the target system. There is no simple way around that.
However, certain third-party tools such as Xenocode or Salamander allow you to create stand-alone applications. See this related question:
> [Is there some way to compile a .NET application to native code?](https://stackoverflow.com/questions/45702/is-there-some-way-to-compile-a-net-application-to-native-code)
As these solutions are not straight-forward and require commercial products I would recommend you to create a simple Visual Studio *Setup and Deployment* project. In the properties of the project you should include the .NET Framework as a pre-requisite. The setup.exe created will then automatically download and install the .NET Framework prior to installing your application. | No, it will need the .Net framework installed. Note though that you will need only the redistributable version, not the SDK. | Can c# compiled app run on machine where .net is not installed? | [
"",
"c#",
".net",
"compilation",
"native",
""
] |
Has anyone used apc\_define\_constants or hidef vs using define? Any true benefits or possible bugs in the latest versions?
apc-define-constants - <https://www.php.net/manual/en/function.apc-define-constants.php>
hidef - <http://pecl.php.net/package/hidef> | I've used `apc_define_constants` quite a bit without any trouble - I've never tried hidef because of its 'alpha' status. | I know this is an oldy, but since it comes up high in both SO's own search engine and Google:
A nice article about the relative performance of the methods mentioned by the OP can be found here: <http://shwup.blogspot.com/2010/04/about-constants.html> | apc_define_constants vs hidef vs define | [
"",
"php",
"performance",
""
] |
If I run my C++ application with the following main() method everything is OK:
```
int main(int argc, char *argv[])
{
cout << "There are " << argc << " arguments:" << endl;
// Loop through each argument and print its number and value
for (int i=0; i<argc; i++)
cout << i << " " << argv[i] << endl;
return 0;
}
```
I get what I expect and my arguments are printed out.
However, if I use \_tmain:
```
int _tmain(int argc, char *argv[])
{
cout << "There are " << argc << " arguments:" << endl;
// Loop through each argument and print its number and value
for (int i=0; i<argc; i++)
cout << i << " " << argv[i] << endl;
return 0;
}
```
It just displays the first character of each argument.
What is the difference causing this? | `_tmain` does not exist in C++. `main` does.
`_tmain` is a Microsoft extension.
`main` is, according to the C++ standard, the program's entry point.
It has one of these two signatures:
```
int main();
int main(int argc, char* argv[]);
```
Microsoft has added a wmain which replaces the second signature with this:
```
int wmain(int argc, wchar_t* argv[]);
```
And then, to make it easier to switch between Unicode (UTF-16) and their multibyte character set, they've defined `_tmain` which, if Unicode is enabled, is compiled as `wmain`, and otherwise as `main`.
As for the second part of your question, the first part of the puzzle is that your main function is wrong. `wmain` should take a `wchar_t` argument, not `char`. Since the compiler doesn't enforce this for the `main` function, you get a program where an array of `wchar_t` strings are passed to the `main` function, which interprets them as `char` strings.
Now, in UTF-16, the character set used by Windows when Unicode is enabled, all the ASCII characters are represented as the pair of bytes `\0` followed by the ASCII value.
And since the x86 CPU is little-endian, the order of these bytes are swapped, so that the ASCII value comes first, then followed by a null byte.
And in a char string, how is the string usually terminated? Yep, by a null byte. So your program sees a bunch of strings, each one byte long.
In general, you have three options when doing Windows programming:
* Explicitly use Unicode (call wmain, and for every Windows API function which takes char-related arguments, call the `-W` version of the function. Instead of CreateWindow, call CreateWindowW). And instead of using `char` use `wchar_t`, and so on
* Explicitly disable Unicode. Call main, and CreateWindowA, and use `char` for strings.
* Allow both (call \_tmain and CreateWindow, which resolve to main/wmain and CreateWindowA/CreateWindowW), and use TCHAR instead of char/wchar\_t.
The same applies to the string types defined by windows.h:
LPCTSTR resolves to either LPCSTR or LPCWSTR, and for every other type that includes char or wchar\_t, a -T- version always exists which can be used instead.
Note that all of this is Microsoft specific. TCHAR is not a standard C++ type, it is a macro defined in windows.h. wmain and \_tmain are also defined by Microsoft only. | \_tmain is a macro that gets redefined depending on whether or not you compile with Unicode or ASCII. It is a Microsoft extension and isn't guaranteed to work on any other compilers.
The correct declaration is
```
int _tmain(int argc, _TCHAR *argv[])
```
If the macro UNICODE is defined, that expands to
```
int wmain(int argc, wchar_t *argv[])
```
Otherwise it expands to
```
int main(int argc, char *argv[])
```
Your definition goes for a bit of each, and (if you have UNICODE defined) will expand to
```
int wmain(int argc, char *argv[])
```
which is just plain wrong.
std::cout works with ASCII characters. You need std::wcout if you are using wide characters.
try something like this
```
#include <iostream>
#include <tchar.h>
#if defined(UNICODE)
#define _tcout std::wcout
#else
#define _tcout std::cout
#endif
int _tmain(int argc, _TCHAR *argv[])
{
_tcout << _T("There are ") << argc << _T(" arguments:") << std::endl;
// Loop through each argument and print its number and value
for (int i=0; i<argc; i++)
_tcout << i << _T(" ") << argv[i] << std::endl;
return 0;
}
```
Or you could just decide in advance whether to use wide or narrow characters. :-)
**Updated 12 Nov 2013:**
Changed the traditional "TCHAR" to "\_TCHAR" which seems to be the latest fashion. Both work fine.
**End Update** | What is the difference between _tmain() and main() in C++? | [
"",
"c++",
"unicode",
"arguments",
""
] |
I have two *specific* C# coding conventions I've been practicing with mixed feelings.
I'd be curious to hear what people think. They are:
**#1. Name instances after the class it's an instance of, camelCased**
**#2: "Matching property names"**
Here's the rationale:
**#1. Name instances after the class it's an instance of, camelCased**
I use this as my default setting for naming convention. Of course, there are exceptions. But used consistently it *dramatically* improves code clarity and maintainability. The code looks like this:
```
var dataConnection = new DataConnection();
//not: var dataConn, dbConn, sqlConn, myDbPickOfTheDay etc.
FileInfo fileInfo = new FileInfo();
Timer timer = new Timer();
//etc.
```
I'm at the point where code like this causes me physical pain:
```
DataConnection dbConn = new DataConnection();
```
I can't stress enough how this convention has taken away the pain and anger of the variable name game.
This convention is in sharp contrast to attempting to name things in ways that try to indicate what the instance is doing, which amounts to trying to creatively embed the business case in code. Code has a way of getting refactored to the point where those original names are misleading at best.
To me this convention is gold. It also prevents the horrible practice of slightly tweaked variable names for the same thing.
An example of this convention in practice is:
```
class Person { ...
public string FirstName { get; set; }
//and then
var person = new Person();
person.FirstName = firstName; //where firstName comes from somewhere else.
```
Very easy to read. Very boring. Boring is good when it comes to maintaining code.
*However*, this convention leads me to #2:
**#2 "Matching property names"** ( for lack of a better title )
Here's an example:
```
public class ShoppingCart { ..
private Item item;
public Item Item { //<---- ?
get { return this.item; } ...
```
The compiler is perfectly happy with this. And, in fact, it exposes a very nice interface:
```
//notice how tempting it is to just say "ShoppingCart cart ... "
ShoppingCart shoppingCart = new ShoppingCart();
shoppingCart.Item = item;
```
Now, the alternative is to be creative -- You actually need to drum up two good variable names for Item: the public property name and the private member variable name.
How many times have you seen this and just want to retire immediately?
```
public class ShoppingCart { ..
private Item cartItem;
public Item ShoppingCartItem {
get { return this.cartItem; } ..
///....
ShoppingCart userShoppingCart = new ShoppingCart();
userShoppingCart.ShoppingCartItem = userSelection;
```
I feel strongly enough about convention #1 that I think I can live with #2.
What do you think ? | in case you were not aware and if you care , C# already has a naming standard
<http://msdn.microsoft.com/en-us/library/xzf533w0(VS.71).aspx>
Also, looking at your conventions again... here are some more suggestions.
* fileInfo looks pretty next to FileInfo, but it has no meaning other than its type, which I can quickly get by mousing over the variable or in IntelliSense. I would suggest naming your variables with meaning and some context if available: remoteWebServerLog, localWebServerLog, or even localWebServerLogFileInfo if you like the type in the name.
If I can hand off any advice from coming back to code you've written 6+ months later: you will be scratching your head trying to figure out and track down what the heck all your dbConn's and fileInfo's are. What file? What db? Lots of apps have several dbs; is this dbConn to the OrdersDB or the ShoppingCartDB?
* Class naming should be more descriptive. I would prefer ShoppingCartItem over Item. If every ListBox, DropDown, etc. named their collection items "Item", you'd be colliding with a lot of namespaces and would be forced to litter your code with MyNameSpace.ShoppingCart.Item.
Having said all that ... even after years of coding I still screw up and don't follow the rules 100% of the time. I might have even used FileInfo fi = ... but that is why I love my Resharper "Refactor->Rename" command and I use it often. | Obviously, you can't name every System.String in your project *string\*,* but for things you don't use a lot of, esp. things you only need one of, and whose function *in your code* is obvious from its name, these naming conventions are perfectly acceptable.
They're what I do, anyway.
I would go with a more specific name for, say, the Timer object. What's it a timer for?
But I would definitely name a DataConnection dataConnection.
\*Even if "string" wasn't a keyword... | Two C# naming conventions: What do you think? | [
"",
"c#",
"naming-conventions",
""
] |
C# 2008
I have been working on this for a while now, and I am still confused about the use of finalize and dispose methods in code. My questions are below:
1. I know that we only need a finalizer while disposing unmanaged resources. However, if there are managed resources that make calls to unmanaged resources, would it still need to implement a finalizer?
2. However, if I develop a class that doesn't use any unmanaged resource - directly or indirectly, should I implement the `IDisposable` to allow the clients of that class to use the 'using statement'?
Would it be feasible to implement IDisposable just to enable clients of your class to use the using statement?
```
using(myClass objClass = new myClass())
{
// Do stuff here
}
```
3. I have developed this simple code below to demonstrate the Finalize/dispose use:
```
public class NoGateway : IDisposable
{
private WebClient wc = null;
public NoGateway()
{
wc = new WebClient();
wc.DownloadStringCompleted += wc_DownloadStringCompleted;
}
// Start the Async call to find if NoGateway is true or false
public void NoGatewayStatus()
{
// Start the Async's download
// Do other work here
wc.DownloadStringAsync(new Uri("www.xxxx.xxx"));
}
private void wc_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
{
// Do work here
}
// Dispose of the NoGateway object
public void Dispose()
{
wc.DownloadStringCompleted -= wc_DownloadStringCompleted;
wc.Dispose();
GC.SuppressFinalize(this);
}
}
```
Question about the source code:
1. Here I have not added the finalizer, and normally the finalizer will be called by the GC, and the finalizer will call the Dispose. As I don't have the finalizer, when do I call the Dispose method? Is it the client of the class that has to call it?
So my class in the example is called NoGateway and the client could use and dispose of the class like this:
```
using(NoGateway objNoGateway = new NoGateway())
{
// Do stuff here
}
```
Would the Dispose method be automatically called when execution reaches the end of the using block, or does the client have to manually call the dispose method? i.e.
```
NoGateway objNoGateway = new NoGateway();
// Do stuff with object
objNoGateway.Dispose(); // finished with it
```
2. I am using the `WebClient` class in my `NoGateway` class. Because `WebClient` implements the `IDisposable` interface, does this mean that `WebClient` indirectly uses unmanaged resources? Is there a hard and fast rule to follow this? How do I know that a class uses unmanaged resources? | The recommended IDisposable pattern is [here](https://msdn.microsoft.com/en-us/library/fs2xkftw(v=vs.110).aspx). When programming a class that uses IDisposable, generally you should use two patterns:
When implementing a sealed class that doesn't use unmanaged resources, you simply implement a Dispose method as with normal interface implementations:
```
public sealed class A : IDisposable
{
public void Dispose()
{
// get rid of managed resources, call Dispose on member variables...
}
}
```
When implementing an unsealed class, do it like this:
```
public class B : IDisposable
{
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
protected virtual void Dispose(bool disposing)
{
if (disposing)
{
// get rid of managed resources
}
// get rid of unmanaged resources
}
// only if you use unmanaged resources directly in B
//~B()
//{
// Dispose(false);
//}
}
```
Notice that I haven't declared a finalizer in `B`; you should only implement a finalizer if you have actual unmanaged resources to dispose. The CLR deals with finalizable objects differently to non-finalizable objects, even if `SuppressFinalize` is called.
So, you shouldn't declare a finalizer unless you have to, but you give inheritors of your class a hook to call your `Dispose` and implement a finalizer themselves if they use unmanaged resources directly:
```
public class C : B
{
private IntPtr m_Handle;
protected override void Dispose(bool disposing)
{
if (disposing)
{
// get rid of managed resources
}
ReleaseHandle(m_Handle);
base.Dispose(disposing);
}
~C() {
Dispose(false);
}
}
```
If you're not using unmanaged resources directly (`SafeHandle` and friends doesn't count, as they declare their own finalizers), then don't implement a finalizer, as the GC deals with finalizable classes differently, even if you later suppress the finalizer. Also note that, even though `B` doesn't have a finalizer, it still calls `SuppressFinalize` to correctly deal with any subclasses that do implement a finalizer.
When a class implements the IDisposable interface, it means that somewhere there are some unmanaged resources that should be got rid of when you've finished using the class. The actual resources are encapsulated within the classes; you don't need to explicitly delete them. Simply calling `Dispose()` or wrapping the class in a `using(...) {}` will make sure any unmanaged resources are got rid of as necessary. | The official pattern to implement `IDisposable` is hard to understand. I believe this one is [better](http://codecrafter.blogspot.com/2010/01/better-idisposable-pattern.html):
```
public class BetterDisposableClass : IDisposable {
public void Dispose() {
CleanUpManagedResources();
CleanUpNativeResources();
GC.SuppressFinalize(this);
}
protected virtual void CleanUpManagedResources() {
// ...
}
protected virtual void CleanUpNativeResources() {
// ...
}
~BetterDisposableClass() {
CleanUpNativeResources();
}
}
```
An [even better](http://codecrafter.blogspot.com/2010/01/revisiting-idisposable.html) solution is to have a rule that you **always** have to create a wrapper class for any unmanaged resource that you need to handle:
```
public class NativeDisposable : IDisposable {
public void Dispose() {
CleanUpNativeResource();
GC.SuppressFinalize(this);
}
protected virtual void CleanUpNativeResource() {
// ...
}
~NativeDisposable() {
CleanUpNativeResource();
}
}
```
With [`SafeHandle`](http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.safehandle.aspx) and its derivatives, these classes should be *very rare*.
The result for disposable classes that don't deal directly with unmanaged resources, even in the presence of inheritance, is powerful: **they don't need to be concerned with unmanaged resources anymore**. They'll be *simple* to implement and to understand:
```
public class ManagedDisposable : IDisposable {
public virtual void Dispose() {
// dispose of managed resources
}
}
``` | Use of Finalize/Dispose method in C# | [
"",
"c#",
".net",
"idisposable",
"finalizer",
""
] |
I've got two MySQL queries that both insert data into a table. Both have the following format:
```
CREATE TABLE IF NOT EXISTS `data` (
`id` BIGINT NOT NULL AUTO_INCREMENT UNIQUE,
PRIMARY KEY (`id`)
)
SELECT `field1`, `field2`
WHERE `active` = 1
```
The only differences between the two queries are how `field1` and `field2` are determined, and some minor differences in the conditions clause. Both run up to 12K and more records.
Now, what will be more efficient:
A. Run both queries separately:
```
if (mysql_query($query1)) {
return mysql_query($query2);
}
return false;
```
B. OR combine the two queries with a UNION, and run once:
```
$query = 'SELECT `field1`, `field2` WHERE `active` = 1
UNION
SELECT DO_ONE(`field1`), DO_TWO(`field2`) WHERE `active` = 1
ORDER BY `field1`';
return mysql_query('CREATE TABLE IF NOT EXISTS `data` (
`id` BIGINT NOT NULL AUTO_INCREMENT UNIQUE,
PRIMARY KEY (`id`)
) ' . $query)
```
The data from the one query is useless without the data from the other, so both need to succeed. `DO_ONE` and `DO_TWO` are user defined MySQL functions that change the field data according to some specs. | Aaronmccall's answer is probably the best in general -- the UNION approach does it all in one SQL call. In general that will be the most "efficient", but there could be side issues that could come into play and affect the measure of "efficient" for your particular application.
Specifically, if the UNION requires a temporary table to gather the intermediate results and you are working with very large sets of data, then doing two separate straight SELECTs into the new table might turn out being more efficient in your particular case. This would depend on the internal workings, optimizations done, etc within the database engine (which could change depending on the version of the database engine you are using).
Ultimately, the only way to answer your question on such a specific question like this might be to do timings for your particular application and environment.
You also might want to consider that the difference between the time required for two separate queries vs an "all in one" query might be insignificant in the grand scheme of things... you are probably talking about a difference of a few milliseconds (or even microseconds?) unless your mysql database is on a separate server with huge latency issues. If you are doing thousands of these calls in one shot, then the difference might be significant, but if you are only doing one or two of these calls and your application is spending 99.99% of its time executing other things, then the difference between the two probably won't even be noticed.
---Lawrence | Your options do different things. The first one returns the results from the second query if the first query executes correctly (which is, BTW, independent of the results that it returns; it can be returning an empty rowset). The second one returns the results from the first query and the second query together. The first option seems to me pretty useless; probably what you want to achieve is what you did with the UNION (unless I misunderstood you).
EDIT: After reading your comment, I think you are after something like this:
`SELECT true WHERE (EXISTS (SELECT field1, field2 ...) AND EXISTS (SELECT field1, field2 ...))`
That way you will have only one query to the DB, which scales better, takes fewer resources from the connection pool and doesn't double the impact of latency if you have your DB engine on a different server, but you will still interrupt the query if the first condition fails, which is the performance improvement that you were looking for with the nested separate queries.
As an optimization, put the condition that will execute faster first, in case they are not the same. I assume the one that requires those field calculations would be slower. | Unite two MySQL queries with a UNION or programmatically | [
"",
"php",
"mysql",
"performance",
"union",
""
] |
For those of you working on Semantic Web development, which C# tools do you use for reasoning, parsing, etc.? The idea is to build a central repository of all C# APIs currently available. Sort of like I did [here](https://stackoverflow.com/questions/654771/algorithms-and-data-structures-that-are-not-mainstream-closed). Please post links, if you can, so I am able to summarize correctly. | A nearly comprehensive list of .net (c# or whatever) semantic web tools could be found at [W3C SemanticWebTools page](http://esw.w3.org/topic/SemanticWebTools#head-d8245ac3b165f69548a586e7eaed613b18706ace) or [AI3 swtools list](http://www.mkbergman.com/?page_id=325) | You could try my Library [dotNetRDF](http://www.dotnetrdf.org) - it is free and open source and provides a state of the spec SPARQL implementation including almost all current draft SPARQL 1.1 features | RDF/OWL/SPARQL/Triple Stores/Reasoners and other Semantic Web APIs for C#? | [
"",
"c#",
"semantic-web",
""
] |
This applies to subclasses of Applet, Servlet, Midlet, etc.
Why do they not need a `main()`? If I wanted to create a `Craplet` class that starts at `init()` or something similar, is it bad design, or how would I go about doing it? | It is actually good design but not obvious and what you want to do would have no effect so it is a little counter intuitive.
These types of applications live their lives in containers and as such their entry points are determined by the standards those containers must adhere to. The designers of these standards chose not to call the entry point main. You would place your functionality in an overridden method. All applets have the following four methods:
```
public void init();
public void start();
public void stop();
public void destroy();
```
They have these methods because their superclass, `java.applet.Applet`, has these methods.
The superclass does not have anything but dummy code in these:
```
public void init() {}
```
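Overriding one of these dummy methods is all the "entry point" you get. A minimal self-contained sketch of routing the container's `init()` call to a method with a name of your own choosing (a plain `Lifecycle` class stands in for `java.applet.Applet` so it compiles anywhere, and `Craplet`/`begin` are illustrative names):

```java
// Stand-in for the container-managed superclass (java.applet.Applet in the
// answer), with the same kind of do-nothing lifecycle methods.
abstract class Lifecycle {
    public void init() {}
    public void start() {}
    public void stop() {}
    public void destroy() {}
}

// The container only ever calls init(); the override forwards to a method
// whose name you picked yourself.
class Craplet extends Lifecycle {
    boolean started = false;

    @Override
    public void init() {
        begin();
    }

    void begin() {
        started = true; // real startup work would go here
    }
}

public class CrapletDemo {
    public static void main(String[] args) {
        Craplet c = new Craplet();
        c.init(); // this is the call the container would make
        System.out.println(c.started); // prints "true"
    }
}
```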
If you want to derive a class to extend or change the name of `init()`, implement your own method and have it call `init()`. This uses polymorphism to let you name the method whatever you like. Unless you are writing a servlet container, you are likely wasting your time. | Applets and Servlets do not start their own process. Instead they run inside a container. Therefore, they do not need a static main method (which starts the process), but a way to interact with their container. | Why do applets not need a main()? | [
"",
"java",
"applet",
"program-entry-point",
""
] |
When using mysql\_fetch\_assoc in PHP, how can I make it return the correct data types? Right now it appears to convert everything to strings, I'd prefer if it left the Ints as Ints, and somehow designated the Date/Time as either Object or somehow different than strings.
The reason for this is that I am using PHP as a backend to a Flex application, and Flex has some features such as automatically detecting return types, which don't work that well if everything comes in as a string. | I think a good strategy here is to programatically determine the datatype of each column in a table, and cast the returned results accordingly. This will allow you to interact with your database in a more consistent and simple manner while still giving you the control you need to have your variables storing the correct datatype.
**One possible solution:** You could use mysql\_fetch\_field() to get an object that holds meta-data about the table column and then cast your string back to the desired type.
```
//run query and get field information about the row in the table
$meta = mysql_fetch_field($result, $i);
//get the field type of the current column
$fieldType = $meta->type;
```
A full example can be found here: <https://www.php.net/manual/en/function.mysql-fetch-field.php>
Since PHP is loosely typed, you should have a relatively easy time with this.
If you are using OO (object-oriented) techniques, you could create a class with this functionality in the setter() methods so you don't have to have duplicate code. | Just contributing a **small improvement** to mastermind202's answer to handle more data types. Thanks mastermind for doing the heavy lifting!
```
function cast_query_results($rs) {
$fields = mysqli_fetch_fields($rs);
$data = array();
$types = array();
foreach($fields as $field) {
switch($field->type) {
case MYSQLI_TYPE_NULL:
$types[$field->name] = 'null';
break;
case MYSQLI_TYPE_BIT:
$types[$field->name] = 'boolean';
break;
case MYSQLI_TYPE_TINY:
case MYSQLI_TYPE_SHORT:
case MYSQLI_TYPE_LONG:
case MYSQLI_TYPE_INT24:
case MYSQLI_TYPE_LONGLONG:
$types[$field->name] = 'int';
break;
case MYSQLI_TYPE_FLOAT:
case MYSQLI_TYPE_DOUBLE:
$types[$field->name] = 'float';
break;
default:
$types[$field->name] = 'string';
break;
}
}
while($row=mysqli_fetch_assoc($rs)) array_push($data,$row);
for($i=0;$i<count($data);$i++) {
foreach($types as $name => $type) {
settype($data[$i][$name], $type);
}
}
return $data;
}
```
Example usage:
```
$db = mysqli_connect(...);
$rs = mysqli_query($db, "SELECT ...");
$results = cast_query_results($rs);
```
Returns an associative array of rows with properly typed fields | Make mysql_fetch_assoc automatically detect return data types? | [
"",
"php",
"mysql",
"database",
"types",
""
] |
I'm working on a system to use a `SqlServerCe` with `NHibernate`. From my driver program, if I add the `System.Data.SqlServerCe` assembly as a reference, I can create and run queries against a database just fine. When trying to use `NHibernate`, though, I get the following exception:
> A first chance exception of type 'System.IO.FileNotFoundException' occurred in mscorlib.dll
> Additional information: Could not load file or assembly 'System.Data.SqlServerCe' or one of its dependencies. The system cannot find the file specified.
I've traced the exception to a call to `Assembly.Load("System.Data.SqlServerCe")`, which seems like it should work. The `System.Data.SqlServerCe` assembly is in the GAC (I've also tried to add it as a local reference with `CopyLocal=true`, to no avail), and I can use its members fine, so why can't I explicitly load it? When I open the assembly in Reflector, it has trouble loading the `System.Transactions` reference (I've also tried adding it as a local reference, again to no avail), so loading that assembly might be the problem, rather than the `System.Data.SqlServerCe` assembly.
Is this a common problem? System misconfiguration, maybe? | Apparently this can be solved by adding a <qualifyAssembly> element to the app.config file. Adding the following has my app running smoothly:
```
<runtime>
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
<qualifyAssembly partialName="System.Data.SqlServerCe" fullName="System.Data.SqlServerCe, Version=3.5.1.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" />
</assemblyBinding>
</runtime>
```
Thanks! | This is most probably connected to some system (mis)configuration.
However, by design SQL Server CE is just a single DLL, which may be shipped together with your product.
This means that you can just set `Copy local` to `True` in the reference properties of `System.Data.SqlServerCe`, and you are done. | Could not load file or assembly | [
"",
"c#",
".net",
"nhibernate",
""
] |
When a class in Java doesn't override **hashCode()**,
printing an instance of this class gives a nice unique number.
The Javadoc of Object says about **hashCode()**:
> *As much as is reasonably practical, the hashCode method defined by class Object does return distinct integers for distinct objects.*
But when the class overrides **hashCode()**, how can I get
its unique number? | [System.identityHashCode(yourObject)](http://docs.oracle.com/javase/6/docs/api/java/lang/System.html#identityHashCode%28java.lang.Object%29) will give the 'original' hash code of yourObject as an integer. Uniqueness isn't necessarily guaranteed. The Sun JVM implementation will give you a value which is related to the original memory address for this object, but that's an implementation detail and you shouldn't rely on it.
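A small self-contained illustration of the difference between the two (the actual numeric values vary from run to run):

```java
public class IdentityHashDemo {
    public static void main(String[] args) {
        // Two distinct objects that are equal and share a value-based hashCode...
        String a = new String("hello");
        String b = new String("hello");
        System.out.println(a.hashCode() == b.hashCode()); // prints "true"

        // ...while identityHashCode is per-object and stable for the
        // object's lifetime. For distinct objects it is usually (though
        // not guaranteed to be) different.
        System.out.println(System.identityHashCode(a) == System.identityHashCode(a)); // prints "true"
        System.out.println(System.identityHashCode(a));
        System.out.println(System.identityHashCode(b));
    }
}
```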
EDIT: Answer modified following Tom's comment below re. memory addresses and moving objects. | The javadoc for Object specifies that
> This is typically implemented by converting the internal address of the object into an integer, but this implementation technique is not required by the JavaTM programming language.
If a class overrides hashCode, it means that it wants to generate a specific id, which will (one can hope) have the right behaviour.
You can use [System.identityHashCode](http://docs.oracle.com/javase/7/docs/api/java/lang/System.html#identityHashCode(java.lang.Object)) to get that id for any class. | How to get the unique ID of an object which overrides hashCode()? | [
"",
"java",
"identity",
"hashcode",
""
] |
In Java, I'd like to have something like this:
```
class Clazz<T> {
static void doIt(T object) {
// ...
}
}
```
But I get
```
Cannot make a static reference to the non-static type T
```
I don't understand generics beyond the basic uses and thus can't make much sense of that. It doesn't help that I wasn't able to find much info on the internet about the subject.
Could someone clarify if such use is possible, by a similar manner? Also, why was my original attempt unsuccessful? | You can't use a class's generic type parameters in static methods or static fields. The class's type parameters are only in scope for instance methods and instance fields. For static fields and static methods, they are shared among all instances of the class, even instances of different type parameters, so obviously they cannot depend on a particular type parameter.
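A sketch of the distinction (the variant of `doIt` that returns its argument is illustrative, not from the question):

```java
class Clazz<T> {
    T value; // fine: instance fields are per-instance, so T is in scope

    // Illegal: static members are shared across Clazz<String>, Clazz<Integer>, ...
    // static T shared;            // does not compile
    // static void doIt(T object)  // does not compile (the question's error)

    // Legal: a static method may declare its *own* type parameter,
    // independent of the class's T:
    static <U> U doIt(U object) {
        return object;
    }
}
```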
It doesn't seem like your problem should require using the class's type parameter. If you describe what you are trying to do in more detail, maybe we can help you find a better way to do it. | Java doesn't know what `T` is until you instantiate a type.
Maybe you can execute static methods by calling `Clazz<T>.doIt(something)`, but it sounds like you can't.
The other way to handle things is to put the type parameter in the method itself:
```
static <U> void doIt(U object)
```
which doesn't get you the right restriction on U, but it's better than nothing.... | Static method in a generic class? | [
"",
"java",
"generics",
"static-methods",
""
] |
It seems cx\_Oracle doesn't.
Any other suggestion for handling xml with Oracle and Python is appreciated.
Thanks. | I managed to do this with cx\_Oracle.
I used the sys.xmltype.createxml() function in the statement that inserts the rows in a table with XMLTYPE fields; then I used prepare() and setinputsizes() to specify that the bind variables I used for XMLTYPE fields were of cx\_Oracle.CLOB type. | I managed to get this to work by wrapping the XMLElement call in a call to `XMLType.GetClobVal()`:
For example:
```
select xmltype.getclobval(xmlelement("rowcount", count(1)))
from...
```
No idea of the limitations yet, but it got me out of trouble. Found the relevant info on the Oracle site:
[Mastering Oracle+Python, Part 1: Querying Best Practices](http://www.oracle.com/technology/pub/articles/prez-python-queries.html) | Is there an Oracle wrapper for Python that supports xmltype columns? | [
"",
"python",
"xml",
"oracle",
"xmltype",
""
] |
I have an application, written in Python, which is used by a fairly technical audience (scientists).
I'm looking for a good way to make the application extensible by the users, i.e. a scripting/plugin architecture.
I am looking for something **extremely lightweight**. Most scripts, or plugins, are not going to be developed and distributed by a third-party and installed, but are going to be something whipped up by a user in a few minutes to automate a repeating task, add support for a file format, etc. So plugins should have the absolute minimum boilerplate code, and require no 'installation' other than copying to a folder (so something like setuptools entry points, or the Zope plugin architecture seems like too much.)
Are there any systems like this already out there, or any projects that implement a similar scheme that I should look at for ideas / inspiration? | Mine is, basically, a directory called "plugins" which the main app can poll and then use [imp.load\_module](https://docs.python.org/library/imp.html#imp.load_module) to pick up files, look for a well-known entry point possibly with module-level config params, and go from there. I use file-monitoring stuff for a certain amount of dynamism in which plugins are active, but that's a nice-to-have.
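A minimal sketch of that scan-a-folder idea; it uses the modern `importlib` machinery in place of `imp.load_module` (which this answer predates), and the `plugins` folder name and `plugin_main` entry point are assumptions:

```python
import importlib.util
import os

def load_plugins(plugin_dir="plugins", entry_point="plugin_main"):
    """Import every .py file in plugin_dir and keep modules exposing entry_point."""
    plugins = []
    for fname in sorted(os.listdir(plugin_dir)):
        name, ext = os.path.splitext(fname)
        if ext != ".py":
            continue
        spec = importlib.util.spec_from_file_location(
            name, os.path.join(plugin_dir, fname))
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)      # run the plugin file
        if hasattr(module, entry_point):     # the well-known entry point
            plugins.append(module)
    return plugins

# Usage: drop a .py file into plugins/ and call each hook.
# for plugin in load_plugins():
#     plugin.plugin_main()
```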
Of course, any requirement that comes along saying "I don't need [big, complicated thing] X; I just want something lightweight" runs the risk of re-implementing X one discovered requirement at a time. But that's not to say you can't have some fun doing it anyway :) | `module_example.py`:
```
def plugin_main(*args, **kwargs):
print args, kwargs
```
`loader.py`:
```
def load_plugin(name):
mod = __import__("module_%s" % name)
return mod
def call_plugin(name, *args, **kwargs):
plugin = load_plugin(name)
plugin.plugin_main(*args, **kwargs)
call_plugin("example", 1234)
```
It's certainly "minimal", it has absolutely no error checking, probably countless security problems, it's not very flexible - but it should show you how simple a plugin system in Python can be..
You probably want to look into the [imp](http://docs.python.org/library/imp.html) module too, although you can do a lot with just `__import__`, `os.listdir` and some string manipulation. | Building a minimal plugin architecture in Python | [
"",
"python",
"architecture",
"plugins",
""
] |
I'm looking for the pros/cons of pulling jQuery & other JS libraries from Google API's cloud as opposed to downloading files and deploying directly.
What say you?
---
### My decision
The likelihood of the lib already being cached on the user's system is the overriding factor for me, so I'm going with a permalink to googleapis.com (e.g. ajax.googleapis.com/ajax/libs/…). I agree with others here that loss of access to the Google server cloud is a minimal concern. | Pros: It may already be cached on the user's system. Google has big pipes. You don't pay for the bandwidth.
Cons: You now have two different ways for your site to become unavailable: A service interruption on your server or one on Google's server. | ### Con
* Users in countries embargoed by the U.S. (e.g. Iran) won't get a response from Google | Should I link to Google API's cloud for JS libraries? | [
"",
"javascript",
"jquery",
"dependencies",
""
] |
How would one go about retrieving a network device's netmask (In Linux preferably, but if it's cross-platform then cool)? I know how in C on Linux but I can't find a way in Python -- minus ctypes perhaps. That or parsing ifconfig. Any other way?
```
ioctl(socknr, SIOCGIFNETMASK, &ifreq) // C version
``` | [This](http://code.activestate.com/recipes/439094/) works for me in Python 2.2 on Linux:
```
import socket, fcntl, struct

iface = "eth0"
# 35099 is SIOCGIFNETMASK (0x891b) on Linux
socket.inet_ntoa(fcntl.ioctl(socket.socket(socket.AF_INET, socket.SOCK_DGRAM), 35099, struct.pack('256s', iface))[20:24])
``` | The [`netifaces`](https://pypi.org/project/netifaces/) module deserves a mention here. Straight from the docs:
```
>>> netifaces.interfaces()
['lo0', 'gif0', 'stf0', 'en0', 'en1', 'fw0']
>>> addrs = netifaces.ifaddresses('en0')
>>> addrs[netifaces.AF_INET]
[{'broadcast': '10.15.255.255', 'netmask': '255.240.0.0', 'addr': '10.0.1.4'}, {'broadcast': '192.168.0.255', 'addr': '192.168.0.47'}]
```
Works on Windows, Linux, OS X, and probably other UNIXes. | Retrieving network mask in Python | [
"",
"python",
""
] |
I am working on integrating with the Photobucket API and I came across this in their [api docs](http://pic.photobucket.com/dev_help/WebHelpPublic/Content/Getting%20Started/Consumer%20Authentication.htm):
> *"Sort the parameters by name
> lexographically [sic] (byte ordering, the
> standard sorting, not natural or case
> insensitive). If the parameters have
> the same name, then sort by the value."*
What does that mean? How do I sort something lexicographically? byte ordering?
The rest of their docs have been ok so far, but (to me) it seems like this line bears further explanation. Unfortunately there was none to be had.
Anyway, I'm writing the application in **Python** (it'll eventually become a Django app) in case you want to recommend specific modules that will handle such sorting for me ^\_^ | The word should be "lexicographic"
<http://www.thefreedictionary.com/Lexicographic>
Dictionary order. Using the letters as they appear in the strings.
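For example, Python's default tuple sort already gives exactly this byte ordering, including the same-name-then-value tie-break from the docs (sample parameters made up):

```python
# Hypothetical request parameters as (name, value) pairs.
params = [("Zebra", "1"), ("apple", "2"), ("Apple", "3"), ("apple", "1")]

# The default sort compares strings byte by byte (code-point order), so all
# uppercase letters sort before lowercase ones, and equal names fall back
# to comparing the values.
params.sort()
print(params)
# [('Apple', '3'), ('Zebra', '1'), ('apple', '1'), ('apple', '2')]
```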
As they suggest, don't fold upper- and lower-case together. Just use the Python built-in list.sort() method. | I think that here lexicographic is an "alias" for ASCII sort?
```
Lexicographic Natural
z1.doc z1.doc
z10.doc z2.doc
z100.doc z3.doc
z101.doc z4.doc
z102.doc z5.doc
z11.doc z6.doc
z12.doc z7.doc
z13.doc z8.doc
z14.doc z9.doc
z15.doc z10.doc
z16.doc z11.doc
z17.doc z12.doc
z18.doc z13.doc
z19.doc z14.doc
z2.doc z15.doc
z20.doc z16.doc
z3.doc z17.doc
z4.doc z18.doc
z5.doc z19.doc
z6.doc z20.doc
z7.doc z100.doc
z8.doc z101.doc
z9.doc z102.doc
``` | Sort lexicographically? | [
"",
"python",
"api",
"sorting",
"photobucket",
""
] |
When I try to do things like this:
```
char* prefix = "Sector_Data\\sector";
char* s_num = "0";
std::strcat(prefix, s_num);
std::strcat(prefix, "\\");
```
and so on and so forth, I get a warning
```
warning C4996: 'strcat': This function or variable may be unsafe. Consider using strcat_s instead.
```
Why is strcat considered unsafe, and is there a way to get rid of this warning without using strcat\_s?
Also, if the only way to get rid of the warning is to use strcat\_s, how does it work (syntax-wise: apparently it does not take two arguments). | Because the buffer, prefix, could have less space than you are copying into it, causing a buffer overrun.
Therefore, a hacker could pass in a specially crafted string which overwrites the return address or other critical memory and start executing code in the context of your program.
strcat\_s solves this by forcing you to pass in the length of the buffer into which you are copying the string; it will truncate the string if necessary to make sure that the buffer is not overrun.
google strcat\_s to see precisely how to use it. | If you are using c++, why not avoid the whole mess and use `std::string`. The same example without any errors would look like this:
```
std::string prefix = "Sector_Data\\sector";
prefix += "0";
prefix += "\\"
```
no need to worry about buffer sizes and all that stuff. And if you have an API which takes a `const char *`, you can just use the `.c_str()` member;
```
some_c_api(prefix.c_str());
``` | Why does MSVC++ consider "std::strcat" to be "unsafe"? (C++) | [
"",
"c++",
"warnings",
"strcat",
"strcat-s",
""
] |
I have an HTML table with a link in the first column. I want to allow the user to click anywhere in the row to activate that link. At the same time, I would like to preserve the middle click and ctrl+click functionality of opening a new tab/window. Here is an example of the table:
```
<table id="row_link">
<tbody>
<tr>
<td><a href="link1.html">link</a></td>
<td>info 1</td>
</tr>
<tr>
<td><a href="link2.html">link</a></td>
<td>info 2</td>
</tr>
</tbody>
</table>
```
Using jQuery I can allow the user to left click anywhere in a row:
```
$("table#row_link tbody tr").click(function () {
window.location = $(this).find("a:first").attr("href");
});
```
This of course disables the standard middle click and ctrl+click functionality of opening a new tab. Is there a better way to allow users to click on the entire row while preserving the standard middle click and ctrl+click behavior? | Unfortunately there is no way to simulate a link and all associated behaviour in every browser. Therefore, the only way to achieve what you want is to have a link that follows the cursor around the `<tr>` element; this link would be invisible so, to the user, it looks like they're clicking on the `<tr>` but they're actually clicking on a hidden link. Using this method, the middle-button, ctrl+click and any other behaviours are left intact!
Here's a DEMO: **<http://jsbin.com/ufugo>**
And here's the code:
```
$("table tr").each(function(){
var $link = $('a:first', this).clone(true),
dim = {
x: [
$(this).offset().left,
$(this).offset().left + $(this).outerWidth()
],
y: [
$(this).offset().top,
$(this).offset().top + $(this).outerHeight()
]
}
$link
.click(function(){
$(this).blur();
})
.css({
position: 'absolute',
display: 'none',
// Opacity:0 means it's invisible
opacity: 0
})
.appendTo('body');
$(this).mouseover(function(){
$link.show();
});
$(document).mousemove(function(e){
var y = e.pageY,
x = e.pageX;
// Check to see if cursor is outside of <tr>
// If it is then hide the cloned link (display:none;)
if (x < dim.x[0] || x > dim.x[1] || y < dim.y[0] || y > dim.y[1]) {
return $link.hide();
}
$link.css({
top: e.pageY - 5,
left: e.pageX - 5
})
});
});
```
## EDIT:
I created a jQuery plugin using a slightly better approach than above: **<http://james.padolsey.com/javascript/table-rows-as-clickable-anchors/>** | **EDIT**
This is a simple problem that has a simple solution. I don't see a need for nasty hacks that might break on some browsers or take processing time, especially because there is a neat and easy CSS solution.
First here is a [demo](http://nadiana.com/sites/default/files/example/clickable.html)
Inspired by [@Nick solution](https://stackoverflow.com/questions/569355/html-table-row-link/570005#570005) for a very similar issue, I'm proposing a simple css+jquery solution.
First, here is the mini-plugin I wrote. The plugin will wrap every cell with a link:
```
jQuery.fn.linker = function(selector) {
$(this).each(function() {
var href = $(selector, this).attr('href');
if (href) {
var link = $('<a href="' + $(selector, this).attr('href') + '"></a>').css({
'text-decoration': 'none',
'display': 'block',
'padding': '0px',
'color': $(this).css('color')
})
$(this).children()
.css('padding', '0')
.wrapInner(link);
}
});
};
```
And here is a usage example:
```
$('table.collection tr').linker('a:first');
```
And all the CSS you need:
```
table.collection {
border-collapse:collapse;
}
```
It's as simple as that.
---
You can use the event object to check the mouse click type. This [article](http://abeautifulsite.net/notebook/99) is discussing a similar issue.
Anyway, here is how to do it:
```
$("table#row_link tbody tr").click(function (e) {
if((!$.browser.msie && e.button == 0) || ($.browser.msie && e.button == 1)){
if (!e.ctrlKey) {
// Left mouse button was clicked without ctrl
window.location = $(this).find("a:first").attr("href");
}
}
});
``` | Click Entire Row (preserving middle click and ctrl+click) | [
"",
"javascript",
"jquery",
"events",
""
] |
I’ve been experiencing a performance problem with deleting blobs in derby, and was wondering if anyone could offer any advice.
This is primarily with 10.4.2.0 under windows and solaris, although I’ve also tested with the new 10.5.1.1 release candidate (as it has many lob changes), but this makes no significant difference.
The problem is that with a table containing many large blobs, deleting a single row can take a long time (often over a minute).
I’ve reproduced this with a small test that creates a table, inserts a few rows with blobs of differing sizes, then deletes them.
The table schema is simple, just:
```
create table blobtest( id integer generated BY DEFAULT as identity, b blob )
```
and I’ve then created 7 rows with the following blob sizes : 1024 bytes, 1Mb, 10Mb, 25Mb, 50Mb, 75Mb, 100Mb.
I’ve read the blobs back, to check they have been created properly and are the correct size.
They have then been deleted using the sql statement ( “delete from blobtest where id = X” ).
If I delete the rows in the order I created them, average timings to delete a single row are:
1024 bytes: 19.5 seconds
1Mb: 16 seconds
10Mb: 18 seconds
25Mb: 15 seconds
50Mb: 17 seconds
75Mb: 10 seconds
100Mb: 1.5 seconds
If I delete them in reverse order, the average timings to delete a single row are:
100Mb: 20 seconds
75Mb: 10 seconds
50Mb: 4 seconds
25Mb: 0.3 seconds
10Mb: 0.25 seconds
1Mb: 0.02 seconds
1024 bytes: 0.005 seconds
If I create seven small blobs, delete times are all instantaneous.
It thus appears that the delete time seems to be related to the overall size of the rows in the table more than the size of the blob being removed.
I’ve run the tests a few times, and the results seem reproducible.
So, does anyone have any explanation for the performance, and any suggestions on how to work around it or fix it? It does make using large blobs quite problematic in a production environment… | I have exactly the same issue you have.
I found that when I do a DELETE, Derby actually "reads through" the large segment file completely. I used Filemon.exe to observe how it runs.
My file size is 940MB, and it takes 90s to delete just a single row.
I believe that Derby stores the table data in a single file internally, and somehow a design/implementation bug causes it to read everything rather than use a proper index.
I do batch deletes to work around this problem.
I rewrote a part of my program. It was "where id=?" in auto-commit.
Then I rewrote many things, and it is now "where ID IN(?,.......?)" enclosed in a transaction.
The total time reduced to 1/1000 of what it was before.
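The rewrite described here can be sketched as follows (the table and column names are taken from the question; the surrounding JDBC calls are the usual prepare/bind/commit and are only shown in comments):

```java
public class BatchDelete {
    // Build "DELETE FROM blobtest WHERE id IN (?,?,...,?)" for n ids.
    static String buildBatchDelete(int n) {
        StringBuilder sql = new StringBuilder("DELETE FROM blobtest WHERE id IN (");
        for (int i = 0; i < n; i++) {
            sql.append(i == 0 ? "?" : ",?");
        }
        return sql.append(")").toString();
    }

    public static void main(String[] args) {
        System.out.println(buildBatchDelete(3));
        // prints: DELETE FROM blobtest WHERE id IN (?,?,?)

        // Typical use (sketch):
        //   conn.setAutoCommit(false);
        //   try (PreparedStatement ps = conn.prepareStatement(buildBatchDelete(ids.size()))) {
        //       for (int i = 0; i < ids.size(); i++) ps.setInt(i + 1, ids.get(i));
        //       ps.executeUpdate();
        //   }
        //   conn.commit();
    }
}
```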
I suggest that you add a "mark as deleted" column, with a scheduled job that does the actual deletion in batches. | As far as I can tell, **Derby will only store BLOBs inline with the other database data,** so you end up with the BLOB split up over a ton of separate DB page files. This BLOB storage mechanism is good for ACID, and good for smaller BLOBs (say, image thumbnails), but breaks down with larger objects. According to the Derby docs, **turning autocommit off when manipulating BLOBs may also improve performance**, but this will only go so far.
I **strongly suggest you migrate to H2 or another DBMS if good performance on large BLOBs is important**, and the BLOBs must stay within the DB. You can use the SQuirrel SQL client and its DBCopy plugin to directly migrate between DBMSes (you just need to point it to the Derby/JavaDB JDBC driver and the H2 driver). I'd be glad to help with this part, since I just did it myself, and haven't been happier.
Failing this, **you can move the BLOBs out of the database and into the filesystem.** To do this, you would replace the BLOB column in the database with a BLOB size (if desired) and location (a URI or platform-dependent file string). When creating a new blob, you create a corresponding file in the filesystem. The location could be based off of a given directory, with the primary key appended. For example, your DB is in "DBFolder/DBName" and your blobs go in "DBFolder/DBName/Blob" and have filename "BLOB\_PRIMARYKEY.bin" or somesuch. To edit or read the BLOBs, you query the DB for the location, and then do read/write to the file directly. Then you log the new file size to the DB if it changed. | Performance problem on Java DB Derby Blobs & Delete | [
"",
"java",
"performance",
"jdbc",
"derby",
""
] |
How do I return the entire URL of a page, including the GET parameters?
$\_SERVER['HTTP\_REFERER'] and PHP\_SELF don't do it;
they return www.domain.com/example
instead of www.domain.com/example?user=2 | Try:
```
echo $_SERVER['HTTP_HOST'].$_SERVER['REQUEST_URI'];
```
If you don't wish to return the domain, but just the internal url and get variables you can omit $\_SERVER['HTTP\_HOST']. | One other thing, `$_SERVER` is an array, so are `$_GET`, `$_POST`, `$_SESSION` and `$_COOKIE`
So if you're not sure if the data is contained within those variables, then try something like this.
```
echo "<pre>";
print_r($_SERVER);
echo "</pre>";
``` | php command that returns entire url including get action | [
"",
"php",
""
] |
What's the best way to render a chunk of HTML in an application? We have a rich text editor control (from Karamasoft) in a web page, and need to generate a PDF with records saved from the control (with custom page headers, page footers, and record headers), so I need to be able to render the HTML so it can be "drawn" onto the page to be saved as a PDF.... Is there any straightforward, simple way to do this? | [HTML Renderer](http://htmlrenderer.codeplex.com) is a library of 100% managed code that draws beautifully formatted HTML. | Without using any libraries, you can use the [Literal](http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.literal.aspx) control that allows you to inject the HTML you wish to display to the user. | How to render HTML chunk? | [
"",
"c#",
".net",
"html",
""
] |
I am working on an application that loads plugins at startup from a subdirectory, and currently i am doing this by using reflection to iterate over the types of each assembly and to find public classes implementing the IPluginModule interface.
Since Reflection involves a performance hit, and i expect that there will be several plugins after a while, i wondered if it would be useful to define a custom attribute applied at the assembly level, that could be checked before iterating over the types (possibly about a dozen types in an assembly, including 1 implementor of IPluginModule).
The attribute, if present, could then provide a method to return the needed types or instances, and iterating over the types would then only be a fallback mechanism. Storing the type info in a configuration file is not an option.
Would this improve performance, or does it just not matter compared to the time it actually takes to load the assembly from storage? Also, would this usage be appropriate for an attribute at all? | I will answer your question with a question: Why are you worried about this?
You're worrying about a *potential* performance hit in a *one-time* operation because there *might* be several plugins at a later date.
Unless your application startup time is excessively long to a user, I wouldn't waste time thinking about it - there are probably much better things that you can work on to improve your application. | You could also have the plugable types in a configuration, so you know the exact classes instead of looping through all classes. Would have to have some configuration utility for this option...but could possibly get a good increase in performance depending on the number of classes you are looping through. | Reflection vs. Attributes in plugin architecture | [
"",
"c#",
".net",
"reflection",
"architecture",
"plugins",
""
] |
I am trying to make a few left joins in a LINQ query, but I honestly have no idea how to materialize this idea.
Basically, here are the 3 database structures I want to play with:
```
<tags>
id | name
<events_tags>
tag_id | event_id
<events>
id | name | some-other-fields
```
So for each event there is a one-to-many relation with tags; an event can then have one or more tags.
I'd like to know how to search for an event based on a tag, and how, given an event id, I can find the associated tags. | To search for events by tag, I think you can write something like:
```
var tagsIds = from t in DataContext.Tags
where t.Name == "sometag"
select t.id;
var eventsByTag = from et in DataContext.EventTags
where tagsIds.Contains(et.tag_id)
select et.Event;
```
To get the tags for an event:
```
var tagsByEvent = from et in myEvent.EventTags
select et.Tag;
```
For the latter, for convenience, you can put it in a property of Events:
```
public List<Tag> Tags
{
get
{
List<Tag> tags = (from et in this.EventTags
select et.Tag).ToList();
return tags;
}
}
```
And just refer to myEvent.Tags where you need them. | Are you wanting to do a many-to-many join here? It looks that way....
LINQ to SQL does not support this... here is a great article:
<http://blogs.msdn.com/mitsu/archive/2007/06/21/how-to-implement-a-many-to-many-relationship-using-linq-to-sql.aspx>
And this one from Scott Guthrie is useful in getting to grips with the basics
<http://weblogs.asp.net/scottgu/archive/2007/05/19/using-linq-to-sql-part-1.aspx>
hope that helps | join query with linq | [
"",
"c#",
"linq",
".net-3.5",
"c#-3.0",
"left-join",
""
] |
I am trying to create a panel which will have a set of "buttons" on it.
These buttons should have the following behaviour:
1. Appear similar to a tag (with rounded edges)
2. Contain a red cross to remove the filter/tag from the panel, similar to the way Internet Explorer tabs have an embedded cross to close the individual tab.
3. Allow the user to click on the tag and respond like a normal button (as long as the click is not in the red cross)
Number 1 is no problem, as this is just appearance. However, regarding numbers 2 and 3, I am not sure if there is already code out there to do something similar... and I don't really want to reinvent the wheel if I can avoid it!
My question is: Does anyone know if there is something out there in infragistics which will do this simply, or will I need to write this myself by subclassing winform buttons?
Thanks in advance! | Is this new development or maintenance of an existing project?
If it is maintenance, you have a somewhat tougher time ahead. You'll implement a `UserControl`, probably segmented into two buttons. Use docking to get the behavior as correct as possible. The far right button would contain your cross image; the left (which would need to auto-expand as you resize the control) would contain your primary button behavior. Play with the visual styles until you get them right (EG, removing borders, etc).
If this is new development, and you haven't gotten too far into it, you might *consider* using Windows Presentation Framework (WPF) instead of WinForms. It will be easier to build the control and get it to look exactly how you want it. WPF includes an extremely powerful control compositing system which allows you to layer multiple controls on top of each other and have them work exactly as you'd expect, and it carries the added advantage of allowing full visual control out-of-the-box.
Either way, this is more work than dropping in an external component ... I've used Infragistics for years, and I can't think of anything they have which is comparable. The closest, but **only** if you're building an MDI application and these controls are for window navigation, is the Tabbed MDI window management tools -- and there, only the tabs (which replace window title bars) have this behavior. | You're probably going to have to make a custom control for this type of work. | How can I create a button with an embedded close button | [
"",
"c#",
"winforms",
"infragistics",
""
] |
Does anyone know if the multiply operator is faster than using the Math.Pow method? For example:
```
n * n * n
```
vs
```
Math.Pow ( n, 3 )
``` | Basically, you should **benchmark** to see.
### Educated Guesswork (unreliable):
*In case it's not optimized to the same thing by some compiler...*
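A naive harness is easy to sketch — shown here in Java rather than the asker's C#, purely for illustration (loop counts and names are made up, and a real measurement needs warm-up and a dedicated benchmark harness):

```java
public class PowBenchDemo {
    // The two candidates under test, factored out so they can be compared.
    static double viaOperator(double n) { return n * n * n; }
    static double viaMathPow(double n)  { return Math.pow(n, 3); }

    public static void main(String[] args) {
        double n = 3.0001;
        int iterations = 10_000_000;
        double sink = 0; // consume results so the JIT cannot discard the loops

        long t0 = System.nanoTime();
        for (int i = 0; i < iterations; i++) sink += viaOperator(n);
        long mulNanos = System.nanoTime() - t0;

        long t1 = System.nanoTime();
        for (int i = 0; i < iterations; i++) sink += viaMathPow(n);
        long powNanos = System.nanoTime() - t1;

        System.out.println("sink = " + sink);
        System.out.println("n*n*n:    " + mulNanos / 1_000_000 + " ms");
        System.out.println("Math.pow: " + powNanos / 1_000_000 + " ms");
    }
}
```

Both loops compute the same value to within rounding; the point of running it is the timing difference on your machine.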
It's very likely that `x * x * x` is faster than `Math.Pow(x, 3)` as `Math.Pow` has to deal with the problem in its general case, dealing with fractional powers and other issues, while `x * x * x` would just take a couple multiply instructions, so it's very likely to be faster. | I just reinstalled windows so visual studio is not installed and the code is ugly
```
using System;
using System.Diagnostics;
public static class test{
public static void Main(string[] args){
MyTest();
PowTest();
}
static void PowTest(){
var sw = Stopwatch.StartNew();
double res = 0;
for (int i = 0; i < 333333333; i++){
res = Math.Pow(i,30); //pow(i,30)
}
Console.WriteLine("Math.Pow: " + sw.ElapsedMilliseconds + " ms: " + res);
}
static void MyTest(){
var sw = Stopwatch.StartNew();
double res = 0;
for (int i = 0; i < 333333333; i++){
res = MyPow(i,30);
}
Console.WriteLine("MyPow: " + sw.ElapsedMilliseconds + " ms: " + res);
}
static double MyPow(double num, int exp)
{
double result = 1.0;
while (exp > 0)
{
if (exp % 2 == 1)
result *= num;
exp >>= 1;
num *= num;
}
return result;
}
}
```
The results:
csc /o test.cs
test.exe
```
MyPow: 6224 ms: 4.8569351667866E+255
Math.Pow: 43350 ms: 4.8569351667866E+255
```
Exponentiation by squaring (see <https://stackoverflow.com/questions/101439/the-most-efficient-way-to-implement-an-integer-based-power-function-powint-int>) is much faster than Math.Pow in my test (my CPU is a Pentium T3200 at 2 Ghz)
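For readers outside .NET, the square-and-multiply trick above ports directly; a minimal Java sketch (illustrative — the method name is made up):

```java
public class PowDemo {
    // Square-and-multiply: O(log exp) multiplications instead of exp - 1.
    static double fastPow(double num, int exp) {
        double result = 1.0;
        while (exp > 0) {
            if ((exp & 1) == 1) {
                result *= num; // fold in the current low bit of the exponent
            }
            exp >>= 1;
            num *= num;        // square the base for the next bit
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fastPow(2.0, 10)); // 1024.0
        System.out.println(fastPow(3.0, 5));  // 243.0
    }
}
```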
EDIT: .NET version is 3.5 SP1, OS is Vista SP1 and power plan is high performance. | Math.Pow vs multiply operator (performance) | [
"",
"c#",
".net",
"performance",
"pow",
""
] |
I'm currently developing a custom control and realize that my code is being run twice. It is not really a huge issue (it is only a Focus method call). However, I would like to understand it.
From reading the [MSDN description for click | onclick event](http://msdn.microsoft.com/en-us/library/ms536913(VS.85).aspx), it states that:
> Fires when the user clicks the left mouse button on the object.
So I added the OnClick event and the MouseClick events to handle both left and right clicking. But after debugging the code I found that the OnClick handles both left and right click events.
Why is OnClick handling both and do I need to keep both events in my code for some reason I'm overlooking?
```
protected override void OnClick(EventArgs e)
{
this.Focus();
base.OnClick(e);
}
private void CustomControl_MouseClick(object sender, MouseEventArgs e)
{
if (e.Button == MouseButtons.Right)
{
rightClickMenu(e);
}
}
``` | [According to MSDN](https://learn.microsoft.com/en-us/dotnet/api/system.windows.forms.control.click), the Click event is called not only when the mouse is clicked, but also when the Enter button is pressed. If you only need to handle mouse clicks, I'd move all of your code in the MouseClick event. You can't do it the other way around because the Click event doesn't tell you which mouse button (if any) was clicked. | First of all, your link is incorrect, it links to HTML and DHTML Reference, not WinForms :)
Correct link is [Control.MouseClick event](https://learn.microsoft.com/en-us/dotnet/api/system.windows.forms.control.mouseclick "Control.MouseClick event")
You need to override only one method. If you want to handle only mouse clicks - override OnMouseClick() and don't handle MouseClick event, otherwise - override OnClick() and don't override OnMouseClick(). | Correct usage of OnClick vs. MouseClick events in Windows Forms applications using C# | [
"",
"c#",
".net",
"winforms",
"events",
""
] |
Here is a function similar to the one I've defined:
```
void Function( BYTE *data );
```
What I would like to do is something like this:
```
Function( new BYTE { 0x00, 0x00 } );
``` | You cannot use the array initialiser syntax with dynamically allocated arrays using `new`. You could do something like this:
```
BYTE *ary=new BYTE[2];
ary[0] = 0;
ary[1] = 0;
Function(ary);
delete [] ary;
```
But why are you using dynamically allocated memory here? Is the array held onto outside of the scope of the current function? If not, you can use an array allocated on the stack:
```
BYTE ary[2] = {0};
Function(ary);
```
In C++, a preferred method is to use the STL class `std::vector` which acts like a dynamically allocated (but type safe) array:
```
std::vector<BYTE> ary(2);
Function(&ary[0]);
``` | ```
BYTE foo[] = { 0x00, 0x00 };
Function( foo );
```
C++0x will introduce initializer list syntax that will allow something closer to what you wanted above. | C++ - Passing Arrays To Methods | [
"",
"c++",
"arrays",
"argument-passing",
""
] |
So, I am trying to use `CArray` like this:
```
CArray<CPerson,CPerson&> allPersons;
int i=0;
for(int i=0;i<10;i++)
{
allPersons.SetAtGrow(i,CPerson(i));
i++;
}
```
But when compiling my program, I get this error :
> "error C2248: 'CObject::CObject' :
> cannot access private member declared
> in class 'CObject' c:\program
> files\microsoft visual studio
> 9.0\vc\atlmfc\include\afxtempl.h"
I don't even understand where this is coming from.
HELP! | From what I can gather, the error you are getting is because you are trying to use a `CArray` as a return value. If you change it from returning a `CArray` to taking a reference parameter instead, that will compile.
Try this:
```
class CPerson
{
public:
CPerson();
CPerson(int i);
void operator=(const CPerson& p) {}
private:
char m_strName[16]; // fixed buffer; the original char* was never allocated
};
CPerson::CPerson()
{}
CPerson::CPerson(int i)
{
sprintf(m_strName,"%d",i);
}
void aFunction(CArray<CPerson,CPerson&> &allPersons)
{
for(int i=0;i<10;i++)
{
allPersons.SetAtGrow(i,CPerson(i));
}
}
``` | Are you using any of the Copy constructor or assignment operator of CObject ? ( CArray is derived from CObject)
For instance:
```
CArray<CPerson,CPerson&> allPersons;
//do something
// This gives the error C2248, cannot access Copy constructor of CObject.
CArray<CPerson,CPerson&> aTemp = allPersons;
```
OR
Are you doing this?
```
CArray<CPerson,CPerson&> allPersons;
...
CArray<CPerson,CPerson&> aTemp;
//Error, as Assignment operator is private
aTemp = allPersons;
```
**EDIT**:
If you want to copy the elements in CArray, write a helper method CopyArray() and copy the elements manually.
```
CopyArray(sourceArray, DestArray&)
{
for each element in SourceArray
add the element to DestArray.
}
``` | error using CArray | [
"",
"c++",
"mfc",
""
] |
The following code snippet has a memory leak that I spent too much time chasing down. The problem is that inside Foo(), the local variable x\_ hides the member variable x\_. It's quite annoying too, because the compiler could have warned me about it. Is there a flag in GCC for such a warning? (For the curious: I have arrived at the buggy code by first using a local variable, then changing it to a member variable, but forgetting to remove the type declaration.)
```
struct A {
A() : x_(NULL) {}
~A() {
delete x_;
}
void Foo() {
HugeThingy* x_ = new HugeThingy();
x_->Bar("I. Need. Garbage. Collection. Now.");
}
HugeThingy* x_;
DISALLOW_COPY_AND_ASSIGN(A); // Macro to prevent copy/assign.
};
``` | Use -Wshadow.
By the way, neither -W nor -Wall enables -Wshadow.
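The trap is not C++-specific, either; for illustration, the same silent shadowing in Java (a hypothetical class — javac emits no warning for this by default):

```java
public class ShadowDemo {
    private int x = 1; // member variable

    void update() {
        int x = 42;            // local declaration silently shadows the field
        System.out.println(x); // 42 - the local, not the member
    }

    int memberValue() {
        return x; // still 1: update() never touched the member variable
    }

    public static void main(String[] args) {
        ShadowDemo d = new ShadowDemo();
        d.update();
        System.out.println(d.memberValue()); // 1
    }
}
```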
It's nice to have the compiler help avoid this kind of problem, but that won't even be necessary if you use conventions that help avoid creating it in the first place, such as reserving names of the form `x_` for member variables, not local variables. | FWIW I wouldn't have this problem because I use a naming convention to distinguish member data from local variables: my member data identifiers are invariably prefixed with `m_`. | Warning about hiding member variables? | [
"",
"c++",
"gcc",
""
] |
I have a WCF Service hosted in IIS/ASP.NET that accepts HTTP Post (*not form post*) of serialized objects.
If the client sends malformed requests (eg they're not serializing the object correctly) I'd like to log the message sent up.
We're already using ELMAH to capture unhandled exceptions, so simply attaching the post data would be the easiest option.
I can get the current HttpContext during an exception, however this does only contains the HTTP Header information.
My question is this: Is there some way of capturing the original HTTP POST request body? Or, failing that - a better way (without a reverse proxy) of capturing the input that caused the error?
Edit: Just to clarify, running packet-level capturing at all times isn't really suitable. I'm after a solution that I can deploy to Production servers, and which will have clients outside our control or ability to monitor.
Edit #2: A suggestion was made to access the Request.InputStream - this doesn't work if you're trying to read after WCF has read the request off the stream.
A sample piece of code to see how I've tried using this is here.
```
StringBuilder log = new StringBuilder();
var request = HttpContext.Current.Request;
if (request.InputStream != null)
{
log.AppendLine(string.Format("request.InputStream.Position = \"{0}\"", request.InputStream.Position));
if (request.InputStream.Position != 0)
{
request.InputStream.Seek(0, System.IO.SeekOrigin.Begin);
}
using (StreamReader sr = new StreamReader(request.InputStream))
{
log.AppendLine(string.Format("Original Input: \"{0}\"", sr.ReadToEnd()));
}
}
else
{
log.AppendLine("request.Inputstream = null");
}
log.ToString();
```
The output of log.ToString() is:
```
request.InputStream.Position = "0"
Original Input: ""
``` | By the time it gets to your service the request is processed and not available to you.
However ... you could attach a [message inspector](http://weblogs.asp.net/paolopia/archive/2007/08/23/writing-a-wcf-message-inspector.aspx). Message Inspectors allow you to fiddle with the message before it reaches your operation implementations. You could create a buffered copy of the message, and copy it into the OperationContext.Current.
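The buffered-copy idea itself is language-agnostic; here is a minimal Java sketch of draining a one-shot stream so it can be read more than once (illustrative only — not WCF code, names made up):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class BufferedCopyDemo {
    // Drain a forward-only stream into a byte array so it can be replayed.
    static byte[] capture(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        InputStream request = new ByteArrayInputStream("payload".getBytes("UTF-8"));
        byte[] copy = capture(request); // the original stream is now consumed

        // Both the handler and the error logger get their own replayable view.
        String forHandler = new String(copy, "UTF-8");
        String forLogger = new String(copy, "UTF-8");
        System.out.println(forHandler.equals(forLogger)); // true
    }
}
```

This is also where the memory cost mentioned above comes from: the copy lives alongside the original message.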
Ugly hack of course, and it will mean memory overhead as now two copies of the message are floating about for every request. | Did you look at the System.Web.Request.InputStream Property? It should have exactly what you want.
How to "rewind" the InputStream Property.
```
if (Request.InputStream.Position != 0)
{
Request.InputStream.Seek(0, System.IO.SeekOrigin.Begin);
}
```
Another option you should look into is capturing this information with an HTTPModule on the BeginRequest event. The data should be there at BeginRequest event because I do not believe WCF picks up the request until after PostAuthenticateEvent. | Capturing raw HTTP POST Data during Exception | [
"",
"c#",
"asp.net",
"wcf",
"elmah",
"error-reporting",
""
] |
On my Windows Form, I need to programmatically set the width of columns in the grid view. I am using the following code:
```
this.dgridv.Columns[columnName].Width = columnWidth;
```
The above stmt runs without any error. But, the column width remains unchanged. If I insert a breakpoint and check the value of width after the stmt runs, it is still 100, which I guess is the default value of datagrid column width.
Is there something else I need to do apart from this? Are there any values I need to set before changing the column width?
Any help or pointers on this are highly appreciated.
Thanks :) | This worked:
```
this.dgridv.Columns[columnName].AutoSizeMode= DataGridViewAutoSizeColumnMode.None;
this.dgridv.Columns[columnName].Width = columnWidth;
```
To reset it back, I am using:
```
dgvCol.AutoSizeMode = DataGridViewAutoSizeColumnMode.DisplayedCells;
``` | Have you got the `AutoSizeColumnsMode` set to Fill?
If you have you'll need to set the `FillWeight` property instead. This isn't a simple width but the proportion of `width / no. columns` that this column takes up.
If you haven't resized the columns it will be 100.0.
If the column has been widened it will be > 100.0.
If the column has been shrunk it will be < 100.0.
Widening one column, by definition, shrinks the rest. | Unable to change the gridview column width programatically | [
"",
"c#",
"winforms",
"gridview",
".net-2.0",
""
] |
If I create a new HashMap and a new List, and then place the List inside the HashMap with some arbitrary key, and then later call `List.clear()`, will it affect what I've placed inside the HashMap?
The deeper question here being: When I add something to a HashMap, is a new object copied and placed or is a reference to the original object placed?
Thanks! | What's happening here is that you're placing a *pointer* to a list in the hashmap, not the list itself.
When you define
```
List<SomeType> list;
```
you're defining a pointer to a list, not a list itself.
When you do
```
map.put(somekey, list);
```
you're just storing a copy of the *pointer*, not the list.
If, somewhere else, you follow that pointer and modify the object at its end, anyone holding that pointer will still be referencing the same, modified object.
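A short runnable check makes this concrete (Java, arbitrary names):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MapRefDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        list.add("hello");

        Map<String, List<String>> map = new HashMap<>();
        map.put("key", list); // stores the reference, not a copy of the list

        list.clear(); // mutate through the original reference

        System.out.println(map.get("key").isEmpty()); // true - same object
        System.out.println(map.get("key") == list);   // true - identical reference
    }
}
```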
Please see <http://javadude.com/articles/passbyvalue.htm> for details on pass-by-value in Java. | Java passes object references by value.
Adding the list to the hash map simply adds the reference to the hash map, which points to the same list. Therefore, clearing the list directly will indeed clear the list you're referencing in the hashmap. | Changing value after it's placed in HashMap changes what's inside HashMap? | [
"",
"java",
"hashmap",
"pass-by-reference",
""
] |
I have a question that's not really a problem, but something that made me a little curious.
I have a class with two methods in it. One is a static method and the other one is an instance method. The methods have the same name.
```
public class BlockHeader
{
public static BlockHeader Peek(BinaryReader reader)
{
// Create a block header and peek at it.
BlockHeader blockHeader = new BlockHeader();
blockHeader.Peek(reader);
return blockHeader;
}
public virtual void Peek(BinaryReader reader)
{
// Do magic.
}
}
```
When I try to build my project I get an error saying:
> The call is ambiguous between the
> following methods or properties:
> 'MyApp.BlockHeader.Peek(System.IO.BinaryReader)'
> and
> 'MyApp.BlockHeader.Peek(System.IO.BinaryReader)'
**I know that the method signatures are virtually the same, but I can't see how I possibly could call a static method directly from an instance member.**
I assume that there is a very good reason for this, but does anyone know what that reason is? | The general policy of the C# design is to force you to specify wherever there is potential ambiguity. In the face of refactoring tools that allow one to rejig whether things are static or not at the drop of a hat, this stance is great - especially for cases like this. You'll see many other cases like this (override vs virtual, new for shadowing etc.).
In general, removing this type of room for confusion will make the code clearer and forces you to keep your house in order.
EDIT: [A good post from Eric Lippert discusses another reason for this ambiguity leading to the error you saw](http://blogs.msdn.com/ericlippert/archive/2009/07/06/color-color.aspx) | Here's an excerpt from the C# 3.0 language specification.
> The signature of a method must be unique in the class in which the method is declared. The signature of a method consists of the name of the method, the number of type parameters and the number, modifiers, and types of its parameters. The signature of a method does not include the return type.
The 'static' modifier is not part of the signature so your example violates this rule of unique signatures.
I don't know the reason behind the rule, though. | Question about ambiguous calls in C# | [
"",
"c#",
".net",
"ambiguous-call",
""
] |
In the world of Java, BEA (now Oracle) has created LiquidVM which doesn't require an OS. Likewise, there are a variety of open source projects including [SANOS](http://www.jbox.dk/sanos/), [JNODE OS](http://www.jnode.org/), [Project Guest VM](http://research.sun.com/projects/dashboard.php?id=185), [JavaOS](http://en.wikipedia.org/wiki/JavaOS), etc.
Is there an equivalent being created for .NET? | Some googling found out:
* [Singularity](http://research.microsoft.com/en-us/projects/singularity/) (a Microsoft research project)
* [Midori](http://en.wikipedia.org/wiki/Midori_(operating_system)) (another Microsoft research project, which aims to replace or integrate with a future version of Windows, especially on mobile devices)
* [SharpOS](http://www.sharpos.org/doku.php) (an open-source GPL OS in C#)
* [Cosmos](http://www.gocosmos.org/index.en.aspx) (an open-source BSD OS in C#)
As to how mature those systems are, you'll have to check by yourself ;). | check out the [.NET Micro Framework](http://www.microsoft.com/netmf/about/default.mspx)
Unlike the projects suggested by Trillian, which are projects to create a managed CLR OS (not what the question asked), the .NET Micro Framework is the .NET CLR without an OS. It is commercially supported by Microsoft and can be developed for using Visual Studio. | .NET CLR that does not require an operating system? | [
"",
"java",
".net",
"linux",
"mono",
"kernel",
""
] |
I'm creating objects dynamically and inserting them into an HTML table. The objects are either labels or link buttons; if they are link buttons, I need to subscribe an event handler to the click event, but I'm struggling to find a way to actually add the handler. The code so far is:
```
WebControl myControl;
if (_createLabel)
{
myControl = new Label();
}
else
{
myControl = new LinkButton();
}
myControl.ID = "someID";
myControl.GetType().InvokeMember("Text", BindingFlags.SetProperty, null, myControl, new object[] { "some text" });
if (!_createLabel)
{
// somehow do myControl.Click += myControlHandler; here
}
``` | Something like that will work.
[myControl](http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.webcontrol.aspx).[GetType()](http://msdn.microsoft.com/en-us/library/system.object.gettype.aspx).[GetEvent("Click")](http://msdn.microsoft.com/en-us/library/system.type.getevent.aspx).[AddEventHandler(myControl, myControlHandler)](http://msdn.microsoft.com/en-us/library/system.reflection.eventinfo.addeventhandler.aspx); | The following will work:
```
LinkButton lnk = myControl as LinkButton;
if (lnk != null)
{
lnk.Click += myControlHandler;
}
``` | Can I late bind to event handlers in C#? | [
"",
"c#",
"event-handling",
"late-binding",
""
] |
Why do I need to use a versioning system or repository? I code from scratch by myself and make web code changes along with database changes on reasonably large projects. | You don't have to do it - but I found out that it makes developing much easier.
It helped me
* to cut a lot of commented code out of my programs
* to get back to an old version (find out why it worked with an older version and doesn't work with the current one)
* with my backup strategy
After the learning curve I'm pretty sure you are going to like it | **Definitely yes** - I have often coded on my own in the past, and a proper versioning system has proved invaluable on countless occasions.
Also see [Good excuses NOT to use version control](https://stackoverflow.com/questions/132520/good-excuses-not-to-use-version-control) | Is a versioning system or code repository necessary for a single developer? | [
"",
"php",
"svn",
""
] |
A common task when calling web resources from code is building a query string that includes all the necessary parameters. While by all means no rocket science, there are some nifty details you need to take care of, like appending an `&` if not the first parameter, encoding the parameters, etc.
The code to do it is very simple, but a bit tedious:
```
StringBuilder SB = new StringBuilder();
if (NeedsToAddParameter A)
{
SB.Append("A="); SB.Append(HttpUtility.UrlEncode("TheValueOfA"));
}
if (NeedsToAddParameter B)
{
if (SB.Length>0) SB.Append("&");
SB.Append("B="); SB.Append(HttpUtility.UrlEncode("TheValueOfB"));
}
```
This is such a common task one would expect a utility class to exist that makes it more elegant and readable. Scanning MSDN, I failed to find one—which brings me to the following question:
What is the most elegant clean way you know of doing the above? | If you look under the hood the QueryString property is a NameValueCollection. When I've done similar things I've usually been interested in serialising AND deserialising so my suggestion is to build a NameValueCollection up and then pass to:
```
using System.Linq;
using System.Web;
using System.Collections.Specialized;
private string ToQueryString(NameValueCollection nvc)
{
var array = (
from key in nvc.AllKeys
from value in nvc.GetValues(key)
select string.Format(
"{0}={1}",
HttpUtility.UrlEncode(key),
HttpUtility.UrlEncode(value))
).ToArray();
return "?" + string.Join("&", array);
}
```
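The encode-then-join shape is the same outside .NET; for comparison, a Java sketch with a hypothetical helper (a `StringJoiner` handles the no-leading-`&` detail):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

public class QueryStringDemo {
    // Encode each key and value, then join the pairs with '&'.
    static String toQueryString(Map<String, String> params) throws UnsupportedEncodingException {
        StringJoiner joiner = new StringJoiner("&", "?", "");
        for (Map.Entry<String, String> e : params.entrySet()) {
            joiner.add(URLEncoder.encode(e.getKey(), "UTF-8")
                    + "=" + URLEncoder.encode(e.getValue(), "UTF-8"));
        }
        return joiner.toString();
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("A", "value of A"); // the space must be encoded
        params.put("B", "x&y");        // the '&' must be encoded
        System.out.println(toQueryString(params)); // ?A=value+of+A&B=x%26y
    }
}
```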
I imagine there's a super elegant way to do this in LINQ too... | You can create a new writeable instance of `HttpValueCollection` by calling `System.Web.HttpUtility.ParseQueryString(string.Empty)`, and then use it as any `NameValueCollection`. Once you have added the values you want, you can call `ToString` on the collection to get a query string, as follows:
```
NameValueCollection queryString = System.Web.HttpUtility.ParseQueryString(string.Empty);
queryString.Add("key1", "value1");
queryString.Add("key2", "value2");
return queryString.ToString(); // Returns "key1=value1&key2=value2", all URL-encoded
```
The `HttpValueCollection` is internal and so you cannot directly construct an instance. However, once you obtain an instance you can use it like any other `NameValueCollection`. Since the actual object you are working with is an `HttpValueCollection`, calling ToString method will call the overridden method on `HttpValueCollection`, which formats the collection as a URL-encoded query string.
After searching SO and the web for an answer to a similar issue, this is the most simple solution I could find.
**.NET Core**
If you're working in .NET Core, you can use the `Microsoft.AspNetCore.WebUtilities.QueryHelpers` class, which simplifies this greatly.
<https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.webutilities.queryhelpers>
Sample Code:
```
const string url = "https://customer-information.azure-api.net/customers/search/taxnbr";
var param = new Dictionary<string, string>() { { "CIKey", "123456789" } };
var newUrl = new Uri(QueryHelpers.AddQueryString(url, param));
``` | How to build a query string for a URL in C#? | [
"",
"c#",
".net",
"url",
"query-string",
""
] |