| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
This topic resembles [this thread](https://stackoverflow.com/questions/44205/direct-tcp-ip-connections-in-p2p-apps)
I'm rather new to the topic of network programming, never having done anything but basic TCP/UDP on a single local machine. Now I'm developing an application that will need P2P network support. More specifically, I will need the application to connect and communicate across the internet, preferably without the use of a server to do the matchmaking between the clients.
I'm aware and assuming that almost all users are behind a router, which complicates the process, since neither client will be able to initiate a direct connection to the other.
I know UPnP is an option to allow port forwarding without having the users configure this manually, but as of now this is not an option. Is there any way to achieve my goal, or will I need that server? | You'll need a server to exchange IP addresses and such. As the other thread points out, the only way of guaranteeing a connection is to proxy through a server. Most peer-to-peer systems use **UPnP** and **NAT Hole Punching** (the latter needs a server to relay port information and only works with UDP) to establish a connection in most cases.
**NAT Hole Punching** works by both clients first establishing a connection to a server; each then tries to connect directly to the port that the other client has relayed through the server. Most UDP NATs remember the address/port mapping for a short time, so although the first datagram never makes it to the other end (not that this matters with UDP), when the other client sends to that port a few moments later, the NAT lets the packet through as if it were the expected reply. | A very good reading, made just for you :-), is [RFC 5128, "State of Peer-to-Peer (P2P) Communication across Network Address Translators (NATs)"](http://www.ietf.org/rfc/rfc5128.txt). | Direct P2P connection | [
"",
"c#",
"networking",
"p2p",
""
] |
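The hole-punching flow described in that answer can be sketched in a few lines of Python. This is an illustrative sketch, not production code: it assumes a rendezvous server has already told each client the other's public IP and port.

```python
import socket

def udp_punch(local_port, peer_addr, payload=b"punch", attempts=3):
    """Send datagrams toward the peer's (server-relayed) public endpoint.

    The outgoing packets make our NAT create a mapping, so the peer's
    packets arriving moments later look like expected replies.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", local_port))
    for _ in range(attempts):
        sock.sendto(payload, peer_addr)
    return sock  # keep it open and recvfrom() on it for the peer's packets
```

Both peers run this simultaneously against each other's endpoint; whichever packet arrives after the remote NAT mapping exists gets through.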
Is there a way to create an NTFS junction point in Python? I know I can call the `junction` utility, but it would be better not to rely on external tools. | I answered this in a [similar question](https://stackoverflow.com/questions/1447575/symlinks-on-windows/7924557#7924557), so I'll copy my answer below. Since writing that answer, I ended up writing a Python-only (if you can call a module that uses ctypes Python-only) module for creating, reading, and checking junctions, which can be found in [this folder](https://github.com/Juntalis/ntfslink-python/tree/master/ntfslink). Hope that helps.
Also, unlike the answer that uses the **CreateSymbolicLinkA** API, the linked implementation should work on any Windows version that supports junctions. CreateSymbolicLinkA is only supported on Vista+.
**Answer:**
[python ntfslink extension](https://github.com/juntalis/ntfslink-python)
Or if you want to use pywin32, you can use the previously stated method, and to read, use:
```
from win32file import *
from winioctlcon import FSCTL_GET_REPARSE_POINT

__all__ = ['islink', 'readlink']

# Win32file doesn't seem to have this attribute.
FILE_ATTRIBUTE_REPARSE_POINT = 1024
# To make things easier.
REPARSE_FOLDER = (FILE_ATTRIBUTE_DIRECTORY | FILE_ATTRIBUTE_REPARSE_POINT)

# For the parse_reparse_buffer function
SYMBOLIC_LINK = 'symbolic'
MOUNTPOINT = 'mountpoint'
GENERIC = 'generic'

def islink(fpath):
    """ Windows islink implementation. """
    if GetFileAttributes(fpath) & REPARSE_FOLDER:
        return True
    return False

def parse_reparse_buffer(original, reparse_type=SYMBOLIC_LINK):
    """ Implementing the below in Python:

    typedef struct _REPARSE_DATA_BUFFER {
        ULONG  ReparseTag;
        USHORT ReparseDataLength;
        USHORT Reserved;
        union {
            struct {
                USHORT SubstituteNameOffset;
                USHORT SubstituteNameLength;
                USHORT PrintNameOffset;
                USHORT PrintNameLength;
                ULONG  Flags;
                WCHAR  PathBuffer[1];
            } SymbolicLinkReparseBuffer;
            struct {
                USHORT SubstituteNameOffset;
                USHORT SubstituteNameLength;
                USHORT PrintNameOffset;
                USHORT PrintNameLength;
                WCHAR  PathBuffer[1];
            } MountPointReparseBuffer;
            struct {
                UCHAR DataBuffer[1];
            } GenericReparseBuffer;
        } DUMMYUNIONNAME;
    } REPARSE_DATA_BUFFER, *PREPARSE_DATA_BUFFER;
    """
    # Size of our data types
    SZULONG = 4   # sizeof(ULONG)
    SZUSHORT = 2  # sizeof(USHORT)

    # Our structure.
    # Probably a better way to iterate a dictionary in a particular order,
    # but I was in a hurry, unfortunately, so I used pkeys.
    buffer = {
        'tag' : SZULONG,
        'data_length' : SZUSHORT,
        'reserved' : SZUSHORT,
        SYMBOLIC_LINK : {
            'substitute_name_offset' : SZUSHORT,
            'substitute_name_length' : SZUSHORT,
            'print_name_offset' : SZUSHORT,
            'print_name_length' : SZUSHORT,
            'flags' : SZULONG,
            'buffer' : u'',
            'pkeys' : [
                'substitute_name_offset',
                'substitute_name_length',
                'print_name_offset',
                'print_name_length',
                'flags',
            ]
        },
        MOUNTPOINT : {
            'substitute_name_offset' : SZUSHORT,
            'substitute_name_length' : SZUSHORT,
            'print_name_offset' : SZUSHORT,
            'print_name_length' : SZUSHORT,
            'buffer' : u'',
            'pkeys' : [
                'substitute_name_offset',
                'substitute_name_length',
                'print_name_offset',
                'print_name_length',
            ]
        },
        GENERIC : {
            'pkeys' : [],
            'buffer': ''
        }
    }

    # Header stuff
    buffer['tag'] = original[:SZULONG]
    buffer['data_length'] = original[SZULONG:SZULONG + SZUSHORT]
    buffer['reserved'] = original[SZULONG + SZUSHORT:SZULONG + 2 * SZUSHORT]
    original = original[8:]

    # Parsing
    k = reparse_type
    for c in buffer[k]['pkeys']:
        if type(buffer[k][c]) == int:
            sz = buffer[k][c]
            bytes = original[:sz]
            # Decode the little-endian integer stored in these bytes.
            buffer[k][c] = 0
            for i, b in enumerate(bytes):
                buffer[k][c] |= ord(b) << (8 * i)
            original = original[sz:]

    # Using the offsets and lengths grabbed, we'll set the buffer.
    buffer[k]['buffer'] = original
    return buffer

def readlink(fpath):
    """ Windows readlink implementation. """
    # This wouldn't return True if the file didn't exist, as far as I know.
    if not islink(fpath):
        return None

    # Open the file correctly depending on the string type.
    handle = CreateFileW(fpath, GENERIC_READ, 0, None, OPEN_EXISTING, FILE_FLAG_OPEN_REPARSE_POINT, 0) \
        if type(fpath) == unicode else \
        CreateFile(fpath, GENERIC_READ, 0, None, OPEN_EXISTING, FILE_FLAG_OPEN_REPARSE_POINT, 0)

    # MAXIMUM_REPARSE_DATA_BUFFER_SIZE = 16384 = (16 * 1024)
    buffer = DeviceIoControl(handle, FSCTL_GET_REPARSE_POINT, None, 16 * 1024)
    # Above will return an ugly string (byte array), so we'll need to parse it.
    # But first, we'll close the handle to our file so we're not locking it anymore.
    CloseHandle(handle)

    # Minimum possible length (assuming that the length of the target is bigger than 0)
    if len(buffer) < 9:
        return None

    # Parse and return our result.
    result = parse_reparse_buffer(buffer)
    offset = result[SYMBOLIC_LINK]['substitute_name_offset']
    ending = offset + result[SYMBOLIC_LINK]['substitute_name_length']
    rpath = result[SYMBOLIC_LINK]['buffer'][offset:ending].replace('\x00', '')
    if len(rpath) > 4 and rpath[0:4] == '\\??\\':
        rpath = rpath[4:]
    return rpath

def realpath(fpath):
    from os import path
    while islink(fpath):
        rpath = readlink(fpath)
        if not path.isabs(rpath):
            rpath = path.abspath(path.join(path.dirname(fpath), rpath))
        fpath = rpath
    return fpath

def example():
    from os import system, unlink
    system('cmd.exe /c echo Hello World > test.txt')
    system('mklink test-link.txt test.txt')
    print 'IsLink: %s' % islink('test-link.txt')
    print 'ReadLink: %s' % readlink('test-link.txt')
    print 'RealPath: %s' % realpath('test-link.txt')
    unlink('test-link.txt')
    unlink('test.txt')

if __name__ == '__main__':
    example()
```
Adjust the attributes in the CreateFile to your needs, but for a normal situation, it should work. Feel free to improve on it.
It should also work for folder junctions if you use MOUNTPOINT instead of SYMBOLIC\_LINK.
You may want to check that
```
sys.getwindowsversion()[0] >= 6
```
if you put this into something you're releasing, since this form of symbolic link is only supported on Vista+. | Since Python 3.5 there's a function `CreateJunction` in the `_winapi` module.
```
import _winapi
_winapi.CreateJunction(source, target)
``` | Create NTFS junction point in Python | [
"",
"python",
"windows",
"ntfs",
"junction",
""
] |
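As an aside, the byte-twiddling in `parse_reparse_buffer` above can be expressed more directly with the standard `struct` module. A minimal sketch of just the fixed 8-byte header parse (same layout as the `REPARSE_DATA_BUFFER` struct quoted in the docstring):

```python
import struct

def parse_reparse_header(raw):
    """Unpack ULONG ReparseTag, USHORT ReparseDataLength, USHORT Reserved
    (all little-endian) from the front of a reparse-point buffer."""
    tag, data_length, reserved = struct.unpack_from("<LHH", raw, 0)
    return tag, data_length, reserved
```

The same format-string approach extends naturally to the union members that follow the header.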
I always mess up how to use `const int*`, `const int * const`, and `int const *` correctly. Is there a set of rules defining what you can and cannot do?
I want to know all the do's and all don'ts in terms of assignments, passing to the functions, etc. | Read it backwards (as driven by [Clockwise/Spiral Rule](http://c-faq.com/decl/spiral.anderson.html)):
* `int*` - pointer to int
* `int const *` - pointer to const int
* `int * const` - const pointer to int
* `int const * const` - const pointer to const int
Now the first `const` can be on either side of the type so:
* `const int *` == `int const *`
* `const int * const` == `int const * const`
If you want to go really crazy you can do things like this:
* `int **` - pointer to pointer to int
* `int ** const` - a const pointer to a pointer to an int
* `int * const *` - a pointer to a const pointer to an int
* `int const **` - a pointer to a pointer to a const int
* `int * const * const` - a const pointer to a const pointer to an int
* ...
If you're ever uncertain, you can use a tool like [cdecl+](https://cdecl.plus/?q=int%20const%20%2A%20const%20%2A%2A%20const%20%2A%20const) to convert declarations to prose automatically.
To make sure we are clear on the meaning of `const`:
```
int a = 5, b = 10, c = 15;
const int* foo; // pointer to constant int.
foo = &a; // assignment to where foo points to.
/* dummy statement*/
*foo = 6;       // error: the value of a can't be changed through the pointer.
foo = &b; // the pointer foo can be changed.
int *const bar = &c; // constant pointer to int
// note, you actually need to set the pointer
// here because you can't change it later ;)
*bar = 16; // the value of c can be changed through the pointer.
/* dummy statement*/
bar = &a; // not possible because bar is a constant pointer.
```
`foo` is a variable pointer to a constant integer. This lets you change what you point to but not the value that you point to. Most often this is seen with C-style strings where you have a pointer to a `const char`. You may change which string you point to but you can't change the content of these strings. This is important when the string itself is in the data segment of a program and shouldn't be changed.
`bar` is a constant or fixed pointer to a value that can be changed. This is like a reference without the extra syntactic sugar. Because of this fact, usually you would use a reference where you would use a `T* const` pointer unless you need to allow `NULL` pointers. | For those who don't know about Clockwise/Spiral Rule:
Start from the name of the variable and move clockwise (in this case, backward) to the next **pointer** or **type**. Repeat until the expression ends.
Here is a demo:
[Demo 1](https://i.stack.imgur.com/TrYIP.png)
[Demo 2](https://i.stack.imgur.com/BTIdX.png)
[Demo 3](https://i.stack.imgur.com/Pm5U9.png)
[Demo 4](https://i.stack.imgur.com/UEzZh.png)
[Demo 5](https://i.stack.imgur.com/Oq17m.png) | What is the difference between const int*, const int * const, and int const *? | [
"",
"c++",
"c",
"pointers",
"constants",
"c++-faq",
""
] |
```
<html>
<head>
    <title>{% block title %}{% endblock %}</title>
</head>
<body>
    <h1>{% block title %}{% endblock %}</h1>
</body>
</html>
```
This is my template, more or less. The h1 heading is always the same as the title tag. The above snippet of code is not valid because there can't be two blocks with the same name. How do I handle this without repeating myself?
---
edit to clarify: I have a ton of child templates that inherit from this one template, so making a new {{ title }} variable for each template is not a very good solution. Previously I had it set up like this:
base.html:
```
<title>{% block title %}{% endblock %}</title>
```
then in base\_view.html (extending base.html):
```
<h1>{% block title %}{% endblock %}</h1>
```
then in base\_object.html (extending base\_view.html):
```
{% block title %}my title goes here{% endblock %}
```
and it just worked somehow. I refactored my templates so theres just base.html, and base\_object.html How can I get this functionality back? | It looks like your layout is solid. You have a `base.html` template that defines the basic structure and outer layout for each page in your app. You also have `base_object.html` that extends this template.
You'd like each page to have a unique title and a matching h1 (I think). The best way to do this is to define two separate blocks in your base.html template.
```
<html>
<head>
    <title>{% block title %}Default Title{% endblock %}</title>
</head>
<body>
    <h1>{% block h1 %}{% endblock %}</h1>
</body>
</html>
```
In your child templates, you need to override both of these if you'd like them to be identical. I know you feel this is counter-intuitive, but it is necessary due to the way template inheritance is handled in Django.
Source: [The Django template language](http://docs.djangoproject.com/en/dev/topics/templates/#id1)
> Finally, note that you can't define multiple `{% block %}` tags with the same name in the same template. This limitation exists because a block tag works in "both" directions. That is, a block tag doesn't just provide a hole to fill -- it also defines the content that fills the hole in the parent. If there were two similarly-named `{% block %}` tags in a template, that template's parent wouldn't know which one of the blocks' content to use.
The children look like this:
```
{% extends "base.html" %}
{% block title %}Title{% endblock %}
{% block h1 %}Title{% endblock %}
```
If this bothers you, you should set the title from the view for each object as a template variable.
```
{% block title %}{{ title }}{% endblock %}
{% block h1 %}{{ title }}{% endblock %}
```
Django strives to keep as much logic out of the template layer as possible. Often a title is determined dynamically from the database, so the view layer is the perfect place to retrieve and set this information. You can still leave the title blank if you'd like to defer to the default title (perhaps set in `base.html`, or you can grab the name of the site from the `django.contrib.sites` package)
Also `{{ block.super }}` may come in handy. This will allow you to combine the contents of the parent block with additional contents from the child. So you could define a title like "Stackoverflow.com" in the base, and set
```
{% block title %}{{ block.super }} - Ask a Question{% endblock %}
```
in the child to get a title like "Stackoverflow.com - Ask a Question" | In base.html:
```
<head>
    <title>{% block title %}{% endblock %}</title>
</head>
<body>
    <h1>{% block h1 %}{% endblock %}</h1>
</body>
```
Then, make another "base" layer on top of that called content\_base.html (or something):
```
{% extends "base.html" %}
{% block h1 %}{% block title %}{% endblock %}{% endblock %}
```
Now have all your other templates extend content\_base.html. Whatever you put in block "title" in all your templates will go into both "title" and "h1" blocks in base.html. | Whats the best way to duplicate data in a django template? | [
"",
"python",
"django",
"django-templates",
""
] |
Can someone explain this to me? In C# double.NaN is not equal to double.NaN
```
bool huh = double.NaN == double.NaN; // huh = false
bool huh2 = double.NaN >= 0; // huh2 = false
bool huh3 = double.NaN <= 0; // huh3 = false
```
What constant can I compare to a double.NaN and get true? | If you are curious, this is what `Double.IsNaN` looks like:
```
public static bool IsNaN(double d)
{
    return (d != d);
}
```
Funky, huh? | Use [Double.IsNaN](http://msdn.microsoft.com/en-us/library/system.double.isnan.aspx). | Why is double.NaN not equal to itself? | [
"",
"c#",
".net",
""
] |
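The behavior isn't specific to C#; it comes from IEEE 754, so any language with IEEE floats shows it. A quick Python check of the same rules:

```python
import math

nan = float("nan")

# NaN compares unequal to everything, including itself (IEEE 754).
assert nan != nan
assert not (nan == nan)
assert not (nan >= 0) and not (nan <= 0)

# So the library test is the self-inequality trick, same as C#'s IsNaN.
def is_nan(d):
    return d != d

assert is_nan(nan) and not is_nan(0.0)
assert math.isnan(nan)  # the stdlib equivalent
```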
I've got an array of cats objects:
```
$cats = Array
(
    [0] => stdClass Object
        (
            [id] => 15
        ),
    [1] => stdClass Object
        (
            [id] => 18
        ),
    [2] => stdClass Object
        (
            [id] => 23
        )
)
```
and I want to extract an array of cats' IDs in 1 line (not a function nor a loop).
I was thinking about using `array_walk` with `create_function` but I don't know how to do it.
Any idea? | If you have **PHP 7.0 or later**, the best way is to use the built in function `array_column()` to access a column of properties from an array of objects:
```
$idCats = array_column($cats, 'id');
```
But the input has to be an array, or be converted to one first. | > **Warning** `create_function()` has been DEPRECATED as of PHP 7.2.0. Relying on this function is highly discouraged.
You can use the [`array_map()`](http://php.net/array_map) function.
This should do it:
```
$catIds = array_map(create_function('$o', 'return $o->id;'), $objects);
```
---
As @Relequestual writes below, since PHP 5.3 you can pass an anonymous function directly to `array_map`. The new version of the solution looks like this:
```
$catIds = array_map(function($o) { return $o->id;}, $objects);
``` | PHP - Extracting a column of properties from an array of objects | [
"",
"php",
"arrays",
"object",
"array-column",
""
] |
I'd like to have a function that behaves like mysql\_real\_escape\_string without connecting to a database, as at times I need to do dry testing without a DB connection. mysql\_escape\_string is deprecated and therefore undesirable. Some of my findings:
<http://www.gamedev.net/community/forums/topic.asp?topic_id=448909>
<http://w3schools.invisionzone.com/index.php?showtopic=20064> | It is impossible to safely escape a string without a DB connection. `mysql_real_escape_string()` and prepared statements need a connection to the database so that they can escape the string using the appropriate character set - otherwise SQL injection attacks are still possible using multi-byte characters.
If you are only **testing**, then you may as well use `mysql_escape_string()`, it's not 100% guaranteed against SQL injection attacks, but it's impossible to build anything safer without a DB connection. | Well, according to the [mysql\_real\_escape\_string](http://php.net/mysql_real_escape_string) function reference page: "mysql\_real\_escape\_string() calls MySQL's library function mysql\_real\_escape\_string, which escapes the following characters: \x00, \n, \r, \, ', " and \x1a."
With that in mind, then the function given in the second link you posted should do exactly what you need:
```
function mres($value)
{
    $search = array("\\", "\x00", "\n", "\r", "'", '"', "\x1a");
    $replace = array("\\\\", "\\0", "\\n", "\\r", "\'", '\"', "\\Z");

    return str_replace($search, $replace, $value);
}
``` | Alternative to mysql_real_escape_string without connecting to DB | [
"",
"php",
"mysql",
"mysql-real-escape-string",
""
] |
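A broader note the accepted answer hints at: the robust fix in any language is to stop escaping strings yourself and bind parameters, so the driver keeps data out of the SQL text entirely. A Python/sqlite3 sketch of the idea (an analogy, not the PHP API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# Hostile input stays inert data because it is bound, never interpolated.
hostile = "Robert'); DROP TABLE users;--"
conn.execute("INSERT INTO users (name) VALUES (?)", (hostile,))

row = conn.execute("SELECT name FROM users").fetchone()
```

The escaping problem disappears because the driver never splices the value into the statement text.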
It's a design question. I have a business object, and 5 business object types that are derived from it.
I also will have a class which has BindingList as a member. I will have 5 classes derived from it.
Since covariance doesn't work here, how would you structure the design to minimize code repetition? I could of course chuck the BindingList and go with DataTable in which case the problem is eliminated.
But since everyone raves about BindingList I would LOVE to see how you guys would approach this.
SOLUTION (based on Pavel Minaev's answer):
```
public class SampleBase
{
    protected string m_seq;
    protected string m_id;
    protected string m_weight;
    protected string m_units;

    public SampleBase(string seq, string id, string weight, string units)
    {
        Seq = seq;
        Id = id;
        Weight = weight;
        Units = units;
    }

    public SampleBase() { }

    public string Seq
    {
        get { return m_seq; }
        set { m_seq = value; }
    }

    public string Id
    {
        get { return m_id; }
        set { m_id = value; }
    }

    public string Weight
    {
        get { return m_weight; }
        set { m_weight = value; }
    }

    public string Units
    {
        get { return m_units; }
        set { m_units = value; }
    }
}

public class FwdSample : SampleBase
{
    protected string m_std_id;

    public FwdSample() { }

    public FwdSample(string seq, string id, string weight, string units, string std_id) : base(seq, id, weight, units)
    {
        StdId = std_id;
    }

    public string StdId
    {
        get { return m_std_id; }
        set { m_std_id = value; }
    }
}
// End of sample classes

public abstract class RunBase<T> where T : SampleBase, new()
{
    protected BindingList<T> m_samples;

    public RunBase() { }

    public void Add(T sample)
    {
        m_samples.Add(sample);
    }

    public void Update(int index, T sample)
    {
        m_samples[index] = sample;
    }

    public void Delete(int index)
    {
        m_samples.RemoveAt(index);
    }

    public BindingList<T> Samples
    {
        get { return m_samples; }
    }
}

public class FwdRun : RunBase<FwdSample>
{
    public FwdRun()
    {
        m_samples = new BindingList<FwdSample>();
    }
}
``` | Assuming your `BindingList` member is private (or protected), and isn't otherwise exposed in your class API, you'd probably want something like this:
```
class Base
{
    // No BindingList here. All members that don't need it should be here
    ...
}

class Base<TDerived> : Base where TDerived : Base<TDerived>
{
    BindingList<TDerived> list = new BindingList<TDerived>();

    // All members that need BindingList should be here
}

class Derived1 : Base<Derived1> { ... }
class Derived2 : Base<Derived2> { ... }
...
``` | This example only works with .net 3.5 or higher. :(
Perhaps a property that returns all of the inherited objects. I had a similar question and solved it using System.Linq. Here is what I used:
```
List<A> testme = new List<B>().OfType<A>().ToList();
```
Or cast all of them to the parent:
```
List<A> testme = new List<B>().Cast<A>().ToList();
```
The above code was from [this answer](https://stackoverflow.com/questions/885893/how-to-make-a-generic-class-with-inheritance/885925#885925). Thanks Matt. | Solution to the lack of covariance with generics in c# 2.0 (BindingList) | [
"",
"c#",
"oop",
"inheritance",
"covariance",
"bindinglist",
""
] |
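For comparison, the generic-base pattern in the accepted solution maps cleanly onto other languages' generics. Here is a rough Python analog (illustrative only, with class names borrowed from the C# above and fields trimmed down):

```python
from typing import Generic, List, TypeVar

class SampleBase:
    def __init__(self, seq: str = "", sample_id: str = ""):
        self.seq = seq
        self.sample_id = sample_id

class FwdSample(SampleBase):
    def __init__(self, seq: str = "", sample_id: str = "", std_id: str = ""):
        super().__init__(seq, sample_id)
        self.std_id = std_id

T = TypeVar("T", bound=SampleBase)

class RunBase(Generic[T]):
    """Generic container: one implementation shared by every sample type."""
    def __init__(self) -> None:
        self.samples: List[T] = []

    def add(self, sample: T) -> None:
        self.samples.append(sample)

    def update(self, index: int, sample: T) -> None:
        self.samples[index] = sample

    def delete(self, index: int) -> None:
        del self.samples[index]

class FwdRun(RunBase[FwdSample]):
    pass
```

As in the C# version, the add/update/delete logic lives once in the generic base, and each concrete run type only pins down the element type.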
When creating a simple data container class, what should it be?
* Class or struct?
* Mutable or immutable?
* With or without non-empty constructor?
Examples of the above:
```
struct MutableStruct
{
    public string Text { get; set; }
    public int Number { get; set; }
}

struct ImmutableStruct
{
    public string Text { get; private set; }
    public int Number { get; private set; }

    public ImmutableStruct(string text, int number)
        : this()
    {
        Text = text;
        Number = number;
    }
}

struct MutableStructWithConstructor
{
    public string Text { get; set; }
    public int Number { get; set; }

    public MutableStructWithConstructor(string text, int number)
        : this()
    {
        Text = text;
        Number = number;
    }
}

class MutableClass
{
    public string Text { get; set; }
    public int Number { get; set; }
}

class ImmutableClass
{
    public string Text { get; private set; }
    public int Number { get; private set; }

    public ImmutableClass(string text, int number)
    {
        Text = text;
        Number = number;
    }
}

class MutableClassWithConstructor
{
    public string Text { get; set; }
    public int Number { get; set; }

    public MutableClassWithConstructor(string text, int number)
    {
        Text = text;
        Number = number;
    }
}
```
Any good reasons we should choose one above another? Or are there mostly subjective preferences that separate them? Or does it depend a lot on the specific use cases? If so, in what use cases should you choose what, and why? | Almost always a class; structs should really only be used for things that **are values** - for example, a complex number, or a currency type/value pair - and should *almost* without exception be immutable.
A parameterless constructor is handy on mutable data if you are going to do data-binding, as this allows the system to create instances without additional code from yourself. A non-empty constructor is pretty-much essential for immutable data. For mutable data, an object initializer goes a long way towards that (although isn't *quite* the same in terms of validation etc):
```
var obj = new Person {Name="Fred", DateOfBirth=DateTime.Today};
```
Whether your types are immutable is up to you; mutable makes it easier to do data-binding and serialization. In general, you tend to see more mutable types in .NET, but this may change as we get into the parallel / many-core era. | * You should almost always prefer classes over structs. Use structs only when the object represents an (immutable) value.
* If you need to change the object and it is safe to do so then make it mutable, else make it immutable and use cloning.
* If the object is in a valid state when created with the default constructor, fine. Otherwise always provide your own constructor. | C#: What style of data containers are preferred in general? | [
"",
"c#",
"data-structures",
"data-containers",
""
] |
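For what it's worth, the same design axes exist outside C#. In Python, a frozen dataclass gives the immutable-container-with-constructor shape almost for free (an illustrative analog, not part of the question):

```python
from dataclasses import FrozenInstanceError, dataclass, replace

@dataclass(frozen=True)
class ImmutableContainer:
    text: str
    number: int

p = ImmutableContainer("hello", 42)

# Mutation is rejected; "changing" a value means building a new instance.
q = replace(p, number=43)
```

The `frozen=True` flag makes every field read-only after construction, which is exactly the trade-off the immutable struct/class variants above are making by hand.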
How do I go about adding random minutes to a column in a dataset? Here is my code:
```
protected void btnUpdateTable_Click(object sender, EventArgs e)
{
    foreach (DataRow dr in ds.Tables[0].Rows)
    {
        // check if column [logout_time] is null or empty, fill it
        if (dr.IsNull("logout_time"))
        {
            // get the login column datetime
            // add random datetime to it
            if (!dr.IsNull("login_time"))
            {
                DateTime dt = Convert.ToDateTime(dr["login_time"]);
                dt = dt.AddMinutes(?); // "?" <-- here I want to add random minutes
            }
        }
    }
}
```
Any help greatly appreciated.
> **Thank you all for the help, here my final Code snippet:**
```
Random rand = new Random(); // create the generator once, outside the loop
foreach (DataRow dr in ds.Tables[0].Rows)
{
    // check if column [logout_time] is null or empty, fill it
    if (dr.IsNull("logout_time"))
    {
        // get the login column datetime
        // and add a random offset to it
        if (!dr.IsNull("login_time"))
        {
            DateTime dt = Convert.ToDateTime(dr["login_time"]);
            dt = dt.AddMinutes(rand.Next(0, 59));
            dt = dt.AddSeconds(rand.Next(0, 59));
            dr["logout_time"] = dt;
        }
    }
}
``` | You can use this:
```
Random random = new Random();
foreach (DataRow dr ...)
{
    int rand = random.Next(0, 60);
}
```
As a comment pointed out, you don't need to create a new Random object for every number you wish to create. (Actually, you probably shouldn't). | Try using `Random`:
```
Random randGen = new Random();
foreach (DataRow dr in ds.Tables[0].Rows)
{
    // check if column [logout_time] is null or empty, fill it
    if (dr.IsNull("logout_time"))
    {
        // get the login column datetime and add random minutes to it
        if (!dr.IsNull("login_time"))
        {
            DateTime dt = Convert.ToDateTime(dr["login_time"]);
            dt = dt.AddMinutes(randGen.Next(0, 60));
        }
    }
}
``` | Random minutes in C# 2.0 | [
"",
"c#",
".net",
"asp.net",
"c#-2.0",
""
] |
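The same create-the-generator-once advice applies in other languages too. A small Python equivalent of the loop body (illustrative, with a hypothetical helper name):

```python
import random
from datetime import datetime, timedelta

def add_random_minutes(dt, rng):
    """Offset dt by a random number of minutes and seconds in [0, 59]."""
    return dt + timedelta(minutes=rng.randint(0, 59),
                          seconds=rng.randint(0, 59))

rng = random.Random()  # one generator, created once and reused per row
```

Reusing one `Random` instance mirrors the C# fix: creating a fresh generator per row can reseed from the clock and produce repeated values.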
I'm trying to edit a text file to remove the vowels from it and for some reason nothing happens to the text file. I think it may be because a mode argument needs to be passed in the filestream.
[SOLVED]
Code:
```
#include "std_lib_facilities.h"

bool isvowel(char s)
{
    return (s == 'a' || s == 'e' || s == 'i' || s == 'o' || s == 'u');
}

void vowel_removal(string& s)
{
    for(int i = 0; i < s.length(); ++i)
        if(isvowel(s[i]))
            s[i] = ' ';
}

int main()
{
    vector<string> wordhold;
    cout << "Enter file name.\n";
    string filename;
    cin >> filename;

    ifstream f(filename.c_str());
    string word;
    while(f >> word) wordhold.push_back(word);
    f.close();

    ofstream out(filename.c_str(), ios::out);
    for(int i = 0; i < wordhold.size(); ++i) {
        vowel_removal(wordhold[i]);
        out << wordhold[i] << " ";
    }
    keep_window_open();
}
``` | Reading and writing on the same stream results in an error. Check `f.bad()` and `f.eof()` after the loop terminates. I'm afraid that you have two choices:
1. Read and write to different files
2. Read the entire file into memory, close it, and overwrite the original
As [Anders](https://stackoverflow.com/questions/1191349/why-doesnt-this-change-the-txt-file/1191416) stated, you probably don't want to use `operator<<` for this since it will break everything up by whitespace. You probably want [`std::getline()`](http://www.cplusplus.com/reference/string/getline/) to slurp in the lines. Pull them into a `std::vector<std::string>`, close the file, edit the vector, and overwrite the file.
## Edit:
[Anders](https://stackoverflow.com/questions/1191349/why-doesnt-this-change-the-txt-file/1191416) was right on the money with his description. Think of a file as a byte stream. If you want to transform the file *in place*, try something like the following:
```
#include <algorithm>
#include <fstream>
#include <string>

void remove_vowel(char& ch) {
    if (ch == 'a' || ch == 'e' || ch == 'i' || ch == 'o' || ch == 'u') {
        ch = ' ';
    }
}

int main() {
    char const delim = '\n';
    std::streampos start_of_line;
    std::string buf;
    std::fstream fs("file.txt");
    start_of_line = fs.tellg();
    while (std::getline(fs, buf, delim)) {
        std::for_each(buf.begin(), buf.end(), &remove_vowel);
        fs.seekg(start_of_line);     // go back to the start and...
        fs << buf << delim;          // overwrite the line, then...
        start_of_line = fs.tellg();  // grab the next line start
    }
    return 0;
}
```
There are some small problems with this code like it won't work for MS-DOS style text files but you can probably figure out how to account for that if you have to. | Files are sort of like a list, a sequential byte stream. When you open the file you position the file pointer at the very start, every read/write repositions the file pointer in the file with an offset larger than the last. You can use seekg() to move back in the file and overwrite previous content. Another problem with your approach above is that there will probably be some delimiters between the words typically one or more spaces for instance, you will need to handle read/write on these too.
It is much easier to just load the whole file in memory and do your manipulation on that string then rewriting the whole thing back. | Why doesn't this change the .txt file? | [
"",
"c++",
""
] |
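Option 2 from the accepted answer (read everything into memory, close, then overwrite the original) is often the simplest route. A compact Python sketch of that strategy:

```python
def remove_vowels_inplace(path):
    """Read the whole file, blank out vowels, then overwrite the original."""
    with open(path) as f:
        text = f.read()
    table = str.maketrans({v: " " for v in "aeiou"})
    with open(path, "w") as f:
        f.write(text.translate(table))
```

Because the file is fully closed between the read and the rewrite, there is no read/write stream-state interaction to worry about at all.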
I have a python desktop application that needs to store user data. On Windows, this is usually in `%USERPROFILE%\Application Data\AppName\`, on OSX it's usually `~/Library/Application Support/AppName/`, and on other \*nixes it's usually `~/.appname/`.
There exists a function in the standard library, `os.path.expanduser` that will get me a user's home directory, but I know that on Windows, at least, "Application Data" is localized into the user's language. That might be true for OSX as well.
What is the correct way to get this location?
**UPDATE:**
Some further research indicates that the correct way to get this on OSX is by using the function NSSearchPathDirectory, but that's Cocoa, so it means calling the PyObjC bridge... | Well, I hate to have been the one to answer my own question, but no one else seems to know. I'm leaving the answer for posterity.
```
APPNAME = "MyApp"

import sys
from os import path, environ

if sys.platform == 'darwin':
    from AppKit import NSSearchPathForDirectoriesInDomains
    # http://developer.apple.com/DOCUMENTATION/Cocoa/Reference/Foundation/Miscellaneous/Foundation_Functions/Reference/reference.html#//apple_ref/c/func/NSSearchPathForDirectoriesInDomains
    # NSApplicationSupportDirectory = 14
    # NSUserDomainMask = 1
    # True for expanding the tilde into a fully qualified path
    appdata = path.join(NSSearchPathForDirectoriesInDomains(14, 1, True)[0], APPNAME)
elif sys.platform == 'win32':
    appdata = path.join(environ['APPDATA'], APPNAME)
else:
    appdata = path.expanduser(path.join("~", "." + APPNAME))
``` | There's a small module available that does exactly that:
<https://pypi.org/project/appdirs/> | How do I store desktop application data in a cross platform way for python? | [
"",
"python",
"desktop-application",
"application-settings",
""
] |
I am creating software that creates documents, (Bayesian network graphs to be exact), and these documents need to be saved in an XML format.
I know how to create XML files, but I have yet to decide how to organise the code.
At the moment, I plan on having each object (i.e. a Vertex or an Edge) have a function called getXML() (they will probably implement an interface so that it can be expanded later on). getXML() will return a string containing the XML for that object.
There will be another object which will collect all these XML strings and put them together, and output an XML file.
For some reason, I think this seems a bit messy, how would you recommend doing it? | The Model (vertex/edge) **should not depend on representation** (XML):
```
Model model = new Model();
View view = model.getView(); // wrong
```
the correct way is to decouple model from view (with something like XStream or whatever) or just make the view coupled with the model:
```
Model model = new Model();
View view = new XMLView(model); // ok
``` | **Do you have to use Java**? *Scala* contains implicit type conversions which allow you to implicitly convert your model object to a view representation of your choice. It's also completely compatible with Java. For example:
```
def printData(obj: DataObject, os: OutputStream) = {
val view: ViewRepresentation = obj //note implicit conversion
view.printTo(os)
}
```
Where you have a `trait` (i.e. an `interface`)
```
trait ViewRepresentation {
def printTo(os: OutputStream)
}
```
And an implicit conversion:
```
implicit def dataobj2xmlviewrep(obj: DataObject): ViewRepresentation = {
new XmlViewRepresentation(obj)
}
```
You just have to code the bespoke XML representation. Oh yes - and it has native XML support in the language. | How would you create xml files in java | [
"",
"java",
"design-patterns",
""
] |
I often use the python interpreter for doing quick numerical calculations and would like all numerical results to be automatically printed using, e.g., exponential notation. Is there a way to set this for the entire session?
For example, I want:
```
>>> 1.e12
1.0e+12
```
not:
```
>>> 1.e12
1000000000000.0
``` | Create a Python script called whatever you want (say `mystartup.py`) and then set an environment variable `PYTHONSTARTUP` to the path of this script. Python will then load this script on startup of an interactive session (but not when running scripts). In this script, define a function similar to this:
```
def _(v):
if type(v) == type(0.0):
print "%e" % v
else:
print v
```
Then, in an interactive session:
```
C:\temp>set PYTHONSTARTUP=mystartup.py
C:\temp>python
ActivePython 2.5.2.2 (ActiveState Software Inc.) based on
Python 2.5.2 (r252:60911, Mar 27 2008, 17:57:18) [MSC v.1310 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> _(1e12)
1.000000e+012
>>> _(14)
14
>>> _(14.0)
1.400000e+001
>>>
```
Of course, you can define the function to be called whatever you want and to work exactly however you want.
Even better than this would be to use [IPython](http://ipython.scipy.org/). It's great, and you can set the number formatting how you want by using `result_display.when_type(some_type)(my_print_func)` (see the IPython site or search for more details on how to use this). | Sorry for necroposting, but this topic shows up in related google searches and I believe a satisfactory answer is missing here.
I believe the right way is to use [`sys.displayhook`](https://docs.python.org/3/library/sys.html#sys.displayhook). For example, you could add code like this in your `PYTHONSTARTUP` file:
```
import builtins
import sys
import numbers
__orig_hook = sys.displayhook
def __displayhook(value):
if isinstance(value, numbers.Number) and value >= 1e5:
builtins._ = value
print("{:e}".format(value))
else:
__orig_hook(value)
sys.displayhook = __displayhook
```
This will display large enough values using the exp syntax. Feel free to modify the threshold as you see fit.
Alternatively you can have the answer printed in both formats for large numbers:
```
def __displayhook(value):
__orig_hook(value)
if isinstance(value, numbers.Number) and value >= 1e5:
print("{:e}".format(value))
```
Or you can define yourself another answer variable besides the default `_`, such as `__` (such creativity, I know):
```
builtins.__ = None
__orig_hook = sys.displayhook
def __displayhook(value):
if isinstance(value, numbers.Number):
builtins.__ = "{:e}".format(value)
__orig_hook(value)
sys.displayhook = __displayhook
```
... and then display the exp-formatted answer by typing just `__`. | How do I control number formatting in the python interpreter? | [
"",
"python",
"ipython",
""
] |
Inspired by [this question](https://stackoverflow.com/questions/1084045/).
I commonly see people referring to JavaScript as a low level language, especially among users of GWT and similar toolkits.
My question is: why? If you use one of those toolkits, you're cutting yourself off from some of the features that make JavaScript so nice to program in: functions as objects, dynamic typing, etc. Especially when combined with one of the popular frameworks such as jQuery or Prototype.
It's like calling C++ low level because the standard library is smaller than the Java API. I'm not a C++ programmer, but I highly doubt every C++ programmer writes their own GUI and networking libraries. | It is a high-level language, given its flexibility (functions as objects, etc.)
But anything that is commonly compiled-to can be considered a low-level language simply because it's a target for compilation, and [there are many languages that can now be compiled to JS](http://www.google.com/search?q=javascript+as+assembly+language) because of its unique role as the browser's DOM-controlling language.
Among the languages (or subsets of them) that can be compiled to JS:
* Java
* C#
* Haxe
* Objective-J
* Ruby
* Python | Answering a question that says "why is something sometimes called X..." with "it's not X" is completely sidestepping the question, isn't it?
To many people, "low-level" and "high-level" are flexible, abstract ideas that apply differently when working with different systems. To people who are not hung up on the past (to some, there is no such thing as a modern low-level language), the high-ness or low-ness of a language commonly refers to how close to the target machine it is. This includes virtual machines, which is what a browser is nowadays. Sorry to all the guys who pine for asm on bare hardware.
When you look at the browser as a virtual machine, javascript is as close to the (fake) hardware as you get. That is the viewpoint that many who call javascript "low-level" have. I think it's a pointless distinction to make and people shouldn't get hung up on what is low and what is high. | Why is JavaScript sometimes viewed as a low level language? | [
"",
"javascript",
"definition",
"low-level",
""
] |
I create a bunch of forms, and I want to save and restore their position on application close/startup.
However, if the form is not visible, then the `.top` and `.left` are both 0. Only when it's visible are these properties populated with their 'real' values.
Right now my kludge is to show each form, save the info, then return it to its previous visible state:
```
int i;
bool formVisible;
// Show all current forms and form positions in array frmTestPanels
i = 0;
while (frmTestPanels[i] != null)
{
formVisible = frmTestPanels[i].Visible;
frmTestPanels[i].Visible = true;
note(frmTestPanels[i].Text + "(" + frmTestPanels[i].Left.ToString() + ", " + frmTestPanels[i].Top.ToString() + ") visible: " + formVisible.ToString());
frmTestPanels[i].Visible = formVisible;
i++;
}
note(i.ToString() + " forms present");
```
`note()` is a simple function that just displays information.
This, of course, results in flashing all the non-visible forms on shut down (possibly on startup as well? Haven't gotten that far...) which is undesirable.
* Is there another way to get the top and left of the form when it's not visible?
* Alternately, is there a better way to save and restore form state? | You will need to trap the Closing and Minimizing events on the form, and store the position at that point in time.
These fields are not valid when the form is hidden or minimized. | Whenever the user dismisses/hides/closes/makes invisible/whatevers a form, save its location. And **only** at this point in time. If the user is getting rid of a form, it must have been on the screen and you won't have to worry about it being not visible.
On the other side, don't create a form until the user asks for it for the first time. When each form is created, read its stored location and set it accordingly.
With this scheme if a form is never shown to the user, it's location will never be restored or saved. | Form.visible must be true to read .left and .top? | [
"",
"c#",
"winforms",
"forms",
""
] |
I want to match either @ or 'at' in a regex. Can someone help? I tried using the ? operator, giving me /@?(at)?/ but that didn't work. | Try:
```
/(@|at)/
```
This means either `@` or `at` but not both. It's also captured in a group, so you can later access the exact match through a backreference if you want to. | ```
/(?:@|at)/
```
mmyers' answer will perform a paren capture; mine won't. Which you should use depends on whether you want the paren capture. | How do you match one of two words in a regular expression? | [
"",
"php",
"regex",
""
] |
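The question is tagged PHP, but the alternation works the same way in any PCRE-style engine; a quick illustration of the capturing vs. non-capturing difference from the two answers above, in Python's `re` (chosen here only because it is easy to test):

```python
import re

# Capturing form: the matched alternative is exposed as group 1.
m = re.search(r"(@|at)", "user@host")
print(m.group(0))  # the overall match
print(m.group(1))  # the captured alternative

# Non-capturing form: matches the same text, but creates no group.
m2 = re.search(r"(?:@|at)", "user@host")
print(m2.group(0), m2.groups())
```

With `(?:...)` there is no backreference to access later, which is exactly the trade-off discussed above.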
This is my expression code:
```
($F{Personel_ODEME}.equals(Boolean.TRUE)) ? "PAID" : "NO PAID"
```
If Personel is paid, her/his Jasper tax report will read `PAID`, otherwise `NO PAID`. In the DB, this field is of Boolean type, but the expression returns a String type. So I am getting a `Cannot cast from String to Boolean` error. | OK, I fixed it. I changed `$F{Personel_ODEME}`'s type to String, then it worked like a charm. | The problem stems from your test `$F{Personel_ODEME}.equals(Boolean.TRUE)`, which Jasper thinks is a `String` to `Boolean` comparison and doesn't like. To fix this, try this:
```
($F{Personel_ODEME}.equals(Boolean.TRUE.toString())) ? "PAID" : "NO PAID"
```
This will result in a `String` to `String` comparison.
It is good to note that in Java, `"true".equals(Boolean.TRUE)` would result in false.
**edit:**
This appears to be a Jasper 'PrintWhen' expression, which allows you to determine whether to print the contents of a cell or not. It is expecting `Boolean.TRUE` or `Boolean.FALSE` as its return values. When you return "PAID", Jasper tries to evaluate that `String` as a `Boolean`, which it can't, so it throws the exception. | Expression Value in Jasper Report: "Cannot cast from String to Boolean" error | [
"",
"java",
"jasper-reports",
""
] |
Okay, this may seem kind of odd, but I wanted to get some suggestions from everyone here. I am a beginning Java developer (after 2 years of ASP.NET web development) and I have recently begun working on my first Java project - a calculator. I realize that there are tons of calculators out there, but I thought it would be a good beginner project.
Anyway, here is what I need help with. Currently, I am using a Scrolling JTextArea for display (instead of a simple JTextField) that is approximately 5 rows tall. I want the user to be able to scroll through the list to see previous entries and such. The format of the box will be equation on one line and the program will generate the answer on the next and so on.
My real question is, what is the best way to implement this? My first idea was to read through the JTextArea when equals is pressed, down to the last line, and try to search that line for the operator (+, -, etc.) and the operands. Is this the best way to go about it? Although this would work, I think it could get cumbersome and it sounds very inefficient. I am open to any suggestions, even possibly replacing the JTextArea if some other component would work better.
Thanks! | Your idea is interesting. So you would have a line such as:
> 2+2
then when pressing calculate would add the line
> 4
And so on; then you could type in another equation.
It could work, but as you said it wouldn't be the most efficient implementation... that's just a tradeoff of getting the desired functionality.
If I were going to implement it the way you described (with a JTextArea), I'd use Scanner and scan the value string a line at a time.
If the line has +/- in it, then do the calculation and add both the original line and the answer to a string.
The new string is the new value of the text field.
This method would get pretty cumbersome, as you would be continually recalculating the user's old entries as more were added.
I guess if you continually stored the last line of the document and, when you run out of lines, calculated the last stored line and appended the answer, it wouldn't be so bad.
---
Here's what I would do:
Use a JTextField to enter the calculations, and a [JList](http://java.sun.com/docs/books/tutorial/uiswing/components/list.html) to display the old ones and their answers. | There's no need to read through the JTextArea contents - use JTextArea.append() to add to the end. Here are some examples of JTextArea content manipulation:
```
JTextArea ta = new JTextArea("Initial Text");
// Insert some text at the beginning
int pos = 0;
ta.insert("some text", pos);
// Insert some text after the 5th character
pos = 5;
ta.insert("some text", pos);
// Append some text
ta.append("some text");
// Replace the first 3 characters with some text
int start = 0;
int end = 3;
ta.replaceRange("new text", start, end);
// Delete the first 5 characters
start = 0;
end = 5;
ta.replaceRange(null, start, end);
``` | Reading from Java JTextArea | [
"",
"java",
""
] |
Here's the Python code to run an arbitrary command returning its `stdout` data, or raise an exception on non-zero exit codes:
```
proc = subprocess.Popen(
cmd,
stderr=subprocess.STDOUT, # Merge stdout and stderr
stdout=subprocess.PIPE,
shell=True)
```
`communicate` is used to wait for the process to exit:
```
stdoutdata, stderrdata = proc.communicate()
```
The `subprocess` module does not support timeout--ability to kill a process running for more than X number of seconds--therefore, `communicate` may take forever to run.
What is the **simplest** way to implement timeouts in a Python program meant to run on Windows and Linux? | In Python 3.3+:
```
from subprocess import STDOUT, check_output
output = check_output(cmd, stderr=STDOUT, timeout=seconds)
```
`output` is a byte string that contains command's merged stdout, stderr data.
[`check_output`](https://docs.python.org/3/library/subprocess.html#subprocess.check_output) raises `CalledProcessError` on non-zero exit status as specified in the question's text unlike `proc.communicate()` method.
I've removed `shell=True` because it is often used unnecessarily. You can always add it back if `cmd` indeed requires it. If you add `shell=True` i.e., if the child process spawns its own descendants; `check_output()` can return much later than the timeout indicates, see [Subprocess timeout failure](https://stackoverflow.com/q/36952245/4279).
The timeout feature is available on Python 2.x via the [`subprocess32`](http://pypi.python.org/pypi/subprocess32/) backport of the 3.2+ subprocess module. | I don't know much about the low-level details; but, given that in Python 2.6 the API offers the ability to wait for threads and terminate processes, what about running the process in a separate thread?
```
import subprocess, threading
class Command(object):
def __init__(self, cmd):
self.cmd = cmd
self.process = None
def run(self, timeout):
def target():
print 'Thread started'
self.process = subprocess.Popen(self.cmd, shell=True)
self.process.communicate()
print 'Thread finished'
thread = threading.Thread(target=target)
thread.start()
thread.join(timeout)
if thread.is_alive():
print 'Terminating process'
self.process.terminate()
thread.join()
print self.process.returncode
command = Command("echo 'Process started'; sleep 2; echo 'Process finished'")
command.run(timeout=3)
command.run(timeout=1)
```
The output of this snippet in my machine is:
```
Thread started
Process started
Process finished
Thread finished
0
Thread started
Process started
Terminating process
Thread finished
-15
```
where it can be seen that, in the first execution, the process finished correctly (return code 0), while in the second one the process was terminated (return code -15).
I haven't tested on Windows; but, aside from updating the example command, I think it should work, since I haven't found anything in the documentation that says thread.join or process.terminate is not supported. | Using module 'subprocess' with timeout | [
"",
"python",
"multithreading",
"timeout",
"subprocess",
""
] |
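For readers on Python 3.5+, `subprocess.run` now bundles the whole pattern from the accepted answer — merged output, an exception on non-zero exit status, and a timeout that kills the child (a sketch, subject to the same `shell=True` caveat noted above):

```python
import subprocess
import sys

def run_with_timeout(cmd, seconds):
    """Run cmd; return its merged stdout+stderr as bytes.

    Raises CalledProcessError on non-zero exit and TimeoutExpired
    if the process runs longer than `seconds`.
    """
    result = subprocess.run(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # merge the streams, as in the question
        timeout=seconds,
        check=True,
    )
    return result.stdout

print(run_with_timeout([sys.executable, "-c", "print('ok')"], 5))
```

On timeout, `run` kills the child before raising, so there is no need for the manual thread-plus-terminate dance shown in the second answer.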
How do I use Unix timestamps with the Doctrine Timestampable behavior? I found the following code snippet [here](http://maxgarrick.com/effective-development-using-doctrine-orm/), but I'd rather not manually add this everywhere:
```
$this->actAs('Timestampable', array(
'created' => array('name' => 'created_at',
'type' => 'integer',
'format' => 'U',
'disabled' => false,
'options' => array()),
'updated' => array('name' => 'updated_at',
'type' => 'integer',
'format' => 'U',
'disabled' => false,
'options' => array())));
``` | This is a question that might get an answer easier than what I first thought, actually...
Let's begin by what you have now :
* a model class, that extends `Doctrine_Record`
+ I will call this class `Test`, for my example(s).
+ In this `Test` model, you want to use the `Timestampable` Behaviour, but with UNIX timestamps, and not `datetime` values
+ And you want this without having to write lots of configuration stuff in your models.
*(I can understand that : less risk to forget one line somewhere and at wrong data in DB)*
* A project that is configured and everything
+ which means you know you stuff with Doctrine
+ and that I won't talk about the basics
A solution to this problem would be to not use the default `Timestampable` behaviour that comes with Doctrine, but another one, that you will define.
Which means, in your model, you will have something like this at the bottom of `setTableDefinition` method :
```
$this->actAs('MyTimestampable');
```
*(I suppose this could go in the `setUp` method too, btw -- maybe it would be it's real place, actually)*
---
What we now have to do is define that `MyTimestampable` behaviour, so it does what we want.
As Doctrine's `Doctrine_Template_Timestampable` already does the job quite well *(except for the format, of course)*, we will inherit from it ; hopefully, it'll mean less code to write **;-)**
So, we declare our behaviour class like this :
```
class MyTimestampable extends Doctrine_Template_Timestampable
{
// Here it will come ^^
}
```
Now, let's have a look at what `Doctrine_Template_Timestampable` actually does, in Doctrine's code source :
* a bit of configuration *(the two `created_at` and `updated_at` fields)*
* And the following line, which registers a listener :
```
$this->addListener(new Doctrine_Template_Listener_Timestampable($this->_options));
```
Let's have a look at the source of this one ; we notice this part :
```
if ($options['type'] == 'date') {
return date($options['format'], time());
} else if ($options['type'] == 'timestamp') {
return date($options['format'], time());
} else {
return time();
}
```
This means if the type of the two `created_at` and `updated_at` fields is not `date` nor `timestamp`, `Doctrine_Template_Listener_Timestampable` will automatically use an UNIX timestamp -- how convenient !
---
As you don't want to define the `type` to use for those fields in every one of your models, we will modify our `MyTimestampable` class.
Remember, we said it was extending `Doctrine_Template_Timestampable`, which was responsible for the configuration of the behaviour...
So, we override that configuration, using a `type` other than `date` and `timestamp` :
```
class MyTimestampable extends Doctrine_Template_Timestampable
{
protected $_options = array(
'created' => array('name' => 'created_at',
'alias' => null,
'type' => 'integer',
'disabled' => false,
'expression' => false,
'options' => array('notnull' => true)),
'updated' => array('name' => 'updated_at',
'alias' => null,
'type' => 'integer',
'disabled' => false,
'expression' => false,
'onInsert' => true,
'options' => array('notnull' => true)));
}
```
We said earlier on that our model was acting as `MyTimestampable`, and not `Timestampable`... So, now, let's see the result **;-)**
If we consider this model class for `Test` :
```
class Test extends Doctrine_Record
{
public function setTableDefinition()
{
$this->setTableName('test');
$this->hasColumn('id', 'integer', 4, array(
'type' => 'integer',
'length' => 4,
'unsigned' => 0,
'primary' => true,
'autoincrement' => true,
));
$this->hasColumn('name', 'string', 32, array(
'type' => 'string',
'length' => 32,
'fixed' => false,
'primary' => false,
'notnull' => true,
'autoincrement' => false,
));
$this->hasColumn('value', 'string', 128, array(
'type' => 'string',
'length' => 128,
'fixed' => false,
'primary' => false,
'notnull' => true,
'autoincrement' => false,
));
$this->hasColumn('created_at', 'integer', 4, array(
'type' => 'integer',
'length' => 4,
'unsigned' => 0,
'primary' => false,
'notnull' => true,
'autoincrement' => false,
));
$this->hasColumn('updated_at', 'integer', 4, array(
'type' => 'integer',
'length' => 4,
'unsigned' => 0,
'primary' => false,
'notnull' => false,
'autoincrement' => false,
));
$this->actAs('MyTimestampable');
}
}
```
Which maps to the following MySQL table :
```
CREATE TABLE `test1`.`test` (
`id` int(11) NOT NULL auto_increment,
`name` varchar(32) NOT NULL,
`value` varchar(128) NOT NULL,
`created_at` int(11) NOT NULL,
`updated_at` int(11) default NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8
```
We can create two rows in the table this way :
```
$test = new Test();
$test->name = 'Test 1';
$test->value = 'My Value 2';
$test->save();
$test = new Test();
$test->name = 'Test 2';
$test->value = 'My Value 2';
$test->save();
```
If we check the values in the DB, we'll get something like this :
```
mysql> select * from test;
+----+--------+----------------+------------+------------+
| id | name | value | created_at | updated_at |
+----+--------+----------------+------------+------------+
| 1 | Test 1 | My Value 1 | 1248805507 | 1248805507 |
| 2 | Test 2 | My Value 2 | 1248805583 | 1248805583 |
+----+--------+----------------+------------+------------+
2 rows in set (0.00 sec)
```
So, we are OK for the creation of rows, it seems ;-)
---
And now, let's fetch and update the second row :
```
$test = Doctrine::getTable('Test')->find(2);
$test->value = 'My New Value 2';
$test->save();
```
And, back to the DB, we now get this :
```
mysql> select * from test;
+----+--------+----------------+------------+------------+
| id | name | value | created_at | updated_at |
+----+--------+----------------+------------+------------+
| 1 | Test 1 | My Value 1 | 1248805507 | 1248805507 |
| 2 | Test 2 | My New Value 2 | 1248805583 | 1248805821 |
+----+--------+----------------+------------+------------+
2 rows in set (0.00 sec)
```
The `updated_at` field has been updated, and the `created_at` field has not changed ; which seems OK too **;-)**
---
So, to make things short, fit in a couple of bullet points, and summarize quite a bit :
* Our model class acts as our own `MyTimestampable`, and not the default `Timestampable`
* Our behaviour extends Doctrine's one
* And only override its configuration
+ So we can use it as we want, with only one line of code in each one of our models.
I will let you do some more intensive tests, but I hope this helps !
Have fun **:-)** | One method would be to use Doctrine's listeners to create a Unix timestamp equivalent when the record is fetched and before it is saved:
```
class Base extends Doctrine_Record_Listener
{
public function preHydrate(Doctrine_Event $event)
{
$data = $event->data;
$data['unix_created_at'] = strtotime($data['created_at']);
$data['unix_updated_at'] = strtotime($data['updated_at']);
$event->data = $data;
}
}
```
This could be your base class that you extend in anything that needs created\_at and updated\_at functionality.
I'm sure that with a little more tinkering you could loop through $data and convert all datetime fields to 'unix\_'.$field\_name.
Good luck | Use Unix timestamp in Doctrine Timestampable | [
"",
"php",
"doctrine",
"epoch",
"unix-timestamp",
""
] |
In Eclipse, can I find which methods override the method declaration on focus now?
Scenario: When I'm viewing a method in a base class (or interface), I would like to know where the method is overridden (or implemented). Now I just make the method `final` and see where I get the errors, but it is not perfect.
Note: I know about `class hierarchy` views, but I don't want to go through all the subclasses to find which ones use a custom implementation. | Select the method and click `Ctrl`+`T`. Alternatively, right-click on it and select `Quick Type Hierachy`.
| Select the method declaration and hit `Ctrl`+`G` to see all declarations.
To see only the declarations inherited by the subject, `right-click->Declarations->Hierarchy`. | Finding overriding methods | [
"",
"java",
"eclipse",
""
] |
I want to concat two or more gzip streams without recompressing them.
I mean I have A compressed to A.gz and B to B.gz, and I want to combine them into a single gzip (A+B).gz without compressing again, using C or C++.
Several notes:
* Even though you can just concat the two files and gunzip would know how to deal with them, most programs would not be able to deal with the two chunks.
* I had once seen an example of code that does this just by decompressing the files and then manipulating the original; this is significantly faster than normal re-compression, but it still requires O(n) CPU operations.
* Unfortunately I can't find that example now (concatenation using decompression only); if someone can point to it I would be grateful.
Note: it is not duplicate of [this](https://stackoverflow.com/questions/274185/is-there-a-gzip-merger-that-merges-two-gzip-files-without-decompressing-them) because proposed solution is not fits my needs.
**Clarification edit**:
I want to concatenate several compressed HTML pieces and send them to the browser as one page, per the request "Accept-Encoding: gzip", with the response "Content-Encoding: gzip".
If the stream is concatenated as simply as `cat a.gz b.gz >ab.gz`, the Gecko (Firefox) and KHTML web engines get only the first part (a); IE6 does not display anything, and Google Chrome displays the first part (a) correctly and the second part (b) as garbage (it does not decompress it at all).
Only Opera handles this well.
So I need to create a **single** gzip stream of several chunks and send them without re-compressing.
**Update:** I found [gzjoin.c](http://svn.ghostscript.com/ghostscript/tags/zlib-1.2.3/examples/gzjoin.c) in the examples of zlib; it does this using only decompression. The problem is that decompression is still slower than a simple `memcpy`.
It is still 4 times faster than the fastest gzip compression, but it is not enough.
What I need is to find what data to save together with the gzip file in order to avoid running the decompression procedure, and how to find this data during compression. | Look at the [RFC1951](http://www.ietf.org/rfc/rfc1951.txt) and [RFC1952](http://www.ietf.org/rfc/rfc1952.txt)
The format is simply a series of members, each composed of three parts: a header, data, and a trailer. The data part is itself a set of chunks, with each chunk having a header and a data part.
To simulate the effect of gzipping the result of the concatenation of two (or more) files, you simply have to adjust the headers (there is a last-chunk flag, for instance) and the trailer correctly, and copy the data parts.
There is a problem: the trailer has a CRC32 of the uncompressed data, and I'm not sure whether that is easy to compute when you know the CRCs of the parts.
Edit: the comments in the gzjoin.c file you found imply that, while it is possible to compute the CRC32 without decompressing the data, there are other things which need the decompression. | The gzip manual says that two gzip files can be concatenated as you attempted.
<http://www.gnu.org/software/gzip/manual/gzip.html#Advanced-usage>
So it appears that the other tools may be broken. As seen in this bug report.
<http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=97263>
Apart from filing a bug report with each one of the browser makers, and hoping they comply, perhaps your program can cache the most common concatenations of the required data.
As others have mentioned you may be able to perform surgery:
<http://www.gzip.org/zlib/rfc-gzip.html>
And this requires a CRC-32 of the final uncompressed file. The required size of the uncompressed file can be easily calculated by adding the lengths of the individual sub-files.
At the bottom of the last link, there is code for calculating a running CRC-32, named update\_crc.
Calculating the CRC on the uncompressed files each time your process is run is probably cheaper than the gzip algorithm itself. | How to concat two or more gzip files/streams | [
"",
"c++",
"gzip",
"concatenation",
""
] |
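The multi-member behaviour described in the gzip manual above is easy to verify from Python's standard library (just an illustration of the file format, not of the C implementation under discussion):

```python
import gzip
import io

# Two independently compressed gzip members...
part_a = gzip.compress(b"Hello, ")
part_b = gzip.compress(b"world!")

# ...naively concatenated at the byte level, as with `cat a.gz b.gz`.
joined = part_a + part_b

# A multi-member-aware reader decompresses them back to back.
with gzip.GzipFile(fileobj=io.BytesIO(joined)) as f:
    print(f.read())  # b'Hello, world!'
```

This is exactly the stream shape that gunzip accepts but that the browsers cited in the question mishandle: each part keeps its own header, trailer, and CRC32.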
So I have a file that is in a folder in the website folder structure.
I use it to log errors.
It works when ran from Visual Studio.
I understand the problem. I need to set permissions on inetpub.
But for what user ? and how?
I tried adding some IIS user but it still cannot write to the file.
So I am using ASP.net
Framework 3.5 SP1
Server is Windows Server 2003 enterprise edition SP2
How should I set up permissions so that writing works?
Thanks | You need to give the Network Service account modify rights.
Right click on the folder, choose Properties, go to the Security tab, and add the Network Service account if it's not there. If it is listed, ensure it has "Modify" checked. | Which users have you granted write access on the folder? I believe you may need to give write access to the user that the ASP.NET worker process runs as. | How to allow writing to a file on the server ASP.net | [
"",
"c#",
"asp.net",
"permissions",
""
] |
I have a function
```
function toggleSelectCancels(e) {
var checkBox = e.target;
var cancelThis = checkBox.checked;
var tableRow = checkBox.parentNode.parentNode;
}
```
how can I get a jQuery object that contains tableRow
Normally I would go `$("#" + tableRow.id)`; the problem here is that the id for tableRow is something like `"x:1280880471.17:adr:2:key:[95]:tag:"`. It is autogenerated by an Infragistics control. jQuery doesn't seem to `getElementById` when the id is like this. The standard DOM `document.getElementById("x:1280880471.17:adr:2:key:[95]:tag:")` does, however, return the correct row element.
Anyways, is there a way to get a jQuery object from a dom element?
Thanks,
~ck in San Diego | Absolutely,
```
$(tableRow)
```
<http://docs.jquery.com/Core/jQuery#elements> | You can call the jQuery function on DOM elements: `$(tableRow)`
You can also use the [`closest`](http://docs.jquery.com/Traversing/closest) method of jQuery in this case:
```
var tableRowJquery = $(checkBox).closest('tr');
```
If you want to keep using your ID, **kgiannakakis** (below), provided an excellent link on how to [escape characters with special meaning in a jQuery selector](http://docs.jquery.com/Frequently_Asked_Questions#How%5Fdo%5FI%5Fselect%5Fan%5Felement%5Fthat%5Fhas%5Fweird%5Fcharacters%5Fin%5Fits%5FID.3F). | Can I get a jQuery object from an existing element | [
"",
"javascript",
"jquery",
""
] |
Given this function:
```
function Repeater(template) {
var repeater = {
markup: template,
replace: function(pattern, value) {
this.markup = this.markup.replace(pattern, value);
}
};
return repeater;
};
```
How do I make `this.markup.replace()` replace globally? Here's the problem. If I use it like this:
```
alert(new Repeater("$TEST_ONE $TEST_ONE").replace("$TEST_ONE", "foobar").markup);
```
The alert's value is "foobar $TEST\_ONE".
If I change `Repeater` to the following, then nothing in replaced in Chrome:
```
function Repeater(template) {
var repeater = {
markup: template,
replace: function(pattern, value) {
this.markup = this.markup.replace(new RegExp(pattern, "gm"), value);
}
};
return repeater;
};
```
...and the alert is `$TEST_ONE $TEST_ONE`. | You need to double escape any RegExp characters (once for the slash in the string and once for the regexp):
```
"$TESTONE $TESTONE".replace( new RegExp("\\$TESTONE","gm"),"foo")
```
Otherwise, it looks for the end of the line and 'TESTONE' (which it never finds).
Personally, I'm not a big fan of building regexp's using strings for this reason. The level of escaping that's needed could lead you to drink. I'm sure others feel differently though and like drinking when writing regexes. | In terms of pattern interpretation, there's no difference between the following forms:
* `/pattern/`
* `new RegExp("pattern")`
If you want to replace a literal string using the `replace` method, I think you can just pass a string instead of a regexp to `replace`.
Otherwise, you'd have to escape any regexp special characters in the pattern first - maybe like so:
```
function reEscape(s) {
return s.replace(/([.*+?^$|(){}\[\]])/mg, "\\$1");
}
// ...
var re = new RegExp(reEscape(pattern), "mg");
this.markup = this.markup.replace(re, value);
``` | JavaScript replace/regex | [
"",
"javascript",
"regex",
"replace",
""
] |
I have an input element with onchange="do\_something()". When I am typing and hit the enter key, it executes correctly (do\_something first, then submit) on Firefox and Chromium (not tested in IE and Safari); however, in Opera it doesn't (it submits immediately). I tried using a delay like this:
```
<form action="." method="POST" onsubmit="wait_using_a_big_loop()">
<input type="text" onchange="do_something()">
</form>
```
but it didn't work either.
Do you have some recommendations?
Edit:
Finally I used a mix of the solutions provided by iftrue and crescentfresh: just unfocus the field to fire the do\_something() method. I did this because some other input fields had other methods onchange.
```
$('#myForm').submit( function(){
$('#id_submit').focus();
} );
```
Thanks | You could use jquery and hijack the form.
<http://docs.jquery.com/Events/submit>
```
<form id = "myForm">
</form>
$('#myForm').submit( function(){
do_something();
} );
```
This should submit the form after calling that method. If you need more fine-grained control, throw a return false at the end of the submit event and make your own post request with $.post(); | From <http://cross-browser.com/forums/viewtopic.php?id=123>:
> Per the spec, pressing enter is not
> supposed to fire the change event. The
> change event occurs when a control
> loses focus and its value has changed.
>
> In IE pressing enter focuses the
> submit button - so the text input
> loses focus and this causes a change
> event.
>
> In FF pressing enter does not focus
> the submit button, yet the change
> event still occurs.
>
> In Opera neither of the above occurs.
`keydown` is more consistent across browsers for detecting changes in a field. | Opera problem with javascript on submit | [
"",
"javascript",
"submit",
"opera",
""
] |
I'm a C++ developer who has primarily programmed on Solaris and Linux until recently, when I was forced to create an application targeted to Windows.
I've been using a communication design based on C++ I/O stream backed by TCP socket. The design is based on a single thread reading continuously from the stream (most of the time blocked in the socket read waiting for data) while other threads send through the same stream (synchronized by mutex).
When moving to windows, I elected to use the boost::asio::ip::tcp::iostream to implement the socket stream. I was dismayed to find that the above multithreaded design resulted in deadlock on Windows. It appears that the `operator<<(std::basic_ostream<...>,std::basic_string<...>)` declares a 'Sentry' that locks the entire stream for both input and output operations. Since my read thread is always waiting on the stream, send operations from other threads deadlock when this Sentry is created.
Here is the relevant part of the call stack during operator<< and Sentry construction:
```
...
ntdll.dll!7c901046()
CAF.exe!_Mtxlock(_RTL_CRITICAL_SECTION * _Mtx=0x00397ad0) Line 45 C
CAF.exe!std::_Mutex::_Lock() Line 24 + 0xb bytes C++
CAF.exe!std::basic_streambuf<char,std::char_traits<char> >::_Lock() Line 174 C++
CAF.exe!std::basic_ostream<char,std::char_traits<char> >::_Sentry_base::_Sentry_base(std::basic_ostream<char,std::char_traits<char> > & _Ostr={...}) Line 78 C++
CAF.exe!std::basic_ostream<char,std::char_traits<char> >::sentry::sentry(std::basic_ostream<char,std::char_traits<char> > & _Ostr={...}) Line 95 + 0x4e bytes C++
> CAF.exe!std::operator<<<char,std::char_traits<char>,std::allocator<char> >(std::basic_ostream<char,std::char_traits<char> > & _Ostr={...}, const std::basic_string<char,std::char_traits<char>,std::allocator<char> > & _Str="###") Line 549 + 0xc bytes C++
...
```
I would be fine if the istream and ostream components were locked separately, but that is not the case.
Is there an alternate implementation of the stream operators that I can use? Can I direct it not to lock? Should I implement my own (not sure how to do this)?
Any suggestions would be appreciated.
(Platform is Windows 32- and 64-bit. Behavior observed with Visual Studio 2003 Pro and 2008 Express) | This question has languished for long enough. I'm going to report what I ended up doing even though there's a chance I'll be derided.
I had already determined that the problem was that two threads were coming to a deadlock while trying to access an iostream object in separate read and write operations. I could see that the Visual Studio implementation of string stream insertion and extraction operators both declared a Sentry, which locked the stream buffer associated with the stream being operated on.
I knew that, for the stream in question for this deadlock, the stream buffer implementation was boost::asio::basic\_socket\_streambuf. I inspected the implementation to see that read and write operations (underflow and overflow) actually operate on different buffers (get vs. put).
With the above verified, I chose to simply circumvent the locking for this application. To do that, I used project-specific pre-processor definitions to exclude the locking code in the basic\_istream implementation of the locking sentry:
```
class _Sentry_base
{ // stores thread lock and reference to input stream
public:
__CLR_OR_THIS_CALL _Sentry_base(_Myt& _Istr)
: _Myistr(_Istr)
{ // lock the stream buffer, if there
#ifndef MY_PROJECT
if (_Myistr.rdbuf() != 0)
_Myistr.rdbuf()->_Lock();
#endif
}
__CLR_OR_THIS_CALL ~_Sentry_base()
{ // destroy after unlocking
#ifndef MY_PROJECT
if (_Myistr.rdbuf() != 0)
_Myistr.rdbuf()->_Unlock();
#endif
}
```
***Upside:***
* It works
* Only my project (with the appropriate defines) is affected
***Downside:***
* Feels a little hacky
* Each platform where this is built will need this modification
I plan to mitigate the latter point by loudly documenting this in the code and project documentation.
I realize that there may be a more elegant solution to this, but in the interest of expediency I chose a direct solution after due diligence to understand the impacts. | According to the boost documentation [1] the use of two threads accessing the one object without mutexes is "unsafe". Just because it worked on the Unix platforms is no guarantee that it will work on the Windows platform.
So your options are:
1. Rewrite your code so your threads don't access the object simultaneously
2. Patch the boost library and send the changes back
3. Ask Chris really nicely if he will do the changes for the Windows platform
[1] <http://www.boost.org/doc/libs/1_39_0/doc/html/boost_asio/overview/core/threads.html> | Is there a way to get non-locking stream insertion/extraction on basic_iostream in Windows? | [
"",
"c++",
"windows",
"string",
"sockets",
"iostream",
""
] |
Many of my company's clients use our data acquisition software in a research basis. Due to the nature of research in general, some of the clients ask that data is encrypted to prevent tampering -- there could be serious ramifications if their data was shown to be falsified.
Some of our *binary* software encrypts output files with a password stored in the source, that looks like random characters. At the software level, we are able to open up encrypted files for read-only operations. If someone *really* wanted to find out the password so that they could alter data, it would be possible, but it would be a lot of work.
I'm looking into using Python for rapid development of another piece of software. To duplicate the functionality of encryption to defeat/discourage data tampering, the best idea I've come up with so far is to just use `ctypes` with a DLL for file reading/writing operations, so that the method of encryption and decryption is "sufficiently" obfuscated.
We are well aware that an "uncrackable" method is unattainable, but at the same time I'm obviously not comfortable with just having the encryption/decryption approaches sitting there in plain text in the Python source code. A "very strong discouragement of data tampering" would be good enough, I think.
**What would be the best approach to attain a happy medium of encryption or other proof of data integrity using Python?** I saw [another post](https://stackoverflow.com/questions/609127/generating-a-tamper-proof-signature-of-some-data) talking about generating a "tamper proof signature", but if a signature was generated in pure Python then it would be trivial to generate a signature for any arbitrary data. We *might* be able to phone home to prove data integrity, but that seems like a major inconvenience for everyone involved. | As a general principle, you don't want to use encryption to protect against tampering, instead you want to use a digital signature. Encryption gives you **confidentiality**, but you are after **integrity**.
Compute a hash value over your data and either store the hash value in a place where you know it cannot be tampered with or digitally sign it.
In your case, it seems like you want to ensure that only your software can have generated the files? Like you say, there cannot exist a really secure way to do this when your users have access to the software since they can tear it apart and find any secret keys you include. Given that constraint, I think your idea of using a DLL is about as good as you can do it. | If you are embedding passwords somewhere, you are already hosed. You can't guarantee anything.
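A minimal sketch of the hash/signature idea using only Python's standard library (the key constant and helper names here are invented for illustration — and note the embedded key has exactly the extractability limitation discussed above):

```python
import hashlib
import hmac

# Illustration only: an embedded key can ultimately be extracted from the code.
SECRET_KEY = b"embedded-key-an-attacker-could-eventually-extract"

def sign(data: bytes) -> str:
    """Return a keyed hash (HMAC-SHA256) over the data."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, signature: str) -> bool:
    """Recompute the HMAC and compare in constant time."""
    return hmac.compare_digest(sign(data), signature)

record = b"sample acquisition data"
sig = sign(record)
assert verify(record, sig)
assert not verify(record + b" tampered with", sig)
```

Anyone who can read the key can forge signatures, so this only raises the bar — it doesn't eliminate the problem the answer describes.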
However, you could use public key/private key encryption to make sure the data hasn't been tampered with.
The way it works is this:
1. You generate a public key / private key pair.
2. Keep the private key secure, distribute the public key.
3. Hash the data and then sign the hash with the private key.
4. Use the public key to verify the hash.
This effectively renders the data read-only outside your company, and provides your program a simple way to verify that the data hasn't been modified without distributing passwords. | Python: encryption as means to prevent data tampering | [
"",
"python",
"encryption",
"data-integrity",
"tampering",
""
] |
My question is a little bit tricky, at least I think. Maybe not, anyway. I want to know if there's any way to know if the user is leaving the page, whether he clicks the "Previous" button, closes the window or clicks a link on my website. If my memory's still good, I think it's possible with JavaScript.
But in my case, I want to do some stuff (cleaning objects) in my codebehind. | There really is no way to do it. No event is fired when the browser goes back.
You can do it with Javascript, but it is difficult at best.
See the [question here](https://stackoverflow.com/questions/821011/how-do-you-prevent-javascript-page-from-navigating-away).
This script will also work. It was found [here](http://www.sajithmr.com/warning-before-navigate-away-from-a-page/)
```
<script>
window.onbeforeunload = function (evt) {
var message = 'Are you sure you want to leave?';
if (typeof evt == 'undefined') {
evt = window.event;
}
if (evt) {
evt.returnValue = message;
}
return message;
}
</script>
``` | You can use javascript to tell the server when the user leaves the page. But the webserver washes its hands of the page once it leaves the server, while the user might keep the page open for a week.
If you use javascript on the page to fire off a notice to your server when the page unloads, you *can* take some action. But you can't tell if he's leaving your page for another one of your pages, another website, or closing the browser.
And that last notice isn't guaranteed to always be sent, so you can't rely on it completely.
So using the javascript notice to clean up objects (caches or sessions) is a flawed system. You're better with cache & session invalidation strategies that are independent of the onunload notice. | Is there any way to know that user leaving a page with asp.net? | [
"",
"c#",
"asp.net",
"events",
""
] |
I have a global variable:
```
const std::string whiteSpaceBeforeLeadingCmntOption = "WhiteSpaceBeforeLeadingComment";
```
When I remove the const on this variable declaration, I get many occurrences of the following linker error:
```
error LNK2005: "class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > whiteSpaceBeforeLeadingCmntOption" (?whiteSpaceBeforeLeadingCmntOption@@3V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@A) already defined in REGISTER_TO_UNSRLZ.obj
```
This is in a .h file, which is included various places, but I have a #ifndef band on it to avoid including it in several places. Any ideas what the error is from? | This works when you have const in the .h, since const implies static so you can have the same variable in multiple compilands.
By removing const on a variable defined in a .h file, you are creating multiple instances with the same identifier within the same program.
If you need to remove const, in the .h, you could do:
```
extern std::string whiteSpaceBeforeLeadingCmntOption;
```
And then have:
```
std::string whiteSpaceBeforeLeadingCmntOption = "WhiteSpaceBeforeLeadingComment";
```
In one of your .cpp files. | The problem is that by defining it in the header file, it is being instantiated in each compilation unit that includes that header file, leading to it being multiply defined for the link stage.
What you want to do is declare this in your .h:
```
extern std::string whiteSpaceBeforeLeadingCmntOption;
```
And then in a *single* cpp, declare:
```
std::string whiteSpaceBeforeLeadingCmntOption = "WhiteSpaceBeforeLeadingComment";
``` | Why does removing const give me linker errors? | [
"",
"c++",
"linker",
"constants",
""
] |
What's the best way to wait (without spinning) until something is available in either one of two (multiprocessing) [Queues](http://docs.python.org/library/multiprocessing.html#multiprocessing.Queue), where both reside on the same system? | It doesn't look like there's an official way to handle this yet. Or at least, not based on this:
* <http://bugs.python.org/issue3831>
You could try something like what this post is doing -- accessing the underlying pipe filehandles:
* [http://haltcondition.net/?p=2319](https://web.archive.org/web/20141124021104/http://www.haltcondition.net/?p=2319)
and then use select. | Actually you can use multiprocessing.Queue objects in select.select. i.e.
```
import multiprocessing
import select

que = multiprocessing.Queue()
(input, [], []) = select.select([que._reader], [], [])
```
would select que only if it is ready to be read from.
No documentation about it though. I was reading the source code of the multiprocessing queue library (on Linux it's usually something like /usr/lib/python2.6/multiprocessing/queues.py) to find this out.
With Queue.Queue I haven't found any smart way to do this (and I would really love to).
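Both answers predate it, but later Python versions (3.3+) added `multiprocessing.connection.wait`, which makes essentially this trick official; it still relies on the undocumented `_reader` attribute of the queues:

```python
import multiprocessing
from multiprocessing.connection import wait

q1 = multiprocessing.Queue()
q2 = multiprocessing.Queue()
q2.put("hello")

# Block until at least one queue's underlying pipe is readable
# (timeout added as a safety net for the demo).
ready = wait([q1._reader, q2._reader], timeout=5)

assert q2._reader in ready
msg = q2.get()
print(msg)  # -> hello
```

`wait` returns the subset of connections that are ready, so you can drain only the queues that actually have data instead of spinning over both.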
"",
"python",
"events",
"select",
"synchronization",
"multiprocessing",
""
] |
I am intrigued as to how singletons work in Google App Engine (or any distributed server environment). Given your application can be running in multiple processes (on multiple machines) at once, and requests can get routed all over the place, what actually happens under the hood when an app does something like: 'CacheManager.getInstance()'?
I'm just using the (GAE) CacheManager as an example, but my point is, there is a single global application instance of a singleton somewhere, so where does it live? Is an RPC invoked? In fact, how is global application state (like sessions) actually handled generally?
Regards,
Shane | The singletons in App Engine Java are per-runtime, not per-webapp. Their purpose is simply to provide a single point of access to the underlying service (which in the case of both Memcache and Users API, is accessed via an RPC), but that's purely a design pattern for the library - there's no per-app singleton anywhere that these methods access. | Caches are generally linked up with some sort of distributed replicated cache. For example, GAE uses a custom version of [memcached](http://www.danga.com/memcached/) to handle maintaining a shared cache of objects across a cluster, while maintaining the storage state in a consistent state. In general there are lots of solutions for this problem with lots of different tradeoffs to be made in terms of performance and cache coherence (eg, is it critical that all caches match 100% of the time, must the cache be written to disk to protect against loss, etc).
Here are some sample products with distributed caching features (most have documentation describing the tradeoffs of various approaches in great detail:
* [memcached](http://www.danga.com/memcached/) - C with lots of client APIs and language ports
* [Ehcache](http://ehcache.sourceforge.net/) - OSS Java cache, with widespread adoption
* [JBoss Cache](http://jboss.org/jbosscache/) - Another popular Java OSS solution
* [Oracle Coherence (formerly Tangosol Coherence)](http://www.oracle.com/technology/products/coherence/index.html) - Probably the best known Java commercial cache.
* [Indexus Cache](http://www.sharedcache.com/cms/) - A popular .Net OSS solution
* [NCache](http://www.alachisoft.com/ncache/index.html) - Likely the most popular .Net commercial caching solution
As you can see, there have been many projects that have approached this problem. One possible solution is to simply share a single cache on a single machine, however, most projects make some sort of replication and distributed failover possible. | How do Singletons in Google App Engine (or more generally in a distributed server environment) work? | [
"",
"java",
"google-app-engine",
"singleton",
"distributed",
""
] |
I am trying to clean up the javascript of a site. I am finding the header of my site is looking like this and growing:
```
<script type="text/javascript" src="jquery.base.js"></script>
<script type="text/javascript" src="jquery.plugin1.js"></script>
<script type="text/javascript" src="jquery.plugin2.js"></script>
<script type="text/javascript" src="jquery.plugin3.js"></script>
<script type="text/javascript" src="jquery.plugin4.js"></script>
```
I am well aware of the negative effects of many HTTP requests. The site also has lots of embedded JS that will need to be pulled into external files. I am wondering if I will be able to just copy-paste all of this together and run it through some compression, or will that cause issues? I hope someone has had some similar experience.
Gzipping will save you even more than minifying. The two together can reduce the file size 80% easily on top of the reduced HTTP requests. | How to properly compress Jquery and lots of plugins? | [
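The combine-then-compress steps can be sketched in shell (the `printf` lines just create stand-in files for the demo; a minifier such as the YUI Compressor would slot in between steps 1 and 3):

```shell
# Demo setup: stand-in plugin files (in practice, your real .js sources).
printf 'var base = 1;\n' > jquery.base.js
printf 'var plugin = 2;\n' > jquery.plugin1.js

# 1. Concatenate into a single file: one HTTP request instead of many.
cat jquery.base.js jquery.plugin1.js > all.js

# 2. (Run all.js through a minifier here, producing e.g. all.min.js.)

# 3. Pre-gzip the result, or let the web server gzip it on the fly.
gzip -9 -c all.js > all.js.gz
```

Whether you pre-compress or configure the server to gzip per request is a deployment choice; either way the browser sees one small compressed file.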
"",
"javascript",
"jquery",
"jquery-plugins",
"compression",
""
] |
I encountered an interesting situation today in a program where I inadvertently assigned an unsigned integer to a std::string. The Visual Studio C++ compiler did not give any warnings or errors about it, but I happened to notice the bug when I ran the project and it gave me junk characters for my string.
This is kind of what the code looked like:
```
std::string my_string("");
unsigned int my_number = 1234;
my_string = my_number;
```
The following code also compiles fine:
```
std::string my_string("");
unsigned int my_number = 1234;
my_string.operator=(my_number);
```
The following results in an error:
```
unsigned int my_number = 1234;
std::string my_string(my_number);
```
What is going on? How come the compiler will stop the build with the last code block, but let the first 2 code blocks build? | Because string is assignable from `char`, and `int` is implicitly convertible to `char`. | The std::string class has the following assignment operator defined:
```
string& operator=( char ch );
```
This operator is invoked by implicit conversion of `unsigned int` to `char`.
In your third case, you are using an explicit constructor to instantiate a `std::string`, none of the available constructors can accept an `unsigned int`, or use implicit conversion from `unsigned int`:
```
string();
string( const string& s );
string( size_type length, const char& ch );
string( const char* str );
string( const char* str, size_type length );
string( const string& str, size_type index, size_type length );
string( input_iterator start, input_iterator end );
``` | Why does C++ allow an integer to be assigned to a string? | [
"",
"c++",
"string",
"stl",
"variable-assignment",
""
] |
**Situation:**
[here,](http://yvoschaap.com/videowall/?q=sunset%20beautiful) where I clicked on a video.
**Problem:** I try to stop the video by Javascript in the console of Firebug:
```
player.stopVideo(playerid):Void [1] [2]
```
**Question:** Why doesn't the command above work?
[1] [Source](http://code.google.com/apis/youtube/js_api_reference.html) for the part "player.stopVideo():Void"
[2] I looked up playerid with Firebug in the source. | Your video is requested with the JSAPI enabled, so you are very close! Inspecting your page revealed that you are using the HTML DOM element id of "playerid" to identify your player.
Example:
```
<embed id="playerid" width="100%" height="100%" allowfullscreen="true" allowscriptaccess="always" quality="high" bgcolor="#000000" name="playerid" style="" src="http://www.youtube.com/apiplayerbeta?enablejsapi=1&playerapiid=normalplayer" type="application/x-shockwave-flash">
```
To obtain a reference to the player and then stop the video use the following code:
```
var myPlayer = document.getElementById('playerid');
myPlayer.stopVideo();
``` | The following works well, tested on wamp server. Just replace the 11-digit ID in the following line with that of the video you want to play.
```
http://www.youtube.com/v/***LpbzbyGjJGE***?enablejsapi=1&version=3&playerapiid=ytplayer
```
Good luck.
```
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Untitled Document</title>
</head>
<body>
<a href="#" onclick="var myPlayer = document.getElementById('playerid'); myPlayer.pauseVideo();">Pause</a>
<a href="#" onclick="var myPlayer = document.getElementById('playerid'); myPlayer.playVideo();">Play</a>
<embed id="playerid" width="500px" height="400px" allowfullscreen="true" allowscriptaccess="always" quality="high" bgcolor="#000000" name="playerid" style="" src="http://www.youtube.com/v/LpbzbyGjJGE?enablejsapi=1&version=3&playerapiid=ytplayer" type="application/x-shockwave-flash">
</body>
</html>
``` | How can I stop a video with Javascript in Youtube? | [
"",
"javascript",
"youtube-api",
""
] |
I have a CPU-intensive task that I need to run on the client. Ideally, I'd like to be able to invoke the function and trigger progress events using jQuery so I can update the UI.
I know javascript does not support threading, but I've seen a few promising articles trying to mimic threading using setTimeout.
What is the best approach to use for this? Thanks. | I had a similar problem to solve recently where i needed to keep my UI thread free while crunching some data to display.
I wrote a library Background.js to handle a few scenarios: a sequential background queue (based on the WorkerQueue library), a list of jobs where each is called on every timer, and an array iterator to help break up your work into smaller chunks. Examples and code here: <https://github.com/kmalakoff/background>
Enjoy! | Basically, what you want to do is to divide the operation into pieces. So say you have 10 000 items you want to process, store them in a list and then process a small number of them with a small delay between each call. Here's a simple structure you could use:
```
function performTask(items, numToProcess, processItem) {
var pos = 0;
// This is run once for every numToProcess items.
function iteration() {
// Calculate last position.
var j = Math.min(pos + numToProcess, items.length);
// Start at current position and loop to last position.
for (var i = pos; i < j; i++) {
processItem(items, i);
}
// Increment current position.
pos += numToProcess;
// Only continue if there are more items to process.
if (pos < items.length)
setTimeout(iteration, 10); // Wait 10 ms to let the UI update.
}
iteration();
}
performTask(
// A set of items.
['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o'],
// Process two items every iteration.
2,
// Function that will do stuff to the items. Called once for every item. Gets
// the array with items and the index of the current item (to prevent copying
// values around which is unnecessary.)
function (items, index) {
// Do stuff with items[index]
// This could also be inline in iteration for better performance.
});
```
Also note that [Google Gears has support to do work on a separate thread](http://code.google.com/apis/gears/api_workerpool.html). Firefox 3.5 also introduced [its own workers that do the same thing](https://developer.mozilla.org/En/Using_web_workers) (although they follow [the W3 standard](http://www.w3.org/TR/workers/), while Google Gears uses its own methods.) | Execute Background Task In Javascript | [
"",
"javascript",
"multithreading",
""
] |
I have a situation in a WebForm where I need to recurse through the control tree to find all controls that implement a given interface.
How would I do this?
I have tried writing an extension method like this
```
public static class ControlExtensions
{
public static List<T> FindControlsByInterface<T>(this Control control)
{
List<T> retval = new List<T>();
if (control.GetType() == typeof(T))
retval.Add((T)control);
foreach (Control c in control.Controls)
{
retval.AddRange(c.FindControlsByInterface<T>());
}
return retval;
}
}
```
But it does not like the cast to `T` on line 7.
I also thought about trying the as operator but that doesn't work with interfaces.
I saw [Scott Hanselmans disucssion](http://www.hanselman.com/blog/DoesATypeImplementAnInterface.aspx) but could not glean anything useful from it.
Can anyone give me any pointers. Thanks.
Greg | I think you need to split this method into 2 parts
1. Find Controls recursively
2. Find Controls implementing the interface based off of #1
Here is #1
```
public static IEnumerable<Control> FindAllControls(this Control control) {
yield return control;
foreach ( Control child in control.Controls ) {
foreach ( var all in child.FindAllControls() ) {
yield return all;
}
}
}
```
Now to get all controls of a type, use the OfType extension method
```
var all = someControl.FindAllControls().OfType<ISomeInterface>();
``` | I would use the *as* keyword.
```
public static class ControlExtensions {
public static List<T> FindControlsByInterface<T>(this Control control) where T : class
{
List<T> retval = new List<T>();
T item = control as T;
if (item != null)
retval.Add(item);
foreach (Control c in control.Controls)
retval.AddRange(c.FindControlsByInterface<T>());
return retval;
}
}
``` | Check for implementation of an Interface recursively, c# | [
"",
"c#",
"reflection",
"interface",
"recursion",
""
] |
I'm working on a report that displays information about our company's sales locations. One of the bits of information is the 'last visit date' of the location. If the location has never been visited, I have to display (in the current language) 'Never' in red. There are several values like this one, this is just the example I'm using.
Currently my location model returns NULL (direct from the database) if the location hasn't been visited.
So my question is, do I use the
1. View to check for the NULL value, and then display 'Never' in red.
2. Controller to check for the NULL value, change it to 'Never' and then the View would detect 'Never' and display that in red
3. Should the Model call some isValid() method with the 'last visit date' which could check all manner of business rules (false on NULL, older than 6 months, etc.) then return the date or 'Never' along with a flag to tell the view to display the value in red or black.
With #3, I think that is the most flexible. But is this simple case too soon to add that advanced functionality?
Any ideas are very appreciated!
Note: Our company's framework is some in-house PHP framework written many years ago. | Since the view has to examine the value anyway to determine whether it should be red or not, I see no reason not to let it deal with `null` directly. After all, the "Never" is a display detail. | Option 3 would be the best decision. The model should be responsible for all of the data values, the controller the business logic, and the view presentation.
It's always a good idea to keep the views as simple as possible and avoid embedding code in them. While you could handle this in the controller, it would need to be duplicated in each controller that utilizes this model. That could create problems down the road if you needed to make a change.
"",
"php",
"model-view-controller",
""
] |
Do you know how to make a `<div>` editable with JavaScript? I'm looking for a cross-browser solution.
Something similar to a rich text area, but that uses an editable `<iframe>`. I need something similar for a `<div>`.
*I don't want to use a replacement textbox.* | I found out how.
You use the contentEditable property of the DOMElement, like so
```
<div onClick="this.contentEditable='true';">
lorem ipsum dolor lorem ipsum dolorlorem ipsum dolor
</div>
``` | You can do this in pure HTML
```
<div class="editable" contenteditable="true">
Editable text...
</div>
```
Hope this helps! | How to make an HTML <div> element editable cross-browser? | [
"",
"javascript",
"html",
"cross-browser",
"contenteditable",
""
] |
I'm considering the idea of creating a persistent storage like a dbms engine; what would be the benefits of creating a custom binary format over directly cPickling the object and/or using the shelve module? | Pickling is a two-faced coin.
On one side, you have a way to store your object in a very easy way. Just four lines of code and you pickle. You have the object exactly as it is.
On the other side, it can become a compatibility nightmare. You cannot unpickle objects if they are not defined in your code, exactly as they were defined when pickled. This strongly limits your ability to refactor the code, or rearrange stuff in your modules.
Also, not everything can be pickled, and if you are not strict on what gets pickled and the client of your code has full freedom of including any object, sooner or later it will pass something unpicklable to your system, and the system will go boom.
Be very careful about its use. There's no better definition of quick and dirty.
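A small illustration of that fragility (example mine): unpickling looks the class up by module and name, so the moment the definition disappears or moves — a rename, a refactor — previously saved data stops loading:

```python
import pickle

class Foo:
    def __init__(self):
        self.x = 3

blob = pickle.dumps(Foo())
assert pickle.loads(blob).x == 3  # works while Foo is importable as-is

del Foo  # simulate renaming/moving the class in a refactor

try:
    pickle.loads(blob)
    reload_ok = True
except AttributeError:
    reload_ok = False  # pickle can no longer find the class definition

assert not reload_ok
```

This is the compatibility nightmare in miniature: the pickle file is fine, but the code it depends on is gone.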
It takes 37 bytes to pickle an object with a single integer value:
```
>>> import pickle
>>> class Foo: pass
...
>>> foo = Foo()
>>> foo.x = 3
>>> print repr(pickle.dumps(foo))
"(i__main__\nFoo\np0\n(dp1\nS'x'\np2\nI3\nsb."
```
Embedded in that data is the name of the property and its type. A custom serializer for Foo (and Foo alone) could dispense with that and just store the number, saving both time and space.
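A rough Python 3 sketch of that size difference (the exact pickle byte count varies with the protocol and Python version, but the module, class and attribute names are always carried along):

```python
import pickle
import struct

class Foo:
    pass

foo = Foo()
foo.x = 3

generic = pickle.dumps(foo)        # includes module, class and attribute names
custom = struct.pack("<i", foo.x)  # just the 4-byte little-endian integer

print(len(custom))                 # 4
print(len(custom) < len(generic))  # True
```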
Another reason for a custom serialization framework is you can easily do custom validation and versioning of data. If you change your object types and need to load an old version of data it can be tricky via pickle. Your own code can be easily customized to handle older data formats.
In practice, I'd build something using the generic cPickle module and only replace it if profiling indicated it was really important. Maintaining a separate serialization framework is a significant amount of work.
One final resource you may find useful: [some synthetic serializer benchmarks](http://kbyanc.blogspot.com/2007/07/python-serializer-benchmarks.html). cPickle is pretty fast. | What are the benefits of not using cPickle to create a persistent storage for data? | [
"",
"python",
"database",
"data-structures",
"persistence",
""
] |
In some C++ code, I use integers to store lots of changing data.
To analyze my program, I want to log certain changes to some of the variables, such as how often a certain value is assigned to, and how often that assignment is redundant (the new value is the same as the old value.)
If the type were a class Foo, I'd just derive a new LoggingFoo and add my logging data to the member function(s) I was interested in, and then call the parent member function.
I'd have to update my code to use the new type, but as long as I was originally consistent with typedefs, that's a one-line change.
My problem is that the variable I want to add logging to is an int. You can't derive from built-in types in C++ (can you?)
My question is whether there's a clever way to derive from the basic types (int, float, double, etc).
The solution may be to define a new class that **effectively** is an int: it defines **every** operation an int can do and just applies that operation to a private int data member.
This strategy will work, but perhaps others have already made such a class and I can just use a "fakeint.h" header file definition. Is there such a "proxy native class wrapper" type definitions already available somewhere before I implement them myself?
I do realize of course my proxy int can't be used interchangeably with an int, especially since existing function definitions all expect an int. But for my app, this is all in an inner loop which is doing lots of simple native `+ - * ^ =` operations, not used as function arguments or anything. | Something like this...
```
template <typename T> class logging_type
{
private:
T value;
public:
logging_type() { }
logging_type (T v) : value(v) { } // allow myClass = T
operator T () { return value; } // allow T = myClass
// Add any operators you need here.
};
```
This will create a template class that's convertible to the original type in both directions. You'd need to add logging handling and overload operators for every operation used on that type in your code.
This still might not be quite what you want, because it's implicitly convertible to int (or whatever type you specify), so your code might silently convert your logging int to an int and you'd be left with incomplete logs. You can prevent this in one direction by adding an 'explicit' keyword to the constructor, but you can't do anything similar with the conversion operator. Unless you perhaps make it private... I haven't tried. Doing either of those things will somewhat defeat the purpose though.
**Edit**: Since c++11, you *can* add `explicit` to conversion operators. | You can't derive a class from int, but you should be able to make a class (e.g. `Integer`) that is interchangeable with `int` by implementing Coplein's Concrete Data Type idiom from [Advanced C++: Programming Styles and Idioms](https://rads.stackoverflow.com/amzn/click/com/0201548550) and then by overloading the type cast operator from `Integer` to `int` and defining a conversion operator from `int` to `Integer`.
here is another link that describes the basic [Idioms](http://www.mactech.com/articles/frameworks/7_4/Coplien_Advanced_C%2B%2B_Rose.html) from the book
and yet another link that I think is pretty close to what you are looking for <http://en.wikibooks.org/wiki/More_C%2B%2B_Idioms/Int-To-Type> | C++ derive from a native type | [
"",
"c++",
"class",
""
] |
Given the code below, why is the `foo(T*)` function selected ?
If I remove it (the `foo(T*)`) the code still compiles and works correctly, but G++ v4.4.0 (and probably other compilers as well) will generate two `foo()` functions: one for char[4] and one for char[7].
```
#include <iostream>
using namespace std;
template< typename T >
void foo( const T& )
{
cout << "foo(const T&)" << endl;
}
template< typename T >
void foo( T* )
{
cout << "foo(T*)" << endl;
}
int main()
{
foo( "bar" );
foo( "foobar" );
return 0;
}
``` | Formally, when comparing conversion sequences, lvalue transformations are ignored. Conversions are grouped into several categories, like *qualification adjustment* (`T*` -> `T const*`), *lvalue transformation* (`int[N]` -> `int*`, `void()` -> `void(*)()`), and others.
The only difference between your two candidates is an lvalue transformation. String literals are arrays that convert to pointers. The first candidate accepts the array by reference, and thus won't need an lvalue transformation. The second candidate requires an lvalue transformation.
So, if two candidates that are both function template specializations are equally viable looking only at the conversions, then the rule is that the more specialized one is chosen, as determined by partial ordering of the two.
Let's compare the two by looking at their signature of their function parameter list
```
void(T const&);
void(T*);
```
If we choose some unique type `Q` for the first parameter list and try to match against the second parameter list, we are matching `Q` against `T*`. This will fail, since `Q` is not a pointer. Thus, the second is at least as specialized as the first.
If we do the other way around, we match `Q*` against `T const&`. The reference is dropped and toplevel qualifiers are ignored, and the remaining `T` becomes `Q*`. This is an exact match for the purpose of partial ordering, and thus deduction of the transformed parameter list of the second against the first candidate succeeds. Since the other direction (against the second) didn't succeed, the second candidate is *more* specialized than the first - and in consequence, overload resolution will prefer the second, if there would otherwise be an ambiguity.
At `13.3.3.2/3`:
> Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if [...]
>
> * S1 is a proper subsequence of S2 (comparing the conversion sequences in the canonical form
> defined by 13.3.3.1.1, excluding any Lvalue Transformation; the identity conversion sequence is considered to be a subsequence of any non-identity conversion sequence) or, if not that [...]
Then `13.3.3/1`
> * let ICSi(F) denote the implicit conversion sequence that converts the i-th argument in the list to the type of the i-th parameter of viable function F. 13.3.3.1 defines the implicit conversion sequences and 13.3.3.2 defines what it means for one implicit conversion sequence to be a better conversion sequence or worse conversion sequence than another.
>
> Given these definitions, a viable function F1 is defined to be a better function than another viable function F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and then [...]
>
> * F1 and F2 are function template specializations, and the function template for F1 is more specialized than the template for F2 according to the partial ordering rules described in 14.5.5.2, or, if not that, [...]
Finally, here is the table of implicit conversions that may participate in an standard conversion sequence at `13.3.3.1.1/3`.
[Conversion sequences http://img259.imageshack.us/img259/851/convs.png](http://img259.imageshack.us/img259/851/convs.png) | The full answer is quite technical.
First, string literals have `char const[N]` type.
Then there is an implicit conversion from `char const[N]` to `char const*`.
So both your template function match, one using reference binding, one using the implicit conversion. When they are alone, both your template functions are able to handle the calls, but when they are both present, we have to explain why the second foo (instantiated with T=char const[N]) is a better match than the first (instantiated with T=char). If you look at the overloading rules (as given by litb), the choice between
```
void foo(char const (&x)[4));
```
and
```
void foo(char const* x);
```
is ambiguous (the rules are quite complicated, but you can check by writing non-template functions with such signatures and seeing that the compiler complains). In that case, the choice goes to the second one because that one is more specialized (again, the rules for this partial ordering are complicated, but in this case it is because you can pass a `char const[N]` to a `char const*` but not a `char const*` to a `char const[N]`, in the same way as `void bar(char const*)` is more specialized than `void bar(char*)` because you can pass a `char*` to a `char const*` but not vice versa). | What are the rules for choosing from overloaded template functions? | [
"",
"c++",
"templates",
""
] |
When I execute the following code, I'm getting results such as:
```
ID column1 column2
34 NULL NULL
34 Org13 Org13
36 NULL NULL
36 NULL Org2
36 Org4 NULL
41 NULL NULL
41 NULL Org5
41 Org3 NULL
```
I want my results to look like:
```
ID column1 column2
34 Org13 Org13
36 Org4 Org2
41 Org3 Org5
```
I've got two tables: Table1 and Table2. Table2 is a lookup table with the following fields: id, name
Table1 has the following fields (id, column1, column2). column1 and column2 both have foreign key relationships to the lookup table:
```
FK_1: Table1.column1-Table2.id
FK_2: Table1.column2-Table2.id
```
Since I want to pull out the values for column1 and column2, and since both of these values are lookups on the same field (Table2.name), I suspect I need to do inner Selects.
My code is below. How can I change this so that it produces the results desired, instead of the ones I'm getting? Thanks in advance!
```
DECLARE @value INT
SET @value = 14
SELECT DISTINCT
Table1.[id] AS ID
, ( SELECT DISTINCT
Table2.[name]
WHERE
Table1.column1 =
Table2.id ) AS column1
, ( SELECT DISTINCT
Table2.[name]
WHERE
Table1.column2 =
Table2.id ) AS column2
FROM
Table1
,Table2
WHERE
Table1.[id] = @value
``` | gbn, I think you meant to write
```
DECLARE @value INT
SET @value = 1
SELECT --??? DISTINCT
t1.[id] AS ID, --- missed comma
table2a.name,
table2b.name
FROM
Table1 t1
JOIN Table2 table2a ON t1.column1 = table2a.id
JOIN Table2 table2b ON t1.column2 = table2b.id -- you have t1.column1 oops
WHERE
t1.[id] = @value
``` | ```
/*
create table table1(id int, col1 int, col2 int);
create table table2(id int, name varchar(10) );
insert into table2 values(1, 'org 1');
insert into table2 values(2, 'org 2');
insert into table2 values(3, 'org 3');
insert into table2 values(4, 'org 4');
insert into table1 values(1, 1, 2);
insert into table1 values(2, 2, 2);
insert into table1 values(3, 2, 3);
insert into table1 values(4, 4, 1);
*/
select
a.id,
b.name as column1,
c.name as column2
from
table1 a
join table2 b on b.id = a.col1
join table2 c on c.id = a.col2;
id column1 column2
----- ---------- ----------
1 org 1 org 2
2 org 2 org 2
3 org 2 org 3
4 org 4 org 1
4 record(s) selected [Fetch MetaData: 3/ms] [Fetch Data: 0/ms]
[Executed: 7/7/09 4:07:25 PM EDT ] [Execution: 1/ms]
``` | How to properly construct SQL subquery in this code? | [
"",
"sql",
"sql-server",
"t-sql",
"subquery",
""
] |
Ok, please bear with me as I can be a bit of a wood duck at times...
I have a gridview in asp.net that will be pulling back many thousands of records. This is all well and good apart from the performance aspect of things. I am binding my Gridview to a dataset and this pulls back every record in the query. I want to change this so that the gridview only pulls back the records that it is currently displaying, and then when the user moves to the next page it goes and gets the next chunk of data, etc.
Below is how I normally bind my gridviews and handle the paging and sorting, which works very well for me with small data amounts, but not so good for large data amounts. I use SubSonic as my DAL, which is cool. Can anyone point me in the right direction on how best to achieve paging as described above?
Thanks in advance...
```
public SortDirection SortDir
{
get
{
if (ViewState["sortDirection"] == null)
{
ViewState["sortDirection"] = SortDirection.Ascending;
} return (SortDirection)ViewState["sortDirection"];
}
set
{
ViewState["sortDirection"] = value;
}
}
DataSet ds = new DataSet();
DataView dv = new DataView();
protected void Page_Load(object sender, EventArgs e)
{
if (!IsPostBack)
{
BindGrid();
GridView1.DataSource = dv;
GridView1.DataBind();
}
}
private DataView BindGrid()
{
ds = new Query(AnthemWeb.DAL.Item.Schema).ExecuteDataSet();
if (ViewState["sortExpr"] != null)
{
dv = new DataView(ds.Tables[0]);
dv.Sort = (string)ViewState["sortExpr"];
}
else
{
dv = ds.Tables[0].DefaultView;
}
return dv;
}
protected void GridView1_PageIndexChanging(object sender, GridViewPageEventArgs e)
{
GridView1.PageIndex = e.NewPageIndex;
GridView1.DataSource = BindGrid();
GridView1.DataBind();
}
protected void GridView1_Sorting(object sender, GridViewSortEventArgs e)
{
string stExp = e.SortExpression;
string stDir = string.Empty;
if (SortDir == SortDirection.Ascending)
{
SortDir = SortDirection.Descending;
stDir = "DESC";
}
else
{
SortDir = SortDirection.Ascending;
stDir = "ASC";
}
ViewState["sortExpr"] = e.SortExpression + " " + stDir;
GridView1.DataSource = BindGrid();
GridView1.DataBind();
}
protected void GridView1_SelectedIndexChanged(object sender, EventArgs e)
{
int selectedRowIndex;
selectedRowIndex = GridView1.SelectedIndex;
GridViewRow row = GridView1.Rows[selectedRowIndex];
string ID = row.Cells[0].Text;
Response.Redirect("customer-details.aspx?ID=" + ID, false);
}
``` | What version of ASP.NET do you use?
There is a DataPager control that comes with ListView in .NET 3.5
See [system.web.ui.webcontrols.datapager](http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.datapager.aspx)
SubSonic also supports paging; you should call the Paged method inline in the query. See [SubSonic Paging](http://subsonicproject.com/docs/Paging) | The `Fill()` method of the `DbDataAdapter` class has a convenient overload for this very purpose:
```
public int Fill (DataSet dataSet, int startRecord, int maxRecords, string srcTable)
```
In this overload, you can provide the record number to start with and the maximum records to retrieve from that starting point. This enables you to retrieve only a subset of records from the datasource based on the current page index. All you need to keep track of is the current record index that is displayed.
So you would need to modify your DAL to provide this overload. I haven't used SubSonic so I can't tell if that feature exists in it. | Paging large amounts of Data in a Gridview | [
"",
"c#",
"asp.net",
"gridview",
"subsonic",
"paging",
""
] |
I'd like to know if my textBox1 variable has the ABCAttribute. How can I check this? | You need a handle to the class (type) in which textBox1 exists:
```
Type myClassType = typeof(MyClass);
MemberInfo[] members = myClassType.GetMember("textBox1",
BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance);
if(members.Length > 0) //found a member called "textBox1"
{
    object[] attribs = members[0].GetCustomAttributes(typeof(ABCAttribute), false);
if(attribs.Length > 0) //found an attribute of type ABCAttribute
{
ABCAttribute myAttrib = attribs[0] as ABCAttribute;
//we know "textBox1" has an ABCAttribute,
//and we have a handle to the attribute!
}
}
```
This is a bit nasty; one possibility is to roll it into an extension method, used like so:
```
MyObject obj = new MyObject();
bool hasIt = obj.HasAttribute("textBox1", typeof(ABCAttribute));
public static bool HasAttribute(this object item, string memberName, Type attribute)
{
MemberInfo[] members = item.GetType().GetMember(memberName, BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance);
if(members.Length > 0)
{
        object[] attribs = members[0].GetCustomAttributes(attribute, false);
        if(attribs.Length > 0)
{
return true;
}
}
return false;
}
Assuming textBox1 is, well, a TextBox control, then the answer is likely, "No, it doesn't have the attribute." Attributes are assigned to a *Type*, not an instance of the type. You can look up what attributes are on any TextBox that ever was, is, or will be created right now (for a particular version of the framework). | (.net) How to check if a given variable is defined with an attribute | [
"",
"c#",
".net",
"vb.net",
"attributes",
""
] |
Say I have this class :
```
public class BaseJob{
String name;
public void setName(String name){
this.name=name;
}
public String getName()
{
return name;
}
}
```
and another class that extends it :
```
public class DetailedJob extends BaseJob{
    public void doThing() { /* ... */ }
}
```
Furthermore, I have this method in another class :
```
List<BaseJob> getSomeJobs()
```
Now, my problem is:
is it possible to avoid casting each item sequentially in the returned list of getSomeJobs, if I know for sure that every BaseJob returned is indeed a DetailedJob?
Put differently, is there another solution than the following to cast all items in the list:
```
List<BaseJob> baseJobList = getSomeJobs();
List<DetailedJob> detailedJobList = new ArrayList<DetailedJob>();
for (BaseJob baseJob : baseJobList)
detailedJobList.add((DetailedJob) baseJob);
``` | Well, there's:
```
List<BaseJob> baseJobList = getSomeJobs();
@SuppressWarnings("unchecked")
List<DetailedJob> detailedJobList = (List) baseJobList;
```
The downside of this is that if any of the jobs in the list *aren't* detailed jobs, the exception will only be thrown when someone tries to fetch it. Also, if a new non-detailed job is added to `baseJobList` afterwards, that could screw up anyone using `detailedJobList`. Basically you've lost a lot of type safety. In some cases you may not care, but it's not something you should do lightly. | Probably what you want to do is parameterising the class that defines `getSomeJobs`.
```
public final class JobHolder<T extends BaseJob> {
public List<T> getSomeJobs() {
...
```
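If the element type genuinely can't be carried in the container's declared type, a checked copy sits between the question's loop and the unchecked cast: it fails fast on the first bad element instead of deferring the `ClassCastException` to read time. A sketch (`castAll` is an illustrative helper, not a library method):

```java
import java.util.ArrayList;
import java.util.List;

class Jobs {
    static <T> List<T> castAll(List<?> src, Class<T> type) {
        List<T> out = new ArrayList<>();
        for (Object o : src) {
            out.add(type.cast(o)); // throws ClassCastException here if o is not a T
        }
        return out;
    }
}
```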
Generally unchecked casts indicate a design problem. They are unavoidable in certain situations such as low-level implementations and when dealing with serialisation. | How to cast a complete list without iterating through it in Java | [
"",
"java",
"collections",
""
] |
This is a basic string reverse program and I want to do some level of exception handling in it. But during compilation it gives me the error "Not all code paths return a value". I am not able to find out why.
```
public static string Reverse(string s)
{
try
{
if (string.IsNullOrEmpty(s))
{
throw new NullReferenceException();
}
char[] c = s.ToCharArray();
int start = 0;
int end = c.Length - 1;
char temp;
while (start < end)
{
temp = c[start];
c[start] = c[end];
c[end] = temp;
start++;
end--;
}
return new string(c);
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
}
```
Thanks guys... I changed the code to something like this:
```
public static string Reverse(string s)
{
if (!string.IsNullOrEmpty(s))
{
char[] c = s.ToCharArray();
int start = 0;
int end = c.Length - 1;
char temp;
while (start < end)
{
temp = c[start];
c[start] = c[end];
c[end] = temp;
start++;
end--;
}
return new string(c);
}
else return s;
}
``` | If an exception happens then there is no return statement being executed. Walk it through.
The best remedy (my choice) would be to remove the entire try/catch. A utility function like Reverse should not handle (its own) exceptions. | If you throw an exception before the return statement, the `catch` handler is called. After the catch handler executes, it proceeds past it (since there's no `return` or `throw` statement in it), at which point it reaches the end of the method without returning a value.
**Edit 2 (major bug):** You throw a `NullReferenceException`, and proceed to catch it and "eat" it, so it's pointless (in this form). You should do your parameter validation before entering a `try` block, plus this method shouldn't use a `try` block at all (it'll make it slower for no useful reason).
Edit: on a side note:
```
char[] characters = s.ToCharArray();
Array.Reverse(characters);
return new string(characters);
``` | getting error not all code paths return value by c# compiler | [
"",
"c#",
"string",
"exception",
"compilation",
""
] |
> **Possible Duplicate:**
> [Equivalence of “With…End With” in c#?](https://stackoverflow.com/questions/1063429/equivalence-of-with-end-with-in-c)
There was one feature of VB that I really like...the `With` statement. Does C# have any equivalent to it? I know you can use `using` to not have to type a namespace, but it is limited to just that. In VB you could do this:
```
With Stuff.Elements.Foo
.Name = "Bob Dylan"
.Age = 68
.Location = "On Tour"
.IsCool = True
End With
```
The same code in C# would be:
```
Stuff.Elements.Foo.Name = "Bob Dylan";
Stuff.Elements.Foo.Age = 68;
Stuff.Elements.Foo.Location = "On Tour";
Stuff.Elements.Foo.IsCool = true;
``` | Not really, you have to assign a variable. So
```
var bar = Stuff.Elements.Foo;
bar.Name = "Bob Dylan";
bar.Age = 68;
bar.Location = "On Tour";
bar.IsCool = true;
```
Or in C# 3.0 and above:
```
var bar = new FooType
{
Name = "Bob Dylan",
Age = 68,
Location = "On Tour",
    IsCool = true
};
Stuff.Elements.Foo = bar;
``` | Aside from object initializers (usable only in constructor calls), the best you can get is:
```
var it = Stuff.Elements.Foo;
it.Name = "Bob Dylan";
it.Age = 68;
...
``` | What's the C# equivalent to the With statement in VB? | [
"",
"c#",
"vb.net",
""
] |
What is the right way to convert a raw array of bytes into an Image in Java SE?
The array consists of bytes, where every three bytes represent one pixel, with each byte holding the corresponding RGB component.
Can anybody suggest a code sample?
Thanks,
Mike | Assuming you know the height and width of the image.
```
BufferedImage img=new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
for(int r=0; r<height; r++)
for(int c=0; c<width; c++)
{
int index=(r*width+c)*3; // 3 bytes per pixel
int red=colors[index] & 0xFF;
int green=colors[index+1] & 0xFF;
int blue=colors[index+2] & 0xFF;
int rgb = (red << 16) | (green << 8) | blue;
img.setRGB(c, r, rgb);
}
```
Roughly. This assumes the pixel data is encoded as a set of rows; and that the length of colors is 3 \* width \* height (which should be valid). | You can do it using Raster class. It's better because it does not require iterating and copying of byte arrays.
```
byte[] raw = new byte[width*height*3]; // raw bytes of our image
DataBuffer buffer = new DataBufferByte(raw, raw.length);
//The most difficult part of awt api for me to learn
SampleModel sampleModel = new ComponentSampleModel(DataBuffer.TYPE_BYTE, width, height, 3, width*3, new int[]{2,1,0});
Raster raster = Raster.createRaster(sampleModel, buffer, null);
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
image.setData(raster);
``` | How to convert array of bytes into Image in Java SE | [
"",
"java",
"image",
"graphics",
""
] |
I'm having a bit of a pain here and I just can't figure out what's wrong.
I have an ASP.net project which I deployed on a server. At first everything seemed all right, no errors whatsoever. However, as a last addition I wanted to add a search function to a fairly large list, so I added the following syntax to my mark-up:
```
<td>
Search Server:
<asp:TextBox ID="txtSearch" runat="server" />
<asp:Button ID="btnLookup" runat="server" OnClick="btnLookup_Clicked" Text="Search" />
<asp:Label ID="lblFeedback" runat="server" />
</td>
```
and the following in code behind:
```
protected void btnLookup_Clicked(object sender, EventArgs e)
{
lblFeedback.Text = "";
Session["IsSearch"] = true;
LoadServerList();
}
```
When I run this locally it works just fine as I expect.
HOWEVER!
When I copy these files to the server I get a compilation error:
**Compiler Error Message**: CS1061: 'ASP.ntservice_reports_reports_serverlist_manage_aspx' does not contain a definition for 'btnLookup_Clicked' and no extension method 'btnLookup_Clicked' accepting a first argument of type 'ASP.ntservice_reports_reports_serverlist_manage_aspx' could be found (are you missing a using directive or an assembly reference?)
It says there is nothing that handles my Clicked event, although it does work when I run it through Visual Studio.
any ideas?
EDIT:
What I tried myself is
* renaming button
* deleting and readding the button
* add through designer
* renaming click event
* removing the event from markup allows normal execution ... :/ | Is your project a web site or a web application? I would guess that it's a web application project and that perhaps you don't have the latest DLL deployed from the bin folder. Check the versions between your machine and the server to verify that they are the same. | I think this error is generated when you change an object Name, ex.
Text1 -» txtSerialNo I had the same problem but I could fix it. Here is the solution:
Go to Split Mode, click on the textbox/object, in the code remove the line
```
ontextchanged="txtSerialNo_TextChanged"
```
From:
```
<asp:TextBox ID="txtSerialNo" runat="server"
ontextchanged="txtSerialNo_TextChanged"></asp:TextBox> <!-- remove this line -->
```
To:
```
<asp:TextBox ID="txtSerialNo" runat="server"></asp:TextBox>
```
I hope it works for you. Bless you. | ASP.net CS1061 Compilation Error on deployment | [
"",
"c#",
"asp.net",
"iis",
""
] |
I have experience doing this with single file uploads using `<input type="file">`. However, I am having trouble uploading more than one at a time.
For example, I'd like to select a series of images and then upload them to the server, all at once.
*It would be great to use a single file input control, if possible.*
Does anyone know how to accomplish this? | This is possible in [HTML5](http://en.wikipedia.org/wiki/File_select#Multiple_file_selection). Example (PHP 5.4):
```
<!doctype html>
<html>
<head>
<title>Test</title>
</head>
<body>
<form method="post" enctype="multipart/form-data">
<input type="file" name="my_file[]" multiple>
<input type="submit" value="Upload">
</form>
<?php
if (isset($_FILES['my_file'])) {
$myFile = $_FILES['my_file'];
$fileCount = count($myFile["name"]);
for ($i = 0; $i < $fileCount; $i++) {
?>
<p>File #<?= $i+1 ?>:</p>
<p>
Name: <?= $myFile["name"][$i] ?><br>
Temporary file: <?= $myFile["tmp_name"][$i] ?><br>
Type: <?= $myFile["type"][$i] ?><br>
Size: <?= $myFile["size"][$i] ?><br>
Error: <?= $myFile["error"][$i] ?><br>
</p>
<?php
}
}
?>
</body>
</html>
```
Here's what it looks like in Chrome after selecting 2 items in the file dialog:

And here's what it looks like after clicking the "Upload" button.

This is just a sketch of a fully working answer. See [PHP Manual: Handling file uploads](http://php.net/manual/en/features.file-upload.php) for more information on proper, secure handling of file uploads in PHP. | There are a few things you need to do to create a multiple file upload; it's pretty basic actually. You don't need to use Java, Ajax, or Flash. Just build a normal file upload form starting off with:
```
<form enctype="multipart/form-data" action="post_upload.php" method="POST">
```
Then the key to success:
`<input type="file" name="file[]" multiple />`
do NOT forget those brackets!
In post_upload.php, try the following:
```
<?php print_r($_FILES['file']['tmp_name']); ?>
```
Notice you get an array with tmp_name data, which means you can access each file with a third pair of brackets holding the file 'number', for example:
```
$_FILES['file']['tmp_name'][0]
```
You can use php count() to count the number of files that was selected. Goodluck widdit! | How can I select and upload multiple files with HTML and PHP, using HTTP POST? | [
"",
"php",
"html",
"http",
"post",
"upload",
""
] |
Which is more conventional in C#?
```
class Foo
{
private string _first;
private string _second;
public Foo(string first)
{
_first = first;
_second = string.Empty;
}
}
```
or
```
class Foo
{
private string _first;
private string _second = string.Empty;
public Foo(string first)
{
_first = first;
}
}
``` | Don't know about convention, but the *safer* way is initializing members as in your second example. You may have multiple constructors and forget to do the init in one of them. | Technically, there is a small variation in behavior between your snippets when calling virtual functions from the base class's constructor, but making virtual function calls during construction is bad practice anyway, so let's ignore that.
I'd prefer the second. If you have overloaded constructors, it saves some copy/pasting, and no one has to remember to write the assignment statement - the compiler takes care of all that.
If calculation of the assigned value is too complex or cannot be done in a field initializer, I'd have a protected (or private) default constructor:
```
class Foo
{
private Foo() { _second = SomeComplexCalculation(); }
public Foo(string first) : this()
{
_first = first;
}
}
```
If the order of assignment matters, then I'll have a private function do the initialization. | What is the conventional way to assign default values to member variables in C#? | [
"",
"c#",
"constructor",
"conventions",
""
] |
**This question is based on** [*this other question*](https://stackoverflow.com/questions/1145228/) **of mine, and uses all of the same basic info. That link shows my table layouts and the basic gist of the simple join.**
I would like to write another query that would select EVERY record from ***Table1***, and simply sort them by whether or not Value is less than the linked Threshold.
Again, I appreciate anyone who is willing to take a stab at this. Databases have never been a strong point of mine. | ```
SELECT t1.LogEntryID, t1.Value, t1.ThresholdID,
case when t1.Value < t2.threshold then 1 else 0 end as Rank
FROM Table1 t1
INNER JOIN Table2 t2 ON t1.ThresholdID = t2.ThresholdID
ORDER By Rank
```
You can add `DESC` after `ORDER By Rank` if you want the reverse order. | Similar to the answer given by OrbMan, but I do prefer the CASE to be explicit in the ORDER BY, so that you are not forced to display your order by column.
```
SELECT
t1.LogEntryID
,t1.Value
,t1.ThresholdID
FROM
Table1 t1
JOIN Table2 t2 ON t2.ThresholdID = t1.ThresholdID
ORDER BY
CASE WHEN t1.Value < t2.threshold
THEN 1
ELSE 0
END ASC
``` | How Can I Write a SQL Query That Will Sort the Results Based on a Condition? | [
"",
"sql",
"sql-order-by",
""
] |
I'm trying to wrap my brain around how to do this. We need to provide some files within a directory from our servers to our clients' servers via a PHP/Web interface using FTP. I've looked at the FTP capabilities built into PHP and some custom classes, but someone suggested cURL might be a better option. We will have the FTP login credentials in our database for the application to access. With that information, can we use cURL FTP capabilities to do the transfers, knowing our server has libcurl installed, but the clients' servers may not? Do both servers have to have it for the FTP function to work?
Or am I completely going about this the wrong way, and have misunderstood how to use cURL and should be looking into an FTP PHP class? | Just to make it super clear, there are 2 computers involved:
* Your server, the one that's supposed to provide files to the client using the FTP protocol. That server does not need to have a web server (or PHP) running. The only thing it needs is an FTP server. It also needs to have permissions configured in such a way that there is an account that can access the files through FTP.
* Your client's server, the one that's supposed to retrieve files from your server using the FTP protocol. That server needs to have PHP installed, with libCurl. The software on that server needs to access your server using the FTP protocol, providing the user credentials that you configured on your box.
Hope that helps. | libCURL is a library; it acts as the client.
Your clients need to be running a FTP server but do not need libCURL. | To use cURL FTP does both servers need the PHP cURL library installed? | [
"php",
"ftp",
"curl"
] |
Hi, I need some help writing a simple query. I want to display results based on the number of employees in each dept.
Here is my table
```
empid dept sal
emp001 d001 10000
emp002 d001 10000
emp003 d002 20000
emp004 d001 10000
emp005 d003 5000
emp006 d003 5000
```
Expected result
```
empid dept sal
emp001 d001 10000
emp002 d001 10000
emp004 d001 10000
emp005 d003 5000
emp006 d003 5000
emp003 d002 20000
```
so dept d001 contains 3 employees, so it should come first; dept d003 contains 2 employees, so it comes next; and so on
Thanks in advance,
Nagu | Just tried this in MySQL, and with this query:
> SELECT e1. \* , count( e2.empid ) AS c
> FROM `employees` e1 LEFT JOIN
> employees e2 ON e1.dept = e2.dept
> GROUP BY e1.empid ORDER BY c DESC
I got this result:
```
empid dept sal c
emp001 d001 10000 3
emp004 d001 10000 3
emp002 d001 10000 3
emp006 d003 5000 2
emp005 d003 5000 2
emp003 d002 20000 1
```
And then you could of course sort on empid to get the emp002 before the emp004 etc :)
Edit: A better MySQL query would avoid selecting \* and would escape all table and field names with backticks, something like this:
> SELECT `e1`.`empid`, `e1`.`dept`,
> `e1`.`sal`, COUNT(`e2`.`empid`) AS `c`
> FROM `employees` `e1` LEFT JOIN
> `employees` `e2` ON `e1`.`dept` =
> `e2`.`dept` GROUP BY `e1`.`empid`
> ORDER BY `c` DESC, `e1`.`empid` | The answer depends heavily on your DBMS!
With Oracle 8+ you can use analytic functions :
```
select empid, dept, sal
from MyTable
order by count(empid) over (partition by dept) desc
``` | sql sub query | [
"sql"
] |
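The self-join counting trick from the MySQL answer above can be reproduced with Python's sqlite3 and the exact sample data from the question (a sketch in SQLite rather than the asker's unspecified DBMS):

```python
import sqlite3

# The sample data from the question, loaded into an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (empid TEXT, dept TEXT, sal INTEGER);
    INSERT INTO employees VALUES
        ('emp001', 'd001', 10000), ('emp002', 'd001', 10000),
        ('emp003', 'd002', 20000), ('emp004', 'd001', 10000),
        ('emp005', 'd003', 5000),  ('emp006', 'd003', 5000);
""")

# Self-join: each employee row is paired with every row in the same dept,
# so COUNT(e2.empid) is the department's headcount.
rows = conn.execute("""
    SELECT e1.empid, e1.dept, e1.sal, COUNT(e2.empid) AS c
    FROM employees e1
    LEFT JOIN employees e2 ON e1.dept = e2.dept
    GROUP BY e1.empid
    ORDER BY c DESC, e1.empid
""").fetchall()

for row in rows:
    print(row)
```

Each row carries `c`, the size of its department, so `ORDER BY c DESC` puts the biggest department first, exactly as in the expected result.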
I have a table created with:
```
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[TestFeature1](
[Id] [nvarchar](50) NOT NULL,
[Leng] [decimal](18, 0) NOT NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
```
I inserted data with this:
```
insert into TestFeature1 (id,leng) values ('1',100);
insert into TestFeature1 (id,leng) values ('1 ',1000);
```
When I select from the table with this:
```
select * from TestFeature1 where id='1';
```
this returns 2 rows
```
1 100
1 1000
```
Why would this be? Why is it returning the version with the space on the end even when the query specifies that the value is only a 1 on its own, with no space? | Interestingly, it works if you use LIKE:
```
select * from TestFeature1 where id LIKE '1'
```
**Edit:** after a bit more research I found others have had the same discussion as us. See [here](http://www.eggheadcafe.com/conversation.aspx?messageid=30627763&threadid=30627736). That particular comment is halfway through the discussion. But the outcome was as we have found: either use LIKE as demonstrated above, or add a second condition to check that the DATALENGTH of the column and the supplied value are the same. I prefer the LIKE route.
What is interesting is that ANSI\_PADDING is an insertion-time setting, and that it is honoured for VARCHAR but not for NVARCHAR.
Secondly, if returning a column with trailing spaces, or using the '=' for equality, there seems to be an implicit truncation of trailing space that occurs.
```
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING OFF
GO
CREATE TABLE [dbo].[TestFeature1](
[Id] [varchar](50) NOT NULL,
[Leng] [decimal](18, 0) NOT NULL
) ON [PRIMARY]
GO
insert into TestFeature1 (id,leng) values ('1',100); insert into TestFeature1 (id,leng) values ('1 ',1000);
-- verify no spaces inserted at end
select '['+id+']', * from TestFeature1
select datalength(id), * from TestFeature1
go
DROP TABLE [dbo].[TestFeature1]
go
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING OFF
GO
CREATE TABLE [dbo].[TestFeature1](
[Id] [nvarchar](50) NOT NULL,
[Leng] [decimal](18, 0) NOT NULL
) ON [PRIMARY]
GO
insert into TestFeature1 (id,leng) values ('1',100); insert into TestFeature1 (id,leng) values ('1 ',1000);
-- verify spaces inserted at end, and ANSI_PADDING OFF was not honoured by NVARCHAR
select '['+id+']', * from TestFeature1
select datalength(id), * from TestFeature1
go
``` | Why would SqlServer select statement select rows which match and rows which match and have trailing spaces | [
"sql",
"sql-server",
"select"
] |
Is it possible to define an Interface with optional implementation methods? For example I have the following interface definition as IDataReader in my core library:
```
public interface IDataReader<T> {
void StartRead(T data);
void Stop();
}
```
However, in my current implementations, the Stop() method has never been used or implemented. In all my implementation classes, this method has to be implemented with throw new NotImplementedException() as a default:
```
class MyDataReader : IDataReader<MyData> {
...
public void Stop()
{
// this empty implementation looks like unfinished code
throw new NotImplementedException();
}
```
Of course, I can remove the throw exception code and leave it empty.
When I designed this data reader interface, I thought it should provide a way to stop the reading process. Maybe we will use Stop() sometime in the future.
Anyway, I'm not sure if it is possible to make this Stop() method optional. The only way I can think of is to define two interfaces, one with Stop() and another without, such as IDataReader and IDataReader2. Another option is to break this one into two interfaces like this:
```
interface IDataReader<T> {
void StartRead(T data);
}
interface IStop {
void Stop();
}
```
In my implementation cases, I have to cast (or use `as IStop`) to check whether my implementation supports the Stop() method:
```
reader.StartRead(myData);
....
// some where when I need to stop reader
IStop stoppable = reader as IStop;
if (stoppable != null ) stoppable.Stop();
...
```
Still, I have to write that code. Any suggestions? I'm not sure if there is any way to define optional implementation methods in an interface in .NET or C#. | For info, another approach fairly common in the BCL is `Supports*` on the same interface, i.e.
```
bool SupportsStop {get;}
void Stop();
```
(examples of this, for example, in `IBindingList`).
I'm not pretending that it is "pure" or anything, but it works - but it means you now have **two** methods to implement per feature, not one. Separate interfaces (`IStoppableReader`, for example) may be preferable.
For info, if the implementation is common between **all** implementations, then you can use extension methods; for a trivial example:
```
public static void AddRange<T>(this IList<T> list, IEnumerable<T> items) {
foreach(T item in items) list.Add(item);
}
```
(or the equivalent for your interface). If you provide a more specialized version against the concrete type, then it will take precedence (but only if the caller knows about the variable **as** the concrete type, not the interface). So with the above, anyone knowingly using a `List<T>` still uses `List<T>`'s version of `AddRange`; but if they have a `List<T>` but only know about it as `IList<T>`, it'll use the extension method. | Interesting. I'll have to quote you here:
> However, in my current
> implementations, the Stop() method has
> never been used or implemented. In all
> my implementation classes, this method
> has to be implemented with throw
> NotImplementedExcetion() as default:
If this is the case, then you have two options:
1. Remove the Stop() method from the interface. If it isn't used by *every* implementor of the interface, it *clearly does not belong there*.
2. Instead of an interface, convert your interface to an abstract base class. This way there is no need to override an empty Stop() method until you need to.
**Update** The only way I think methods can be made optional is to assign a method to a variable (of a delegate type similar to the method's signature) and then evaluate whether the method is null before attempting to call it anywhere.
This is usually done for event handlers, wherein the handler may or may not be present, and can be considered optional. | Define optional implementation methods in Interface? | [
"c#",
".net-3.5"
] |
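The split-interface idea above (`IDataReader` plus a separate `IStop`, with a runtime capability check) is language-agnostic. Here is a rough Python analogue, where the C# `as IStop` cast becomes an `isinstance` check (class and method names are illustrative, not from any real library):

```python
from abc import ABC, abstractmethod

class DataReader(ABC):
    @abstractmethod
    def start_read(self, data): ...

class Stoppable(ABC):
    """Separate capability interface, like the IStop split in the question."""
    @abstractmethod
    def stop(self): ...

class MyDataReader(DataReader, Stoppable):
    def __init__(self):
        self.stopped = False

    def start_read(self, data):
        return "reading %s" % data

    def stop(self):
        self.stopped = True

reader = MyDataReader()
reader.start_read("my data")

# The C# 'reader as IStop' cast becomes a runtime capability check:
if isinstance(reader, Stoppable):
    reader.stop()

print(reader.stopped)  # True
```

A reader that does not inherit `Stoppable` simply fails the `isinstance` check, so callers never invoke a stub that throws.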
I have a List and a Button. When the List's Count == 0, I would like the button's Visible property to be false.
How do I do this using Data Binding?
Thanks in advance,
**Added**
I have asked this so that I can try to avoid checking the Count on the list in code every time I add or remove an item to or from the list. But if there is no solution then I will continue to do it that way. | Create a DTO (Data Transfer Object) that exposes all your data that you intend to bind to UI elements. Create a property in the DTO (with an appropriate name):
```
public bool ButtonVisible
{
get { return myListCount != 0; }
}
```
Add a `BindingSource` to your form and set its `DataSource` to your DTO type.
Click on the Button, go to **Properties**. Expand the **DataBindings** node, and click **Advanced**.
Scroll down the list in the left hand pane, and select Visible. Set its binding to your property exposed via the BindingSource. | **The General Answer**
Write an event handler and register it with your list-control's bindings object
**A Specific Example**
```
class MyForm : Form {
protected Button myButton;
BindingSource myBindingSource;
DataGridView dgv;
public MyForm(List someList) {
myBindingSource = new BindingSource();
dgv = new DataGridView();
this.myButton = new Button();
this.Controls.Add(myButton);
this.Controls.Add(dgv);
myBindingSource.DataSource = someList;
dgv.DataSource = myBindingSource;
myBindingSource.ListChanged += new ListChangedEventHandler(ListEmptyDisableButton);
}
protected void ListEmptyDisableButton (object sender, ListChangedEventArgs e) {
this.myButton.Visible = this.dgv.RowCount <= 0 ? false : true;
}
}
```
*PS - I'd vote down the favorite answer. A Data Transfer Object (DTO) misses the whole point and functionality of .NET Binding architecture*
"c#",
"winforms",
"data-binding",
".net-2.0"
] |
I'm building a script which will open a saved text file, export the contents to an array and then dump the contents in a database. So far I've been able to get the file upload working quite happily and can also open said file.
The trouble I'm having is that the contents of the file are variable: they have a fixed structure, but the contents will change every time. The structure of the file is that each "section" is separated by a blank line.
I've used php's file() to get an array ... I'm not sure if there's a way to then split that array up every time it comes across a blank line?
```
$file = $target_path;
$data = file($file) or die('Could not read file!');
```
Example output:
```
[0] => domain.com
[1] => # Files to be checked
[2] => /www/06.php
[3] => /www/08.php
[4] =>
[5] => domain2.com
[6] => # Files to be checked
[7] => /cgi-bin/cache.txt
[8] => /cgi-bin/log.txt
[9] =>
[10] => domain3.com
[11] => # Files to be checked
[12] => /www/Content.js
[13] =>
```
I know that Field 0 and 1 will be constants, they will always be a domain name then that hash line. The lines thereafter could be anywhere between 1 line and 1000 lines.
I've looked at array\_chunk(), which is close to what I want, but it works on a numerical value; it would be good if there were something that worked on a specified value (like a new line, or a comma, or something of that sort!).
Lastly, apologies if this has been answered previously. I've searched the usual places a few times for potential solutions.
Hope you can help :)
Foxed | You could just do something like this. It reads the file line by line (with fgets()) rather than loading it all with file(), which uses less memory; that might be important if you use larger files.
```
$handle = fopen('blah', 'r');
$blocks = array();
$currentBlock = array();
while (!feof($handle)) {
$line = fgets($handle);
if (trim($line) == '') {
if ($currentBlock) {
$blocks[] = $currentBlock;
$currentBlock = array();
}
} else {
$currentBlock[] = $line;
}
}
fclose($handle);
//if is anything left
if ($currentBlock) {
$blocks[] = $currentBlock;
}
print_r($blocks);
``` | I think what you're looking for is [preg\_split](http://us.php.net/preg-split). If you just split on a carriage return, you might miss lines that just have spaces or tabs.
```
$output = array(...);//what you just posted
$string_output = implode('', $output);
$array_with_only_populated_lines = preg_split('/\n\s*\n/', $string_output);
``` | php function to split an array at each blank line? | [
"php",
"arrays",
"file",
"multidimensional-array"
] |
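The same blank-line grouping that the accepted PHP answer performs can be sketched in Python, which may help clarify the control flow (flush the current block on a whitespace-only line, and once more after the loop):

```python
def split_blocks(lines):
    """Group lines into blocks, using blank (whitespace-only) lines as separators."""
    blocks, current = [], []
    for line in lines:
        if line.strip() == "":
            if current:          # flush the block we have been collecting
                blocks.append(current)
                current = []
        else:
            current.append(line.rstrip("\n"))
    if current:                  # flush whatever is left after the loop
        blocks.append(current)
    return blocks

# Sample input shaped like the question's file() output.
sample = [
    "domain.com\n", "# Files to be checked\n", "/www/06.php\n", "/www/08.php\n", "\n",
    "domain2.com\n", "# Files to be checked\n", "/cgi-bin/cache.txt\n", "\n",
]
blocks = split_blocks(sample)
print(blocks)
```

Consecutive blank lines produce no empty blocks, because the flush only fires when something has actually been collected.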
I want to know what CreateNewAttribute is and how it works.
"c#"
] |
My program has a daily routine, similar to an alarm clock event. Say, when it's 2pm (the system time on my PC), do something for me.
What I want to do is speed up the testing period (I don't really want to wait 4 days watching the daily routine and checking for errors). I read the [Wikipedia article on Mock objects](http://en.wikipedia.org/wiki/Mock_object), and the writer DID mention an alarm clock program. I was happy to see that, but I still don't know how to do it.
I am new to mock objects, and I am programming in Java, so JMock or EasyMock (or anything similar) would be okay for me.
Thanks | Whenever you need to get the current time, don't use the system clock directly - use an interface such as:
```
public interface Clock
{
long currentMillis();
}
```
You can then implement this with a system clock for production, and pass in a fake for tests, where you can set the fake to whatever time you want.
However, you'll also need to mock out whatever's driving your system - are you explicitly waiting for a particular time, or does something else call your code? | I do have to apologize since you've asked about java and I'm out to lunch when it comes to java, but one solution is to Mock the DateTime object and set it for the desired time.
In .NET it would look something like this:
```
public static class SystemTime
{
public static Func<DateTime> Now = () => DateTime.Now;
}
SystemTime.Now = () => new DateTime(2000,1,1);
```
From: [Dealing With Time In Tests](http://ayende.com/Blog/archive/2008/07/07/Dealing-with-time-in-tests.aspx)
---
> ... [A]n alarm clock program which causes a
> bell to ring at a certain time might
> get the current time from the outside
> world. To test this, the test must
> wait until the alarm time to know
> whether it has rung the bell
> correctly. If a mock object is used in
> place of the real object, it can be
> programmed to provide the bell-ringing
> time (whether it is actually that time
> or not) so that the alarm clock
> program can be tested in isolation.
This Alarm Clock you're referencing is giving an example of mocking an object. It isn't actually an object you can use from the mock framework. | How to use mock object mimicing a daily routine program? | [
"java",
"mocking",
"alarm",
"dailybuilds",
"routines"
] |
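Following the advice above, here is a minimal Python sketch of the same injectable-clock idea: production code asks a Clock object for the time, and tests swap in a fake pinned to 2pm (class and function names are made up for illustration):

```python
import datetime

class SystemClock:
    """Production clock: reads the real system time."""
    def now(self):
        return datetime.datetime.now()

class FakeClock:
    """Test double: returns whatever time the test configures."""
    def __init__(self, fixed_time):
        self.fixed_time = fixed_time

    def now(self):
        return self.fixed_time

def is_alarm_time(clock, alarm_hour=14):
    """The 2pm check from the question, asking the injected clock for the time."""
    return clock.now().hour == alarm_hour

two_pm = FakeClock(datetime.datetime(2009, 7, 1, 14, 0, 0))
nine_am = FakeClock(datetime.datetime(2009, 7, 1, 9, 0, 0))
print(is_alarm_time(two_pm), is_alarm_time(nine_am))  # True False
```

The test never waits for a real 2pm; it simply hands the routine a clock that already says 2pm.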
I'm having a heck of a time trying to only display my jqGrid when records are returned from my webservice. I don't want it to be collapsed to where you only see the caption bar either, but if that's the best I can do, I suppose that I could put a meaningful message into the caption. Still, I'd much rather just hide the grid and show a "No Records Found" message div block.
I also guess that if worst came to worst, I could do the solution on this question [How to display information in jqGrid that there are not any data?](https://stackoverflow.com/questions/1019155/how-to-display-information-in-jqgrid-that-there-are-not-any-data) (link included as alternate possible solution for others).
I've tried doing a .hide() inside both the custom datatype function used to load the data and the gridComplete event, and neither hid the grid. I'm pretty new to jQuery, not to mention pretty new to using jqGrid.
```
$(document).ready(function() {
$("#list").jqGrid({
url: 'Service/JQGridTest.asmx/AssetSearchXml',
datatype: 'xml',
mtype: 'GET',
colNames: ['Inv No', 'Date', 'Amount'],
colModel: [
{ name: 'invid', index: 'invid', width: 55 },
{ name: 'invdate', index: 'invdate', width: 90 },
{ name: 'amount', index: 'amount', width: 80, align: 'right' }],
pager: jQuery('#pager'),
postData: { "testvar": "whatever" },
rowNum: 10,
rowList: [10, 20, 30],
sortname: 'id',
sortorder: "desc",
viewrecords: true,
imgpath: 'themes/sand/images',
caption: 'My first grid',
gridComplete: function() {
var recs = $("#list").getGridParam("records");
if (recs == 0) {
$("#list").hide();
}
else {
alert('records > 0');
}
}
});
...
<table id="list" class="scroll"></table>
<div id="pager" class="scroll" style="text-align:center;"></div>
```
And tried this too:
```
$(document).ready(function() {
$("#list").jqGrid({
datatype: function(postdata) {
jQuery.ajax({
url: 'Service/JQGridTest.asmx/AssetSearchXml',
data: postdata,
dataType: "xml",
complete: function(xmldata, stat) {
if (stat == "success") {
var thegrid = $("#list")[0];
thegrid.addXmlData(xmldata.responseXML);
var recs = $("#list").getGridParam("records");
if (recs == 0) {
$("#list").hide();
alert('No rows - grid hidden');
}
else {
alert(recs);
}
}
else {
alert('FAIL');
}
}
});
},
mtype: 'GET',
colNames: ['Inv No', 'Date', 'Amount'],
colModel: [
{ name: 'invid', index: 'invid', width: 55 },
{ name: 'invdate', index: 'invdate', width: 90 },
{ name: 'amount', index: 'amount', width: 80, align: 'right' }],
pager: jQuery('#pager'),
postData: { "testvar": "whatever" },
rowNum: 10,
rowList: [10, 20, 30],
sortname: 'id',
sortorder: "desc",
viewrecords: true,
imgpath: 'themes/sand/images',
caption: 'My first grid'
});
...
<table id="list" class="scroll"></table>
<div id="pager" class="scroll" style="text-align:center;"></div>
```
Thanks for any help you can provide. | jqGrid wraps your table with its special sauce and divs, so you should be able to do what you want by wrapping that table with your own div that you can hide:
```
<div id="gridWrapper">
<table id="list" class="scroll"></table>
</div>
```
Then in your gridComplete:
```
gridComplete: function() {
var recs = parseInt($("#list").getGridParam("records"),10);
if (isNaN(recs) || recs == 0) {
$("#gridWrapper").hide();
}
else {
$('#gridWrapper').show();
alert('records > 0');
}
}
```
Hope this helps. | just a little twist on [seth's](https://stackoverflow.com/users/8590/seth) solution:
1. In place of `var recs = $('#list').jqGrid('getGridParam','records');` you can use
`var recs = $('#list').jqGrid('getGridParam','reccount');`
The jqGrid grid option '*records*' has a default value of 'None', while
the grid option '**reccount**' defaults to 0 and will always return a number, even when there are no records (it returns 0).
see [wiki:options @ jqGrid Wiki](http://www.trirand.com/jqgridwiki/doku.php?id=wiki:options)
2. If you don't want to use a wrapping div you can hide the whole jqGrid using
`$('.ui-jqgrid').hide()` or `$('.ui-jqgrid').show()`.
The ui-jqgrid class is only used for the jqGrid parent. | How can I hide the jqgrid completely when no data returned? | [
"javascript",
"jquery",
"plugins",
"jqgrid"
] |
Suppose I have a relatively long module, but need an external module or method only once.
Is it considered OK to import that method or module in the middle of the module?
Or should `import`s only be in the first part of the module?
Example:
```
import string, pythis, pythat
...
...
...
...
def func():
blah
blah
blah
from pysomething import foo
foo()
etc
etc
etc
...
...
...
```
**Please justify your answer and add links to [PEP](http://en.wikipedia.org/wiki/Python_%28programming_language%29#Development)s or relevant sources** | [PEP 8](https://www.python.org/dev/peps/pep-0008/#imports) authoritatively states:
> Imports are always put at the top of
> the file, just after any module
> comments and docstrings, and before module globals and constants.
PEP 8 should be the basis of any "in-house" style guide, since it summarizes what the core Python team has found to be the most effective style, overall (and with individual dissent of course, as on any other language, but consensus and the BDFL agree on PEP 8). | There was a detailed discussion of this topic on the Python mailing list in 2001:
<https://mail.python.org/pipermail/python-list/2001-July/071567.html>
Here are some of the reasons discussed in that thread. From Peter Hansen, here are three reasons not to have imports all at the top of the file:
> Possible reasons to import in a function:
>
> 1. Readability: if the import is needed in only one
> function and that's very unlikely ever to change,
> it might be clearer and cleaner to put it there only.
> 2. Startup time: if you don't have the import outside
> of the function definitions, it will not execute
> when your module is first imported by another, but
> only when one of the functions is called. This
> delays the overhead of the import (or avoids it
> if the functions might never be called).
> 3. There is always one more reason than the ones
> we've thought of until now.
Just van Rossum chimed in with a fourth:
> 4. Overhead: if the module imports a lot of modules,
> and there's a good chance only a few will actually
> be used. This is similar to the "Startup time"
> reason, but goes a little further. If a script
> using your module only uses a small subset of the
> functionality it can save quite some time, especially
> if the imports that can be avoided also import a lot
> of modules.
A fifth was offered as local imports are a way to avoid the problem of circular imports.
Feel free to read through that thread for the full discussion. | Good or bad practice in Python: import in the middle of a file | [
"python",
"python-import"
] |
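For contrast with the PEP 8 rule, the "startup time" argument from the thread looks like this in practice: the deferred import below only runs when the function is first called (a toy example; the json module stands in for any heavyweight dependency):

```python
def parse_config(text):
    # Deferred import: the module is loaded on first call, not at import
    # time, so scripts that never call parse_config skip that cost.
    import json
    return json.loads(text)

result = parse_config('{"retries": 3}')
print(result)  # {'retries': 3}
```

Since Python caches modules in sys.modules, repeated calls pay only a dictionary lookup, not a reload.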
I'm experiencing some strange behaviour when using a stylus with Swing.
I am interpreting pressing the button on the side of the stylus (RIGHT) and pressing the stylus down (LEFT) as a "Grab" event, but occasionally (more often than never), events are just being dropped.
The JavaDocs for MouseEvent are pretty explicit about how multibutton presses are handled if executed one at a time (left down, right down, right up, left up) but say nothing about simultaneous button presses.
I'm left to wonder, would they be emitted as two mousePressed events, or as one with the button mask set for both buttons, or something else entirely?
Thanks. | I'd interpret the API doc as simultaneous button presses being simply not possible:
> When multiple mouse buttons are pressed, each press, release, and click results in a separate event.
So there *should* be separate events. The problems you observe could be due to errors in your code, the stylus' driver, the hardware, or Swing (this is in decreasing order of likelihood as I see it :)
I'd try to diagnose the problem by logging events at different levels, if possible. | Simultaneous button presses are processed as two separate mousePressed events. Run the [Mouse Events Demo](http://java.sun.com/docs/books/tutorial/uiswing/events/mouselistener.html) to see them processed separately. | How are multi-button presses handled in swing? | [
"java",
"swing",
"events"
] |
I want to write a small program. It will run on my computer (a laptop) with a Bluetooth adapter and then discover all the visible Bluetooth devices (phones, printers, other computers, etc.), but I've not worked with Bluetooth in Java before.
Help me find a starting point, please.
What SDKs or libraries must I download first? What literature should I read?
I've googled BlueCove, but it doesn't support my Samsung D600, so I could not test my app. | This java sample will discover all visible devices, and it works with both the BlueCove and Avetana libraries ( <http://www.avetana-gmbh.de/avetana-gmbh/produkte/jsr82.eng.xml> ):
<http://www.jsr82.com/jsr-82-sample-device-discovery/> | Ok, sorry for annoying.
I've found an answer and it is BlueCove.
After installing additional libs it works perfect on Linux Mint 7 now. Even, not listed in compatibility list Samsung D600 was discovered perfectly.
Test program output:
```
run:
BlueCove version 2.1.0 on bluez
Address: 0006C990021D
Name: hostname-0
Starting device inquiry...
Device discovered: 0015B95BEA0F
INQUIRY_COMPLETED
Device Inquiry Completed.
Bluetooth Devices:
1. 0015B95BEA0F (zl0-b0tan)
BlueCove stack shutdown completed
BUILD SUCCESSFUL (total time: 13 seconds)
``` | Java (J2SE) and Bluetooth | [
"bluetooth",
"java"
] |
How do I convert uint to int in C#? | Given:
```
uint n = 3;
int i = checked((int)n); //throws OverflowException if n > Int32.MaxValue
int i = unchecked((int)n); //converts the bits only
//i will be negative if n > Int32.MaxValue
int i = (int)n; //same behavior as unchecked
```
or
```
int i = Convert.ToInt32(n); //same behavior as checked
```
--EDIT
Included info as mentioned by [Kenan E. K.](https://stackoverflow.com/users/133143/kenan-e-k) | Assuming you want to simply lift the 32bits from one type and dump them as-is into the other type:
```
uint asUint = unchecked((uint)myInt);
int asInt = unchecked((int)myUint);
```
The destination type will blindly pick the 32 bits and reinterpret them.
Conversely if you're more interested in keeping the decimal/numerical values within the range of the destination type itself:
```
uint asUint = checked((uint)myInt);
int asInt = checked((int)myUint);
```
In this case, you'll get overflow exceptions if:
* casting a negative int (eg: -1) to an uint
* casting a positive uint between 2,147,483,648 and 4,294,967,295 to an int
In our case, we wanted the `unchecked` solution to preserve the 32bits as-is, so here are some examples:
# Examples
## int => uint
```
int....: 0000000000 (00-00-00-00)
asUint.: 0000000000 (00-00-00-00)
------------------------------
int....: 0000000001 (01-00-00-00)
asUint.: 0000000001 (01-00-00-00)
------------------------------
int....: -0000000001 (FF-FF-FF-FF)
asUint.: 4294967295 (FF-FF-FF-FF)
------------------------------
int....: 2147483647 (FF-FF-FF-7F)
asUint.: 2147483647 (FF-FF-FF-7F)
------------------------------
int....: -2147483648 (00-00-00-80)
asUint.: 2147483648 (00-00-00-80)
```
## uint => int
```
uint...: 0000000000 (00-00-00-00)
asInt..: 0000000000 (00-00-00-00)
------------------------------
uint...: 0000000001 (01-00-00-00)
asInt..: 0000000001 (01-00-00-00)
------------------------------
uint...: 2147483647 (FF-FF-FF-7F)
asInt..: 2147483647 (FF-FF-FF-7F)
------------------------------
uint...: 4294967295 (FF-FF-FF-FF)
asInt..: -0000000001 (FF-FF-FF-FF)
------------------------------
```
# Code
```
int[] testInts = { 0, 1, -1, int.MaxValue, int.MinValue };
uint[] testUints = { uint.MinValue, 1, uint.MaxValue / 2, uint.MaxValue };
foreach (var Int in testInts)
{
uint asUint = unchecked((uint)Int);
Console.WriteLine("int....: {0:D10} ({1})", Int, BitConverter.ToString(BitConverter.GetBytes(Int)));
Console.WriteLine("asUint.: {0:D10} ({1})", asUint, BitConverter.ToString(BitConverter.GetBytes(asUint)));
Console.WriteLine(new string('-',30));
}
Console.WriteLine(new string('=', 30));
foreach (var Uint in testUints)
{
int asInt = unchecked((int)Uint);
Console.WriteLine("uint...: {0:D10} ({1})", Uint, BitConverter.ToString(BitConverter.GetBytes(Uint)));
Console.WriteLine("asInt..: {0:D10} ({1})", asInt, BitConverter.ToString(BitConverter.GetBytes(asInt)));
Console.WriteLine(new string('-', 30));
}
``` | How do I convert uint to int in C#? | [
"c#"
] |
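The unchecked bit-reinterpretation described above can be mimicked in Python, where ints are arbitrary-precision, by masking to 32 bits explicitly (a sketch of the semantics, not .NET code):

```python
MASK32 = 0xFFFFFFFF

def to_int32(n):
    """Keep the low 32 bits of n and read them as signed, like unchecked((int)n)."""
    n &= MASK32
    return n - 0x100000000 if n >= 0x80000000 else n

def to_uint32(n):
    """Keep the low 32 bits of n and read them as unsigned, like unchecked((uint)n)."""
    return n & MASK32

print(to_int32(4294967295))   # -1
print(to_uint32(-1))          # 4294967295
print(to_int32(2147483648))   # -2147483648
```

These match the unchecked rows in the tables above: values past Int32.MaxValue wrap into negative territory instead of throwing.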
I am refactoring some Spring JDBC code in which some of the costlier queries do **"SELECT \* FROM..."** - and was about to start checking which columns were actually needed and just **SELECT x, y FROM...** them. But reading through the **ResultSet** class it seemed that most data is lazily loaded. When you do a **ResultSet.next()** it moves the cursor in the database (Oracle 10g in this application) and when you do a **ResultSet.getXX()** it retrieves that column. So my thought was that if you do a **"SELECT \*"** but only retrieve the columns you want, you are not really taking a performance hit. Am I thinking about this correctly? The only place I can think of where this hurts you is inside the database, because it is storing the query results in memory and has to use more memory than it would if only a few columns were selected; but if it's actually only storing pointers to the columns that hit the query then even this wouldn't be the case.
Thoughts?
NOTE : this only applies to standard **ResultSet**, I know **CachedResultSet** acts differently. | Depending on the table structure, the Oracle version, and the indexes involved, it is entirely possible that changing the set of columns you are selecting would substantially improve performance by changing query plans for the better. For most queries, the performance benefits may well be minimal, but overall it is generally good practice to name columns explicitly.
The simplest case where performance will be improved will occur when you have a "covered index" that the optimizer could use. If all the columns you are selecting and all the columns you are filtering by are part of a single index, that index is a covered index for the query. In that case, Oracle can avoid ever reading the data from the table and can just read the index.
There are other cases where performance will be improved as well. The optimizer may be able to perform [table elimination](http://optimizermagic.blogspot.com/2008/06/why-are-some-of-tables-in-my-query.html) if you have queries there are interim joins that don't affect the eventual output. If you are selecting all the columns, that optimization isn't possible. If you have tables with chained rows, eliminating columns can also eliminate the need to fetch the additional blocks where the eliminated columns reside. If there are LONG and LOB columns in the table, not selecting those columns would also result in large improvements.
Finally, eliminating columns will generally reduce the amount of space Oracle will require to sort and hash results before shipping them over the wire. And even though the ResultSet may lazily load data in the application server's RAM, it is probably not able to lazily fetch columns over the network. If you select all the columns from the table, the JDBC driver likely has to fetch at least 1 complete row at a time (more likely it is fetching 10 or 100 rows per network round-trip). And since the driver doesn't know when the data is fetched what columns are going to be requested, you'll have to ship all the data over the network. | I would be surprised if going from "SELECT \*" to "SELECT A,B,C" gave you any meaningful performance improvement, unless you had a huge number of columns that you didn't need.
This is all very dependent on your database, your driver and your application, and most generalisations are going to be pretty meaningless.
The only reliable answer you're going to get from this is by benchmarking it - try "SELECT \*", try "SELECT A,B,C", and see if there's improvement worth chasing. | ResultSet and Select * Performance | [
"",
"java",
"oracle",
"spring",
"jdbc",
"resultset",
""
] |
I need to change the cursor image. Whenever the mouse is over my form I need to load my own image from a local path. I am using version 1.1 of the .NET framwork.
Here is what I have tried:
```
Cursor = new Cursor(GetType(), Application.StartupPath+ "\\windowfi.cur");
```
But this throws an exception:
> Value cannot be null.
> Parameter name: dataStream | The Cursor class has a constructor that takes a .cur file path as a parameter. Use that, like this:
```
this.Cursor = new Cursor("<your_cur_file_path>");
``` | This should probably work:
```
Cursor.Current = new Cursor(GetType(), Application.StartupPath+ @"\windowfi.cur");
```
or
```
Cursor.Current = new Cursor(GetType(), Application.StartupPath+ "\\windowfi.cur");
```
Note the use of @ string literal and the \ escape character above to be able to use the backslash character correctly in the path to the cursor's icon. As well as the [Current](http://msdn.microsoft.com/en-us/library/system.windows.forms.cursor.current.aspx) property of the Cursor class. | How can I change the mouse cursor image? | [
"",
"c#",
"mouse-cursor",
".net-1.1",
""
] |
I want to output a string that welcomes a user to the application. I have the user's first, middle and last name. I would like to write the user's full name, i.e. `"Hello {0} {1} {2}"` with first, middle and last being the parameters. However, If the middle name is empty, I don't want two spaces between the first and the last name, but rather only one space. I can obviously do it with an "if", but is there a more elegant way to achieve this?
Thanks | ```
string.Format("{0} {1} {2}", first, middle, last).Replace("  ", " ")
``` | ```
"Hello {0} {1}{3}{2}"
```
where
```
{3} = string.IsNullOrEmpty(param1) ? "" : " "
``` | C#: parametrized string with spaces only between "full" parameters? | [
"",
"c#",
"string",
""
] |
I have an audit table in SQL server. It is to record an entry for each high-level user action, e.g. update a record, add a new record, delete a record etc.
I have a mechanism for detecting and recording all the changes made (in .NET, not as a trigger in the database) and have a collection of objects that record a field name, previous value and new value. I'm wanting to store the field changes in the same table (not in a separate table, i.e. I don't want a full-blown normalised relational design for this), so I have a blob field (or it could be a character field) that I want to record the field-level audit data in.
I'm thinking I just want to take my object graph (basically just a list of these field change objects) and serialize it and store the serialized version in the table.
Later, when the user wants to view the changes I can deserialize the field and reconstruct the collection of field changes.
So, what would be the best framework/serialization format in .NET 3.5 to use? I don't much mind about the size, and it doesn't have to be human-readable. | Avoid `BinaryFormatter` for anything that you store long-term (for example in a database); because it contains type/assembly metadata, you can easily find that you can't deserialize the data later. Plus it is .NET-specific! So you're a bit scuppered if you want to read it from any other platform.
JSON (via Json.NET) would make a simple, pretty readable format that doesn't take much space. Xml via `XmlSerializer` or `DataContractSerializer` would be fine but isn't *as* readable. If space is your biggest concern, perhaps something like "protocol buffers" (protobuf-net and others) - virtually impossible to read without the supporting utility dll, but very fast and efficient.
I'd be tempted to use JSON, personally. It means I can read the audit in SSMS... | To serialize an object, you need to either mark the object class with the [Serializable] attribute or implement the ISerializable interface i.e.
```
[Serializable]
public class MySerializableClass
{
...
}
```
or
```
using System.Runtime.Serialization;
public class MySerializableClass : ISerializable
{
SerializationInfo info;
StreamingContext context;
public void GetObjectData(SerializationInfo info, StreamingContext context)
{
this.info = info;
this.context = context;
// implementation code goes here
}
}
```
Also have a look at [IFormatter](http://msdn.microsoft.com/en-us/library/system.runtime.serialization.iformatter(VS.71).aspx) where you can chose what data type you want to serialize your object to. | How should I serialize some simple auditing data for storing in a SQL table? | [
"",
"c#",
"serialization",
""
] |
I need to detect if an object was created anonymously, like `new { name = value, }`.
If it is an AnonymousType, it should add its property names/values into a
```
Dictionary<string,object>
```
This is what I hacked together myself:
```
var name="name";
var obj = new { name = new object(), };
var lookup = new Dictionary<string,object>();
if(obj.GetType().Name.StartsWith("<>f__AnonymousType"))
{
foreach (var property in obj.GetType().GetProperties())
{
lookup[property.Name] = property.GetValue(obj, null);
}
}
else
{
lookup[name]=obj;
}
```
I was wondering if there is a better/faster way of detecting AnonymousTypes,
or if there is a better/faster way to dump an object's property names/values into a
```
Dictionary<string,object>
``` | To get all the properties of an object, with its values into a `Dictionary`, you can couple the power of Linq to Objects with Reflection.
You can use the [Enumerable.ToDictionary](http://msdn.microsoft.com/en-us/library/bb549277.aspx) method:
```
var dic = obj.GetType()
.GetProperties()
.ToDictionary(p => p.Name, p=> p.GetValue(obj, null));
```
This will return you a `Dictionary<string, object>`. | Use the new collection object initializer syntax instead of an anonymous type:
```
var obj = new Dictionary<string, object>()
{
{ "Name", t.Name },
{ "Value", t.Value }
};
``` | C# Detecting AnonymousType new{name=value,} and convert into Dictionary<string,object> | [
"",
"c#",
".net",
""
] |
How can a Windows console application written in C# determine whether it is invoked in a non-interactive environment (e.g. from a service or as a scheduled task) or from an environment capable of user-interaction (e.g. Command Prompt or PowerShell)? | [Environment.UserInteractive](http://msdn.microsoft.com/en-us/library/system.environment.userinteractive.aspx) Property | **[EDIT: 4/2021 - new answer...]**
Due to a recent change in the Visual Studio debugger, my original answer stopped working correctly when debugging. To remedy this, I'm providing an entirely different approach. The text of the original answer is included at the bottom.
---
**1. Just the code, please...**
To determine if a .NET application is running in GUI mode:
```
[DllImport("kernel32.dll")] static extern IntPtr GetModuleHandleW(IntPtr _);
public static bool IsGui
{
get
{
var p = GetModuleHandleW(default);
return Marshal.ReadInt16(p, Marshal.ReadInt32(p, 0x3C) + 0x5C) == 2;
}
}
```
This checks the `Subsystem` value in the [**PE header**](https://en.wikipedia.org/wiki/Portable_Executable). For a console application, the value will be `3` instead of `2`.
---
**2. Discussion**
As noted in a [related question](https://stackoverflow.com/questions/30890104/determine-whether-assembly-is-a-gui-application), the most reliable indicator of **GUI** *vs.* **console** is the "`Subsystem`" field in the [PE header](https://learn.microsoft.com/en-us/windows/win32/debug/pe-format#windows-subsystem) of the executable image. The following C# `enum` lists the allowable (documented) values:
```
public enum Subsystem : ushort
{
Unknown /**/ = 0x0000,
Native /**/ = 0x0001,
WindowsGui /**/ = 0x0002,
WindowsCui /**/ = 0x0003,
OS2Cui /**/ = 0x0005,
PosixCui /**/ = 0x0007,
NativeWindows /**/ = 0x0008,
WindowsCEGui /**/ = 0x0009,
EfiApplication /**/ = 0x000A,
EfiBootServiceDriver /**/ = 0x000B,
EfiRuntimeDriver /**/ = 0x000C,
EfiRom /**/ = 0x000D,
Xbox /**/ = 0x000E,
WindowsBootApplication /**/ = 0x0010,
};
```
As easy as that code (in that other answer) is, our case here can be vastly simplified. Since we are only specifically interested in our own running process (which is necessarily loaded), you don't have to open any file or read from the disk to obtain the **subsystem** value. Our executable image is guaranteed to be already mapped into memory. And it is simple to retrieve the base address for any loaded file image by calling the [`GetModuleHandleW`](https://learn.microsoft.com/en-us/windows/win32/api/libloaderapi/nf-libloaderapi-getmodulehandlew) function:
```
[DllImport("kernel32.dll")]
static extern IntPtr GetModuleHandleW(IntPtr lpModuleName);
```
Although we might provide a filename to this function, again things are easier and we don't have to. Passing `null`, or in this case `default` (which is the same as `IntPtr.Zero`), returns the base address of the virtual memory image for the current process. This eliminates the extra steps (alluded to earlier) of having to fetch the entry assembly and its `Location` property, etc. Without further ado, here is the new and simplified code:
```
static Subsystem GetSubsystem()
{
var p = GetModuleHandleW(default); // VM base address of mapped PE image
p += Marshal.ReadInt32(p, 0x3C); // RVA of COFF/PE within DOS header
return (Subsystem)Marshal.ReadInt16(p + 0x5C); // PE offset to 'Subsystem' word
}
public static bool IsGui => GetSubsystem() == Subsystem.WindowsGui;
public static bool IsConsole => GetSubsystem() == Subsystem.WindowsCui;
```
[official end of the new answer]
---
**3. Bonus Discussion**
For the purposes of .NET, `Subsystem` is perhaps the most—*or only*—useful piece of information in the **PE Header**. But depending on your tolerance for minutiae, there could be other invaluable tidbits, and it's easy to use the technique just described to retrieve additional interesting data.
Obviously, by changing the final field offset (`0x5C`) used earlier, you can access other fields in the COFF or PE header. The next snippet illustrates this for `Subsystem` (as above) plus three additional fields with their respective offsets.
> *NOTE: To reduce clutter, the `enum` declarations used in the following can be found [here](https://pastebin.com/pZL3Y7vM)*
```
var p = GetModuleHandleW(default); // PE image VM mapped base address
p += Marshal.ReadInt32(p, 0x3C); // RVA of COFF/PE within DOS header
var subsys = (Subsystem)Marshal.ReadInt16(p + 0x005C); // (same as before)
var machine = (ImageFileMachine)Marshal.ReadInt16(p + 0x0004); // new
var imgType = (ImageFileCharacteristics)Marshal.ReadInt16(p + 0x0016); // new
var dllFlags = (DllCharacteristics)Marshal.ReadInt16(p + 0x005E); // new
// ... etc.
```
To improve things when accessing multiple fields in unmanaged memory, it's essential to define an overlaying `struct`. This allows for direct and natural managed access using C#. For the running example, I merged the adjacent COFF and PE headers together into the following C# `struct` definition, and only included the four fields we deemed interesting:
```
[StructLayout(LayoutKind.Explicit)]
struct COFF_PE
{
[FieldOffset(0x04)] public ImageFileMachine MachineType;
[FieldOffset(0x16)] public ImageFileCharacteristics Characteristics;
[FieldOffset(0x5C)] public Subsystem Subsystem;
[FieldOffset(0x5E)] public DllCharacteristics DllCharacteristics;
};
```
> *NOTE: A fuller version of this struct, without the omitted fields, can be found [here](https://pastebin.com/EU1QJJ7i)*
Any interop `struct` such as this has to be properly set up at runtime, and there are many options for doing so. Ideally, it's generally better to impose the `struct` overlay "*in-situ*" directly on the unmanaged memory, so that no memory copying needs to occur. To avoid prolonging the discussion here even further, however, I will instead show an easier method that does involve copying.
```
var p = GetModuleHandleW(default);
var _pe = Marshal.PtrToStructure<COFF_PE>(p + Marshal.ReadInt32(p, 0x3C));
Trace.WriteLine($@"
MachineType: {_pe.MachineType}
Characteristics: {_pe.Characteristics}
Subsystem: {_pe.Subsystem}
DllCharacteristics: {_pe.DllCharacteristics}");
```
---
**4. Output of the demo code**
Here is the output when a **console** program is running...
```
MachineType: Amd64
Characteristics: ExecutableImage, LargeAddressAware
Subsystem: WindowsCui (3)
DllCharacteristics: HighEntropyVA, DynamicBase, NxCompatible, NoSeh, TSAware
```
...compared to **GUI** (WPF) application:
```
MachineType: Amd64
Characteristics: ExecutableImage, LargeAddressAware
Subsystem: WindowsGui (2)
DllCharacteristics: HighEntropyVA, DynamicBase, NxCompatible, NoSeh, TSAware
```
---
**[OLD: original answer from 2012...]**
To determine if a .NET application is running in GUI mode:
```
bool is_console_app = Console.OpenStandardInput(1) != Stream.Null;
``` | How can a C# Windows Console application tell if it is run interactively | [
"",
"c#",
"console-application",
"user-interaction",
""
] |
I'm trying to match on some inconsistently formatted HTML and need to strip out some double quotes.
Current:
```
<input type="hidden">
```
The Goal:
```
<input type=hidden>
```
This is wrong because I'm not escaping it properly:
> s = s.Replace(""","");
This is wrong because there is no blank character literal (to my knowledge):
```
s = s.Replace('"', '');
```
What is syntax / escape character combination for replacing double quotes with an empty string? | I think your first line would actually work but I think you need four quotation marks for a string containing a single one (in VB at least):
```
s = s.Replace("""", "")
```
for C# you'd have to escape the quotation mark using a backslash:
```
s = s.Replace("\"", "");
``` | I didn't see my thoughts repeated already, so I will suggest that you look at `string.Trim` in the Microsoft documentation: in C# you can specify a character to be trimmed instead of simply trimming empty spaces:
```
string withQuotes = "\"hello\"";
string withOutQuotes = withQuotes.Trim('"');
```
should result in `withOutQuotes` being `hello` instead of `"hello"` | Strip double quotes from a string in .NET | [
"",
"c#",
".net",
"vb.net",
""
] |
I need to create a view that lists out taxonomy terms and then lists the top 3 most recent (sorted by node: date updated) nodes with that tag.
Example output:
**Article**
* Article 1
* Article 2
* Article 3
**Podcast**
* Podcast 1
* Podcast 2
* Podcast 3
.
.
.
I created a view of type "Term" and I can get the view to output all of the terms. However, I don't see how to link in the nodes tagged with the taxonomy term. I looked around in the view of type node, but I couldn't get anywhere close to what I needed to output. | Outside Views, this appears to be exactly what Taxonews.module does. Have you considered it?
(disclaimer: I'm its author) | *(Only the first part of a possible solution -- maybe it'll help you get to the full solution)*
What about a "node" view, with something like this (I use Drupal in French, so these might not always be the right words, sorry):
* fields
+ Taxonomy : term
+ Node : title (as link to node)
* filters
+ whatever you want ^^
* sorting
+ whatever you want
* on the left of the screen, "base parameters" or something like this:
+ style : HTML list (or table)
+ the little "wheel icon" on the right of "style": when you click on it, you have the possibility to choose, in a select list, a "grouping field"; select "taxonomy: term"
It should list the nodes, grouped by taxonomy terms.
The only thing I don't know is how to list only 3 nodes of each taxonomy term; if you do find out, I'm interested!
"",
"php",
"drupal",
"drupal-6",
"view",
"taxonomy",
""
] |
I'm new to the .NET world having come from C++ and I'm trying to better understand properties. I noticed in the .NET framework Microsoft uses properties all over the place. Is there an advantage for using properties rather than creating get/set methods? Is there a general guideline (as well as naming convention) for when one should use properties? | It is pure syntactic sugar. On the back end, it is compiled into plain get and set methods.
Use them because of convention, and because they look nicer.
Some guidelines say that when an accessor has a high risk of throwing exceptions or going wrong, you should use explicit getters/setters rather than properties. But generally, even then, properties are used.
```
[XmlAttribute("foo")]
public string Name {get;set;}
```
This is a get/set pair of methods, but the additional metadata applies to both. It also, IMO, simply makes it easier to use:
```
someObj.Name = "Fred"; // clearly a "set"
DateTime dob = someObj.DateOfBirth; // clearly a "get"
```
We haven't duplicated the fact that we're doing a get/set.
Another nice thing is that it allows simple two-way data-binding against the property ("Name" above), without relying on any magic patterns (except those guaranteed by the compiler). | When to use Properties and Methods? | [
"",
"c#",
".net",
"coding-style",
""
] |
I'm using the Kohana PHP Framework for an application. Now I'm running into a problem: when jQuery does an AJAX request to a certain file, it does work, but when this file throws a PHP exception, jQuery fails and doesn't show the output of the file.
A little example, this is the piece of Javascript:
```
$.post($('#' + e.currentTarget.id).attr('action'), $('#' + e.currentTarget.id).serialize(), function ( data )
{
alert ( data );
}/*, 'json' */);
```
Now this works when the PHP file does this (the alert box pops up):
```
<?php echo 'Test'; ?>
```
But when somewhere in the PHP file this happens:
```
<?php throw new Exception ( 'Test' ); ?>
```
jQuery fails and doesn't show the outputted HTML error, also there is a difference in PHP headers (generated by PHP?):
With the PHP echo (good):
```
. Connection:Keep-Alive
. Content-Encoding:gzip
. Content-Length:544
. Content-Type:text/html; charset=UTF-8
. Date:Wed, 22 Jul 2009 14:22:43 GMT
. Keep-Alive:timeout=15, max=100
. Server:Apache/2.0.59 (Unix) PHP/5.2.6 DAV/2
. Vary:Accept-Encoding
. X-Powered-By:PHP/5.2.6
```
With the PHP exception (fail):
```
. Connection:close
. Content-Encoding:gzip
. Content-Length:1896
. Content-Type:text/html; charset=UTF-8
. Date:Wed, 22 Jul 2009 14:23:11 GMT
. Server:Apache/2.0.59 (Unix) PHP/5.2.6 DAV/2
. Vary:Accept-Encoding
. X-Powered-By:PHP/5.2.6
```
Now I don't really see a problem; both ways PHP echoes some HTML. Has anybody had this problem before, and how did you fix it?
Thanks for the help! | To show what the error is, you can do this in your ajax call.
```
jQuery.ajax({
type: "POST",
...
error: function(xhr, desc, e) {
alert(xhr.responseText);
}
});
```
This should alert you with the same html that throwing a PHP exception would give you on a regular page. | If you want to keep your current error reporting level, but still give a nice error message to jQuery you can wrap your code in a try-catch block.
```
try {
// your code
} catch(Exception $e) {
echo $e->getMessage(); // formatted nicely or a generic message or something.
}
``` | jQuery AJAX request fails on PHP exception | [
"",
"php",
"jquery",
"exception",
"header",
"kohana",
""
] |
I have the following pointer.
```
char **x = NULL;
```
x will point to an array of pointers. So is the following code correct?
```
x = new (nothrow) (*char)[20];
```
and we will dealocate it using
```
delete[] x;
```
Is
```
x = (char **) malloc(sizeof(char **) * 20);
```
and
```
x = new (nothrow) (*char)[20];
```
equivalent? | No, that code has syntax errors. The asterisk goes *after* the type name, to form a pointer to that type. So it's:
```
char*
```
not:
```
*char
```
It's weird that you have this right in the "C-style" example using `malloc()`, but not in C++.
As many commenters have kindly enough pointed out, there are other issues with the `malloc()` and its use of `sizeof`, though. But at least it got the type name right. Personally I'm against repeating type names in `malloc()` calls if at all possible, so I would write that version like this, to allocate a dynamic array of 20 character pointers:
```
char **x;
x = malloc(20 * sizeof *x);
```
This way:
1. Should be read as "20 times the size of whatever `x` points at", i.e. 20 times the size of a single `char *` pointer.
2. Contains the magical constant 20 in one place only.
3. Doesn't repeat any part of the type, if you were to change to `wchar_t **x` this would still work, and not by chance.
4. Is written in C, since I felt that is more natural when discussing `malloc()`. In C++, you need to cast the return value. In C, you should never do that. | Apart from the pointer-syntax mentioned by unwind, it is equivalent: an array of 20 char\* will be allocated and deleted in both cases.
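As a sketch of the corrected C++ spelling (the asterisk goes after the type, as the accepted answer explains; the function name here is illustrative):

```cpp
#include <cstddef>  // std::size_t
#include <new>      // std::nothrow

// Allocate an array of n char* pointers, all initialized to null.
// With std::nothrow, a failed allocation returns nullptr instead of throwing.
char **make_pointer_array(std::size_t n) {
    char **x = new (std::nothrow) char*[n];  // note "char*", not "*char"
    if (x != nullptr) {
        for (std::size_t i = 0; i < n; ++i) {
            x[i] = nullptr;  // each slot can later point at its own buffer
        }
    }
    return x;
}
```

The matching cleanup is `delete[] x;`, since the array was created with `new[]`.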
C++-adept warning: use `std::vector< std::string >` instead :) No memory management needed. | Am I using new operator correctly? | [
"",
"c++",
"new-operator",
""
] |
I have a Linux command line that I need to execute within either PHP or Javascript (PHP preferred.)
The command is `keygen AB3554C1D1971DB7 \pc_code 365`
However, I would like to substitute the `\pc_code` with a string like $pccode where the user enters the generated PC Code. This is for a legit project, but I'm having a problem getting assistance from the creators of the program. | It could be that this program is poorly written and outputs its information to stderr instead of stdout. Or it could be that it is failing and (properly) printing the error message to stderr. In either case, shell\_exec wouldn't capture stderr. However, **you should be able to capture stderr by adding "`2>&1`" to the end of your command**, i.e.:
```
$result = shell_exec('keygen AB3554C1D1971DB7 \pc_code 365 2>&1');
echo '<pre>'.htmlspecialchars($result).'</pre>';
```
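The `2>&1` part is ordinary shell redirection, so it can be sanity-checked outside PHP (here with a failing `ls` standing in for `keygen`, since `keygen` itself isn't available):

```shell
# stderr is normally not captured by command substitution;
# 2>&1 merges it into stdout so it gets captured too.
out=$(ls /no-such-path-for-demo 2>&1)
echo "captured: $out"
```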
**Edit**: For continuity's sake, what fixed the problem was something mentioned in a comment to another answer below:
> Maybe it doesn't like the bare backslash? you could try "`\\pc_code`" | You could use [shell\_exec()](https://www.php.net/shell_exec):
```
$cmd = sprintf("keygen AB3554C1D1971DB7 %s 365",
escapeshellarg($pccode));
$result = shell_exec($cmd);
```
Whenever you execute external commands you have to be extremely careful to avoid command injection. You can prevent this using [escapeshellarg()](https://www.php.net/manual/en/function.escapeshellarg.php). | Execute external application with PHP | [
"",
"php",
""
] |
Do you know if an API exists for that check? | [GetTimeZoneInformation](http://msdn.microsoft.com/en-us/library/ms724421(VS.85).aspx) is what you need.
You can call it and inspect the returned value to detect whether daylight saving is on at the moment of call. It also fills a structure that contains rules for switching to daylight saving and from daylight saving. Having this structure filled and any given time in UTC format you can relatively easily compute whether that time corrspongs to daylight saving time or standard time. | All well and good, but GetTimeZoneInformation and GetDynamicTimeZoneInformation only return the *current* time zone settings. What if the current TZ (i.e., where my server lives) is not the TZ that I want to check?
Let's say that I have a server app that reserves books for checkout. You can say "I want to check it out now" or "I'm going to need to check it out at *datetime* in the future". Checkout times are entered in the user's local time and converted to UTC before storing. When the user retrieves their list of checkouts the times are converted back to their local for display.
Assume that the server lives in New York and operates under standard post-2007 DST rules for the USA. The timezone is set to Eastern US, and it's currently 7/27/2009 15:30, so DST is ON.
Users in New York enter local dates and times. Convert from ET to UTC - not a problem. They enter a future date - fine. I use one of the above two API calls and figure it out.
However, a user in Sydney wants to reserve a checkout. She requests a checkout on 12/13/2009 18:25 relative to her local timezone in Sydney. I can't use my local TZ info - Sydney and NY don't follow the same DST rules. How do I go about loading Sydney's current TZ information and finding out if an arbitrary date is DST or not? | Routine to check if a given date is summertime or wintertime | [
"",
"c++",
"winapi",
"datetime",
""
] |
I have created a table. In one field, I have a priority of that record (between 1-9).
I didn't set priority for all the records, so for some records, it will remain null.
When displaying these records in my HTML page, I just check those priorities -- if the priority exists, then I will display as it is, if it's null, then I will display it as the lowest priority of '10' (just for display).
The problem occurs while sorting the table. If I try to `sort(DESC)` the table, it displays the 10 in the first row and later continues perfectly (1,2,....).
How to solve this?
Is it possible to display the priorities first and later continue with the null values? | There are a number of ways to solve it.
1/ Change the data so that 10 is actually in the table (rather than NULL):
```
update table TBL set FLD = 10 where FLD is null;
```
2/ Modify your query to return different values for NULL:
```
select FLD1, FLD2, case when FLD3 is null then 10 else FLD3 end from ...
```
3/ Create a view to do option 2 above automagically.
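A sketch of option 3, using hypothetical names (`COALESCE(FLD3, 10)` is equivalent shorthand for the CASE expression above):

```sql
-- The view presents NULL priorities as 10, so callers can sort naturally:
CREATE VIEW TBL_V AS
SELECT FLD1, FLD2, COALESCE(FLD3, 10) AS FLD3
FROM TBL;

SELECT * FROM TBL_V ORDER BY FLD3;
```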
I'd tend to go for option 1 since it's likely to be the most efficient.
SQL does not specify how NULLs are sorted (although I think it specifies they must be adjacent) - that means they could be at the beginning or end (or possibly in the middle though I've never seen that happen).
The reason I bring up the efficiency aspect is that per-row functions do not scale well. As the table gets bigger, you'll find that converting NULLs into 10 every time you select will be very expensive.
It's far better to bite the bullet and just set them to 10 in the database. This will allow the DBMS to optimize queries better. And, if you ever need to use 10 for another real priority level, just change all the current 10s to 11s (or 9999s) before you start. | Here's a short sample that shows how to convert NULL's into a value and then sort on it..
```
create table test
(
priority int null,
detail varchar(10)
)
insert into test values (1, 'one')
insert into test values (3, 'three')
insert into test values (8, 'eight')
insert into test values (9, 'nine')
insert into test values (null, 'ten')
select ISNULL(priority, 10), detail from test order by ISNULL(priority, 10)
```
The key is ISNULL'ing the null'able value field to convert NULL's to the value (10) that you want. | Is it possible to use both order by and where in a single query | [
"",
"sql",
"mysql",
"caching",
""
] |
I have 4 servers and a JVM is installed on each of them. I wrote a Java service that Quartz calls every 10 minutes. But with 4 servers, every 10 minutes 4 calls are made. This situation creates a race condition. I want only one call across the 4 JVMs.
How can I do that with Spring Framework? | This is actually pretty easy to set up with Quartz. Spring itself cannot help you much here since it is unaware of the other JVMs that are running. Quartz on the other hand has the concept of a clustered scheduler.
Basically you need to set up a single database that all the 4 JVMs can share. This will be used as a scheduler for all 4 instances. When a job is scheduled, it is run by only one of the instances using the clustered scheduler.
Taken from the Quartz website wiki for clustering (
<http://www.opensymphony.com/quartz/wikidocs/ConfigJDBCJobStoreClustering.html>), this is an example configuration on how to set up the clustered scheduler. You can also set these properties directly from spring if you are configuring your scheduler that way.
```
#============================================================================
# Configure Main Scheduler Properties
#============================================================================
org.quartz.scheduler.instanceName = MyClusteredScheduler
org.quartz.scheduler.instanceId = AUTO
#============================================================================
# Configure ThreadPool
#============================================================================
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 25
org.quartz.threadPool.threadPriority = 5
#============================================================================
# Configure JobStore
#============================================================================
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
org.quartz.jobStore.useProperties = false
org.quartz.jobStore.dataSource = myDS
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
#============================================================================
# Configure Datasources
#============================================================================
org.quartz.dataSource.myDS.driver = oracle.jdbc.driver.OracleDriver
org.quartz.dataSource.myDS.URL = jdbc:oracle:thin:@polarbear:1521:dev
org.quartz.dataSource.myDS.user = quartz
org.quartz.dataSource.myDS.password = quartz
org.quartz.dataSource.myDS.maxConnections = 5
org.quartz.dataSource.myDS.validationQuery=select 0 from dual
``` | Your question isn't very clear, so let me see if I'm understanding you: you have 4 servers, each running Quartz inside a VM, and each server has the same quartz job scheduled to run every 10 minutes, using a cron expression. Every 10 minutes, all 4 servers kick off the same job, creating your race condition as they all try to do the same thing at the same time.
This isn't really a job for Spring. However, Quartz does have clustering ability, where you configure a job to run on only a single server in the cluster. It uses a shared database to coordinate which servers run which job, and makes sure they don't all do it together.
The docs have some info on this [here](http://www.opensymphony.com/quartz/wikidocs/ConfigJDBCJobStoreClustering.html), but in the usual opensymphony.com style they're pretty sparse and unhelpful. | Spring Framework Connecting JVM with each other | [
"",
"java",
"spring",
"quartz-scheduler",
"cluster-computing",
""
] |
I need to convert a double with the number of seconds since the epoch to a `ptime`. I'm pretty sure there must be an easy way to do this, but I couldn't find anything. Thanks.
Edit: The original timestamp is floating point. I can't change it and I don't want to lose the sub-second precision. | After some fiddling around I came up with this:
```
ptime(date(1970, 1, 1), time_duration(0, 0, 0, time_duration::ticks_per_second() * 1234567890.0987654321))
```
I'm not sure this is the best solution, but it seems to do what I need. | Use the [`from_time_t()`](http://www.boost.org/doc/libs/1_39_0/doc/html/date_time/posix_time.html#ptime_from_funcs) conversion function. A `time_t` is a UNIX timestamp, i.e. the number of seconds since the epoch.
"",
"c++",
"boost",
"unix-timestamp",
"boost-date-time",
""
] |
We're developing a big J2ee e-sales solution. It's got a lot of integrations: CMS, ERP, Mail server etc. All these systems are divided into test and production environments.
We need to deploy our application to our test servers with test configuration and when deployed to our production servers it should use the production configuration. How do we make our application select the correct properties?
The thing we've tried so far is this:
All our property files contain test properties and production properties
```
test.mvxapi.server = SERV100TS
test.mvxapi.username = user
test.mvxapi.password = password
test.mvxapi.port = 6006
test.mvxapi.cono = 600
mvxapi.server = SERV10001
mvxapi.username = user
mvxapi.password = password
mvxapi.port = 6001
mvxapi.cono = 100
```
The Util that reads these properties has a switch: isTest() which prefixes the key with "test."
```
public String getProperty(String property)
{
    return properties.getProperty(prefix + property);
}
```
The switch is set by another property which is created by our build server. When the .EAR is built the script for our production servers injects (input to build.xml) "isProduction=true" into system.properties.
```
<propertyfile file="${buildDir}/system.properties">
<entry key="isProduction" value="${systemType}"/>
</propertyfile>
```
I'm not sure this is the best way to do it. If for some reason "isProduction=false" is committed wrongly to our production environment, all hell breaks loose.
I've read people have properties locally on the server. But we really don't want to have files spread around. We have a cluster of production servers; making sure every server has the right property file doesn't seem fail-safe. | What you want to avoid is having the config file inside the EAR: the problem with this is that you need different EARs for different environments, and changing the config file requires a rebuild.
Rather deploy the *same* EAR to every server but configure each server with a different URL resource. In other words, add a `JNDI` URL resource to all the servers you deploy to that points to the config file for that resource. If you have read-only SVN access to your repo then create the config files on the SVN repo, or any repo you can access via a URL. The cool thing here is that all your configuration is centralized, and thus managing it is easy.
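The prefix mechanism from the question can also be made fail-safe by inverting the switch, so that a missing or wrongly committed flag falls back to production values. A hypothetical sketch, shown in Python for brevity rather than Java:

```python
class ConfigResolver:
    """Look up properties, applying a 'test.' prefix only when explicitly
    running in the test environment, so the default is the production value."""

    def __init__(self, properties, is_test=False):
        self.properties = properties
        self.prefix = "test." if is_test else ""

    def get(self, key):
        return self.properties[self.prefix + key]

props = {
    "test.mvxapi.server": "SERV100TS",
    "mvxapi.server": "SERV10001",
}
print(ConfigResolver(props).get("mvxapi.server"))                # SERV10001
print(ConfigResolver(props, is_test=True).get("mvxapi.server"))  # SERV100TS
```

With this inversion, forgetting the flag entirely yields production settings, which is the safer failure mode the question worries about.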
What I've done (by customizing with spring) is make sure that `JNDI` URL resource optional. So, if it's there, the app will use it, if not, it won't. The app starts up whether it's there or not. That way, even when running with no `JNDI` resource available, the app still works (development environment for example). | You deploy an EAR? Then put the properties needed in JNDI. | How to differentiate between test and production properties in an application? | [
"",
"java",
"jakarta-ee",
""
] |
I have to make a decision regarding generalization vs polymorphism.
Well, the scenario is standard: I want to make my monolithic, interdependent
code more modular, clean, and extensible.
It is still in a stage where the change of design principle is doable,
and, as I look at it, highly desirable.
Should I introduce pure virtual base classes (interfaces) or templates?
I am aware of the basics regarding the template option:
less indirection, better performance, more compiling
but
no late binding, and so on.
The STL does not use much (or any?) inheritance, and Boost doesn't either.
But I think those are aimed to be really small basic tools that are used every
2 lines of code by the programmer.
I consider the inheritance and late-binding approach to be more sensible for
plug-in-style big pieces of code and functionality that should be interchangeable,
updatable, etc. after deployment or even during runtime.
Well, my scenario lies somewhere in between.
I don't need to exchange pieces of code on the fly at runtime; compile time is fine.
Usually it is also a very central and frequently used piece of functionality;
it is not logically separable into big blocks.
This lets me tend somewhat to the template solution.
To me it also looks somewhat cleaner.
Are there any big bad implications? Are interfaces still THE way
to go? When are they not?
Which complies more with standard c++ style?
I know this is bordering on subjective, but I am really interested in
some experiences. I don't own a copy of Scott Meyers' *Effective C++*,
so I set my hopes on you guys :) | You're basically right, dynamic polymorphism (inheritance, virtuals) is generally the right choice when the type should be allowed to change at runtime (for example in plugin architectures). Static polymorphism (templates) is a better choice if the type should only change at compile-time.
The only potential downside to templates is that they generally have to be defined in headers (which means more code gets #included), and this often leads to slower compile times.
But design-wise, I can't see any problems in using templates when possible.
> Which complies more with standard c++
> style?
Depends on what "standard C++ style" is. The C++ standard library uses a bit of everything. The STL uses templates for everything, the slightly older IOStreams library uses inheritance and virtual functions, and the library functions inherited from C uses neither, of course.
These days, templates are by far the most popular choice though, and I'd have to say that is the most "standard" approach. | Properties of classic object-oriented polymorphism:
* objects are bound at run-time; this is more flexible, but also consumes more resources (CPU) at run-time
* strong typing brings somewhat more type safety, but the need to `dynamic_cast` (and its potential to blow up in a customer's face) might easily compensate for that
* probably more widely known and understood, but "classical" deep inheritance hierarchies seem horrendous to me
Properties of compile-time polymorphism by template:
* compile-time binding allows more aggressive optimizations, but prevents run-time flexibility
* duck-typing might seem more awkward, but failures are usually compile-time failures
* can sometimes be harder to read and understand; without concepts, compiler diagnostics might sometimes become enraging
Note that there is no need to decide for either one. You can freely mix and mingle both of them (and many other idioms and paradigms). Often, this leads to very impressive (and expressive) code. (See, for example, things like type erasure.) To get an idea what's possible through clever mixing of paradigms, you might want to browse through Alexandrescu's "Modern C++ Design". | c++ standard practice: virtual interface classes vs. templates | [
"",
"c++",
"templates",
"coding-style",
"polymorphism",
""
] |
Let's say I have a `Type` called `type`.
I want to determine if I can do this with my type (without actually doing this to each type):
If `type` is `System.Windows.Point` then I could do this:
```
Point point1 = new Point();
```
However if `type` is `System.Environment` then this will not fly:
```
Environment environment1 = new Environment(); //wrong
```
So if I am iterating through every visible type in an assembly how do I skip all the types that will fail to create an instance like the second one? I'm kind of new to reflection so I'm not that great with the terminology yet. Hopefully what I'm trying to do here is pretty clear. | `static` classes are declared `abstract` and `sealed` at the IL level. So, you can check `IsAbstract` property to handle both `abstract` classes and `static` classes in one go (for your use case).
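As a cross-language aside (hypothetical, since the question is about .NET): the analogous "skip what can't be constructed" filter in Python also boils down to an abstractness check:

```python
import abc
import inspect

class Base(abc.ABC):
    @abc.abstractmethod
    def run(self): ...

class Concrete(Base):
    def run(self):
        return "ok"

def instantiable(candidates):
    """Keep only the classes that can actually be constructed."""
    return [c for c in candidates
            if inspect.isclass(c) and not inspect.isabstract(c)]

print([c.__name__ for c in instantiable([Base, Concrete])])  # ['Concrete']
```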
However, `abstract` classes are not the only types you can't instantiate directly. You should check for things like interfaces ([without the `CoClass` attribute](https://stackoverflow.com/questions/1093536/how-does-the-c-compiler-detect-com-types)) and types that don't have a constructor accessible by the calling code. | ```
type.IsAbstract && type.IsSealed
```
This would be a sufficient check for C# since an abstract class cannot be sealed or static in C#. However, you'll need to be careful when dealing with CLR types from other languages. | Determine if a type is static | [
"",
"c#",
".net",
"reflection",
"types",
"instantiation",
""
] |
I have two arrays: `$Forms` and `$formsShared`.
```
<?php foreach ($Forms as $r): ?>
$("#shareform<?=$r['Form']['id'];?>").hide();
$(".Share<?=$r['Form']['id'];?>").click(function () {
$("#shareform<?=$r['Form']['id'];?>").toggle("show");
});
<?php endforeach; ?>
```
Currently, I have this hide and toggle function for each `Form` in the `$Forms` array. I want these functions to be enabled for the forms in the `$formsShared` array also.
If I add another for loop for `$formsShared`, like this:
```
<?php foreach ($formsShared as $r): ?>
$("#shareform<?=$r['Form']['id'];?>").hide();
$(".Share<?=$r['Form']['id'];?>").click(function () {
$("#shareform<?=$r['Form']['id'];?>").toggle("show");
});//.Share click
<?php endforeach; ?>
```
I achieve what I want, but it seems to be a repetition of the same code.
Is there any way in CakePHP to loop through two arrays in a single `foreach` loop?
**Solution:**
array\_merge() only accepts parameters of type array. So use typecasting to merge other types.
```
<?php foreach (array_merge((array)$Forms,(array)$formsShared) as $r): ?>
$("#shareform<?=$r['Form']['id'];?>").hide();
$(".Share<?=$r['Form']['id'];?>").click(function () {
$("#shareform<?=$r['Form']['id'];?>").toggle("show");
});//.Share click
<?php endforeach;?>
``` | It sounds like you don't want to loop over two lists *at the same time*. You want to loop over two lists separately, executing the same code for each element in either list. So why not concatenate the lists:
```
foreach (array_merge($Forms, $FormsShared) as $r)
// do stuff
``` | Instead of repeating so much JavaScript, you could make use of jQuery's [selectors](http://docs.jquery.com/Selectors). Instead of using unique `class`es on your form elements (or in addition to), use a generic class name like `form-field`. Then you can add a `click` event to all of them at the same time:
```
$('.form-field').click(function () {
// Without knowing your HTML structure, I can't make this accurate, but maybe
// this will work, assuming #shareform... is the parent <form> element:
$(this).closest('form').toggle('show');
});
```
This JavaScript would only need to be output once and would apply to all elements with `class="form-field"`. Note that you can have multiple classes on an element too: `class="Share123 form-field"` gives the element class `Share123` and class `form-field`. | Is it possible to loop through all the items of two arrays using foreach? | [
"",
"php",
"cakephp",
"for-loop",
""
] |
I am trying to query against an IList<string> property on one of my domain classes using NHibernate. Here is a simple example to demonstrate:
```
public class Demo
{
public Demo()
{
this.Tags = new List<string>();
}
public virtual int Id { get; set; }
public virtual string Name { get; set; }
public virtual IList<string> Tags { get; set; }
}
```
Mapped like this:
```
<class name="Demo">
<id name="Id" />
<property name="Name" />
<bag name="Tags">
<key column="DemoId"/>
<element column="Tag" type="String" />
  </bag>
</class>
```
And I am able to save and retrieve just fine. Now to query for instances of my domain class where the Tags property contains a specified value:
```
var demos = this.session.CreateCriteria<Demo>()
.CreateAlias("Tags", "t")
.Add(Restrictions.Eq("t", "a"))
.List<Demo>();
```
Results in the error: collection was not an association: Demo.Tags
```
var demos = (from d in this.session.Linq<Demo>()
where d.Tags.Contains("a")
select d).ToList();
```
Results in the error: Object reference not set to an instance of an object.
```
var demos = this.session.CreateQuery("from Demo d where :t in elements(d.Tags)")
.SetParameter("t", "a")
.List<Demo>();
```
Works fine, but as my real domain class has many many properties, and I am building a complicated dynamic query, doing ugly string manipulation is not my first choice. I'd much rather use ICriteria or Linq. I have a user interface where many different possible search criteria can be entered. The code that builds up the ICriteria right now is dozens of lines long. I'd really hate to turn that into HQL string manipulation. | So because of limitations of the Criteria API, I decided to bend my domain classes to fit.
I created an entity class for the Tag. I couldn't even create it as a value object. It had to have its own id.
I feel dirty now. But being able to construct a dynamic query without resorting to string manipulation was more important to me than staying true to the domain. | As documented here:
### [17.1.4.1. Alias and property references](http://nhforge.org/doc/nh/en/index.html#querysql-aliasreferences)
we can use:
```
...
A collection key {[aliasname].key} ORGID as {coll.key}
The id of an collection {[aliasname].id} EMPID as {coll.id}
The element of an collection {[aliasname].element} XID as {coll.element}
...
```
There is a small bug in the doc: instead of `".element"` we have to use **`".elements"`**.
```
var demos = this.session.CreateCriteria<Demo>()
.CreateAlias("Tags", "t")
// instead of this
// .Add(Restrictions.Eq("t", "a"))
// we can use the .elements keyword
.Add(Restrictions.Eq("t.elements", "a"))
.List<Demo>();
``` | NHibernate How do I query against an IList<string> property? | [
"",
"c#",
"linq",
"nhibernate",
"hql",
"icriteria",
""
] |
The company I work for is writing software for financial organizations and this often involves doing some complex calculations. Right now, a functional designer will just write down the calculation and then someone more technical (me) has to translate this to code.
However, I realize that this should probably be done in an easier way. Isn't there already some tool in which you can just design your functions and calculations in an easy-to-understand visual way, which would compile to a .NET assembly that can be used within my projects? That way, the functional designer just has to "draw" the formula in this tool, generate the code/assembly and pass that on to me.
(I must say, for this purpose we're already using some graphical design tool, but it's too limited for the more complex calculations.)
[These calculations are related to loans, mortgages and insurances, often doing complex calculations to predict the profitability of a certain product compared to others.] | I've used tools that try to do formulae via designers; they are usually terrible.
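A restricted expression syntax is one such alternative: designers write plain formulas as text, and a small interpreter evaluates arithmetic and named variables but nothing else. Here is a minimal, hypothetical sketch using Python's standard `ast` module; the formula and variable names are made up:

```python
import ast
import operator

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.Pow: operator.pow, ast.USub: operator.neg}

def evaluate(formula, variables):
    """Evaluate an arithmetic formula over named variables, nothing else."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.Name):
            return variables[node.id]
        # Anything else (calls, attributes, subscripts...) is rejected.
        raise ValueError("disallowed expression: %r" % node)
    return walk(ast.parse(formula, mode="eval"))

print(evaluate("principal * months - fee",
               {"principal": 100, "months": 12, "fee": 50}))  # 1150
```

Because only the whitelisted node types are walked, a formula like `__import__('os')` is rejected instead of executed, which matters when non-programmers author the formulas.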
Perhaps a better idea is to use a user-friendly syntax for the formula? I've seen this done with Python in the past (IronPython would be a good embedded language in .NET). Or just parse a string in an expected syntax yourself? (not hugely complicated) | Build a Domain Specific Language package for Visual Studio. It's a well-documented and very common task. :)
<http://msdn.microsoft.com/en-us/library/bb126235.aspx> | Visual Development Tools | [
"",
"c#",
""
] |
How can I bind a DataSource with some selected enum values?
**My Enum:**
```
public enum Filters : byte
{
Filter1 = 1,
Filter2 = 2,
Filter3 = 4,
Filter4 = 8,
Filter5 = 16
}
```
**Selected values:**
```
public Filters SelectedFilters = Filters.Filter1 | Filters.Filter4;
```
How can I bind the **SelectedFilters** variable as a datasource? | Using:
```
public enum Filters : byte
{
Filter1 = 1,
Filter2 = 2,
Filter3 = 4,
Filter4 = 8,
Filter5 = 16
}
```
And selecting some values:
```
public Filters SelectedFilters = Filters.Filter1 | Filters.Filter4;
```
I created a method that can solve my own question:
```
public static List<T> EnumToList<T>(Enum someEnum)
{
List<T> list = new List<T>();
foreach (var it in Enum.GetValues(someEnum.GetType()))
{
if ((Convert.ToByte(it) & Convert.ToByte(someEnum)) != 0)
{
list.Add((T)it);
}
}
return list;
}
myRepeater.DataSource = EnumToList<Filters>(SelectedFilters);
``` | If you want to be able to have the Text and Value of the DropDownList separate, use a Dictionary like this:
```
public static Dictionary<int, string> EnumToList(Type t)
{
Dictionary<int, string> list = new Dictionary<int, string>();
foreach (var v in Enum.GetValues(t))
{
list.Add((int)v, Enum.GetName(t, (int)v));
}
return list;
}
```
To use it, in the case of a LinqDataSource Selecting event:
```
protected void ldsMyTypes_Selecting(object sender, LinqDataSourceSelectEventArgs e)
{
e.Result = EnumToList(typeof(Enums.FeedTypes));
}
``` | Binding a datasource with some Enum values | [
"",
"c#",
"enums",
""
] |
Considering that everyone is always worried about user data (and rightly so), would it be sufficient to simply loop through each external array when you get it and apply mysql\_real\_escape\_string()?
I'm curious whether this is a bad idea.
Something like:
```
function getExternalData($type='GET')
{
$type = strtoupper($type);
    $data = $GLOBALS['_' . $type]; // superglobals cannot be built with variable variables inside a function
    $clean = array();
    foreach($data as $key => $value)
{
$clean[$key] = mysql_real_escape_string($value);
}
return $clean;
}
```
That would make all that data safe to use in databases. But what are the cons of doing it this way? | The main con is that if you have to process the input (for example, to parse markup), you'll have to unescape the input and then remember to re-escape it. Also, it's quite inefficient. Query placeholders are a very good way to prevent SQL injection.
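To make the placeholder point concrete, here is a neutral sketch with Python's sqlite3 (the idea is driver-independent): bound parameters keep data out of the SQL entirely, so no escaping pass is needed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# Hostile input travels as a bound parameter; it is never spliced into the SQL.
hostile = "x'); DROP TABLE users; --"
conn.execute("INSERT INTO users (name) VALUES (?)", (hostile,))

rows = conn.execute("SELECT name FROM users").fetchall()
print(rows)  # [("x'); DROP TABLE users; --",)]
```

The driver stores the hostile string verbatim and the table survives, with no escaping or unescaping anywhere in application code.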
As for sanitization itself (not only for SQL) you should take a look at the [Filter extension](http://docs.php.net/manual/en/book.filter.php), available by default in PHP 5.2 and in PECL for 5.1. | Applying `mysql_real_escape_string` on all superglobal variables convey the impression that you either want to use them exclusively in MySQL queries or that you have no clue what `mysql_real_escape_string` is good for. | Looping Through GET, POST, and COOKIE to Sanitize? | [
"",
"php",
"mysql",
"validation",
"sanitization",
""
] |
I have this code that prints out columns in my DB and adds a column "Profit" for me.
Distance is calculated in a complex way and so is done in a loop, while the transformation of distance to "profit" is done as it is printed out.
What I wish to do is print them out in descending order of "profit". I believe (but do not know) that the best way would be to store them in an array and "sort them in there" then print them out from there.
How do I determine what row of the array to stick them in?
How do I sort in an array?
How do I loop through the array so I can print them back out?
```
//display results
// now we retrieve the routes from the db
$query = "SELECT * FROM routes ORDER BY `id`;";
// run the query. if it fails, display error
$result = @mysql_query("$query") or die('<p class="error">There was an unexpected error grabbing routes from the database.</p>');
?>
<tr>
<td style="background: url(http://www.teamdelta.byethost12.com/barbg.jpg) repeat-x top;">
<center><b><font color="#F3EC84">»Matches«</font></b></center>
</td>
</tr>
<tr>
<td style="background: #222222;">
</font>
<table border="0" width="100%"><tr>
<td width="10%"><b><center><b>Player</b></center></b></td>
<td width="10%"><center><b>Base</b></center></td>
<td width="10%"><b>Location</b></td>
<td width="5%"><b>Econ</b></td>
<td width="10%"><b>Distance</b></td>
<td width="10%"><center><b>Profit cred./h</b></center></td>
<td width="40%"><b>Comment</b></td>
<td width="5%"><align="right"><b>Delete</b></align></td>
</tr>
<?
// while we still have rows from the db, display them
while ($row = mysql_fetch_array($result)) {
$dname = stripslashes($row['name']);
$dbase = stripslashes($row['base']);
$dlocation = stripslashes($row['location']);
$dx = stripslashes($row['x']);
$dy = stripslashes($row['y']);
$dgalaxy = stripslashes($row['galaxy']);
$dplanet = stripslashes($row['planet']);
$dcomment = stripslashes($row['comment']);
$did = stripslashes($row['id']);
$decon = stripslashes($row['econ']);
$distance = -1 ;//default
//distance calc
if($dgalaxy == $galaxy)
    {//same galaxy
        if(($dx == $x) && ($dy == $y))
        {//same system
            if ((floor($planet/10)*10) == (floor($dplanet/10)*10))
            {// intra-planetary (lunar)
                $distance = abs(fmod($planet,10)-fmod($dplanet,10))*0.1;
            }
            else
            {// interplanetary
                $distance = abs((floor($planet/10)*10)-(floor($dplanet/10)*10))*0.2;
            }
        }else
        {//interstellar
            $distance = round(Sqrt(pow(($dx-$x),2)+pow(($dy-$y),2)));//interstellar
        }
}
else
    {//intergalactic
if ((floor($galaxy/10)*10) == (floor($dgalaxy/10)*10))
{//intra galactic cluster
$distance = abs(($galaxy-$dgalaxy))*200;
}
else
{//inter galactic cluster
if ($galaxy < $dgalaxy)
{//anti clockwise inter galactic cluster
$distance = (((9-fmod($galaxy,10))*200)+2000+(fmod($dgalaxy,10)*200));
}
else
{//clockwise inter galactic cluster
            $distance = (((fmod($galaxy,10))*200)+2000+((9-fmod($dgalaxy,10))*200));
}
}
}
echo('<tr>
<td width=\'20px\'><center>('.$dname.')</center></td>
<td><center>'.$dbase.'</center></td>
<td><a href="http://delta.astroempires.com/map.aspx?loc='.$dlocation.'">'.$dlocation.'</a></td>
<td>'.$decon.'</td><td>'.$distance.' </td>
<td>'.round(Sqrt(min($decon,$econ))*(1+Sqrt($distance)/75+Sqrt($players)/10)).'</td>
<td>['.$dcomment.']</td>
<td><a href=deleterouteconfirm.php?id='.$did.'>Delete</a></td>
</tr>');
}
?></table><!--display results table-->
``` | I think the easiest solution to implement is going to be a double-pass over your database results.
The first pass will generate the "distance" and "profit" values for each row and store these rows into an array that we'll sort.
The second pass will simply loop over the array created in the first pass and display the rows, after they have been properly sorted and escaped for output.
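Stripped of the PHP and the game math, the two-pass structure looks like this (a Python outline with made-up numbers and a simplified profit formula):

```python
rows = [
    {"name": "a", "econ": 30, "distance": 400},
    {"name": "b", "econ": 90, "distance": 100},
    {"name": "c", "econ": 50, "distance": 900},
]

# Pass 1: attach the derived "profit" column to every row.
for row in rows:
    row["profit"] = round(row["econ"] ** 0.5 * (1 + row["distance"] ** 0.5 / 75))

# Pass 2: sort descending on the derived column, then render.
ordered = sorted(rows, key=lambda r: r["profit"], reverse=True)
print([r["name"] for r in ordered])  # ['b', 'c', 'a']
```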
```
<?php
//display results
// now we retrieve the routes from the db
$query = "SELECT * FROM routes ORDER BY `id`;";
// run the query. if it fails, display error
$result = @mysql_query( "$query" ) or die( '<p class="error">There was an unexpected error grabbing routes from the database.</p>' );
?>
<tr>
<td
style="background: url(http://www.teamdelta.byethost12.com/barbg.jpg) repeat-x top;">
<center><b><font color="#F3EC84">»Matches«</font></b></center>
</td>
</tr>
<tr>
<td style="background: #222222;"></font>
<table border="0" width="100%">
<tr>
<td width="10%"><b>
<center><b>Player</b></center>
</b></td>
<td width="10%">
<center><b>Base</b></center>
</td>
<td width="10%"><b>Location</b></td>
<td width="5%"><b>Econ</b></td>
<td width="10%"><b>Distance</b></td>
<td width="10%">
<center><b>Profit cred./h</b></center>
</td>
<td width="40%"><b>Comment</b></td>
<td width="5%"><align="right"><b>Delete</b></align></td>
</tr>
<?
// while we still have rows from the db, display them
$resultSet = array();
while ( $row = mysql_fetch_array( $result ) )
{
$dname = stripslashes( $row['name'] );
$dbase = stripslashes( $row['base'] );
$dlocation = stripslashes( $row['location'] );
$dx = stripslashes( $row['x'] );
$dy = stripslashes( $row['y'] );
$dgalaxy = stripslashes( $row['galaxy'] );
$dplanet = stripslashes( $row['planet'] );
$dcomment = stripslashes( $row['comment'] );
$did = stripslashes( $row['id'] );
$decon = stripslashes( $row['econ'] );
$distance = -1; //default
//distance calc
if ( $dgalaxy == $galaxy )
    { //same galaxy
if ( ( $dx == $x ) && ( $dy == $y ) )
        { //same system
if ( ( floor( $planet / 10 ) * 10 ) == ( floor( $dplanet / 10 ) * 10 ) )
            { // intra-planetary (lunar)
                $distance = abs( fmod( $planet, 10 ) - fmod( $dplanet, 10 ) ) * 0.1;
            } else
            { // interplanetary
                $distance = abs( ( floor( $planet / 10 ) * 10 ) - ( floor( $dplanet / 10 ) * 10 ) ) * 0.2;
}
} else
        { //interstellar
            $distance = round( Sqrt( pow( ( $dx - $x ), 2 ) + pow( ( $dy - $y ), 2 ) ) ); //interstellar
}
} else
    { //intergalactic
if ( ( floor( $galaxy / 10 ) * 10 ) == ( floor( $dgalaxy / 10 ) * 10 ) )
{ //intra galactic cluster
$distance = abs( ( $galaxy - $dgalaxy ) ) * 200;
} else
{ //inter galactic cluster
if ( $galaxy < $dgalaxy )
{ //anti clockwise inter galactic cluster
$distance = ( ( ( 9 - fmod( $galaxy, 10 ) ) * 200 ) + 2000 + ( fmod( $dgalaxy, 10 ) * 200 ) );
} else
{ //clockwise inter galactic cluster
                $distance = ( ( ( fmod( $galaxy, 10 ) ) * 200 ) + 2000 + ( ( 9 - fmod( $dgalaxy, 10 ) ) * 200 ) );
}
}
}
$row['distance'] = $distance;
$row['profit'] = round( Sqrt( min( $decon, $econ ) ) * ( 1 + Sqrt( $distance ) / 75 + Sqrt( $players ) / 10 ) );
$resultSet[] = $row;
}
// Perform custom sort
usort( $resultSet, 'sorter' );
function sorter( $a, $b )
{
if ( $a['profit'] == $b['profit'] ) return 0;
return ( $a['profit'] < $b['profit'] ) ? -1 : 1;
}
// Switch to "descending"
array_reverse( $resultSet );
// Output-escape the values; each row is itself an array, so escape per field
$safeForHtml = array_map( function ( $row ) {
    return array_map( 'htmlspecialchars', $row );
}, $resultSet );
foreach( $safeForHtml as $row )
{
echo ( '<tr>
<td width=\'20px\'><center>(' . $row['name'] . ')</center></td>
<td><center>' . $row['base'] . '</center></td>
<td><a href="http://delta.astroempires.com/map.aspx?loc=' . $row['location'] . '">' . $row['location'] . '</a></td>
<td>' . $row['econ'] . '</td>
<td>' . $row['distance'] . ' </td>
<td>' . $row['profit'] . '</td>
<td>[' . $row['comment'] . ']</td>
<td><a href=deleterouteconfirm.php?id=' . $row['id'] . '>Delete</a></td>
</tr>' );
}
?>
</table>
<!--display results table-->
``` | You are getting your data from MySQL. Why not sort your results directly from the query?
```
$query = "SELECT * FROM routes ORDER BY `profit` DESC, `id`;";
```
**EDIT:** Re-read your question, profit isn't a field, but you might want to populate a table with your profit values instead of re-calculating it each time.
**EDIT 2:** Or, belay your output, calculate your profit, put everything in an array, and then use the following:
```
$resultArray; //Your array with all your rows plus a profit key-value pair.
$sortedArray = array_msort($resultArray, array('profit'=>SORT_DESC));
// array_msort by cagret at gmail dot com
function array_msort($array, $cols)
{
$colarr = array();
foreach ($cols as $col => $order) {
$colarr[$col] = array();
foreach ($array as $k => $row) { $colarr[$col]['_'.$k] = strtolower($row[$col]); }
}
$params = array();
foreach ($cols as $col => $order) {
$params[] =& $colarr[$col];
$params = array_merge($params, (array)$order);
}
call_user_func_array('array_multisort', $params);
$ret = array();
$keys = array();
$first = true;
foreach ($colarr as $col => $arr) {
foreach ($arr as $k => $v) {
if ($first) { $keys[$k] = substr($k,1); }
$k = $keys[$k];
if (!isset($ret[$k])) $ret[$k] = $array[$k];
$ret[$k][$col] = $array[$k][$col];
}
$first = false;
}
return $ret;
}
``` | Sorting in a php array | [
"",
"php",
"arrays",
"sorting",
""
] |
I've seen both exception messages with and without a period. And I can think of some reasons of why both could be good.
* No dot would give you the freedom to add the period or leave it out if you wanted to. Could be useful if the message was going in some sort of a title bar or something.
* With a dot, you would always know that you had a "complete sentence" and it looks more finished.
Which one do you recommend?
Could also be an issue in localized resource strings. Obviously, you can't put a period after everything (would look weird with periods after text on buttons and menu items, etc.). But should you then leave the period out from everything to be consistent and add it later where useful? Or would you rather put a period where it seems fit? For example, after all resource strings and exception messages that are sentences, but not after those that are words. But then, how about very short sentences? Like, "Create a new file", for example. Could maybe leave out periods for strings that were considered actions as well... (Just thinking while I'm typing here...)
Not the most important thing in the world, I know, but small things like this that tend to bug me after a while. I like consistency and to know why I do what I do. Problem is, I'm not sure which one to go for :p | Yes I usually treat the exception messages as full sentences, ending them with a period.
However, the message in the exception is meant for the *developer*, and *not the end user*. It may very well be that the same underlying exception should result in two different messages for the end user, depending on the context in which the exception-throwing method was called.
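One way to honor that split is to keep the developer-facing message on the exception and map exception types to separate user-facing strings. A hypothetical sketch in Python:

```python
class LogTableOverflowError(Exception):
    """Raised when the (hypothetical) log table is full."""

# Developer-facing text lives on the exception; user-facing text is mapped per type.
USER_MESSAGES = {
    LogTableOverflowError: "We could not save your changes. Please try again later.",
}

def user_message(exc):
    return USER_MESSAGES.get(type(exc), "An unexpected error occurred.")

try:
    raise LogTableOverflowError("The log table has overflowed.")
except LogTableOverflowError as e:
    print(str(e))           # The log table has overflowed.
    print(user_message(e))  # We could not save your changes. Please try again later.
```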
You really should show less technical, more user friendly messages to the user. | > **Q. Do you end your exception messages with a period?**
From [Best Practices for Exceptions](https://msdn.microsoft.com/en-us/library/seyhszts%28v=vs.110%29.aspx)† on MSDN in the section "Creating and raising exceptions":
> * Use grammatically correct error messages, **including ending
> punctuation**. Each sentence in a description string of an exception
> should end in a period. For example, "The log table has overflowed.”
> would be an appropriate description string.
And regarding possible feedback to the *user* via the the application user interface, the question includes:
> **...Could also be an issue in localized resource strings.**
The MSDN article referenced above also states:
> * Include a localized description string in every exception. The error
> message that the user sees is derived from the description string of
> the exception that was thrown, and not from the exception class.
Also, from [Exception.Message Property](https://msdn.microsoft.com/en-us/library/system.exception.message%28v=vs.110%29.aspx)† at the beginning of the section "Remarks":
> Error messages target the developer who is handling the exception. The
> text of the Message property should completely describe the error and,
> when possible, should also explain how to correct the error. Top-level
> exception handlers may display the message to end-users, so you should
> ensure that it is grammatically correct and that each sentence of the
> message ends with a period. Do not use question marks or exclamation
> points. If your application uses localized exception messages, you
> should ensure that they are accurately translated.
---
† .NET Framework 4.6 and 4.5 | Do you end your exception messages with a period? | [
"",
"c#",
"exception",
"coding-style",
"message",
""
] |
So, I have a class. It's a useful class. I like a lot. Let's call it `MyUsefulClass`.
`MyUsefulClass` has a public method. Let's call it `processUsefulData(std::vector<int>&)`.
Now suppose `processUsefulData` really does two things and I want to refactor it from this:
```
std::vector<int> MyUsefulClass::processUsefulData(std::vector<int>& data)
{
for (/*...*/)
{
for (/*...*/)
{
// a bunch of statements...
}
}
for (/*...*/)
{
for (/*...*/)
{
// a bunch of other statements...
}
}
return data;
}
```
Now, I want to split these responsibilities and rewrite the code as
```
std::vector<int> MyUsefulClass::processUsefulData(std::vector<int>& data)
{
doProcessA(data, dataMember_);
doProcessB(data, otherDataMember_);
return data;
}
```
So, I don't know if I should make the two helper functions free functions or member functions, and when each would be appropriate. I also don't know if it's better to make them in an anonymous namespace or not. Does anyone know good times to do this? | **Free function / member function**
I would make them free functions if possible (i.e., when they do not need access to the internals of the class). If they work on a set of attributes or need access to other members, then make them member functions.
**Access**
If the code only makes sense in this scope and will not be used from other code, then make it *private*: private if it is a member, or implemented in an unnamed namespace if it is a free function.
If other code will benefit from using the code then publish it in the interface. That means making it protected if it is a member or having the free function accessible through a header in a named namespace (or global namespace). | I generally make helper routines "free" routines in an anonomous namespace if possible. That way I don't complicate the interface (off in the \*.h file) with stuff clients don't need to worry about.
However, you have to be careful that you don't introduce non-reentrancy by doing that. For instance, by modifying global data objects or static locals rather than class members. If you need to do that, you are better off making it a proper class member. | should C++ class "helper functions" be members, free, or anon-namespace free? | [
"",
"c++",
"function",
"oop",
"namespaces",
""
] |
Part of my script is taking values and putting them into a text file delimited by tabs. So I have this:
```
for linesplit in fileList:
for i in range (0, len(linesplit)):
t.write (linesplit[i]+'\t')
```
I get as output in the file what I expect on the first line, but the following lines all start with a \t, like this:
```
value1 value2 value3
value1 value2 value3
value1 value2 value3
```
Also, why don't I need to add a t.write('\n') after the second FOR loop to create the newlines? I would expect the code above to produce one long line of tab-separated values, but it doesn't. If I include the t.write('\n') then the tabs issue is resolved, but I get a double '\n'... | I'm sorry... After I hit submit it dawned on me. My last value already has a \n in it, which causes the newline. | It doesn't produce what you want because the original lines (`linesplit`) contain an end-of-line character (`\n`) that you're not stripping. Insert the following before your second for loop:
```
linesplit = linesplit.strip('\n')
```
That should do the job. | Python write to line flow | [
"",
"python",
"file-io",
""
] |
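As both answers above note, the stray tabs and newlines come from the `\n` already present in each line. The fix can be sketched like this (the sample data is invented):

```python
lines = ["value1\tvalue2\tvalue3\n", "value1\tvalue2\tvalue3\n"]  # as read from a file

rows = []
for line in lines:
    fields = line.rstrip("\n").split("\t")  # drop the trailing newline first
    rows.append("\t".join(fields))          # re-join fields with single tabs

output = "\n".join(rows) + "\n"             # add the newlines back explicitly
print(output)
```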
I am working with a team of developers on a website. The website will be using classes. I am in charge of creating the data access layer for the classes. There is an understanding that all user input will be escaped upon retrieval (from the post or get). Having little control over the input level (unless I personally review everyone's code), I thought it would be cool to throw in escaping on my end as well (right before it hits the database). The problem is that I don't know how to use mysql\_real\_escape\_string without adding even more slashes.
Since user input may very well contain a slash I can't check to make sure there are slashes in it. I might be able to check for all the things that need escaping and make sure they have a slash in front of them, but that doesn't seem like the best way to do it.
Any suggestions? | Have you considered *not* escaping the data until it hits the data access layer? I ask because there are some gotchas with the approach your team is taking:
* If you need to display form data to the user (e.g., to redisplay the form with an error message because some validation failed), you need to de-escape the data (because `'` is not special to HTML) and then re-escape the data (because `<` is special). If you need to display form data to the user *pulled from the database*, you mustn't do that de-escape step (because it was done by the database, when the data was saved), but still must do the HTML escape step. If you make a mistake and do the wrong procedure, you corrupt data or worse introduce security problems.
* You can deal with the different-formats-from-different-sources problem by deciding that all data passed around your app will be escaped. So, your data access layer will re-escape the data upon getting it from the database. But, as different parts of the app need slightly (or completely) different escapes, this quickly leads to a lot of de-escape/re-escape nonsense. Grab the data from the database, escape it, de-escape it, escape it for HTML, output it.
* Your front-end form handling code has to have intimate knowledge of your database. For example, what does `\'` mean to your database? How should a `'` or `\` be escaped, if at all? If you ever change your database engine, or even change its settings, those rules may change. And then you have a bunch of escaping/de-escaping code to find. Missing a single escape/de-escape may lead to SQL injection.
* Alternatively, you can take that knowledge of the database out of the front-end code by having the database layer do a de-escape/escape cycle to convert from your app-standard escape sequence to your database's. But this seems rather silly!
There is another way: Let whichever layer needs the data escaped escape it itself. Data is always passed between layers in raw, unescaped form. So your data access layer does all database escaping. Your HTML output code does all HTML escaping. When you decide you want to generate PDFs, your PDF code does all PDF escaping.
* Now, when you do form output, it's clear what to do: always HTML-escape the data, no matter where it came from. Never run a de-escape.
* There is now no de-escape/escape nonsense, as everything is passed around raw. It is only escaped when necessary.
* Your front-end code doesn't care about the data access layer's implementation. The data access layer stores and returns any arbitrary string.
* You have only *one* place to look in your app to make sure you have no SQL injection problems.
* You can easily make use of database driver features such as placeholders. Then not even your data access layer needs to be aware of each database's escaping requirements; the database driver handles it. | There is no way you could add an automatic decision to escape or not if you don't know if the input has been escaped. You can attempt to analyze it but it will never be good and you will encounter double backslash pairs and such.
Take the decision once that data sent to your access layer should be clean and handle the escaping in one place. If you do, the other developers will not have to worry about it (they probably don't want to anyway) and it will be much easier to move to another database in the future. It will also give you the freedom to move over to prepared statements at any time.
**Edit:** Forgot this:
> Having little control over the input
> level (unless I personally review
> everyone's code)
I think it is worth the pain to have them discover it themselves if you just make it very clear that escaping is something that belongs to the database layer and should not be done elsewhere. | Escaping only what is necessary, is that possible? | [
"",
"php",
"mysql",
""
] |
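The placeholder feature mentioned at the end of the accepted answer is the cleanest way to let the data access layer own all escaping. The idea is language-agnostic; here is a sketch using Python's sqlite3 driver rather than PHP (the table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# Raw, unescaped data is handed to the driver; the placeholder (?)
# lets the driver do the quoting, so no manual escaping is needed.
dangerous = "O'Brien; DROP TABLE users;--"
conn.execute("INSERT INTO users (name) VALUES (?)", (dangerous,))

row = conn.execute("SELECT name FROM users").fetchone()
print(row[0])
```

The stored value round-trips byte-for-byte, with no added slashes and no injection, even though it contains a quote and SQL fragments.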
Does anybody know how I can programmatically check (using C#) whether my program will be able to read / write a particular registry key (specifically: "SOFTWARE\Microsoft\Windows\CurrentVersion\Run")?
I am asking because my program has the option to enable or disable the 'run at startup' behaviour. I want to disable this option if the current user is not allowed to make changes to the registry. Is this key always allowed to be written by the current user, or is there the possibility that it has been locked down? If the latter, how do I check this?
I have seen several conflicting ways of checking registry permissions - but basically I can't find a way to check a specific key before I try to read it. I would rather perform the check before accessing the key than trying to access it and receive an exception.
Any help is much appreciated.
Tom | The [RegistryPermission](http://msdn.microsoft.com/en-us/library/system.security.permissions.registrypermission.aspx) class governs the security permissions around reg keys. To check if you may have write access to a permission you use it in the following manner:
```
RegistryPermission perm1 = new RegistryPermission(RegistryPermissionAccess.Write, @"SOFTWARE\Microsoft\Windows\CurrentVersion\Run");
```
You would then use the "Demand" method in a try/catch and return on failure (the raising of a security exception). On success you'd carry on and perform your update. Although this isn't quite what you want, a check on permissions before access, it is the accepted way of ensuring you have the permissions you need before you operate on the keys. In a fully structured manner this would equate to:
```
try
{
RegistryPermission perm1 = new RegistryPermission(RegistryPermissionAccess.Write, @"SOFTWARE\Microsoft\Windows\CurrentVersion\Run");
perm1.Demand();
}
catch (System.Security.SecurityException ex)
{
return;
}
//Do your reg updates here
```
**EDIT:** Thinking on what I mentioned in the comment, here are extension methods to the RegistryPermission class for permission checks:
```
using System.Security.Permissions;
using System.Security;
public static class RegistryExtensions
{
public static bool HavePermissionsOnKey(this RegistryPermission reg, RegistryPermissionAccess accessLevel, string key)
{
try
{
RegistryPermission r = new RegistryPermission(accessLevel, key);
r.Demand();
return true;
}
catch (SecurityException)
{
return false;
}
}
public static bool CanWriteKey(this RegistryPermission reg, string key)
{
try
{
RegistryPermission r = new RegistryPermission(RegistryPermissionAccess.Write, key);
r.Demand();
return true;
}
catch (SecurityException)
{
return false;
}
}
public static bool CanReadKey(this RegistryPermission reg, string key)
{
try
{
RegistryPermission r = new RegistryPermission(RegistryPermissionAccess.Read, key);
r.Demand();
return true;
}
catch (SecurityException)
{
return false;
}
}
}
``` | One thing you should know about permissions is that they are *volatile*. That means you could do your security check on the registry key, attempt to add your value only if the check passes, and then *still fail* with an insufficient access exception because the permissions changed in between when you made the check and when you acted on the results. This is possible even if they are consecutive statements in your program.
Granted security permissions tend to be relatively stable, but the chance still exists. This means that you *must* have code to handle the security exception, and if you have to do that anyway there's not really any point in making the check in the first place. Instead, put your time into making your exception handler a little bit better.
That said, "boo" to any app that wants to run something at start-up. YAGNI. | How do I check whether a user is allowed to read / write a particular registry key? | [
"",
"c#",
"windows",
"permissions",
"registry",
""
] |
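The pattern both answers rely on (attempt the privileged operation inside a try/catch and treat the security exception as "no access") generalizes beyond the registry. A sketch of the same idea in Python, with file permissions standing in for registry ACLs; the path is invented:

```python
import os
import tempfile

def can_write(path):
    """EAFP-style probe: try to open for append; an OSError means no access.
    As the second answer warns, the result is volatile: permissions can
    change between this check and the real write, so keep the handler."""
    try:
        with open(path, "a"):
            return True
    except OSError:
        return False

probe = os.path.join(tempfile.mkdtemp(), "probe.txt")
print(can_write(probe))
```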
I am in the process of setting up user action logging for my PHP site.
Would you recommend storing them in MySQL or in text/log files? | Depends on what you want to do with those, I'd say:
* If you need to get data from the logs, storing them in MySQL might help
* If you only need to keep some data you almost never use (but need in case something illegal is done on your site, or the like), a file might well be enough
To not slow things down too much, you can also use both (I've used that on some websites with a bit of traffic, where it wouldn't have been wise to store data in the DB immediately):
* during the day, store the logs in a file
* and once a day (or once an hour, you get the idea), use a batch to parse those files and put the data in the database
This way, you don't insert data in the DB all the time; and you can (provided a day or an hour has passed) do all the queries you need | I would recommend using something like [Zend\_Log](http://zendframework.com/manual/en/zend.log.html) to abstract from the actual 'physical' logging. That way you can always change backends later very easily, in case your situation changes for some reason or another. | PHP logs - mysql vs file | [
"",
"php",
"logging",
""
] |
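The batch approach described in the accepted answer (append to a flat file during the day, then bulk-load into the database periodically) can be sketched like this; Python with sqlite3 stands in for PHP/MySQL here, and the file name and record format are invented:

```python
import csv
import os
import sqlite3
import tempfile

# During the day: append log records to a flat file (cheap, no DB round-trip).
log_path = os.path.join(tempfile.mkdtemp(), "actions.log")
with open(log_path, "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["alice", "login"])
    writer.writerow(["bob", "logout"])

# Once a day (or hour): parse the file and bulk-insert into the database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (user TEXT, action TEXT)")
with open(log_path, newline="") as f:
    conn.executemany("INSERT INTO log VALUES (?, ?)", csv.reader(f))

count = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(count)
```

All queryable history ends up in the database, but the hot path only ever touches the file.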
I would like to find the date stamp of monday, tuesday, wednesday, etc. If that day hasn't come this week yet, I would like the date to be this week, else, next week. Thanks! | See [`strtotime()`](http://php.net/strtotime)
```
strtotime('next tuesday');
```
You could probably find out if you have gone past that day by looking at the week number:
```
$nextTuesday = strtotime('next tuesday');
$weekNo = date('W');
$weekNoNextTuesday = date('W', $nextTuesday);
if ($weekNoNextTuesday != $weekNo) {
//past tuesday
}
``` | I know it's a bit of a late answer but I would like to add my answer for future references.
```
// Create a new DateTime object
$date = new DateTime();
// Modify the date it contains
$date->modify('next monday');
// Output
echo $date->format('Y-m-d');
```
The nice thing is that you can also do this with dates other than today:
```
// Create a new DateTime object
$date = new DateTime('2006-05-20');
// Modify the date it contains
$date->modify('next monday');
// Output
echo $date->format('Y-m-d');
```
To make the range:
```
$monday = new DateTime('monday');
// clone start date
$endDate = clone $monday;
// Add 7 days to start date
$endDate->modify('+7 days');
// Increase with an interval of one day
$dateInterval = new DateInterval('P1D');
$dateRange = new DatePeriod($monday, $dateInterval, $endDate);
foreach ($dateRange as $day) {
echo $day->format('Y-m-d')."<br />";
}
```
### References
PHP Manual - [DateTime](http://php.net/manual/en/book.datetime.php)
PHP Manual - [DateInterval](http://php.net/manual/en/class.dateinterval.php)
PHP Manual - [DatePeriod](http://php.net/manual/en/class.dateperiod.php)
PHP Manual - [clone](http://php.net/manual/en/language.oop5.cloning.php) | Get the date of next monday, tuesday, etc | [
"",
"php",
"datetime",
""
] |
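The "this week if the day hasn't passed yet, otherwise next week" logic from the question can also be computed arithmetically instead of by string parsing. A sketch in Python rather than PHP (Monday is 0 in Python's weekday numbering; treating today itself as "this week" is an assumption):

```python
import datetime

def upcoming(weekday, today=None):
    """Date of the next occurrence of `weekday` (Mon=0..Sun=6), today included."""
    today = today or datetime.date.today()
    days_ahead = (weekday - today.weekday()) % 7
    return today + datetime.timedelta(days=days_ahead)

wednesday = datetime.date(2024, 1, 3)   # a Wednesday
print(upcoming(4, wednesday))           # the Friday of the same week
print(upcoming(0, wednesday))           # the Monday of the following week
```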
I have a **Java webserver (no standard software ... self-written)**. Everything seems to work fine, but when I call a page that contains pictures, those pictures are not displayed. Do I have to send the images over the output stream to the client myself? Am I missing an extra step?
As there is too much code to post it here, here is a little outline what happens or is supposed to happen:
```
1. client logs in
2. client gets a session id and so on
3. the client is connected with an output stream
4. we build the response with the HTML code for a certain 'GET'-request
5. look what the GET-request is all about
6. send html response || file || image (not working yet)
```
So much for the basic outline ...
**It sends css-files and stuff, but I still have a problem with images!
Does anybody have an idea? How can I send images from a server to a browser?
Thanks.**
I check requests from the client and responses from the server with Charles. The server sends files (like css or js) fine, but not images: though the status is "200 OK", the transfer-encoding is *chunked* ... I have no idea what that means!? Does anybody know?
EDIT:
Here is the file-reading code:
```
try{
File requestedFile = new File( file );
PrintStream out = new PrintStream( this.getHttpExchange().getResponseBody() );
// the file is sent:
InputStream in = new FileInputStream( requestedFile );
byte content[] = new byte[(int)requestedFile.length()];
in.read( content );
try{
// some header stuff
out.write( content );
}
catch( Exception e ){
e.printStackTrace();
}
in.close();
if(out!=null){
out.close();
System.out.println( "FILE " + uri + " SEND!" );
}
}
catch ( /*all exceptions*/ ) {
// catch it ...
}
``` | > Do I have to send those external files
> or images with the output stream to
> the client?
The client will make separate requests for those files, which your server will have to serve. However, those requests can arrive over the same [persistent connection](http://en.wikipedia.org/wiki/HTTP_persistent_connection) (a.k.a. keepalive). The two most likely reasons for your problem:
1. The client tries to send multiple requests over a persistent connection (which is the default with HTTP 1.1) and your server is not handling this correctly. The easiest way to avoid this is to send a `Connection: close` header with the response.
2. The client tries to open a separate connection and your server isn't handling it correctly.
**Edit:**
There's a problem with this line:
```
in.read( content );
```
This method is not guaranteed to fill the array; it will read an arbitrary number of bytes and return that number. You have to use it in a loop to make sure everything is read. Since you have to do a loop anyway, it's a good idea to use a smaller array as a buffer to avoid keeping the whole file in memory and running into an OutOfMemoryError with large files. | Your browser will send separate `GET image.png HTTP 1.1` requests to your server, you should handle these file-gets too. There is no good way to embed and image browser-independent in HTML, only the `<img src="data:base64codedimage">` protocol handler is available in some browsers.
As you create your HTML response, you can include the contents of the external js/css files directly between `<script></script>` and `<style></style>` tags.
**Edit:** I advise to use [Firebug](http://getfirebug.com/) for further diagnostics. | No images displayed when website called from self written webserver | [
"",
"java",
"image",
"file",
"webserver",
""
] |
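The accepted answer's point about `in.read(content)` applies to any stream copy: `read()` may return fewer bytes than requested, so it must be called in a loop. A minimal sketch of that fix (the class name and buffer size are arbitrary choices, not from the question's code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamCopy {
    // Loop until read() signals end-of-stream (-1); a single read()
    // call is not guaranteed to fill the buffer.
    public static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
        }
    }

    // Convenience wrapper for a quick self-check.
    public static byte[] copyOf(byte[] data) {
        try {
            ByteArrayOutputStream sink = new ByteArrayOutputStream();
            copy(new ByteArrayInputStream(data), sink);
            return sink.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(copyOf("example image bytes".getBytes()).length);
    }
}
```

Copying through a small buffer also avoids holding a whole file in memory, which matters for large images.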