Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have been looking around for solutions, and tried to implement what is often suggested, but I am not managing to horizontally center a div within another div.
With my CMS I want to show up to four info blocks towards the bottom of the page. So, I am trying to put between zero and four divs within two container divs. The four divs get a width of 19%, 33% or 49% of the available width dependent on how many divs are shown on a page. The inner container is supposed to be horizontally centered within the outer container. Thereby, the group of up to four divs should of course end up in the horizontal center. Since the inner container is the same width as the main content plus two columns above it, which are centered, it should appear in line vertically. The outer container takes the full page width and has a background image.
My code is now as follows:
```
<!-- BEGIN USER MODULES -->
<tr>
<td align="left" valign="top">
<?php if ( $this->countModules( 'user1 and user2' ) || $this->countModules( 'user1 and user3' ) || $this->countModules( 'user1 and user4' ) || $this->countModules( 'user2 and user3' ) || $this->countModules( 'user2 and user4' ) || $this->countModules( 'user3 and user4' ) ) : ?>
<style type="text/css">#user1, #user2, #user3, #user4 { width:49%; }</style>
<?php endif; ?>
<?php if ( $this->countModules( 'user1 and user2 and user3' ) || $this->countModules( 'user1 and user2 and user4' ) || $this->countModules( 'user1 and user3 and user4' ) || $this->countModules( 'user2 and user3 and user4' ) ) : ?>
<style type="text/css">#user1, #user2, #user3, #user4 { width:33%; }</style>
<?php endif; ?>
<?php if ( $this->countModules( 'user1 and user2 and user3 and user4' ) ) : ?>
<style type="text/css">#user1, #user2, #user3, #user4 { width:19%; }</style>
<?php endif; ?>
<?php if ($this->countModules( 'user1 or user2 or user3 or user4' )) : ?><div id="wrap1234"><div id="user1234">
<?php if($this->countModules('user1')) : ?><div id="user1" class="module_bc"><jdoc:include type="modules" name="user1" style="xhtml" /></div><?php endif; ?>
<?php if($this->countModules('user2')) : ?><div id="user2" class="module_bc"><jdoc:include type="modules" name="user2" style="xhtml" /></div><?php endif; ?>
<?php if($this->countModules('user3')) : ?><div id="user3" class="module_bc"><jdoc:include type="modules" name="user3" style="xhtml" /></div><?php endif; ?>
<?php if($this->countModules('user4')) : ?><div id="user4" class="module_bc"><jdoc:include type="modules" name="user4" style="xhtml" /></div><?php endif; ?>
</div><div class="clear"></div></div><?php endif; ?>
</td>
</tr>
```
In my style sheets I have this:
```
#wrap1234 { background:transparent url(images/header_bg.png) no-repeat scroll 0 0; border-bottom:1px solid #444444; border-top:1px solid #444444; margin:25px 0 10px; padding:5px 0; text-align:center; align:center;}
#user1234 { width:1420px; margin-left:auto; margin-right:auto; }
#user1, #user2, #user3, #user4 { float:left; margin:5px 0; padding:5px 0; text-align:left; }
```
The table and body in which all this is placed are as follows, skipping everything that falls outside of the direct hierarchical line:
```
<body><div id="wrapper_main"><center><table border="0" cellpadding="0" cellspacing="0" width="<?php echo $this->params->get('width'); ?>" id="main_table"><tbody>
```
The css of the body and table is below. Using Firebug I cannot find anything in there that makes a difference when switched off.
```
html, body, form, fieldset{margin:0; padding:0;}
body {background:#222222 none repeat scroll 0 0; color:#777777; font-family:Helvetica,Tahoma,sans-serif; font-size:0.72em; height:100%; text-align:center;}
#wrapper_main {background:#FFFFFF url(images/wrapper_main_bg.gif) repeat-x scroll left top; border-bottom:2px solid #CCCCCC; padding-bottom:20px; position:relative; width:100%; z-index:1;}
td, div {font-size:100%;}
```
The actual page is available on my development site at jldev d o t joomlaloft.com.
As you can see, I have given the inner container a fixed width as well as left and right margin auto, as is often suggested as the way to center a div horizontally. However, the inner container with the divs in it ends up left aligned.
Can this be made to work? Or, should I try an alternative, for example by putting a variable margin left and right on the most left and most right info block divs?
I have seen that there are many very good answers on stackoverflow so I am hoping that someone is able to tell me where I went wrong. As it is I am out of inspiration... Many thanks in advance for any help you can give!
PS Btw this must be one of the most intuitive and practical forums I have ever seen! | The inner centered div cannot be wider than its containing div. In my browser your site is 1200px wide and #user1234 is 1420px wide.
Set #user1234 to width: 90%; and each user div to width: 25%; when there are 4 divs. Put overflow:hidden; on #user1234 to clear the floats.
It does not look like the #user1234 div is centered because the #user4 div text does not fill the available space. Put background-color:#c00; on #user1234 to see where its boundaries are. (Note: you must have overflow:hidden, otherwise #user1234 does not have a height and you will not see the background-color.)
Re: previous comment - There are no problems with using width and margin on the same tag. | I'm not sure what you are using width:1420px for. Please follow [this](http://www.intensivstation.ch/en/templates/) or [this](http://layouts.ironmyers.com/) site for proper layout tutorials in CSS. Another option would be to use a [css grid](http://960.gs).
I personally recommend, taking the table out and switching to a fluid grid, that'll fit your site best. It has a bit of a learning curve but trust me, you'll be happier ever after. There is a lot of work done on your site; albeit the CSS could be changed for the better. CSS freenode usually gives good advice and critique if you need immediate help.
Another thing, it's not recommended to use width and margin in the same tag | centering divs does not work as it should | [
"php",
"css",
"html"
] |
After losing much sleep I still cannot figure this out:
The code below (it's a simplification of larger code that shows only the problem) identifies Item1 and Item2 on FF but does not on IE7. I'm clueless.
```
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
</head>
<body>
<table><tr>
<td><img src=imgs/site/trash.jpg border=1></td><td><font style="">Item1</font></td>
<td><img src=imgs/site/trash.jpg border=1></td><td><font style="">Item2</font></td>
</tr></table>
<script type="text/javascript">
var _pattern =/trash.*?<font.*?>(.*)<\/font>/gim;
alert (_pattern);
var thtml = document.documentElement.innerHTML;
alert (thtml);
while ( _match =_pattern.exec(thtml)){
alert (_match[1]);
}
</script>
</body>
</html>
```
Notes: 1. I know there are better ways to get Item1 and Item2. This example is for showing the regex problem I'm facing in the simplest way.
2. When I remove the table and /table tags it works.
Thanks in advance | The problem is that JScript's multiline implementation is buggy. It doesn't allow the any-character `.` to match a newline character.
Use this regex instead:-
```
var _pattern = /trash[\s\S]*?<font[^>]*>([^<]*)<\/font>/gi;
```
This eliminates `.` altogether; note that `[\s\S]` is equivalent but will also match a newline.
The reason why removing the table changes things is that IE's .innerHTML implementation doesn't rely on the original markup received. Instead the markup is created dynamically by examining the DOM. When it sees a table element it places newlines in the output in different places than when the table is missing. | Seriously this is horrible. A solution based on getElementById / getElementsByTagName will be *considerably* more reliable and flexible.
As for the actual problem it's probably because **javascript multiline regex support is not x-browser safe** and IE in particular has problems. Removing the table declaration will probably force IE to internally format the remaining markup to a single line (=success) where adding it back in will make IE add carriage returns etc (=fail).
I know that you did say you know there are better ways, but you didn't explain why you'd persist with this. Relying on regex and further relying on IE plaintext interpretation of a DOM is going to get you into problems like this. Don't do it. | Javascript regex - why is it not working as expected on IE? | [
"javascript",
"regex"
] |
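The `[\s\S]` fix in the accepted answer is easy to verify in any JavaScript engine; the HTML string below is a made-up stand-in for the newline-laced markup IE's innerHTML produces:

```javascript
// "." never matches a newline in JavaScript regexes (the /m flag only
// changes the meaning of ^ and $), while [\s\S] matches any character.
var html = "trash.jpg'>\n<font>Item1</font>";

var dotPattern = /trash.*?<font.*?>(.*)<\/font>/m;
var classPattern = /trash[\s\S]*?<font[^>]*>([^<]*)<\/font>/;

console.log(dotPattern.test(html));      // false: "." cannot cross the \n
console.log(classPattern.exec(html)[1]); // "Item1"
```

The same pattern therefore keeps working whether or not the serialized DOM contains line breaks.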
Given Java's "write once, run anywhere" paradigm and the fact that the Java tutorials give explicit bit sizes for all the primitive data types without the slightest hint that this is dependent on anything, I would say that, yes, an `int` is always 32 bit.
But are there any caveats? The [language spec](http://java.sun.com/docs/books/jls/third_edition/html/typesValues.html#4.2.1) defines the value range, but says nothing about the internal representation, and I guess that it probably shouldn't. However, **I have some code which does bitwise operations on `int` variables that assume 32 bit width, and I was wondering whether that code is safe on all architectures**.
Are there good in-depth resources for this type of question? | Java code always works as though ints are 32-bit, regardless of the native architecture.
In the specification, there's also [a part](http://java.sun.com/docs/books/jls/third_edition/html/typesValues.html#85587) that is definitive about representation:
> The integral types are byte, short, int, and long, whose values are 8-bit, 16-bit, 32-bit and 64-bit signed two's-complement integers, respectively, and char, whose values are 16-bit unsigned integers representing UTF-16 code units | While the behaviour of Java's primitives is specified completely and exactly in the language spec, there is one caveat: on a 64-bit architecture, it's possible that ints will be word-aligned, which means that an array of ints (or any non-64-bit primitive type) could take twice as much memory as on a 32-bit architecture. | In Java, is it safe to assume a certain size of the primitive types for bitwise operations? | [
"java",
"64-bit"
] |
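The 32-bit guarantee quoted from the spec can also be confirmed from code; this small standalone sketch (class name is mine) relies only on `Integer.SIZE` and two's-complement bitwise behaviour, which the JLS fixes for every JVM:

```java
public class IntWidth {
    public static void main(String[] args) {
        // int is 32-bit two's complement on every conforming JVM,
        // so these identities hold regardless of the host architecture.
        System.out.println(Integer.SIZE);   // 32
        System.out.println(0xFFFFFFFF);     // -1: all 32 bits set
        System.out.println(-1 >>> 28);      // 15: >>> shifts in zero bits
    }
}
```

In other words, bitwise code that assumes 32-bit ints is portable by definition.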
What is the best method of using native menus in C#?
Edit:
I want to use the native menu because it's better looking than the ugly .NET menu.
Can someone explain to me exactly how I can use it? | In the [System.Windows.Forms](http://msdn.microsoft.com/en-us/library/system.windows.forms.aspx) namespace, you'll find the classes: `Menu`, `ContextMenu`, `MenuItem`, `MenuStrip`, `ContextMenuStrip`, `ToolBar`, `ToolStrip`, and others.
These classes are simply wrappers around the Win32 API calls that implement these controls natively. The same is true for *nearly every* class in the System.Windows.Forms namespace; when you use these classes, you're actually using the native controls.
So the best method of using native menus in .NET is to use the controls .NET provides to display menus, as mentioned above. Whichever ones you use depends on what you're trying to accomplish. The simplest option for attaching a main menu to a form is using the `Menu` object, populated with a `MenuItem` set. | If you are referring to the System Menu (the context menu that pops up when right-clicking on a window's title bar), [here is a great article on CodeGuru](http://www.codeguru.com/csharp/csharp/cs_misc/userinterface/article.php/c9327) for manipulating it in C#. | Native menu in C# | [
"c#",
".net",
"contextmenu"
] |
I am using an Oracle DB to maintain more than 30 tables. How can I delete all the data from all the tables? I only want to delete the data, not drop the tables. | Generate a script to truncate (= remove all rows from) all tables:
```
select 'truncate table ' || table_name || ';' from user_tables
```
And then execute the script. | There is no command 'ALTER TABLE XXX DISABLE ALL CONSTRAINTS'
I propose this;
```
BEGIN
FOR c IN (SELECT table_name, constraint_name FROM user_constraints WHERE constraint_type = 'R')
LOOP
EXECUTE IMMEDIATE ('alter table ' || c.table_name || ' disable constraint ' || c.constraint_name);
END LOOP;
FOR c IN (SELECT table_name FROM user_tables)
LOOP
EXECUTE IMMEDIATE ('truncate table ' || c.table_name);
END LOOP;
FOR c IN (SELECT table_name, constraint_name FROM user_constraints WHERE constraint_type = 'R')
LOOP
EXECUTE IMMEDIATE ('alter table ' || c.table_name || ' enable constraint ' || c.constraint_name);
END LOOP;
END;
``` | SQL: delete all the data from all available tables | [
"sql",
"oracle"
] |
I recently came across a great data structures book,"[Data Structures Using C](https://rads.stackoverflow.com/amzn/click/com/0131997467)" (c) 1991, at a local Library book sale for **only $2**. As the book's title implies, the book covers data structures using the C programming language.
I got the book knowing it would be out-dated but would probably contain lots of advanced C topics that I wouldn't encounter elsewhere.
Sure enough within 5 minutes I found something I didn't know about C. I came across a section talking about the `union` keyword and I realized that I had never used it, nor ever seen any code that does. I was grateful for learning something interesting and quickly bought the book.
For those of you not knowledgeable about what a union is, the book uses a good metaphor to explain:
> To fully understand the concept of a
> union, it is necessary to examine its
> implementation. A Structure may be
> regarded as a road map to an area of
> memory. It defines how the memory is
> to be interpreted. A union provides
> several different road maps for the
> same area of memory, and it is the
> responsibility of the programmer to
> determine which road map is in current
> use. In practice, the compiler
> allocates sufficient storage to
> contain the largest member of the
> union. It is the road map, however,
> that determines how that storage is to
> be interpreted.
**I could easily come up with contrived situations or hacks where I would use a Union. (But I am not interested in contrived situations or hacks...)**
**Have you used or seen an implementation where using Union solved the problem *more elegantly* than not using a Union?**
Added bonus if you include a quick explanation of why using union was better/easier than not using a union. | UNIONs implement some sort of polymorphism in a non-OOP world. Usually, you have a part which is common and depending on that part, you use the rest of the UNIONs. Therefore, in such cases where you do not have an OOP language and you want to avoid excessive pointer arithmetic, unions can be more elegant in some cases. | It's useful for setting bits in, say, registers instead of shift/mask operations:
```
typedef union {
unsigned int as_int; // Assume this is 32-bits
struct {
unsigned int unused1 : 4;
unsigned int foo : 4;
unsigned int bar : 6;
unsigned int unused2 : 2;
unsigned int baz : 3;
unsigned int unused3 : 1;
unsigned int quux : 12;
} field;
} some_reg;
```
Note: Which way the packing happens is machine-dependent.
```
some_reg reg;
reg.field.foo = 0xA;
reg.field.baz = 0x5;
write_some_register(some_address, reg.as_int);
```
I might have blown some syntax somewhere in there, my C is rusty :)
**EDIT:**
Incidentally, this works the opposite way also:
```
reg.as_int = read_some_register(some_address);
if(reg.field.bar == BAR_ERROR1) { ...
``` | Union – useless anachronism or useful old school trick? | [
"c++",
"c",
"data-structures",
"unions"
] |
How do I securely delete files in java? I tried the code at [javafaq.nu](http://www.javafaq.nu/java-example-code-1121.html), but the problem is you can't actually delete the file on windows once it has been mapped due to [this bug](https://bugs.java.com/bugdatabase/view_bug?bug_id=4715154).
Then I tried just using [sysinternals sdelete](http://technet.microsoft.com/en-us/sysinternals/bb897443.aspx) on windows, but you have to click a usage agreement the first time you use it which I want to avoid. | On a journaling filesystem like NTFS there is actually no way to securely erase a single file without wiping all the free space on the drive. The problem is that the new blocks (which you've presumably overwritten with random data) are not guaranteed to be in the same place on disk as the old ones.
Utilities like sdelete might work for you, but one could simply replace that executable with one that does nothing to thwart that method of defence.
In order to keep your data secure, the only real solution you have is to completely encrypt the drive. | I coded and tried this function, and it seemed to work:
```
public static void secureDelete(File file) throws IOException {
if (file.exists()) {
long length = file.length();
SecureRandom random = new SecureRandom();
RandomAccessFile raf = new RandomAccessFile(file, "rws");
raf.seek(0);
raf.getFilePointer();
byte[] data = new byte[64];
int pos = 0;
while (pos < length) {
random.nextBytes(data);
raf.write(data);
pos += data.length;
}
raf.close();
file.delete();
}
}
``` | How to securely delete files in java | [
"java",
"security",
"delete-file"
] |
I'm having some weird issues with static initialization. I'm using a code generator to generate structs and serialization code for a message passing system I wrote. In order to have a way of easily allocating a message based on its message ID, I have my code generator output something similar to the following for each message type:
```
MessageAllocator s_InputPushUserControllerMessageAlloc(INPUT_PUSH_USER_CONTROLLER_MESSAGE_ID, (AllocateMessageFunc)Create_InputPushUserControllerMessage);
```
The MessageAllocator class basically looks like this:
```
MessageAllocator::MessageAllocator( uint32_t messageTypeID, AllocateMessageFunc func )
{
if (!s_map) s_map = new std::map<uint32_t, AllocateMessageFunc>();
if (s_map->insert(std::make_pair(messageTypeID, func)).second == false)
{
//duplicate key!
ASSERT(false, L"Nooooo!");
}
s_count++;
}
MessageAllocator::~MessageAllocator()
{
s_count--;
if (s_count == 0) delete s_map;
}
```
where `s_map` and `s_count` are static members of MessageAllocator. This works most of the time, but sometimes messages are not added to the map. For example, this particular message is not added unless I call `Create_InputPushUserControllerMessage()` somewhere in my startup code; however, other messages work fine. I thought this might be something to do with the linker incorrectly thinking the type is unreferenced and removing it, so I disabled that using the /OPT:NOREF switch (I'm using Visual Studio 2008 SP1), but that had no effect.
I'm aware of the problem of the "static initialization order fiasco" but as far as I know the order in which these objects are created shouldn't alter the result so this seems ok to me.
Any insight here would be appreciated. | Turns out that the object files containing the static initializers were not included by the linker because nothing referenced any functions in them. To work around this I extern "C"-ed one of the generated functions so that it would have a predictable non-mangled name and then forced a reference to it using a pragma like this for each message
```
#pragma comment(linker, "/include:Create_GraphicsDynamicMeshCreationMessage")
```
which I put in the generated header file that is later included in all the other non-generated files. It's MSVC only and kind of hack but I assume I can do something similar on GCC once I eventually port it. | Put the static into a class so it is a static member of a class
```
struct InputPushUserControllerMessageAlloc { static MessageAllocator s_obj; };
MessageAllocator InputPushUserControllerMessageAlloc::s_obj(
INPUT_PUSH_USER_CONTROLLER_MESSAGE_ID,
(AllocateMessageFunc)Create_InputPushUserControllerMessage);
```
The Standard allows it to delay initialization of objects having namespace scope until any function/object from its translation unit is used. If the initialization has side-effect, it can't be optimized out. But that doesn't forbid delaying it.
Not so of objects having class-scope. So that might forbid it optimizing something there. | Problems with Static Initialization | [
"c++",
"visual-c++"
] |
I am having what I believe should be a fairly simple problem, but for the life of me I cannot see my problem. The problem is related to ScriptManager.RegisterStartupScript, something I have used many times before.
The scenario I have is that I have a custom web control that has been inserted into a page. The control (and one or two others) are nested inside an UpdatePanel. They are inserted onto the page onto a PlaceHolder:
```
<asp:UpdatePanel ID="pnlAjax" runat="server">
<ContentTemplate>
<asp:PlaceHolder ID="placeholder" runat="server">
</asp:PlaceHolder>
...
protected override void OnInit(EventArgs e){
placeholder.Controls.Add(Factory.CreateControl());
base.OnInit(e);
}
```
This is the only update panel on the page.
The control requires some initial javascript be run for it to work correctly. The control calls:
```
ScriptManager.RegisterStartupScript(this, GetType(),
Guid.NewGuid().ToString(), script, true);
```
and I have also tried:
```
ScriptManager.RegisterStartupScript(Page, Page.GetType(),
Guid.NewGuid().ToString(), script, true);
```
The problem is that the script runs correctly when the page is first displayed, but does not re-run after a partial postback. I have tried the following:
1. Calling RegisterStartupScript from CreateChildControls
2. Calling RegisterStartupScript from OnLoad / OnPreRender
3. Using different combinations of parameters for the first two parameters (in the example above the Control is Page and Type is GetType(), but I have tried using the control itself, etc).
4. I have tried using persistent and new ids (not that I believe this should have a major impact either way).
5. I have used a few breakpoints and so have verified that the Register line is being called correctly.
The only thing I have not tried is using the UpdatePanel itself as the Control and Type, as I do not believe the control should be aware of the update panel (and in any case there does not seem to be a good way of getting the update panel?).
Can anyone see what I might be doing wrong in the above?
Thanks :)
---
Well, to answer the query above - it does appear as if the placeholder somehow messes up the ScriptManager.RegisterStartupScript.
When I pull the control out of the placeholder and code it directly onto the page the Register script works correctly (I am also using the control itself as a parameter).
```
ScriptManager.RegisterStartupScript(this, GetType(), Guid.NewGuid().ToString(), script, true);
```
Can anyone throw any light on why an injected control onto a PlaceHolder would prevent the ScriptManager from correctly registering the script? I am guessing this might have something to do with the lifecycle of dynamic controls, but would appreciate (for my own knowledge) if there is a correct process for the above. | I think you should indeed be using the [Control overload](http://msdn.microsoft.com/en-us/library/bb359558.aspx) of the RegisterStartupScript.
I've tried the following code in a server control:
```
[ToolboxData("<{0}:AlertControl runat=server></{0}:AlertControl>")]
public class AlertControl : Control{
protected override void OnInit(EventArgs e){
base.OnInit(e);
string script = "alert(\"Hello!\");";
ScriptManager.RegisterStartupScript(this, GetType(),
"ServerControlScript", script, true);
}
}
```
Then in my page I have:
```
protected override void OnInit(EventArgs e){
base.OnInit(e);
Placeholder1.Controls.Add(new AlertControl());
}
```
Where Placeholder1 is a placeholder in an update panel. The placeholder has a couple of other controls on in it, including buttons.
This behaved exactly as you would expect, I got an alert saying "Hello" every time I loaded the page or caused the update panel to update.
The other thing you could look at is to hook into some of the page lifecycle events that are fired during an update panel request:
```
Sys.WebForms.PageRequestManager.getInstance()
.add_endRequest(EndRequestHandler);
```
The [PageRequestManager endRequestHandler](http://msdn.microsoft.com/en-us/library/bb383810.aspx) event fires every time an update panel completes its update - this would allow you to call a method to set up your control.
My only other questions are:
* What is your script actually doing?
* Presumably you can see the script in the HTML at the bottom of the page (just before the closing </form> tag)?
* Have you tried putting a few "alert("Here");" calls in your startup script to see if it's being called correctly?
* Have you tried Firefox and [Firebug](http://getfirebug.com/) - is that reporting any script errors? | I had an issue using this in a user control (in a page this worked fine); the Button1 is inside an `updatepanel`, and the `scriptmanager` is on the `usercontrol`.
```
protected void Button1_Click(object sender, EventArgs e)
{
string scriptstring = "alert('Welcome');";
ScriptManager.RegisterStartupScript(this, this.GetType(), "alertscript", scriptstring, true);
}
```
Now it seems you have to be careful with the first two arguments: they need to reference your page, not your control.
```
ScriptManager.RegisterStartupScript(this.Page, this.Page.GetType(), "alertscript", scriptstring, true);
``` | Can't get ScriptManager.RegisterStartupScript in WebControl nested in UpdatePanel to work | [
"c#",
"asp.net",
"ajax",
"updatepanel",
"scriptmanager"
] |
Do the following on the default Python install on Mac OS X 10.5 (Leopard) w/ Developer Tools:
```
noel ~ : python
Python 2.5.1 (r251:54863, Jan 13 2009, 10:26:13)
[GCC 4.0.1 (Apple Inc. build 5465)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import bsddb
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/bsddb/__init__.py", line 51, in <module>
import _bsddb
ImportError: No module named _bsddb
```
nice, huh? How do I fix this without giving up and installing/configuring/maintaining my own Python package as per [TMNC's suggestion](http://blogs.23.nu/c0re/2007/12/antville-16716/) or using MacPorts etc?
### Edit
I've gone around the problem by installing Python2.4 and BSDDB via MacPorts.
My question still stands: why is the default install broken and is it possible to fix it. | Follow the instructions at <http://marc-abramowitz.com/archives/2007/11/28/hacking-os-xs-python-dbhash-and-bsddb-modules-to-work/> . | The patch did not work for me and I had to replace the bsddb folder in
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7
with the bsddb3 in:
/usr/local/lib/python2.7/site-packages/bsddb3
Make sure you backup the bsddb folder just in case. | How to Fix the Broken BSDDB Install in the Default Python Package on Mac OS X 10.5 Leopard? | [
"python",
"macos",
"osx-leopard",
"berkeley-db",
"bsddb"
] |
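One way to diagnose this kind of `ImportError` from any modern Python (2.7+/3; `module_origin` is my own helper name) is to ask the import machinery where, or whether, it can find the module at all:

```python
import importlib.util

def module_origin(name):
    """Return the file a module would be loaded from, or None if absent."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# The traceback above means the C extension simply is not on the path:
print(module_origin("_bsddb"))   # None on an install missing the extension
print(module_origin("os"))       # location of the stdlib os module
```

Seeing `None` for `_bsddb` while `bsddb` resolves confirms the pure-Python wrapper shipped without its compiled half, which is exactly the broken state of the Leopard build.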
I have a vector of pointers to a class. I need to call their destructors and free their memory. Since it is a vector of pointers, vector.clear() does not do the job, so I went on to do it manually, like so:
```
void Population::clearPool(std::vector<Chromosome*> a,int size)
{
Chromosome* c;
for(int j = 0 ;j < size-1;j++)
{
c = a.back();
a.pop_back();
delete c;
printf(" %d \n\r",j);
c = NULL;
}
}
```
The printf in there is since I have a talking destructor to see in which Chromosome the segmentation fault happens. When clearPool() is called and say we got a size of 100, it can give a segmentation fault in any Chromosome between 0 and 100.
I have no idea why this might be happening nor do I have a way to actually find what's wrong since while debugging with breakpoints all I see is that it happens in there at random chromosomes.
I am using codeblocks IDE and the gdb debugger. The stack trace when the segmentation fault happens has 4 memory addresses and a function `wsncpy()`. | ```
void Population::clearPool( std::vector <Chromosome*> & a )
{
for ( int i = 0; i < a.size(); i++ ) {
delete a[i];
}
a.clear();
}
```
Notice that the vector is passed by reference. In your code, a copy of the vector is used, which means that it is unchanged in the calling program. Because you delete the pointers in the copy, the pointers in the original are now all invalid - I suspect you are using those invalid pointers in some way not shown in the code you posted.
As a couple of template solutions have been posted that use C++ library algorithms, you might also want to consider a template solution that does not:
```
template <class C> void FreeClear( C & cntr ) {
for ( typename C::iterator it = cntr.begin();
it != cntr.end(); ++it ) {
delete * it;
}
cntr.clear();
}
```
Using this you can free any container of dynamically allocated objects:
```
vector <Chromosome *> vc;
list <Chromosome *> lc;
// populate & use
FreeClear( lc );
FreeClear( vc );
``` | Slight modified version compared to (@1800 INFORMATION).
```
struct DeleteFromVector
{
template <class T>
void operator() ( T* ptr) const
{
delete ptr;
}
};
std::for_each(aVec.begin(), aVec.end(), DeleteFromVector());
``` | C++ std::vector of pointers deletion and segmentation faults | [
"c++",
"pointers",
"memory-management",
"segmentation-fault"
] |
```
$items = (isset($_POST['items'])) ? $_POST['items'] : array();
```
I don't understand the last snippet of this code "`? $_POST['items'] : array();`"
What does that combination of code do exactly?
I use it to take in a bunch of values from html text boxes and store it into a session array. But the problem is, if I attempt to resubmit the data in text boxes the new array session overwrites the old session array completely blank spaces and all.
I only want to overwrite places in the array that already have values. If the user decides to fill out only a few text boxes I don't want the previous session array data to be overwritten by blank spaces (from the blank text boxes).
I'm thinking the above code is the problem, but I'm not sure how it works. Enlighten me please. | This is **a** [ternary operator](http://www.php.net/ternary#language.operators.comparison.ternary):
The expression `(expr1) ? (expr2) : (expr3)` evaluates to `expr2` if `expr1` evaluates to `TRUE`, and `expr3` if `expr1` evaluates to `FALSE`. | That last part is known as the *conditional operator*. Basically it is a condensed `if/else` statement.
It works like this:
```
$items =
// if this expression is true
(isset($_POST['items']))
// then "$_POST['items']" is assigned to $items
? $_POST['items']
// else "array()" is assigned
: array();
```
Also here is some pseudo-code that may be simpler:
```
$items = (condition) ? value_if_condition_true : value_if_condition_false;
```
---
**Edit:** Here is a quick, pedantic side-note: The PHP documentation calls this operator a *ternary operator*. While the conditional operator is technically a ternary operator (that is, an operator with 3 operands) it is a misnomer (and rather presumptive) to call it ***the*** ternary operator. | What does ? ... : ... do? | [
"php",
"syntax",
"ternary-operator"
] |
In my Java application, I need to connect to the same host using SSL, but using a different certificate each time. The reason I need to use different certificates is that the remote site uses a user ID property embedded in the certificate to identify the client.
This is a server application that runs on 3 different operating systems, and I need to be able to switch certificates without restarting the process.
[Another user](https://stackoverflow.com/questions/759603/how-do-i-use-multiple-ssl-certificates-in-java) suggested importing multiple certificates into the same keystore. I'm not sure that helps me, though, unless there is a way to tell Java which certificate in the keystore to use. | SSL can provide hints to the client about which certificate to present. This *might* allow you to use one key store with multiple identities in it, but, unfortunately, most servers don't use this hinting feature. So, it will be more robust if you specify the client certificate to use on for each connection.
Here is sample code to set up one `SSLContext` with specified identity and trust stores. You can repeat these steps to create multiple contexts, one for each client certificate you want to use. Each `SSLContext` would probably use the same trust store, but a different identity store (containing the single client key entry to be used in that context).
Initialize the contexts that you will need once, and reuse the correct one for each connection. If you are making multiple connections, this will allow you to take advantage of SSL sessions.
```
KeyManagerFactory kmf =
KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
kmf.init(identityStore, password);
TrustManagerFactory tmf =
TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
tmf.init(trustStore);
SSLContext ctx = SSLContext.getInstance("TLS");
ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
```
Later, you can create a socket directly:
```
SSLSocketFactory factory = ctx.getSocketFactory();
Socket socket = factory.createSocket(host, port);
```
Or, if you are using the `URL` class, you can specify the `SSLSocketFactory` to use when making HTTPS requests:
```
HttpsURLConnection con = (HttpsURLConnection) url.openConnection();
con.setSSLSocketFactory(ctx.getSocketFactory());
```
Java 6 has some additional API that makes it easier to configure sockets according to your preferences for cipher suites, etc. | Implementations may have changed a lot since the question was asked. My understanding is that the server will send trusted issuers to the client:
```
Found trusted certificate:
[
[
Version: V3
Subject: CN=localhost, OU=Spring, O=Pivotal, L=Holualoa, ST=HI, C=US
Signature Algorithm: SHA256withRSA, OID = 1.2.840.113549.1.1.11
```
Then the client will receive a CertificateRequest:
```
*** CertificateRequest
Cert Types: RSA, DSS, ECDSA
Supported Signature Algorithms: SHA512withECDSA, SHA512withRSA, SHA384withECDSA, SHA384withRSA, SHA256withECDSA, SHA256withRSA, SHA256withDSA, SHA224withECDSA, SHA224withRSA, SHA224withDSA, SHA1withECDSA, SHA1withRSA, SHA1withDSA
Cert Authorities:
<CN=localhost, OU=Spring, O=Pivotal, L=Holualoa, ST=HI, C=US>
```
Then the client will scan its local keystore for matching issuers:
```
Set<X500Principal> certIssuers =
credentials.getIssuerX500Principals();
for (int i = 0; i < x500Issuers.length; i++) {
if (certIssuers.contains(issuers[i])) {
aliases.add(alias);
if (debug != null && Debug.isOn("keymanager")) {
System.out.println("matching alias: " + alias);
}
break;
}
```
If [found](https://github.com/openjdk-mirror/jdk7u-jdk/blob/f4d80957e89a19a29bb9f9807d2a28351ed7f7df/src/share/classes/sun/security/ssl/SunX509KeyManagerImpl.java#L380), it will use that certificate. | Using multiple SSL client certificates in Java with the same host | [
"java",
"ssl",
"certificate"
] |
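The same one-context-per-identity pattern sketched above in Java maps onto Python's `ssl` module. A hedged sketch (the certificate path passed to `load_cert_chain` would be a placeholder; no real certificate files are assumed here):

```python
import ssl

def client_context(certfile=None, keyfile=None):
    """Build one TLS client context per client identity.

    Mirrors the Java approach above: create a separate context for each
    client certificate, and reuse that context for every connection made
    under that identity.
    """
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    if certfile:
        # e.g. client_context("user-a.pem") -- placeholder path
        ctx.load_cert_chain(certfile, keyfile)
    return ctx

# One context per identity, initialized once and reused:
default_ctx = client_context()
```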
I have a string that displays like this:
1235, 3, 1343, 5, 1234, 1
I need to replace every second comma with a semicolon
i.e.
1235, 3; 1343, 5; 1234, 1
The string length will always be different, but it will follow the same pattern as above, i.e. digits, comma, space, digits, comma, space, etc.
How can I do this with JavaScript? Is it possible?
Thanks,
Mike | ```
'1235, 3, 1343, 5, 1234, 1'.replace(/([0-9]+),\s([0-9]+),\s/g, '$1, $2; ')
``` | ```
var s = '1235, 3, 1343, 5, 1234, 1';
var result = s.replace(/(,[^,]*),/g,"$1;");
``` | Help with regexp replacing every second comma in the string | [
"javascript",
"regex"
] |
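The accepted pattern ports directly to other regex engines; for instance, a Python sketch of the same "match two numbers, rewrite the second comma" idea:

```python
import re

s = "1235, 3, 1343, 5, 1234, 1"
# Match "digits, digits, " and put the pair back with "; " after it,
# just like the accepted JavaScript answer above.
result = re.sub(r"(\d+),\s(\d+),\s", r"\1, \2; ", s)
print(result)  # 1235, 3; 1343, 5; 1234, 1
```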
I'm trying to compare time in a **datetime field** in a SQL query, but I don't know if it's right. I don't want to compare the date part, just the time part.
I'm doing this:
```
SELECT timeEvent
FROM tbEvents
WHERE convert(datetime, startHour, 8) >= convert(datetime, @startHour, 8)
```
Is it correct?
I'm asking this because I need to know if `08:00:00` is less or greater than `07:30:00` and I don't want to compare the date, just the **time** part.
Thanks! | Your comparison will work, but it will be slow because the dates are converted to strings for each row. To compare two time parts efficiently, try:
```
declare @first datetime
set @first = '2009-04-30 19:47:16.123'
declare @second datetime
set @second = '2009-04-10 19:47:16.123'
select (cast(@first as float) - floor(cast(@first as float))) -
(cast(@second as float) - floor(cast(@second as float)))
as Difference
```
Long explanation: a date in SQL server is stored as a floating point number. The digits before the decimal point represent the date. The digits after the decimal point represent the time.
So here's an example date:
```
declare @mydate datetime
set @mydate = '2009-04-30 19:47:16.123'
```
Let's convert it to a float:
```
declare @myfloat float
set @myfloat = cast(@mydate as float)
select @myfloat
-- Shows 39931,8244921682
```
Now take the part after the comma character, i.e. the time:
```
set @myfloat = @myfloat - floor(@myfloat)
select @myfloat
-- Shows 0,824492168212601
```
Convert it back to a datetime:
```
declare @mytime datetime
set @mytime = convert(datetime,@myfloat)
select @mytime
-- Shows 1900-01-01 19:47:16.123
```
The 1900-01-01 is just the "zero" date; you can display the time part with convert, specifying for example format 108, which is just the time:
```
select convert(varchar(32),@mytime,108)
-- Shows 19:47:16
```
Conversions between datetime and float are pretty fast, because they're basically stored in the same way. | ```
convert(varchar(5), thedate, 108) between @leftTime and @rightTime
```
Explanation:
if you have `varchar(5)` you will obtain `HH:mm`
if you have `varchar(8)` you obtain `HH:mm:ss`
108 obtains only the time from the SQL date
`@leftTime` and `@rightTime` are two variables to compare | How can I compare time in SQL Server? | [
"sql",
"sql-server",
"datetime"
] |
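For comparison, the same "compare only the time of day" question in Python terms: strip the date part and compare the `time` values directly (a sketch with made-up sample datetimes):

```python
from datetime import datetime

# Compare only the time-of-day portion of two datetimes,
# ignoring the date part entirely.
first = datetime(2009, 4, 30, 8, 0, 0)
second = datetime(2009, 4, 10, 7, 30, 0)

print(first.time() > second.time())  # True: 08:00:00 > 07:30:00
```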
I need a data structure which behaves like a Map,
but uses multiple (differently-typed) keys to access its values.
*(Let's not be too general, let's say **two** keys)*
**Keys are guaranteed to be unique.**
Something like:
```
MyMap<K1,K2,V> ...
```
With methods like:
```
getByKey1(K1 key)...
getByKey2(K2 key)...
containsKey1(K1 key)...
containsKey2(K2 key)...
```
**Do you have any suggestions?**
The only thing I can think of is:
Write a class which uses two Maps internally.
**EDIT**
Some people suggest me to use a **tuple**, a **pair**, or similar as a key for
Java's Map, but this **would not work** for me:
I have to be able, as written above, to search values by only one of the two keys specified.
Maps use hash codes of keys and check for their equality. | Two maps. One `Map<K1, V>` and one `Map<K2, V>`. If you must have a single interface, write a wrapper class that implements said methods. | Commons-collections provides just what you are looking for:
<https://commons.apache.org/proper/commons-collections/apidocs/>
Looks like now the commons-collections is typed.
A typed version can be found at:
<https://github.com/megamattron/collections-generic>
This will exactly support your use case:
```
MultiKeyMap<k1,k2,...,kn,v> multiMap = ??
``` | How to implement a Map with multiple keys? | [
"java",
"data-structures"
] |
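The accepted "two maps behind one wrapper" suggestion is easy to prototype. A Python sketch (the class and method names are invented for illustration):

```python
class TwoKeyMap:
    """One value reachable through two independent, unique keys,
    backed by two internal dicts -- the accepted suggestion above."""

    def __init__(self):
        self._by_k1 = {}
        self._by_k2 = {}

    def put(self, k1, k2, value):
        self._by_k1[k1] = value
        self._by_k2[k2] = value

    def get_by_key1(self, k1):
        return self._by_k1[k1]

    def get_by_key2(self, k2):
        return self._by_k2[k2]

    def contains_key1(self, k1):
        return k1 in self._by_k1

    def contains_key2(self, k2):
        return k2 in self._by_k2

m = TwoKeyMap()
m.put("alice", 42, "record-A")
print(m.get_by_key1("alice"), m.get_by_key2(42))  # record-A record-A
```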
I've defined a C# class with a string member. For all intents and purposes, think of this class as being a subclass of string (except that's not allowed). I'm using it to represent a strongly typed string field that matches a specific format (I've simplified this significantly).
```
public class field
{
private readonly string m_field;
public field(string init_value)
{
//Check the syntax for errors
if (CheckSyntax(init_value))
{
m_field = init_value;
}
else
{
throw new ArgumentOutOfRangeException();
}
}
public override string ToString()
{
return m_field;
}
}
```
Now, I want to be able to compare this class directly to any other string (object or literal). Therefore, I implemented the following in the class:
```
public override bool Equals(object obj)
{
if (obj == null)
{
return false;
}
return this.m_field == obj.ToString();
}
public override int GetHashCode()
{
return this.m_field.GetHashCode();
}
public static bool operator ==(field x, Object y)
{
if ((object)x == null && y == null)
{
return true;
}
else if ((object)x == null || y == null)
{
return false;
}
else
{
return (x.m_field == y.ToString());
}
}
public static bool operator !=(field x, Object y)
{
return !(x == y);
}
```
Now when I'm writing a unit test, depending on the order that I'm passing in the arguments to Assert.AreEqual, I get different results:
```
string valid = "Some String";
field target = new field(valid);
Assert.AreEqual(target, valid); // PASSES
Assert.AreEqual(valid, target); // FAILS
```
I'm assuming this is because in the first assert, it's calling field.Equals() and in the second it's calling String.Equals(). Obviously I'm approaching this from the wrong angle. Can anyone give me some insight?
One other thing. I can't use a struct here (value type) because in my actual case I'm defining all this in a base class and inheriting from it. | This is described in detail in [Effective Java](http://books.google.com/books?id=ka2VUBqHiWkC&pg=PA33&dq=obey+general+contract+equals+java) as Item 8: Obey the general contract when overriding `equals`.
> The `equals` method implements an equivalence relation.
It is Reflexive, Symmetric, Transitive, Consistent, and for any non-null reference x, `x.equals(null)` must return `false`. The example cited to break symmetry is similar to yours.
The `field` class is aware of the `string` class, but the built-in `string` class is not aware of `field`. This is one-way interoperability, and it should be removed. | Basically you can't do what you want to - there's no way you can make `string` recognise your class for equality purposes. You'll never be able to make it reflexive - you'll never be able to make it obey the contract of `object.Equals`.
I would personally try to redesign it so that you didn't have the validation as part of the type itself - make it part of the relevant properties of the business entities (or whatever they are). | Overriding Equals and comparing to string | [
"c#",
"equals"
] |
Here's the method. I want to know if I am violating any best practices here or if I am doing something wrong as far as the language is concerned.
```
private List<String> breakStringInChunks(String text, int chunkSize) {
List<String> chunks = new ArrayList<String>();
String temporary = "";
int numberOfChunks = text.length() / chunkSize;
int beginIndex = 0;
int endIndex = 0;
// Add one iteration if numberOfChunks*chunkSize is less than the length of text.
if ((numberOfChunks * chunkSize) < text.length()) {
numberOfChunks++;
}
// Cut strings and add in the list.
for (int i = 0; i < numberOfChunks; i++) {
endIndex+=chunkSize;
if ((i + 1) == numberOfChunks) {
temporary = text.substring(beginIndex);
}
else {
temporary = text.substring(beginIndex, endIndex);
}
beginIndex=endIndex;
chunks.add(temporary);
}
return chunks;
}
``` | Briefer still, and avoids potential resizing of the resulting list.
```
private static List<String> breakStringInChunks(final String text, final int chunkSize) {
final int numChunks = 0 == (text.length() % chunkSize) ? text.length() / chunkSize : 1 + (text.length() / chunkSize);
final List<String> chunks = new ArrayList<String>(numChunks);
for (int startIndex = 0; startIndex < text.length(); startIndex += chunkSize) {
final int endIndex = Math.min(text.length(), startIndex + chunkSize);
chunks.add(text.substring(startIndex, endIndex));
}
return chunks;
}
``` | It's a bit verbose, and there is no need to declare the `temporary` string at the start of your method, which could make garbage collection a bit slower. The following would be briefer:
```
private List<String> breakStringInChunks(String text, int chunkSize) {
int nChunks = (int) Math.ceil((double) text.length() / chunkSize);
List<String> chunks = new ArrayList<String>(nChunks);
// Cut strings and add in the list.
for (int i = 0; i < text.length(); i+=chunkSize) {
int endIndex = i + chunkSize;
if (endIndex >= text.length()) {
chunks.add(text.substring(i));
} else {
chunks.add(text.substring(i, endIndex));
}
}
return chunks;
}
```
One good thing about your method and the code above is that because you always call substring() on the original String, Java will only reference the original character array, so it will save you some memory allocations.
I think the `} else {` is a more common coding standard for Java. | How do I optimize this method for breaking a string in chunks? | [
"java",
"algorithm",
"optimization",
"string"
] |
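The same chunking logic is a one-liner in Python slicing terms; a sketch of the accepted answer's approach (function name invented):

```python
def chunks(text, size):
    # Same slicing idea as the accepted Java answer: step through the
    # string and let the final slice be shorter than `size`.
    return [text[i:i + size] for i in range(0, len(text), size)]

print(chunks("abcdefgh", 3))  # ['abc', 'def', 'gh']
```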
After some experience with functional languages, I'm starting to use recursion more in Java - But the language seems to have a relatively shallow call stack of about 1000.
Is there a way to make the call stack bigger? Like can I make functions that are millions of calls deep, like in Erlang?
I'm noticing this more and more when I do Project Euler problems.
Thanks. | I guess you could use these parameters
> -ss Stacksize to increase the native
> stack size or
>
> -oss Stacksize to increase the Java
> stack size,
>
> The default native stack size is 128k,
> with a minimum value of 1000 bytes.
> The default java stack size is 400k,
> with a minimum value of 1000 bytes.
<http://edocs.bea.com/wls/docs61/faq/java.html#251197>
EDIT:
After reading the first comment (Chuck's), as well as re-reading the question and the other answers, I'd like to clarify that I interpreted the question as simply "increase the stack size". I didn't intend to say that you can have infinite stacks, as in functional programming (a paradigm whose surface I've only scratched).
Red pill in hand? OK, this way please.
There are ways in which you can exchange stack for heap. For example, instead of making a recursive call within a function, have it return a **lazy datastructure** that makes the call when evaluated. You can then unwind the "stack" with Java's for-construct. I'll demonstrate with an example. Consider this Haskell code:
```
map :: (a -> b) -> [a] -> [b]
map _ [] = []
map f (x:xs) = (f x) : map f xs
```
Note that this function never evaluates the tail of the list. So the function doesn't actually need to make a recursive call. In Haskell, it actually returns a *thunk* for the tail, which is called if it's ever needed. We can do the same thing in Java (this uses classes from [Functional Java](http://functionaljava.org)):
```
public <B> Stream<B> map(final F<A, B> f, final Stream<A> as)
{return as.isEmpty()
? nil()
: cons(f.f(as.head()), new P1<Stream<A>>()
{public Stream<A> _1()
{return map(f, as.tail);}});}
```
Note that `Stream<A>` consists of a value of type `A` and a value of type `P1`, which is like a thunk that returns the rest of the stream when `_1()` is called. While it certainly looks like recursion, the recursive call to map is not made, but becomes part of the `Stream` data structure.
This can then be unwound with a regular for-construct.
```
for (Stream<B> b = bs; b.isNotEmpty(); b = b.tail()._1())
{System.out.println(b.head());}
```
Here's another example, since you were talking about Project Euler. This program uses mutually recursive functions and does not blow the stack, even for millions of calls:
```
import fj.*; import fj.data.Natural;
import static fj.data.Enumerator.naturalEnumerator;
import static fj.data.Natural.*; import static fj.pre.Ord.naturalOrd;
import fj.data.Stream; import fj.data.vector.V2;
import static fj.data.Stream.*; import static fj.pre.Show.*;
public class Primes
{public static Stream<Natural> primes()
{return cons(natural(2).some(), new P1<Stream<Natural>>()
{public Stream<Natural> _1()
{return forever(naturalEnumerator, natural(3).some(), 2)
.filter(new F<Natural, Boolean>()
{public Boolean f(final Natural n)
{return primeFactors(n).length() == 1;}});}});}
public static Stream<Natural> primeFactors(final Natural n)
{return factor(n, natural(2).some(), primes().tail());}
public static Stream<Natural> factor(final Natural n, final Natural p,
final P1<Stream<Natural>> ps)
{for (Stream<Natural> ns = cons(p, ps); true; ns = ns.tail()._1())
{final Natural h = ns.head();
final P1<Stream<Natural>> t = ns.tail();
if (naturalOrd.isGreaterThan(h.multiply(h), n))
return single(n);
else {final V2<Natural> dm = n.divmod(h);
if (naturalOrd.eq(dm._2(), ZERO))
return cons(h, new P1<Stream<Natural>>()
{public Stream<Natural> _1()
{return factor(dm._1(), h, t);}});}}}
public static void main(final String[] a)
{streamShow(naturalShow).println(primes().takeWhile
(naturalOrd.isLessThan(natural(Long.valueOf(a[0])).some())));}}
```
Another thing you can do to exchange stack for heap is to use **multiple threads**. The idea is that instead of making a recursive call, *you create a thunk that makes the call, hand this thunk off to a new thread and let the current thread exit the function.* This is the idea behind things like Stackless Python.
The following is an example of that in Java. Apologies that it's a bit opaque to look at without the `import static` clauses:
```
public static <A, B> Promise<B> foldRight(final Strategy<Unit> s,
final F<A, F<B, B>> f,
final B b,
final List<A> as)
{return as.isEmpty()
? promise(s, P.p(b))
: liftM2(f).f
(promise(s, P.p(as.head()))).f
(join(s, new P1<Promise<B>>()
{public Promise<B> _1()
{return foldRight(s, f, b, as.tail());}}));}
```
`Strategy<Unit> s` is backed by a thread pool, and the `promise` function hands a thunk to the thread pool, returning a `Promise`, which is very much like `java.util.concurrent.Future`, only better. [See here.](http://apocalisp.wordpress.com/2008/09/02/a-better-future/) The point is that the method above *folds a right-recursive data structure to the right in O(1) stack*, which ordinarily requires tail-call elimination. So we've effectively achieved TCE, in exchange for some complexity. You would call this function as follows:
```
Strategy<Unit> s = Strategy.simpleThreadStrategy();
int x = foldRight(s, Integers.add, 0, range(1, 10000)).claim();
System.out.println(x); // 49995000
```
Note that this latter technique works perfectly well for nonlinear recursion. That is, it will run in constant stack even for algorithms that don't have tail calls.
Another thing you can do is employ a technique called **trampolining**. A trampoline is a computation, reified as a data structure, that can be stepped through. The [Functional Java library](https://github.com/functionaljava/functionaljava/blob/master/core/src/main/java/fj/control/Trampoline.java) includes a [`Trampoline`](https://github.com/functionaljava/functionaljava/blob/master/core/src/main/java/fj/control/Trampoline.java) data type that I wrote, which effectively lets you turn any function call into a tail call. As an example [here is a trampolined `foldRightC` that folds to the right in constant stack:](https://github.com/functionaljava/functionaljava/blob/master/core/src/main/java/fj/data/List.java#L635)
```
public final <B> Trampoline<B> foldRightC(final F2<A, B, B> f, final B b)
{return Trampoline.suspend(new P1<Trampoline<B>>()
{public Trampoline<B> _1()
{return isEmpty()
? Trampoline.pure(b)
: tail().foldRightC(f, b).map(f.f(head()));}});}
```
It's the same principle as using multiple threads, except that instead of invoking each step in its own thread, we construct each step on the heap, very much like using a `Stream`, and then we run all the steps in a single loop with `Trampoline.run`. | Stack overflows from deep recursion in Java? | [
"java",
"functional-programming",
"stack",
"overflow"
] |
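The trampolining idea from the accepted answer can be demonstrated in a few lines of Python. This is a sketch of the technique only, with invented names, not Functional Java's actual `Trampoline` API:

```python
def trampoline(fn, *args):
    """Run a function that returns either a final value or a
    zero-argument thunk for the next step; the loop replaces the
    call stack, so recursion depth no longer matters."""
    result = fn(*args)
    while callable(result):
        result = result()
    return result

def count_down(n):
    if n == 0:
        return "done"
    return lambda: count_down(n - 1)  # next step as a thunk, not a call

print(trampoline(count_down, 1_000_000))  # done -- in constant stack
```

One limitation of this sketch: a final value that happens to be callable would be mistaken for a thunk, which is why real trampolines (like Functional Java's `Trampoline`) wrap each step in an explicit type instead.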
For example:
```
string element = "a";
IEnumerable<string> list = new List<string> { "b", "c", "d" };
IEnumerable<string> singleList = ???; // singleList yields "a", "b", "c", "d"
``` | I take it you can't just `Insert` into the existing list?
Well, you could use `new[] {element}.Concat(list)`.
Otherwise, you could write your own extension method:
```
public static IEnumerable<T> Prepend<T>(
this IEnumerable<T> values, T value) {
yield return value;
foreach (T item in values) {
yield return item;
}
}
...
var singleList = list.Prepend("a");
``` | Since .NET framework 4.7.1 there is LINQ method for that:
```
list.Prepend("a");
```
<https://learn.microsoft.com/en-us/dotnet/api/system.linq.enumerable.prepend?view=netframework-4.7.1> | Is there a statement to prepend an element T to a IEnumerable<T> | [
"c#",
"linq"
] |
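The lazy-prepend shape of the accepted answer has a direct Python analogue in `itertools.chain`, which also avoids copying the underlying list:

```python
from itertools import chain

element = "a"
items = ["b", "c", "d"]

# Lazily yields "a" first, then the rest -- the same shape as the
# Prepend extension method in the accepted answer.
single_list = chain([element], items)
print(list(single_list))  # ['a', 'b', 'c', 'd']
```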
What's the simplest/canonical way to create an empty file in C#/.NET?
The simplest way I could find so far is:
```
System.IO.File.WriteAllLines(filename, new string[0]);
``` | Using just `File.Create` will leave the file open, which probably isn't what you want.
You could use:
```
// Results in compiler warning "warning CS0642: Possible mistaken empty statement"
using (File.Create(filename)) ;
```
That looks slightly odd, mind you. You could use braces instead:
```
using (File.Create(filename)) {}
```
Or just call `Dispose` directly:
```
File.Create(filename).Dispose();
```
Either way, if you're going to use this in more than one place you should probably consider wrapping it in a helper method, e.g.
```
public static void CreateEmptyFile(string filename)
{
File.Create(filename).Dispose();
}
```
Note that calling `Dispose` directly instead of using a `using` statement doesn't really make much difference here as far as I can tell - the only way it *could* make a difference is if the thread were aborted between the call to `File.Create` and the call to `Dispose`. If that race condition exists, I suspect it would *also* exist in the `using` version, if the thread were aborted at the very end of the `File.Create` method, just before the value was returned... | ```
File.WriteAllText("path", String.Empty);
```
or
```
File.CreateText("path").Close();
``` | Creating an empty file in C# | [
"c#",
".net"
] |
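For comparison, the Python analogue of `File.Create(filename).Dispose()` is `Path.touch()`, which creates an empty file and leaves no open handle behind (a sketch using a temporary directory):

```python
import tempfile
from pathlib import Path

# touch() creates an empty file (or updates its timestamp if it exists)
# without holding the file open afterwards.
path = Path(tempfile.mkdtemp()) / "empty.txt"
path.touch()
print(path.exists(), path.stat().st_size)  # True 0
```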
I would like to learn how to use NUnit. I learn best by reading then playing with real code. Where can I find a small, simple C# project that uses NUnit in an exemplary manner? | There are many fine examples on [NUnit's developer wiki](http://nunit.com/devwiki.cgi?NunitExamplePage).
**Update as the original link is broken**:
Basic examples can be found on the [NUnit Documentation Page](http://www.nunit.org/index.php?p=docHome&r=2.5.10). Check out the Getting Started/QuickStart subsection, and the Assertions/\* subsection. | From my own projects (real life, so not just demos where everything will be nice and simple :)
* [MiscUtil](http://pobox.com/~skeet/csharp/miscutil)
* [MoreLINQ](http://code.google.com/p/morelinq/)
Both are reasonably small, and although MiscUtil is the bigger of the two, it's mostly a collection of very small, individual components.
MoreLINQ is heavily tested; MiscUtil has patchier coverage as I started it before getting into unit testing. | NUnit example code? | [
"c#",
"unit-testing",
"nunit"
] |
I have a J2EE app deployed as an EAR file, which in turn contains a JAR file for the business layer code (including some EJBs) and a WAR file for the web layer code. The EAR file is deployed to JBoss 3.2.5, which unpacks the EAR and WAR files, but not the JAR file (this is not the problem, it's just FYI).
One of the files within the JAR file is an MS Word template whose absolute path needs to be passed to some native MS Word code (using [Jacob](http://danadler.com/jacob/), FWIW).
The problem is that if I try to obtain the File like this (from within some code in the JAR file):
```
URL url = getClass().getResource("myTemplate.dot");
File file = new File(url.toURI()); // <= fails!
String absolutePath = file.getAbsolutePath();
// Pass the absolutePath to MS Word to be opened as a document
```
... then the `java.io.File` constructor throws the IllegalArgumentException "URI is not hierarchical". The URL and URI both have the same toString() output, namely:
```
jar:file:/G:/jboss/myapp/jboss/server/default/tmp/deploy/tmp29269myapp.ear-contents/myapp.jar!/my/package/myTemplate.dot
```
This much of the path is valid on the file system, but the rest is not (being internal to the JAR file):
```
G:/jboss/myapp/jboss/server/default/tmp/deploy/tmp29269myapp.ear-contents
```
What's the easiest way of getting the absolute path to this file? | My current solution is to copy the file to the server's temporary directory, then use the absolute path of the copy:
```
File tempDir = new File(System.getProperty("java.io.tmpdir"));
File temporaryFile = new File(tempDir, "templateCopy.dot");
InputStream templateStream = getClass().getResourceAsStream("myTemplate.dot");
IOUtils.copy(templateStream, new FileOutputStream(temporaryFile));
String absolutePath = temporaryFile.getAbsolutePath();
```
I'd prefer a solution that doesn't involve copying the file. | Unless the code or application you are passing the URI String to accepts a format that specifies a location within a jar/zip file, your solution of copying the file to a temporary location is probably the best one.
If these files are referenced often, you may want to cache the locations of the extracted files and just verify their existence each time they are needed.
"java",
"file",
"jakarta-ee",
"path",
"jar"
] |
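The copy-to-a-temporary-file workaround above generalizes to any language that can read the resource as a stream. A Python sketch (the byte content is a stand-in for the real template; `extract_to_temp` is an invented helper name):

```python
import io
import shutil
import tempfile

def extract_to_temp(stream, suffix=".dot"):
    """Copy a packaged resource stream into a real file so native code
    can be handed an absolute filesystem path."""
    with tempfile.NamedTemporaryFile(delete=False, suffix=suffix) as tmp:
        shutil.copyfileobj(stream, tmp)
        return tmp.name

path = extract_to_temp(io.BytesIO(b"template bytes"))
print(path)  # absolute path of the extracted copy
```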
In Python, if I had a range and I wanted to iterate over it and divide each number by another number, could I do that in an if statement?
```
a = range(20)
for i in a:
if i / 3 == True:
print i
``` | Everyone here has done a good job explaining how to do it right. I just want to explain what you are doing wrong.
```
if i / 3 == True
```
Is equivalent to:
```
if i / 3 == 1
```
Because True == 1. So you are basically checking whether i divided by 3 equals 1. Your code will actually print 3, 4, and 5.
I think what you wanted to do is to check if i is a multiple of 3. Like this:
```
if i % 3 == 0
```
You can of course use an if statement to do it, or you can use a [list comprehension](http://docs.python.org/tutorial/datastructures.html#list-comprehensions) with an if clause:
```
[x for x in range(20) if x % 3 == 0]
```
---
To those who are downvoting this, from the [Python documentation](http://www.python.org/doc/2.5.2/lib/node34.html):
Boolean values are the two constant objects False and True. They are used to represent truth values (although other values can also be considered false or true). **In numeric contexts (for example when used as the argument to an arithmetic operator), they behave like the integers 0 and 1, respectively.** | Yes, but.
Please, please, please. Never say `if some expression == True`. It's redundant and causes many people to wonder what you're thinking.
More importantly.
`i/3` is the quotient.
`i%3` is the remainder. If i is a multiple of 3, `i%3 == 0`. | Dividing in an if statement | [
"python"
] |
I'm trying to find open source profilers rather than using one of the commercial profilers which I have to pay $$$ for. When I performed a search on SourceForge, I have come across these four C++ profilers that I thought were quite promising:
1. Shiny: C++ Profiler
2. Low Fat Profiler
3. Luke Stackwalker
4. FreeProfiler
I'm not sure which one of the profilers would be the best one to use in terms of learning about the performance of my program. It would be great to hear some suggestions. | You could try [Windows Performance Toolkit](http://msdn.microsoft.com/en-us/performance/default.aspx). Completely free to use. This [blog entry](http://blogs.msdn.com/pigscanfly/archive/2008/03/02/using-the-windows-sample-profiler-with-xperf.aspx) has an example of how to do sample-based profiling. | * [Valgrind](https://valgrind.org/info/tools.html) (And related tools like cachegrind, etc.)
* [Google performance tools](https://github.com/gperftools/gperftools) | Recommended Open Source Profilers | [
"c++",
"windows",
"profiler"
] |
Let me state up front that I have an infantile understanding of Monads. I have read the various threads on Monads here and have done a few hours of study on the concept. I hardly feel comfortable with the term, but I think it is safe to say that I *generally* understand what a Monad is/does.
I'm a C# developer who is looking to improve the way I work. What would help me further in my Monaducation is to see a real-world application of a Monad in C# (i.e. via a LINQ SelectMany() or somesuch) that is clearly an improvement over other ways of solving the same sort of problem in oldskool C#.
Has anyone seen such a beast? | Here is one such scenario: you want to author a parsing library (a nice example of an embedded DSL), and you discover that the best ones are monadic parser combinator libraries. So you write it leveraging LINQ syntax sugars to author C# code that has the same structure as the grammar of the language you're parsing, and you get the benefits of an awesome programming model for on-the-fly semantic analysis and error-recovery. See [this blog](http://lorgonblog.spaces.live.com/blog/cns!701679AD17B6D310!136.entry) for a description. | Find Pythagorean triples:
```
var r = from a in Enumerable.Range(1, 25)
from b in Enumerable.Range(a, 25-a)
from c in Enumerable.Range(b, 25-b)
where a*a + b*b == c*c
select new [] { a, b, c };
``` | What is the compelling scenario for using Monads in C# | [
"c#",
".net",
"monads"
] |
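The Pythagorean-triples query above is a nice illustration because it translates mechanically into other list-monad syntaxes. Roughly the same query as a Python comprehension (each `from` clause becomes a `for`, `where` becomes `if`; the ranges are approximated as 1..25):

```python
triples = [
    (a, b, c)
    for a in range(1, 26)
    for b in range(a, 26)
    for c in range(b, 26)
    if a * a + b * b == c * c
]
print(triples[:3])  # [(3, 4, 5), (5, 12, 13), (6, 8, 10)]
```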
I have some simple code below:
```
import java.util.ArrayList;
public class BoidList extends ArrayList
{
synchronized public Boid GetBoid( int idx_ ) throws ArrayIndexOutOfBoundsException
{
if( idx_ < super.size() &&
idx_ >= 0 )
{
return (Boid) super.get(idx_);
}
else
{
throw new ArrayIndexOutOfBoundsException();
}
}
synchronized public void RemoveBoid( int idx_ ) throws ArrayIndexOutOfBoundsException
{
if( idx_ < super.size() &&
idx_ >= 0 )
{
super.remove(idx_);
}
else
{
throw new ArrayIndexOutOfBoundsException();
}
}
}
```
There's a lot of similarity between the 2 methods, yet they do two different things. Is it possible to refactor this? | ```
public class BoidList extends ArrayList<Boid> {
private void checkIndex(int idx) {
if (idx >= super.size() || idx < 0) {
throw new ArrayIndexOutOfBoundsException(String.valueOf(idx));
}
}
synchronized public Boid getBoid(int idx) {
checkIndex(idx);
return super.get(idx);
}
synchronized public void removeBoid(int idx) {
checkIndex(idx);
super.remove(idx);
}
}
```
this fixes your code in several ways:
* the checkIndex() method is what you want
* never declare that a method throws a RuntimeException, since that indicates a programming error
* follow java naming conventions (GetBoid -> getBoid, RemoveBoid -> removeBoid)
* remove the useless cast by simply extending `ArrayList<Boid>`
**EDIT**
you may also want to check the [CopyOnWriteArrayList](http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/CopyOnWriteArrayList.html)<Boid> | What is the real purpose of the `BoidList`? Consider the following:
```
List<Boid> boids = Collections.synchronizedList(new ArrayList<Boid>());
```
This line of code is more-or-less equivalent to the subclass you are attempting to create:
* Type-safety is enforced at compile-time
* `ArrayList` already throws an `IndexOutOfBoundsException` if the index is invalid
* `Collections.synchronizedList()` ensures synchronized access to the list
I don't see any reason to create your own subclass, based on the source code you provided. | Is it possible to refactor this Java code? | [
"java",
"refactoring"
] |
I'm trying to modify all links on a page so they perform some additional work when they are clicked.
A trivial approach might be something like this:
```
function adaptLinks()
{
var links = document.getElementsByTagName('a');
for(i = 0; i != links.length; i++)
{
links[i].onclick = function (e)
{
<do some work>
return true;
}
}
}
```
But some of the links already have an *onClick* handler that should be preserved. I tried the following:
```
function adaptLinks()
{
var links = document.getElementsByTagName('a');
for(i = 0; i != links.length; i++)
{
var oldOnClick = links[i].onclick;
links[i].onclick = function (e)
{
if(oldOnClick != null && !oldOnClick())
{
return false;
}
<do some work>
return true;
}
}
}
```
But this doesn't work because *oldOnClick* is only evaluated when the handler is called (it contains the value of the **last** link at this point). | You need to create a closure to preserve the original `onclick` value of each link:
```
<a href="#" onclick="alert('hi');return false;">Hi</a>
<a href="#" onclick="alert('there');return true;">There</a>
<script type="text/javascript">
function adaptLinks() {
var links = document.getElementsByTagName('a');
for (var i = 0; i != links.length; i++) {
links[i].onclick = (function () {
var origOnClick = links[i].onclick;
return function (e) {
if (origOnClick != null && !origOnClick()) {
return false;
}
// do new onclick handling only if
// original onclick returns true
alert('some work');
return true;
}
})();
}
}
adaptLinks();
</script>
```
Note that this implementation only performs the new onclick handling if the original onclick handler returns true. That's fine if that's what you want, but keep in mind you'll have to modify the code slightly if you want to perform the new onclick handling even if the original handler returns false.
More on closures at the [comp.lang.javascript FAQ](http://www.jibbering.com/faq/faq_notes/closures.html) and from [Douglas Crockford](http://www.crockford.com/javascript/private.html). | Don't assign to an event handler directly: use the subscribe model **addEventListener** / **attachEvent** instead (which also have remove pairs!).
Good introduction [here](http://www.quirksmode.org/js/events_advanced.html). | JavaScript: Adding an onClick handler without overwriting the existing one | [
"",
"javascript",
"jquery",
"onclick",
""
] |
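The loop-variable capture pitfall in the question above is not specific to JavaScript; as a language-neutral illustration, here is a minimal Python sketch of the same bug and the same bind-per-iteration fix (variable names are mine):

```python
# All three lambdas close over the same loop variable, so they all
# see its final value -- the same pitfall as in the question.
funcs = [lambda: i for i in range(3)]
assert [f() for f in funcs] == [2, 2, 2]

# Fix: bind the current value at each iteration (here via a default
# argument, playing the role of the IIFE in the accepted answer).
funcs = [lambda i=i: i for i in range(3)]
assert [f() for f in funcs] == [0, 1, 2]
```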
I have a mathparser that can do functions like "IntPow(3,2)". If a user pastes "1,000,000" and then adds a plus symbol, making the full equation "1,000,000+IntPow(3,2)" the parser fails because it does not work with numbers that contain commas.
I need to remove the commas from the "1,000,000", but not from the "IntPow(3,2)" because IntPow has two parameters separated by a comma. The final equation will be "1000000+IntPow(3,2)". The equation is stored in one string. How would I remove only the commas that are outside of parentheses? I'm assuming that numbers containing commas will not be placed inside the IntPow parameter list.
When I say "remove commas" I really mean remove "CultureInfo.CurrentCulture.NumberFormat.NumberGroupSeparator", which could be a comma or a period depending on the locale. This part will be easy because I assume RegEx will be used and I can just concatenate that value in the RegEx comma place.
I have this RegEx: (.\*?) for finding the parenthesis and values inside of them but I'm not sure how to only remove the commas outside of the RegEx matches. | The easiest way is to not try and make a regex do this. Just loop over the string one character at a time. If you read a '(', increment a counter. If you read a ')', decrement that counter. If you read a comma, delete it if the counter is 0, otherwise leave it alone. | But what if a user pastes:
```
1,000+IntPow(3,000,2,000)
```
Now the 3,000 is between commas. | How would I remove all commas not inside parenthesis from a string in C#? | [
"",
"c#",
"regex",
""
] |
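The counter-based approach described in the accepted answer above can be sketched in a few lines; a hypothetical Python version (the function name and the `sep` parameter, standing in for the culture's NumberGroupSeparator, are mine):

```python
def strip_group_separators(expr, sep=","):
    """Drop `sep` characters that are not inside parentheses."""
    depth = 0
    out = []
    for ch in expr:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth = max(depth - 1, 0)
        if ch == sep and depth == 0:
            continue  # group separator outside all parentheses: drop it
        out.append(ch)
    return "".join(out)

assert strip_group_separators("1,000,000+IntPow(3,2)") == "1000000+IntPow(3,2)"
# Commas inside the parameter list survive, even with grouped numbers:
assert strip_group_separators("1,000+IntPow(3,000,2,000)") == "1000+IntPow(3,000,2,000)"
```

Note that this also handles the rejected answer's objection: separators nested inside the parentheses are left untouched.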
Using PHP, I can convert MySQL data or static table data to CSV, Excel, JSON, MySQL, etc., but is there a useful conversion script or tool that can convert table data into other formatted/styled formats such as PDF and/or JPG/PNG using the PHP GD Library or other? | I've used [this](http://www.digitaljunkies.ca/dompdf/) before to turn an HTML table into a PDF. I generated the table from a MySQL query. | To export to Excel I use the following code:
```
<?php
/* Define our Database and Table Info */
$username="";
$password="";
$database="";
$table="";
mysql_connect('localhost', $username, $password);
@mysql_select_db($database) or die( "Unable to select database");
$select = "SELECT * FROM $table";
$export = mysql_query($select);
$fields = mysql_num_fields($export);
$header = '';
$data = '';
for ($i = 0; $i < $fields; $i++) {
$header .= mysql_field_name($export, $i) . "\t";
}
while($row = mysql_fetch_row($export)) {
$line = '';
foreach($row as $value) {
if ((!isset($value)) OR ($value == "")) {
$value = "\t";
} else {
$value = str_replace('"', '""', $value);
$value = '"' . $value . '"' . "\t";
}
$line .= $value;
}
$data .= trim($line)."\n";
}
$data = str_replace("\r","",$data);
if ($data == "") {
$data = "\n(0) Records Found!\n";
}
header("Content-type: application/x-msdownload");
header("Content-Disposition: attachment; filename=mailinglist.xls");
header("Pragma: no-cache");
header("Expires: 0");
print "$header\n$data";
?>
```
Now be careful with how you include this. It's using the headers to send the file information and force a download; because of this, you can't have any whitespace output anywhere before these headers are sent, otherwise it may throw an error. I usually have this link open as a new window to prevent anything from happening... Again this is just a pretty basic script that can be expanded greatly. Hope this helps! | Is there a PHP script that can convert HTML table data to various formats? | [
"",
"php",
"file-format",
""
] |
I have an idea for a few web apps to write to help me, and maybe others, learn Japanese better since I am studying the language.
My problem is the site will be mostly in English, so it needs to fluently mix in Japanese characters, usually hiragana and katakana, but later kanji. I am getting closer to accomplishing this; I have figured out that the pages and source files need to use Unicode and the UTF-8 content type.
However, my problem comes in the actual coding. What I need is to manipulate strings of text that are kana. One example is:
けす I need to take that verb and convert it to the te-form けして. I would prefer to do this in javascript as it will help down the road to do more manipulation, but if I have to will just do DB calls and hold everything in a DB.
My question is not only how to do it in javascript, but what are some tips and strategies to doing these kinds of things in other languages, too. I am hoping to get more into doing language learning apps, but am lost when it comes to this. | > My question is not only how to do it
> in javascript, but what are some tips
> and strategies to doing these kinds
> of things in other languages too.
What you want to do is pretty basic string manipulation - apart from the missing word separators, as Barry notes, though that's not a technical problem.
Basically, for a modern Unicode-aware programming language (which JavaScript has been since version 1.3, I believe) there is no real difference between a Japanese kana or kanji and a Latin letter - they're all just characters. And a string is just, well, a string of characters.
Where it gets difficult is when you have to convert between strings and bytes, because then you need to pay attention to what encoding you are using. Unfortunately, many programmers, especially native English speakers, tend to gloss over this problem because ASCII is the de facto standard encoding for Latin letters and other encodings usually try to be compatible. If Latin letters are all you need, then you can get along being blissfully ignorant about character encodings, believe that bytes and characters are basically the same thing - and write programs that mutilate anything that's not ASCII.
So the "secret" of Unicode-aware programming is this: learn to recognize when and where strings/characters are converted to and from bytes, and make sure that in all those places the correct encoding is used, i.e. the same that will be used for the reverse conversion and one that can encode all the characters you're using. UTF-8 is slowly becoming the de facto standard and should normally be used wherever you have a choice.
Typical examples (non-exhaustive):
* When writing source code with non-ASCII string literals (configure encoding in the editor/IDE)
* When compiling or interpreting such source code (compiler/interpreter needs to know the encoding)
* When reading/writing strings to a file (encoding must be specified somewhere in the API, or in the file's metadata)
* When writing strings to a database (encoding must be specified in the configuration of the DB or the table)
* When delivering HTML pages via a webserver (encoding must be specified in the HTML headers or the pages' meta header; forms can be even more tricky) | * Stick to Unicode and utf-8 everywhere.
* Stay away from the native Japanese encodings: euc-jp, shiftjis, iso-2022-jp, but be aware that you'll probably encounter them at some point if you continue.
* Get familiar with a segmenter for doing complicated stuff like POS analysis, word segmentation, etc. The standard tools used by most people who do NLP (natural language processing) work on Japanese are, in order of popularity/power:
[**MeCab**](http://taku910.github.io/mecab/) (originally on [SourceForge](http://mecab.sourceforge.net/)) is awesome: it allows you to take text like,
```
「日本語は、とても難しいです。」
```
and get all sorts of great info back
```
kettle:~$ echo 日本語は、難しいです | mecab
日本語 名詞,一般,*,*,*,*,日本語,ニホンゴ,ニホンゴ
は 助詞,係助詞,*,*,*,*,は,ハ,ワ
、 記号,読点,*,*,*,*,、,、,、
難しい 形容詞,自立,*,*,形容詞・イ段,基本形,難しい,ムズカシイ,ムズカシイ
です 助動詞,*,*,*,特殊・デス,基本形,です,デス,デス
EOS
```
which is basically a detailed run-down of the parts-of-speech, readings, pronunciations, etc. It will also do you the favor of analyzing verb tenses,
```
kettle:~$ echo メキシコ料理が食べたい | mecab
メキシコ 名詞,固有名詞,地域,国,*,*,メキシコ,メキシコ,メキシコ
料理 名詞,サ変接続,*,*,*,*,料理,リョウリ,リョーリ
が 助詞,格助詞,一般,*,*,*,が,ガ,ガ
食べ 動詞,自立,*,*,一段,連用形,食べる,タベ,タベ
たい 助動詞,*,*,*,特殊・タイ,基本形,たい,タイ,タイ
EOS
```
However, the documentation is all in Japanese, and it's a bit complicated to set up and figure out how to format the output the way you want it. There are packages available for ubuntu/debian, and bindings in a bunch of languages including perl, python, ruby...
Apt-repos for ubuntu:
```
deb http://cl.naist.jp/~eric-n/ubuntu-nlp intrepid all
deb-src http://cl.naist.jp/~eric-n/ubuntu-nlp intrepid all
```
Packages to install:
`$ apt-get install mecab-ipadic-utf8 mecab python-mecab`
should do the trick I think.
The other alternatives to mecab are, [ChaSen](http://chasen.naist.jp/hiki/ChaSen/), which was written years ago by the author of MeCab (who incidentally works at google now), and [Kakasi](http://kakasi.namazu.org/), which is much less powerful.
I would definitely try to avoid rolling your own conjugation routines. the problem with this is just that it will require tons and tons of work, which others have already done, and covering all the edge cases with rules is, at the end of the day, impossible.
MeCab is statistically driven, and trained on loads of data. It employs a sophisticated machine learning technique called *conditional random fields* (CRFs) and the results are really quite good.
Have fun with the Japanese. I'm not sure how good your Japanese is, but if you need help with the docs for mecab or whatever feel free to ask about that as well. Kanji can be quite intimidating at the beginning. | Programming tips with Japanese Language/Characters | [
"",
"javascript",
"language-agnostic",
"unicode",
"nlp",
"cjk",
""
] |
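The str-vs-bytes boundary that the accepted answer above describes can be demonstrated in a couple of lines; a Python sketch using the kana from the question:

```python
text = "けす"  # the kana verb from the question

# str -> bytes: an encoding must be chosen explicitly.
data = text.encode("utf-8")
assert data == b"\xe3\x81\x91\xe3\x81\x99"

# bytes -> str: the reverse conversion must use the same encoding.
assert data.decode("utf-8") == text

# Decoding with the wrong encoding is where mojibake comes from.
assert data.decode("latin-1") != text
```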
I have to compute a value involving data from several tables. I was wondering if using a stored procedure with cursors would offer a performance advantage compared to reading the data into a dataset (using simple select stored procedures) and then looping through the records? The dataset is not large, it consists in 6 tables, each with about 10 records, mainly GUIDs, several nvarchar(100) fields, a float column, and an nvarchar(max). | That would probably depend on the dataset you may be retrieving back (the larger the set, the more logical it may be to perform inside SQL Server instead of passing it around), but I tend to think that if you are looking to perform computations, do it in your code and away from your stored procedures. If you need to use cursors to pull the data together, so be it, but using them to do calculations and other non-retrieval functions I think should be shied away from.
**Edit**: This [Answer](https://stackoverflow.com/questions/139843/cursor-vs-while-loop-what-are-advantage-disadvantages-of-cursors-mssql/139905#139905) to another related question will give some pros and cons to cursors vs. looping. This answer would seem to conflict with my previous assertion (read above) about scaling. Seems to suggest that the larger you get, the more you will probably want to move it off to your code instead of in the stored procedure. | Cursors should be faster, but if you have a lot of users running this it will eat up your server resources. Bear in mind you have a more powerful coding language when writing loops in .Net rather than SQL.
There are very few occasions where a cursor cannot be replaced using standard set based SQL. If you are doing this operation on the server you may be able to use a set based operation. Any more details on what you are doing?
If you do decide to use a cursor bear in mind that a FAST\_FORWARD read only cursor will give you the best performance, and make sure that you use the deallocate statement to release it. See [here](http://www.mssqlcity.com/Tips/tipCursor.htm) for cursor tips | SQL Server - SQL Cursor vs ADO.NET | [
"",
"sql",
"ado.net",
"cursors",
""
] |
I need to deploy only the referenced classes in a very limited environment such as [a data carousel](http://en.wikipedia.org/wiki/Data_and_object_carousel) for interactive TV. Bandwidth is expensive and .jar files are not supported. | Check out [ProGuard](http://proguard.sourceforge.net/) which is an obfuscator that will list code and classes that are not used. Obfuscating itself usually results in a smaller footprint.
> ProGuard is a free Java class file shrinker, optimizer, obfuscator, and preverifier. It detects and removes unused classes, fields, methods, and attributes. It optimizes bytecode and removes unused instructions. It renames the remaining classes, fields, and methods using short meaningless names. Finally, it preverifies the processed code for Java 6 or for Java Micro Edition. | Sounds like you need a dependency analyzer. [This one](http://www.dependency-analyzer.org/) might do the trick.
[ProGuard](http://proguard.sourceforge.net/) might be even better, since it can also shrink existing .class files. | How to know which classes inside a .jar file are referenced? | [
"",
"java",
"class",
""
] |
I was wondering if it was possible (and, if so, how) to chain together multiple managers to produce a query set that is affected by both of the individual managers. I'll explain the specific example that I'm working on:
I have multiple abstract model classes that I use to provide small, specific functionality to other models. Two of these models are a DeleteMixin and a GlobalMixin.
The DeleteMixin is defined as such:
```
class DeleteMixin(models.Model):
deleted = models.BooleanField(default=False)
objects = DeleteManager()
class Meta:
abstract = True
def delete(self):
self.deleted = True
self.save()
```
Basically it provides a pseudo-delete (the deleted flag) instead of actually deleting the object.
The GlobalMixin is defined as such:
```
class GlobalMixin(models.Model):
is_global = models.BooleanField(default=True)
objects = GlobalManager()
class Meta:
abstract = True
```
It allows any object to be defined as either a global object or a private object (such as a public/private blog post).
Both of these have their own managers that affect the queryset that is returned. My DeleteManager filters the queryset to only return results that have the deleted flag set to False, while the GlobalManager filters the queryset to only return results that are marked as global. Here is the declaration for both:
```
class DeleteManager(models.Manager):
def get_query_set(self):
return super(DeleteManager, self).get_query_set().filter(deleted=False)
class GlobalManager(models.Manager):
def globals(self):
return self.get_query_set().filter(is_global=1)
```
The desired functionality would be to have a model extend both of these abstract models and grant the ability to only return results that are both non-deleted and global. I ran a test case on a model with 4 instances: one global and non-deleted, one global and deleted, one non-global and non-deleted, and one non-global and deleted.
* SomeModel.objects.all() returns instances 1 and 3 (the two non-deleted ones - great!).
* SomeModel.objects.globals() gives an error that DeleteManager doesn't have a globals method (this assumes my model declaration is SomeModel(DeleteMixin, GlobalMixin); if I reverse the order, I don't get the error, but it doesn't filter out the deleted ones).
* If I change GlobalMixin to attach GlobalManager to globals instead of objects (so the new command would be SomeModel.globals.globals()), I get instances 1 and 2 (the two globals), while my intended result would be to only get instance 1 (the global, non-deleted one).
I wasn't sure if anyone had run into any situation similar to this and had arrived at a solution. Either a way to make it work in my current thinking or a re-work that provides the functionality I'm after would be very much appreciated. I know this post has been a little long-winded. If any more explanation is needed, I would be glad to provide it.
**Edit:**
I have posted the eventual solution I used to this specific problem below. It is based on the link to Simon's custom QuerySetManager. | See this snippet on Djangosnippets: <http://djangosnippets.org/snippets/734/>
Instead of putting your custom methods in a manager, you subclass the queryset itself. It's very easy and works perfectly. The only issue I've had is with model inheritance: you always have to define the manager in model subclasses (just "objects = QuerySetManager()" in the subclass), even though they will inherit the queryset. This will make more sense once you are using QuerySetManager. | Here is the specific solution to my problem using the custom QuerySetManager by Simon that Scott linked to.
```
from django.db import models
from django.contrib import admin
from django.db.models.query import QuerySet
from django.core.exceptions import FieldError
class MixinManager(models.Manager):
def get_query_set(self):
try:
return self.model.MixinQuerySet(self.model).filter(deleted=False)
except FieldError:
return self.model.MixinQuerySet(self.model)
class BaseMixin(models.Model):
admin = models.Manager()
objects = MixinManager()
class MixinQuerySet(QuerySet):
def globals(self):
try:
return self.filter(is_global=True)
except FieldError:
return self.all()
class Meta:
abstract = True
class DeleteMixin(BaseMixin):
deleted = models.BooleanField(default=False)
class Meta:
abstract = True
def delete(self):
self.deleted = True
self.save()
class GlobalMixin(BaseMixin):
is_global = models.BooleanField(default=True)
class Meta:
abstract = True
```
Any mixin in the future that wants to add extra functionality to the query set simply needs to extend BaseMixin (or have it somewhere in its hierarchy). Any time I try to filter the query set down, I wrap it in a try-catch in case that field doesn't actually exist (i.e., it doesn't extend that mixin). The global filter is invoked using globals(), while the delete filter is automatically invoked (if something is deleted, I never want it to show). Using this system allows for the following types of commands:
```
TemporaryModel.objects.all() # If extending DeleteMixin, no deleted instances are returned
TemporaryModel.objects.all().globals() # Filter out the private instances (non-global)
TemporaryModel.objects.filter(...) # Ditto about excluding deleteds
```
One thing to note is that the delete filter won't affect admin interfaces, because the default Manager is declared first (making it the default). I don't remember when they changed the admin to use Model.\_default\_manager instead of Model.objects, but any deleted instances will still appear in the admin (in case you need to un-delete them). | Django Manager Chaining | [
"",
"python",
"django",
"django-models",
"django-managers",
""
] |
What's an easy way to create a directory on an FTP server using C#?
I figured out how to upload a file to an already existing folder like this:
```
using (WebClient webClient = new WebClient())
{
string filePath = "d:/users/abrien/file.txt";
webClient.UploadFile("ftp://10.128.101.78/users/file.txt", filePath);
}
```
However, if I want to upload to `users/abrien`, I get a `WebException` saying the file is unavailable. I assume this is because I need to create the new folder before uploading my file, but `WebClient` doesn't seem to have any methods to accomplish that. | Use `FtpWebRequest`, with a method of [`WebRequestMethods.Ftp.MakeDirectory`](http://msdn.microsoft.com/en-us/library/system.net.webrequestmethods.ftp.makedirectory.aspx).
For example:
```
using System;
using System.Net;
class Test
{
static void Main()
{
WebRequest request = WebRequest.Create("ftp://host.com/directory");
request.Method = WebRequestMethods.Ftp.MakeDirectory;
request.Credentials = new NetworkCredential("user", "pass");
using (var resp = (FtpWebResponse) request.GetResponse())
{
Console.WriteLine(resp.StatusCode);
}
}
}
``` | Here is the answer if you want to create nested directories
There is no clean way to check whether a folder exists on the FTP server, so you have to loop and create the whole nested structure one folder at a time
```
public static void MakeFTPDir(string ftpAddress, string pathToCreate, string login, string password, byte[] fileContents, string ftpProxy = null)
{
FtpWebRequest reqFTP = null;
Stream ftpStream = null;
string[] subDirs = pathToCreate.Split('/');
string currentDir = string.Format("ftp://{0}", ftpAddress);
foreach (string subDir in subDirs)
{
try
{
currentDir = currentDir + "/" + subDir;
reqFTP = (FtpWebRequest)FtpWebRequest.Create(currentDir);
reqFTP.Method = WebRequestMethods.Ftp.MakeDirectory;
reqFTP.UseBinary = true;
reqFTP.Credentials = new NetworkCredential(login, password);
FtpWebResponse response = (FtpWebResponse)reqFTP.GetResponse();
ftpStream = response.GetResponseStream();
ftpStream.Close();
response.Close();
}
catch (Exception ex)
{
//directory already exists - I know that is weak, but there is no way to check if a folder exists on FTP...
}
}
}
``` | How do I create a directory on FTP server using C#? | [
"",
"c#",
".net",
"ftp",
"webclient",
""
] |
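The nested-directory answer above boils down to issuing one MakeDirectory call per accumulated path prefix and swallowing "already exists" errors; the prefix logic on its own can be sketched like this (a Python sketch, names mine, FTP calls omitted):

```python
def ftp_dir_prefixes(path):
    """Cumulative prefixes of a path: one MKD call would be made per prefix."""
    parts = [p for p in path.split("/") if p]
    return ["/".join(parts[:i + 1]) for i in range(len(parts))]

assert ftp_dir_prefixes("users/abrien/data") == [
    "users",
    "users/abrien",
    "users/abrien/data",
]
```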
I am serializing Lists of classes which are my data entities. I have a DataProvider that contains a List.
I always modify items directly within the collection.
What is the best way of determining if any items in the List have changed? I am using the Compact Framework.
My only current idea is to create a hash of the List (if that's possible) when I load the list. Then when I do a save I re-get the hash of the list and see if the values are different. If they're different I save and then update the stored hash for comparison later; if they're the same, I don't save.
Any ideas? | If the items you add to the list implement the [`INotifyPropertyChanged`](http://msdn.microsoft.com/en-us/library/system.componentmodel.inotifypropertychanged.aspx) interface, you could build your own generic list that hooks the event in that interface for all objects you add to the list, and unhooks the event when the items are removed from the list.
There's a [`BindingList<T>`](http://msdn.microsoft.com/en-us/library/ms132679.aspx) class in the framework you can use, or you can write your own.
Here's a sample add method, assuming the type has been declared with `where T: INotifyPropertyChanged`:
```
public void Add(T item)
{
// null-check omitted for simplicity
item.PropertyChanged += ItemPropertyChanged;
_List.Add(item);
}
```
and the `this[index]` indexer property:
```
public T this[Int32 index]
{
get { return _List[index]; }
set {
T oldItem = _List[index];
_List[index] = value;
if (oldItem != value)
{
if (oldItem != null)
oldItem.PropertyChanged -= ItemPropertyChanged;
if (value != null)
value.PropertyChanged += ItemPropertyChanged;
}
}
}
```
If your items don't support `INotifyPropertyChanged`, but they're your classes, I would consider adding that support. | You could create your own `IList<T>` class, say `DirtyList<T>`, that can record when the list has changed. | C# - Determine if List<T> is dirty? | [
"",
"c#",
"compact-framework",
"c#-2.0",
"hash",
"generic-list",
""
] |
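The `DirtyList<T>` idea from the second answer above can be sketched quickly; a minimal Python analog (note that, as the accepted answer implies, a list-level flag cannot see mutations *inside* the stored items, which is what the `INotifyPropertyChanged` route solves):

```python
class DirtyList(list):
    """List that records whether the collection itself was mutated
    (add/remove/replace); changes inside stored items are not seen."""

    def __init__(self, *args):
        super().__init__(*args)
        self.dirty = False

    def append(self, item):
        self.dirty = True
        super().append(item)

    def remove(self, item):
        self.dirty = True
        super().remove(item)

    def __setitem__(self, index, value):
        self.dirty = True
        super().__setitem__(index, value)

items = DirtyList([1, 2, 3])
assert not items.dirty
items[0] = 99           # replacing an element marks the list dirty
assert items.dirty
```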
Some code for context:
```
class a
{
}
class b
{
public a a{get;set;}
public static implicit operator a(b b)
{
return b.a;
}
}
a a=null;
b b=null;
a = b;
//compiler: cannot apply operator '==' to operands of type tralala...
bool c = a == b;
```
Is it possible to use the == operator on instances of different types, where one can implicitly convert to another? What did I miss?
**Edit:**
If types must be the same calling ==, then why
```
int a=1;
double b=1;
bool c=a==b;
```
works? | The `implicit` operator only works for assignment.
You want to overload the equality (`==`) operator, as such:
```
class a
{
public static bool operator ==(a x, b y)
{
return x == y.a;
}
public static bool operator !=(a x, b y)
{
return !(x == y);
}
}
class b
{
public a a{get;set;}
public static implicit operator a(b b)
{
return b.a;
}
}
```
This should then allow you to compare two objects of type `a` and `b` as suggested in your post.
```
var x = new a();
var y = new b();
bool c = (x == y); // compiles
```
**Note:**
I recommend simply overriding the `GetHashCode` and `Equals` methods, as the compiler warns, but as you seem to want to suppress the warnings, you can do that as follows.
Change your class declaration of `a` to:
```
#pragma warning disable 0660, 0661
class a
#pragma warning restore 0660, 0661
{
// ...
}
``` | > Is it possible to use == operator on
> different type instances, where one
> can implicitly convert to another?
Yes.
> What did i miss?
Here's the relevant portion of the specification. You missed the highlighted word.
> The predefined reference type equality
> operators require [that] both operands
> are reference-type values or the
> literal null. Furthermore, a **standard**
> implicit conversion exists from the
> type of either operand to the type of
> the other operand.
A user-defined conversion is by definition not a standard conversion. These are reference types. Therefore, the predefined reference type equality operator is not a candidate.
> If types must be the same calling ==,
> then why [double == int] works?
Your supposition that the types must be the same is incorrect. There is a standard implicit conversion from int to double and there is an equality operator that takes two doubles, so this works.
I think you also missed this bit:
> It is a compile-time error to use the
> predefined reference type equality
> operators to compare two references
> that are known to be different at
> compile-time. For example, if the
> compile-time types of the operands are
> two class types A and B, and if
> neither A nor B derives from the
> other, then it would be impossible for
> the two operands to reference the same
> object. Thus, the operation is
> considered a compile-time error. | C# implicit conversions and == operator | [
"",
"c#",
"operators",
"type-conversion",
"equals-operator",
""
] |
In a java app, assuming I have option of choosing the following comparison methods
**equalsIgnoreCase(String anotherString)**
**compareToIgnoreCase(String str)**
Which one is faster? | `equalsIgnoreCase` *can* be a lot faster. For example, consider two strings which start with the same 10,000 characters - but one of them has an extra character at the end. `equalsIgnoreCase` can return immediately; `compareToIgnoreCase` has to iterate to the end of the string to see the difference.
But generally I'd go with whichever expresses your intention better. This works well for performance too: assuming I'm right in saying that `equalsIgnoreCase` is at least as fast as `compareToIgnoreCase`, it means you should use that where you can - if you need an actual ordering, you've got to use `compareToIgnoreCase` anyway. | if you worry about performances... **measure it** | Which is faster, equalsIgnoreCase or compareToIgnoreCase | [
"",
"java",
"string",
""
] |
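The length short-circuit described in the accepted answer above is easy to illustrate; a rough Python sketch mirroring Java's equalsIgnoreCase semantics (the function name is mine, and full Unicode case rules are subtler than this):

```python
def equals_ignore_case(a, b):
    # Mirrors Java's equalsIgnoreCase: unequal lengths mean "not equal",
    # so the 10,000-character scan from the example can be skipped entirely.
    return len(a) == len(b) and a.lower() == b.lower()

assert equals_ignore_case("Hello", "hELLO")
assert not equals_ignore_case("Hello", "Hello!")  # early exit on length
```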
What's the pythonic way to designate unreachable code in python as in:
```
gender = readFromDB(...) # either 'm' or 'f'
if gender == 'm':
greeting = 'Mr.'
elif gender == 'f':
greeting = 'Ms.'
else:
# What should this line say?
``` | ```
raise ValueError('invalid gender %r' % gender)
``` | You could raise an exception:
```
raise ValueError("Unexpected gender; expected 'm' or 'f', got %s" % gender)
```
or use an assert False if you expect the database to return only 'm' or 'f':
```
assert False, "Unexpected gender; expected 'm' or 'f', got %s" % gender
``` | How to designate unreachable python code | [
"",
"python",
""
] |
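Putting the accepted answer above in context, the whole pattern looks like this (a sketch; the function name is mine):

```python
def greeting_for(gender):
    if gender == "m":
        return "Mr."
    elif gender == "f":
        return "Ms."
    # "Unreachable" branch: fail loudly instead of silently continuing.
    raise ValueError("invalid gender %r" % gender)

assert greeting_for("f") == "Ms."
try:
    greeting_for("x")
    assert False, "expected ValueError"
except ValueError as exc:
    assert "'x'" in str(exc)
```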
In order to be able to detect RTs (retweets) of a particular tweet, I plan to store hashes of each formatted tweet in the database.
What hashing algorithm should I use? Cryptographic strength is of course not essential - just a minimal way of storing data as something which can then be compared for equality, in an efficient way.
My first attempt at this was by using md5 hashes. But I figured there can be hashing algorithms that are much more efficient, as security is not required. | You are trying to hash a string, right? Built-in types can be hashed right away; just do `hash("some string")` and you get some int. It's the same function Python uses for dictionaries, so it is probably the best choice. | Do you really need to hash at all? Twitter messages are short enough (and disk space cheap enough) that it may be better to just store the whole message, rather than eating up clock cycles to hash it.
"",
"python",
"hash",
"twitter",
"md5",
""
] |
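The built-in-hash suggestion from the accepted answer above can be sketched as a simple duplicate check; note the persistence caveat in the comments (the function name is mine):

```python
seen = set()

def is_retweet(formatted_tweet):
    """True if an identical formatted tweet was hashed before."""
    # Caveat: Python's built-in str hash is randomized per process
    # (PYTHONHASHSEED), so for values persisted to a database you
    # would need a stable digest (e.g. from hashlib) instead.
    h = hash(formatted_tweet)
    if h in seen:
        return True
    seen.add(h)
    return False

assert not is_retweet("RT @user: hello world")  # first sighting
assert is_retweet("RT @user: hello world")      # duplicate detected
```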
I hope that everyone here knows the PHP 'variable variable' syntax:
```
$color = 'red';
$red = 'yes, im red';
echo $$color;
//output: 'yes, im red';
```
but my problem is: what is this syntax called?
I'm trying to find the reference on php.net, with no results (I want to know if this feature will be kept in PHP 6, its other attributes, etc...) | [Variable Variables](http://is.php.net/manual/en/language.variables.variable.php)
And yes it will be kept in PHP6 as far as I know. | Just to share some tips.
To avoid confusion, you may use the curly braces.
```
${$color}
```
One of my team members removed the double $, thinking it was a typo.
Perhaps doing so will avoid confusion. | Where to find the reference for the double dollar PHP syntax? | [
"",
"php",
"syntax",
""
] |
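For comparison, Python has no variable-variable syntax; the usual analog is a dict keyed by name, which also sidesteps the kind of $$-vs-typo confusion the second answer describes. A tiny sketch:

```python
# A dict plays the role of PHP's $$color lookup.
lookup = {"red": "yes, im red"}
color = "red"
assert lookup[color] == "yes, im red"
```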
I have a web service method where I would like to throw some custom exceptions e.g. SomeException, SomeOtherException etc which the web service would then turn into a SOAP fault which the client would be able to handle. In Java I can have wsdl:fault elements within the wsdl:operation element in the WSDL. It appears it .NET that this is not catered for and that there is no way of putting attributes on a WebMethod to show what SOAP faults may occur.
If I create a Java web service which has wsdl:fault elements and add a web reference to a .NET project I would have expected the wsdl:fault elements to cause appropriately named exceptions to be created just as the other entities are created, this however does not seem to be the case.
Is it the case that wsdl:fault elements in a WSDL are completely ignored by .NET? They are part of the WSDL specification defined at <http://www.w3.org/TR/wsdl> so this wasn't the behaviour I was expecting.
If this is the case, possible workarounds might be returning a result object which contains a success/failure boolean value and an error message/enum, or using SoapExceptions. If I choose to use SoapExceptions I am then putting the emphasis on the users of my web service to handle these and deserialize them properly. Neither of these seems a great way of handling this, and both add extra problems and code to work around the issue.
Any advice? | Since you ask about .NET 2.0, I guess you know that this is "fixed" in WCF, where you can add the attribute [FaultContract(typeof(YourCustomException))].
The "normal" way this was done in 2.0 is as you says, add a Response message with a success-boolean, Result and a Error property.
You can typically see how this is done in EntLib. | ASMX web services did not support the wsdl:fault element, either on the client or the server. They never will.
As [ThorHalvor](https://stackoverflow.com/users/63243/thorhalvor) has said, the bug fix for this is called "WCF".
I have successfully hand-written a WSDL that includes wsdl:fault elements, then returned those faults through an ASMX web service by including the fault message as the Detail property of a SoapException. Java and WCF clients then properly saw this as an exception of the appropriate kind. | .NET 2.0 Web Services WSDL Fault element being ignored? | [
"",
"c#",
".net",
"web-services",
"wsdl",
"asmx",
""
] |
Ok, I asked about this very error earlier this week and had some very helpful answers and without doubt things have drastically improved since I started following the suggestions.
However, now I am using the 'correct', best practice method to access the database I still get this error on some functions and I cannot get it to disappear for that block. Here is my code:
```
Public Shared Function doesBasketExist(ByVal baskethash As String) As Boolean
Dim _r As Boolean
Using db As New SqlConnection(System.Configuration.ConfigurationManager.ConnectionStrings("pitstopConnectionString").ConnectionString)
Using cmd As New SqlCommand("doGetBasketByHash", db)
cmd.CommandType = CommandType.StoredProcedure
cmd.Parameters.AddWithValue("@baskethash", baskethash)
Using dr As SqlDataReader = cmd.ExecuteReader()
If dr.HasRows() = True Then
_r = True
Else
_r = False
End If
dr.Close()
End Using
End Using
End Using
Return _r
End Function
```
Now no matter what I do I get "ExecuteReader requires an open and available Connection. The connection's current state is closed." on this connection. I do have functions with objects named the same thing within this class (cmd, dr etc.), but Using cleans up after itself, doesn't it?
Suggestions welcome :) | I think you have forgotten to open the connection.
Open it before this line:
```
cmd.Parameters.AddWithValue("@baskethash", baskethash)
```
Using -
```
db.Open()
``` | You actually forgot to `Open` connection:
```
db.Open()
Using dr As SqlDataReader = cmd.ExecuteReader()
``` | ExecuteReader requires an open and available Connection. The connection's current state is closed | [
"",
"sql",
"sqldatareader",
"sqlconnection",
""
] |
I have a javascript function that calls a generic function to make an ajax call to the server. I need to retrieve a result (true/false) from the callback function of the ajax call, but the result I get is always 'undefined'.
A super-simplified version of the generic function without all my logic would be:
```
function CallServer(urlController) {
$.ajax({
type: "POST",
url: urlController,
async: false,
data: $("form").serialize(),
success:
function(result) {
if (someLogic)
return true;
else
return false;
},
error:
function(errorThrown) {
return false;
}
});
}
```
And the function calling it would be something like:
```
function Next() {
var result = CallServer("/Signum/TrySave");
if (result == true) {
document.forms[0].submit();
}
}
```
The "result" variable is always 'undefined', and debugging it I can see that the "return true" line of the callback function is being executed.
Any ideas of why this is happening? How could I *bubble* the return value from the callback function to the CallServer function?
Thanks | Just found how to do it :) Declaring a variable and updating it accordingly from the callback function. Afterwards I can return that variable. I place the code for future readers:
```
function CallServer(urlController) {
var returnValue = false;
$.ajax({
type: "POST",
url: urlController,
async: false,
data: $("form").serialize(),
success:
function(result) {
if (someLogic){
returnValue = true;
return;
}
},
error:
function(errorThrown) {
alert("Error occured: " + errorThrown);
}
});
return returnValue;
}
``` | Just in case you want to go the asynchronous way (which is a better solution because it will not freeze your browser while doing the request), here is the code:
```
function CallServer(urlController, callback) {
$.ajax({
type: "POST",
url: urlController,
async: true,
data: $("form").serialize(),
success:
function(result) {
var ret = ( someLogic );
callback(ret);
},
error:
function(errorThrown) {
return false;
}
});
}
function Next() {
CallServer("/Signum/TrySave", function(result) {
if (result == true) {
document.forms[0].submit();
}
});
}
``` | Promote callback onSuccess return value to the Caller Function return value | [
"",
"javascript",
"ajax",
"jquery",
""
] |
I have a class that requires a specific method to be called before being used by other objects, this method implements all the required logic and sets the properties of the class to their respective values. How can I ensure that the method of this class is called before the object is returned for use by other objects? I heard that it is a bad idea to implement logic in the constructor so I cannot call this method in the constructor. A code example for this sort of implementation is as follows:
```
SomeClass myClass = new SomeClass("someName");
//Class must call this method if object is to be of any use
myClass.ConvertNameToFunnyCharacters();
return myClass;
``` | Putting a lot of logic in the constructor can lead to a few problems:
* If the constructor calls methods of the object, those methods run in a partially constructed object. This can really bite you when you override the method in subclasses: in Java and C# the subclass' implementation will run before the subclass' constructor has initialised the extended state of the object and so fail with null pointer exceptions. C++ works more "correctly" but can cause different confusing effects.
* It makes unit testing with mock objects a bit more complicated if the constructor calls back out to objects passed as parameters.
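That first pitfall can be reproduced in a few lines (a Python sketch; the same ordering applies in Java and C#, where the overridden method runs before the subclass constructor body has initialised its fields):

```python
class Base:
    def __init__(self):
        self.setup()          # virtual-style call from the constructor

class Sub(Base):
    def __init__(self):
        super().__init__()    # setup() runs here...
        self.value = 42       # ...before this field exists

    def setup(self):
        # the override observes a half-built object
        self.saw_value = hasattr(self, "value")

s = Sub()
```

Here `s.saw_value` ends up False: the override ran while the object was only partially constructed.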
So, I prefer to keep constructors as simple as possible: just assign parameters to instance variables. If I need to perform more complex logic to initialise an object I write a static factory function that calculates the constructor parameter values and passes them to a simple constructor. | If it's essential that the object is constructed correctly then it's not a bad idea to put the logic in the constructor. You should consider putting the logic in another method - which should be `public` if you want to be able to "reset" the object back to its default state. | What is the best way to ensure that all necessary class properties are set before returning an object which will be used somewhere else | [
"",
"c#",
""
] |
Just looking for a good PHP image library. I want to display images with rounded corners, do some resizing, and blur some other pictures, either on the fly or on upload. | I'd suggest having a look at ImageMagick.
There are excellent wrappers for the library in PHP too: <http://www.imagemagick.org/script/api.php#php> | Have a go with <http://wideimage.sourceforge.net/wiki/MainPage>
It doesn't do it out of the box but you could have a partially transparent PNG that you could put on top of your original image, making it blurry. | Looking for a PHP Image Library... rounded corners & resizing | [
"",
"php",
"image-processing",
""
] |
I want to switch over to Mono from .NET (plus from Visual Studio to whatever IDE Mono offers). Is this going to be really tough?
What are the issues I would face?
Thanks | As others have pointed out, I'm assuming that what you meant was moving from the .Net framework to Mono. With that said, the amount of difficulty you'll face depends a lot on what kind of code you are trying to move and what parts of the framework you use. For example, if your code is littered with P/Invoke calls, you're going to have a rough time. If it's straightforward .Net code you will probably have *relatively* smooth sailing.
You're going to want to spend some time with the [Mono Migration Analyzer](http://mono-project.com/MoMA) (MoMA). You'll run into things that aren't caught by the tool, but it will catch a lot of the basics and leave you free to tackle bigger things. | I believe Mono hasn't quite caught up with all the .Net 3.5 libraries and hence has either no or partial implementations of them. Mono will always lag a bit behind Microsoft, but it's a fantastic achievement and a brilliant piece of work.
have a look at the [Mono Roadmap](http://mono-project.com/Roadmap#Partial_Implementations) | Limitations in Mono for a .Net framework developer | [
"",
"c#",
"visual-studio-2008",
"mono",
""
] |
> ### Duplicate: [PHP validation/regex for URL](https://stackoverflow.com/questions/206059/php-validation-regex-for-url)
My goal is to create a PHP regex for a website name. The regex is for a lead-gathering form and should accept any legitimate kind of website name syntax that someone might enter. After an exhaustive search, I'm surprised that I can't find one out there.
Here are the regex matches that I'm looking for:
* somewebsite.com
* <http://somewebsite.com>
* <http://www.somewebsite.com>
AND, it should also match:
* any of the above with a trailing slash, such as: somewebsite.com/
* subdomains | You might need to tweak it:
```
<?php
$pattern = '/^(([\w]+:)?\/\/)?(([\d\w]|%[a-fA-f\d]{2,2})+(:([\d\w]|%[a-fA-f\d]{2,2})+)?@)?([\d\w][-\d\w]{0,253}[\d\w]\.)+[\w]{2,4}(:[\d]+)?(\/([-+_~.\d\w]|%[a-fA-f\d]{2,2})*)*(\?(&?([-+_~.\d\w]|%[a-fA-f\d]{2,2})=?)*)?(#([-+_~.\d\w]|%[a-fA-f\d]{2,2})*)?$/';
$url1 = "http://www.somewebsite.com";
$url2 = "https://www.somewebsite.com";
$url3 = "https://somewebsite.com";
$url4 = "www.somewebsite.com";
$url5 = "somewebsite.com";
function valURL($pattern, $url) {
$return = false;
if(preg_match($pattern, $url)) {
$return = true;
}
if($return == true) {
echo "Match URL: <font color='green'>" . $url . "</font><br /><br />";
} else {
echo "Try Again: <font color='red'>URL: " . $url . "</font><br /><br />";
}
}
valURL($pattern, $url1);
valURL($pattern, $url2);
valURL($pattern, $url3);
valURL($pattern, $url4);
valURL($pattern, $url5);
?>
``` | No RegEx necessary.
```
$subject = 'example.com';
$part = (stripos($subject, 'http://') === FALSE) ? 'http://' : '' ;
var_dump(filter_var($part.$subject, FILTER_VALIDATE_URL));
``` | PHP RegEx for "Website Name" | [
"",
"php",
"regex",
"dns",
""
] |
Yes, I know you have to embed the google analytics javascript into your page.
But how is the collected information submitted to the google analytics server?
For example an AJAX request will not be possible because of the browsers security settings (cross domain scripting).
Maybe someone has already had a look at the confusing Google JavaScript code? | When the HTML page makes a request for the ga.js file, the HTTP protocol already sends a large amount of data: IP, referrer, browser, language, system. There is no need to use AJAX.
But some data still can't be obtained this way, so the GA script puts an image into the HTML with additional parameters; take a look at this example:
`http://www.google-analytics.com/__utm.gif?utmwv=4.3&utmn=1464271798&utmhn=www.example.com&utmcs=UTF-8&utmsr=1920x1200&utmsc=32-bit&utmul=en-us&utmje=1&utmfl=10.0%20r22&utmdt=Page title&utmhid=1805038256&utmr=0&utmp=/&utmac=cookie value`
This is a blank image, sometimes called a [tracking pixel](http://en.wikipedia.org/wiki/Web_bug), that GA puts into HTML. | Some good answers here which individually tend to hit on one method or another for sending the data. There's a valuable reference which I feel is missing from the above answers, though, and covers all the methods.
Google refers to the different methods of sending data as 'transport mechanisms'.
From the Analytics.js documentation Google mentions the [three main transport mechanisms](https://developers.google.com/analytics/devguides/collection/analyticsjs/field-reference#transport) that it uses to send data.
> This specifies the transport mechanism with which hits will be sent. The options are 'beacon', 'xhr', or 'image'. By default, analytics.js will try to figure out the best method based on the hit size and browser capabilities. If you specify 'beacon' and the user's browser does not support the `navigator.sendBeacon` method, it will fall back to 'image' or 'xhr' depending on hit size.
1. One of the common and standard ways to send some of the data to Google (which is shown in Thinker's answer) is by adding the data as GET parameters to a tracking pixel. This would fall under the category which Google calls an 'image' transport.
2. Secondly, Google can use the 'beacon' transport method if the client's browser supports it. This is often my preferred method because it will attempt to send the information immediately. Or in Google's words:
> This is useful in cases where you wish to track an event just before a user navigates away from your site, without delaying the navigation.
3. The 'xhr' transport mechanism is the third way that Google Analytics can send data back home, and the particular transport mechanism that is used can depend on things such as the size of the hit. (I'm not sure what other factors go into GA deciding the optimal transport mechanism to use)
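For the 'image' transport, the payload simply rides in the query string of a 1x1 GIF request, as in the __utm.gif example quoted earlier. A sketch of assembling such a URL (Python used purely for illustration; the parameter names are copied from that example):

```python
from urllib.parse import urlencode

def tracking_pixel_url(base, params):
    """Build an 'image transport' hit: data rides in the query string of a 1x1 GIF."""
    return base + "?" + urlencode(params)

url = tracking_pixel_url(
    "http://www.google-analytics.com/__utm.gif",
    {"utmwv": "4.3", "utmhn": "www.example.com", "utmdt": "Page title"},
)
```

The browser then fetches `url` as an ordinary image, and the server logs the query-string payload.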
In case you are curious how to force GA into using a specific transport mechanism, here is a sample code snippet which forces this event hit to be sent as a 'beacon':
```
ga('send', 'event', 'click', 'download-me', {transport: 'beacon'});
```
Hope this helps.
---
Also, if you are curious about this topic because you'd like to capture and send this data to your own site too, I recommend creating a binding to Google Analytics' send, which allows you to grab the payload and AJAX it to your own server.
```
ga(function(tracker) {
// Grab a reference to the default sendHitTask function.
originalSendHitTask = tracker.get('sendHitTask');
// Modifies sendHitTask to send a copy of the request to a local server after
// sending the normal request to www.google-analytics.com/collect.
tracker.set('sendHitTask', function(model) {
var payload = model.get('hitPayload');
originalSendHitTask(model);
var xhr = new XMLHttpRequest();
xhr.open('POST', '/index.php?task=mycollect', true);
xhr.send(payload);
});
});
``` | How does google analytics collect its data? | [
"",
"javascript",
"google-analytics",
"tracking",
""
] |
Like most web developers these days, I'm thoroughly enjoying the benefits of solid MVC architecture for web apps and sites. When doing MVC with PHP, autoloading obviously comes in extremely handy.
I've become a fan of [`spl_autoload_register`](http://www.php.net/spl_autoload_register) over simply defining a single `__autoload()` function, as this is obviously more flexible if you are incorporating different base modules that each use their own autoloading. However, I've never felt great about the loading functions that I write. They involve a lot of string checking and directory scanning in order to look for possible classes to load.
For example, let's say I have an app that has a base path defined as `PATH_APP`, and a simple structure with directories named `models`, `views` and `controllers`. I often employ a naming structure whereby files are named `IndexView.php` and `IndexController.php` inside the appropriate directory, and models generally have no particular scheme by default. I might have a loader function for this structure like this that gets registered with `spl_autoload_register`:
```
public function MVCLoader($class)
{
if (file_exists(PATH_APP.'/models/'.$class.'.php')) {
require_once(PATH_APP.'/models/'.$class.'.php');
return true;
}
else if (strpos($class,'View') !== false) {
if (file_exists(PATH_APP.'/views/'.$class.'.php')) {
require_once(PATH_APP.'/views/'.$class.'.php');
return true;
}
}
else if (strpos($class,'Controller') !== false) {
if (file_exists(PATH_APP.'/controllers/'.$class.'.php')) {
require_once(PATH_APP.'/controllers/'.$class.'.php');
return true;
}
}
return false;
}
```
If it's not found after that, I might have another function to scan sub-directories in the models directory. However, all the if/else-ing, string checking and directory scanning seems inefficient to me, and I'd like to improve it.
I'm very curious what file naming and autoloading strategies other developers might employ. I'm looking specifically for good techniques to employ for efficient autoloading, and not alternatives to autoloading. | This is what I have been using in all of my projects (lifted straight from the source of the last one):
```
public static function loadClass($class)
{
$files = array(
$class . '.php',
str_replace('_', '/', $class) . '.php',
);
foreach (explode(PATH_SEPARATOR, ini_get('include_path')) as $base_path)
{
foreach ($files as $file)
{
$path = "$base_path/$file";
if (file_exists($path) && is_readable($path))
{
include_once $path;
return;
}
}
}
}
```
If I look for SomeClass\_SeperatedWith\_Underscores it will look for SomeClass\_SeperatedWith\_Underscores.php followed by SomeClass/SeperatedWith/Underscores.php rooted at each directory in the current include path.
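The search order described above can be sketched language-neutrally (Python used only to illustrate the path resolution; the include-path entries are hypothetical):

```python
import os

def candidate_paths(class_name, include_path):
    """Mirror the loader's search order: ClassName.php first, then the
    underscores-to-directories form, under each include-path entry in turn."""
    files = [class_name + ".php", class_name.replace("_", "/") + ".php"]
    return [os.path.join(base, f) for base in include_path for f in files]

paths = candidate_paths("SomeClass_SeperatedWith_Underscores", ["/lib", "/app"])
```

The first existing, readable candidate in that list is the one the loader includes.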
**EDIT:** I just wanted to put out there that I use this for efficiency in development, and not necessarily processing time. If you have PEAR on your path then with this you can just use the classes and don't have to include them when you need them.
I tend to keep my classes in a hierarchy of directories, with underscores breaking up namespaces... This code lets me keep the file structure nice and tidy if I want, or to inject a quick class file without nested directories if I want (for adding a single class or two to a library that it is dependent on, but not part of the project I am currently working on.) | I landed on this solution:
I created a single script that traverses my class library folder (which contains subfolders for separate modules / systems), and parses the file contents looking for class definitions. If it finds a class definition in a php file (pretty simple regex pattern), it creates a symlink:
```
class_name.php -> actual/source/file.php
```
This lets me use a single, simple autoload function that needs only the class name and the path to the main symlink folder, and doesn't have to do any path/string manipulation.
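The link-generating script described above might look roughly like this (a Python sketch; the class-definition regex is deliberately simplified and the directory layout is hypothetical):

```python
import os
import re
import tempfile

# crude match for "class Foo", optionally preceded by abstract/final
CLASS_RE = re.compile(r"^\s*(?:abstract\s+|final\s+)?class\s+(\w+)", re.M)

def build_symlinks(source_dir, link_dir):
    """Scan .php files for class definitions; symlink ClassName.php -> defining file."""
    os.makedirs(link_dir, exist_ok=True)
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            if not name.endswith(".php"):
                continue
            path = os.path.join(root, name)
            with open(path) as f:
                source = f.read()
            for class_name in CLASS_RE.findall(source):
                link = os.path.join(link_dir, class_name + ".php")
                if not os.path.lexists(link):
                    os.symlink(os.path.abspath(path), link)

# demo: one nested source file containing a class definition
base = tempfile.mkdtemp()
src = os.path.join(base, "src", "deeply", "nested")
os.makedirs(src)
with open(os.path.join(src, "feature.php"), "w") as f:
    f.write("<?php\nclass MyFeature {}\n")
build_symlinks(os.path.join(base, "src"), os.path.join(base, "links"))
created = os.path.islink(os.path.join(base, "links", "MyFeature.php"))
```

Re-running the script after rearranging the source tree regenerates the flat link folder the autoloader reads from.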
The best part is that I can rearrange my source code completely or add a new subsystem and just run the link generating script to have everything autoloaded. | Efficient PHP auto-loading and naming strategies | [
"",
"php",
"naming-conventions",
"spl-autoload-register",
""
] |
I have a basic class that derived subclasses inherit from, it carries the basic functions that should be the same across all derived classes:
```
class Basic {
public:
Run() {
int input = something->getsomething();
switch(input)
{
/* Basic functionality */
case 1:
doA();
break;
case 2:
doB();
break;
case 5:
Foo();
break;
}
}
};
```
Now, based on the derived class, I want to 'add' more case statements to the switch. What are my options here? I can declare virtual functions and only define them in the derived classes that are going to use them:
```
class Basic {
protected:
virtual void DoSomethingElse();
public:
Run() {
int input = something->getsomething();
switch(input)
{
/* Basic functionality */
...
case 6:
DoSomethingElse();
}
}
};
class Derived : public Basic {
protected:
void DoSomethingElse() { ... }
}
```
But this would mean when changing functions in any derived class, I would have to edit my base class to reflect those changes.
Is there a design pattern specifically for this kind of issue? I purchased a number of books on Design Patterns but I'm studying them on a "by-need" basis, so I have no idea if there is such a pattern that I am looking for. | You may find it useful to read about the [Chain of responsibility pattern](http://en.wikipedia.org/wiki/Chain-of-responsibility_pattern) and rethink your solution in that way.
Also, you can declare 'doRun' as a protected method and call it in the base class's default case:
```
default:
doRun(input);
```
And define doRun in derived classes.
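A minimal sketch of that shape (Python for brevity; in the C++ version doRun would be a protected virtual with an empty default body):

```python
class Basic:
    def run(self, command):
        if command == 1:
            return "A"
        if command == 2:
            return "B"
        # anything else falls through to the overridable hook
        return self.do_run(command)

    def do_run(self, command):
        return None  # base class handles nothing extra

class Derived(Basic):
    def do_run(self, command):
        if command == 6:
            return "something else"
        return None
```

The base class keeps the shared cases, and new cases live entirely in the derived class, so the base never needs editing.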
This is the so-called [Template Method pattern](http://en.wikipedia.org/wiki/Template_method_pattern). | I think the pattern you need is **Chain Of Responsibility** or maybe **Strategy** combined with a dynamic call table... | Design Pattern for optional functions? | [
"",
"c++",
"design-patterns",
"inheritance",
"derived-class",
""
] |
I'm new to C# and don't have any programming experience, but I've finished a C# basics course.
Now I would like to design a simple tree view by adding parent and child nodes.
I would like to add a second child to the second node; I'm quite stuck here and don't know what's next.
Any ideas?
Here is the code:
```
private void addParentNode_Click(object sender, EventArgs e)
{
string yourParentNode;
yourParentNode = textBox1.Text.Trim();
treeView2.Nodes.Add(yourParentNode);
}
private void addChildNode_Click(object sender, EventArgs e)
{
string yourChildNode;
yourChildNode = textBox1.Text.Trim();
treeView2.Nodes[0].Nodes.Add(yourChildNode);
}
```
Sorry I wasn't clear, I'm not sure if I really need this one here:
```
//treeView1.BeginUpdate();
//treeView1.Nodes.Clear();
```
What I'm trying to do is add parent nodes and child nodes. In my code, I can add several parent nodes, but when I add a child node, it is only added to the first parent node.
If I add a child node, I want to be able to add it to the second or third parent as well.
In my code I only use one treeview here, which is named treeview2.
Here is the [**screenshot**](http://www.mypicx.com/05192009/error/)
this is how my final code looks like:
Before I put in the else, I was getting an error if I didn't select anything. So I made it so that if nothing is selected, the child node is added to the "default node" (the parent1 node). It seems to work well. Thanks guys ;-)
```
//This is for adding a parent node
private void addParentNode_Click(object sender, EventArgs e)
{
treeView2.BeginUpdate();
string yourParentNode;
yourParentNode = textBox1.Text.Trim();
treeView2.Nodes.Add(yourParentNode);
treeView2.EndUpdate();
}
//This is for adding child node
private void addChildNode_Click(object sender, EventArgs e)
{
if (treeView2.SelectedNode != null)
{
string yourChildNode;
yourChildNode = textBox1.Text.Trim();
treeView2.SelectedNode.Nodes.Add(yourChildNode);
treeView2.ExpandAll();
}
//This is for adding the child node to the default node(parent 1 node)
else
{
string yourChildNode;
yourChildNode = textBox1.Text.Trim();
treeView2.Nodes[0].Nodes.Add(yourChildNode);
}
}
```
Additional question: are there any other ways the code could be better? Because here I declare the string "yourChildNode" twice, once in the if and once in the else; is there any simplification? | It's not that bad, but you forgot to call `treeView2.EndUpdate()` in your `addParentNode_Click()` method.
You can also call `treeView2.ExpandAll()` at the end of your `addChildNode_Click()` method to see your child node directly.
```
private void addParentNode_Click(object sender, EventArgs e) {
treeView2.BeginUpdate();
//treeView2.Nodes.Clear();
string yourParentNode;
yourParentNode = textBox1.Text.Trim();
treeView2.Nodes.Add(yourParentNode);
treeView2.EndUpdate();
}
private void addChildNode_Click(object sender, EventArgs e) {
if (treeView2.SelectedNode != null) {
string yourChildNode;
yourChildNode = textBox1.Text.Trim();
treeView2.SelectedNode.Nodes.Add(yourChildNode);
treeView2.ExpandAll();
}
}
```
I don't know if it was a mistake or not, but there were 2 TreeViews. I changed it to only 1 TreeView...
EDIT: Answer to the additional question:
You can declare the variable holding the child node name outside of the if clause:
```
private void addChildNode_Click(object sender, EventArgs e) {
var childNode = textBox1.Text.Trim();
if (!string.IsNullOrEmpty(childNode)) {
TreeNode parentNode = treeView2.SelectedNode ?? treeView2.Nodes[0];
if (parentNode != null) {
parentNode.Nodes.Add(childNode);
treeView2.ExpandAll();
}
}
}
```
Note: see <http://www.yoda.arachsys.com/csharp/csharp2/nullable.html> for info about the ?? operator. | May I add some KISS (Keep It Simple, Stupid) to Stormenet's example:
If you already have a treeView or just created an instance of it:
Let's populate it with some data. Ex. 1: one parent, two children:
```
treeView1.Nodes.Add("ParentKey","Parent Text");
treeView1.Nodes["ParentKey"].Nodes.Add("Child-1 Text");
treeView1.Nodes["ParentKey"].Nodes.Add("Child-2 Text");
```
Another example: two parents, the first with two children, the second with one child:
```
treeView1.Nodes.Add("ParentKey1","Parent-1 Text");
treeView1.Nodes.Add("ParentKey2","Parent-2 Text");
treeView1.Nodes["ParentKey1"].Nodes.Add("Child-1 Text");
treeView1.Nodes["ParentKey1"].Nodes.Add("Child-2 Text");
treeView1.Nodes["ParentKey2"].Nodes.Add("Child-3 Text");
```
Take it further: a sub-child of child 2:
```
treeView1.Nodes.Add("ParentKey1","Parent-1 Text");
treeView1.Nodes["ParentKey1"].Nodes.Add("Child-1 Text");
treeView1.Nodes["ParentKey1"].Nodes.Add("ChildKey2","Child-2 Text");
treeView1.Nodes["ParentKey1"].Nodes["ChildKey2"].Nodes.Add("Child-3 Text");
```
As you see, you can have as many children and parents as you want, and those can have sub-children of children and so on....
Hope it helps! | adding child nodes in treeview | [
"",
"c#",
"treeview",
""
] |
Is it possible to add messages to the built-in error console of Firefox from JavaScript code running in web pages?
I know there's Firebug, which provides a `console` object and its own error console, but I was looking for a quick fix earlier on and couldn't find anything.
I guess it might not be possible at all, so as to prevent malicious web pages from spamming the log? | You cannot write to the console directly from untrusted JavaScript (e.g. scripts coming from a page). However, even if installing Firebug does not appeal to you, I'd recommend checking out [Firebug Lite](http://getfirebug.com/lite.html), which requires no installation into the browser (nor, in fact, does it even require Firefox). It's a script which you can include into any web page (even dynamically), which will give you some basic Firebug functionality (such as `console.log()`). | If you define a global function that checks for the existence of window.console, you can use Firebug for tracing and still play nice with other browsers and/or if you turn Firebug's console tracing off:
```
debug = function (log_txt) {
if (typeof window.console != 'undefined') {
console.log(log_txt);
}
}
debug("foo!");
``` | Log to Firefox Error Console from JavaScript | [
"",
"javascript",
"firefox",
"debugging",
""
] |
Will Java code built and compiled against a 32-bit JDK into 32-bit byte code work in a 64-bit JVM? Or does a 64-bit JVM require 64-bit byte code?
To give a little more detail, I have code that was working in a Solaris environment running a 32-bit JVM, but now I'm getting issues after upgrading the JDK and Weblogic Server to 64-bit. | Yes, Java bytecode (and source code) is platform independent, assuming you use platform independent libraries. 32 vs. 64 bit shouldn't matter. | I accidentally ran our (largeish) application on a 64bit VM rather than a 32bit VM and didn't notice until some external libraries (called by JNI) started failing.
Data serialized on a 32bit platform was read in on the 64bit platform with no issues at all.
What sort of issues are you getting? Do some things work and not others? Have you tried attaching JConsole etc and have a peak around?
If you have a very big VM you may find that GC issues in 64 bit can affect you. | Java 32-bit vs 64-bit compatibility | [
"",
"java",
"jvm",
"64-bit",
"compatibility",
"32-bit",
""
] |
I have an application which needs to use a dll (also written by me) which has been independently verified by a government agency. We plan to modify the dll very rarely due to the re-verification which would be required. I want to prevent inadvertent modifications to this dll being picked up by my application. Is there a way to create a hash code for the dll and set up my application to only use this specific version?
This way if someone modified some of the code for the dll, when we build and run the application the app would fail to load the dll (because it had changed).
Any ideas/suggestions?
Cheers,
James | Using Strong Names does part of this and prevents anyone else tampering with your assembly, but doesn't stop you doing it by accident and then re-signing.
We use an independent process to start our main application. Before launching the main app, the startup app MD5s all the assemblies and compares them against a list of those it expects to see; if something has changed, the MD5 check fails and the main app is not loaded.
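That startup check can be sketched roughly as follows (Python here purely for illustration; the real launcher would be .NET, and the manifest of expected digests is hypothetical):

```python
import hashlib
import os
import tempfile

def md5_of(path):
    """Return the hex MD5 digest of a file's contents."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def integrity_ok(expected):
    """expected maps file path -> known-good digest; True only if every file matches."""
    return all(md5_of(path) == digest for path, digest in expected.items())

# tiny self-check with a throwaway file standing in for an assembly
fd, demo_path = tempfile.mkstemp()
os.write(fd, b"assembly bytes")
os.close(fd)
manifest = {demo_path: hashlib.md5(b"assembly bytes").hexdigest()}
launch_allowed = integrity_ok(manifest)
tampered_detected = not integrity_ok({demo_path: "0" * 32})
os.remove(demo_path)
```

Only if `integrity_ok` returns True does the launcher go on to start the main application.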
If you really wanted compile time checking, you could probably write a pre-build step that did the same MD5 comparison and failed the build if it had changed. | I know that if you click on a dll Reference in your project you can select 'Specific Version' in its properties and set it to True, will this not do what you are after?
Phill | Forcing my app to use a specific version of a dll | [
"",
"c#",
".net",
"dll",
""
] |
I was wondering, is there a way to create a timestamp in c# from a datetime?
I need a millisecond-precision value that also works in the Compact Framework (I mention this because DateTime.ToBinary() does not exist in CF).
My problem is that I want to store this value in a database-agnostic way, so I can sort by it later and find out which value is greater than another, etc. | I always use something like the following:
```
public static String GetTimestamp(this DateTime value)
{
return value.ToString("yyyyMMddHHmmssfff");
}
```
This will give you a string like 200905211035131468, as the string goes from highest order bits of the timestamp to lowest order simple string sorting in your SQL queries can be used to order by date if you're sticking values in a database | I believe you can create a unix style datestamp accurate to a second using the following
```
//Find unix timestamp (seconds since 01/01/1970)
long ticks = DateTime.UtcNow.Ticks - DateTime.Parse("01/01/1970 00:00:00").Ticks;
ticks /= 10000000; //Convert windows ticks to seconds
timestamp = ticks.ToString();
```
Adjusting the denominator allows you to choose your level of precision | Function that creates a timestamp in c# | [
"",
"c#",
"timestamp",
"compact-framework",
"database-agnostic",
"time-precision",
""
] |
In a Python system for which I develop, we usually have this module structure.
```
mymodule/
mymodule/mymodule/feature.py
mymodule/test/feature.py
```
This allows our little testing framework to easily import test/feature.py and run unit tests. However, we now have the need for some shell scripts (which are written in Python):
```
mymodule/
mymodule/scripts/yetanotherfeature.py
mymodule/test/yetanotherfeature.py
```
yetanotherfeature.py is installed by the module Debian package into /usr/bin. But we obviously don't want the .py extension there. So, in order for the test framework to still be able to import the module I have to do this symbolic link thingie:
```
mymodule/
mymodule/scripts/yetanotherfeature
mymodule/scripts/yetanotherfeature.py @ -> mymodule/scripts/yetanotherfeature
mymodule/test/yetanotherfeature.py
```
Is it possible to import a module by filename in Python, or can you think of a more elegant solution for this? | You could most likely use some trickery with import [hooks](http://www.python.org/dev/peps/pep-0302/), though I wouldn't recommend it. On the other hand, I would probably do it the other way around: have your .py scripts somewhere, and make '.py'-less symbolic links to the .py files. That way your library could be anywhere, you can run the tests from within by importing it normally (since it has the py extension), and then /usr/bin/yetanotherfeature points to it, so you can run it without the py.
Edit: Nevermind this (at least the hooks part), the import imp solution looks very good to me :) | The [imp module](http://docs.python.org/library/imp.html) is used for this:
```
daniel@purplehaze:/tmp/test$ cat mymodule
print "woho!"
daniel@purplehaze:/tmp/test$ cat test.py
import imp
imp.load_source("apanapansson", "mymodule")
daniel@purplehaze:/tmp/test$ python test.py
woho!
daniel@purplehaze:/tmp/test$
``` | Python imports: importing a module without .py extension? | [
"",
"python",
"debian",
""
] |
Good Day
All we are trying to do is, inside a trigger, make sure the user is not inserting two fees that have 'alone' in the name. Those fees need to be handled individually.
For some reason, it appears the top section of SQL quit working two weeks ago. To get around it I recoded it the second way and got the correct results. What I am confused about is why the first portion *seemed* to have worked for the last several years and now does not.
```
SELECT @AloneRecordCount = count(*)
FROM inserted i
INNER JOIN deleted d on i.id = d.id
WHERE i.StatusID = 32
AND d.StatusID <> 32
AND i.id IN
(SELECT settlementid FROM vwFundingDisbursement fd
WHERE fd.DisbTypeName LIKE '%Alone'
AND fd.PaymentMethodID = 0)
SELECT @AloneRecordCount = count(i.id)
FROM inserted i INNER JOIN
deleted d on i.id = d.id
JOIN vwFundingDisbursement fd on i.id = fd.settlementid
WHERE i.StatusID = 32
AND d.StatusID <> 32
AND fd.DisbTypeName like '%Alone'
AND fd.PaymentMethodID = 0
```
This is on SQL Server 2005.
There is no error; instead, the top statement will only return 1 or 0,
while the bottom statement will return the actual number found. | A schema (or at least how the view is created) would help, but here's a guess...
If you are looking for multiple rows in vwFundingDisbursement with the value "Alone" in the distribution type name, then a JOIN is going to return multiple rows as your source table (INSERTED) joins to multiple rows in the view. If you use IN though, SQL doesn't care if it returns multiple matches, it's only going to give you one row.
As an example:
```
CREATE TABLE dbo.Test_In_vs_Join1
(
my_id INT NOT NULL
)
CREATE TABLE dbo.Test_In_vs_Join2
(
my_id INT NOT NULL
)
INSERT INTO dbo.Test_In_vs_Join1 (my_id) VALUES (1)
INSERT INTO dbo.Test_In_vs_Join1 (my_id) VALUES (2)
INSERT INTO dbo.Test_In_vs_Join1 (my_id) VALUES (3)
INSERT INTO dbo.Test_In_vs_Join1 (my_id) VALUES (4)
INSERT INTO dbo.Test_In_vs_Join1 (my_id) VALUES (5)
INSERT INTO dbo.Test_In_vs_Join2 (my_id) VALUES (1)
INSERT INTO dbo.Test_In_vs_Join2 (my_id) VALUES (1)
INSERT INTO dbo.Test_In_vs_Join2 (my_id) VALUES (2)
INSERT INTO dbo.Test_In_vs_Join2 (my_id) VALUES (3)
INSERT INTO dbo.Test_In_vs_Join2 (my_id) VALUES (3)
SELECT
T1.my_id,
COUNT(*)
FROM
dbo.Test_In_vs_Join1 T1
INNER JOIN dbo.Test_In_vs_Join2 T2 ON
T2.my_id = T1.my_id
GROUP BY
T1.my_id
SELECT
T1.my_id,
COUNT(*)
FROM
dbo.Test_In_vs_Join1 T1
WHERE
T1.my_id IN (SELECT T2.my_id FROM dbo.Test_In_vs_Join2 T2)
GROUP BY
T1.my_id
```
On a side note, burying a column inside of another column like this is a violation of the normalized form and just asking for problems. Performing this kind of business logic in a trigger is also a dangerous path to go down as you're finding out. | Have you got a null vwFundingDisbursement.settlementid value? Has that appeared recently? | What is the difference between the two sql statements? | [
"",
"sql",
"sql-server-2005",
"triggers",
""
] |
Is there any way to get the effect of running `python -u` from within my code? Failing that, can my program check if it is running in `-u` mode and exit with an error message if not? This is on Linux (Ubuntu 8.10 Server). | The best I could come up with:
```
>>> import os
>>> import sys
>>> unbuffered = os.fdopen(sys.stdout.fileno(), 'w', 0)
>>> unbuffered.write('test')
test>>>
>>> sys.stdout = unbuffered
>>> print 'test'
test
```
Tested on GNU/Linux. It seems it should work on Windows too. If I knew how to reopen sys.stdout, it would be much easier:
```
sys.stdout = open('???', 'w', 0)
```
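In Python 3 that reopening is in fact possible; here's a sketch, noting that fully unbuffered streams must be opened in binary mode there, so a write-through text wrapper is layered on top:

```python
import io
import os
import sys

def reopen_stdout_unbuffered():
    """Return a text stream over a duplicate of stdout's file descriptor
    whose writes go straight through to the OS (Python 3 sketch)."""
    raw = os.fdopen(os.dup(sys.stdout.fileno()), "wb", 0)  # buffering=0 requires binary mode
    return io.TextIOWrapper(raw, encoding="utf-8", write_through=True)

sys.stdout = reopen_stdout_unbuffered()
print("test")  # pushed to the terminal immediately, as with python -u
```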
References:
<http://docs.python.org/library/stdtypes.html#file-objects>
<http://docs.python.org/library/functions.html#open>
<http://docs.python.org/library/os.html#file-object-creation>
[Edit]
Note that it would probably be better to close sys.stdout before overwriting it. | You could always pass the -u parameter in the shebang line:
```
#!/usr/bin/python -u
``` | Unbuffered stdout in python (as in python -u) from within the program | [
"",
"python",
""
] |
If I have a service definition/implementation like this:
```
using System;
using System.ServiceModel;
namespace aspace.service
{
[ServiceContract(Namespace = "http://aspace.service")]
public interface IUpdate
{
[OperationContract]
ConfirmationMessage UpdatePerson(string PersonIdentifier);
}
}
public class UpdateService : IUpdate
{
public ConfirmationMessage UpdatePerson(string PersonIdentifier)
{
// some implementation here
}
}
```
I can create a servicehost like this:
```
ServiceHost host = new ServiceHost(typeof(UpdateService), someEndpointAddress);
```
Then, after creating a binding and adding metadatabehavior, I can open the host. Which will, upon a request from a client, call UpdatePerson(aPersonIdentifier).
I would like to talk to a database from UpdatePerson. Answers to [a previous question of mine](https://stackoverflow.com/questions/878047/database-access-from-everywhere-in-application) suggest I should use dependency injection for this sort of thing.
The problem is that I never create an instance of the class UpdateService. So how can I inject a dependency? How would you solve this?
Thanks, regards, Miel. | Take a look at the [IInstanceProvider](http://msdn.microsoft.com/en-us/library/system.servicemodel.dispatcher.iinstanceprovider.aspx) interface. Basically you need to implement this interface and, in the method [GetInstance](http://msdn.microsoft.com/en-us/library/ms587089.aspx), instantiate the WCF class yourself, providing any dependencies. | Basically you need to implement an IInstanceProvider based on your IOC container and an IServiceBehavior that uses the instance provider you wrote. This will enable the IOC container to build up your object hierarchy for you.
There's an [example implementation here](http://orand.blogspot.com/2006/10/wcf-service-dependency-injection.html) | How to inject an object into a WCF service | [
"",
"c#",
"wcf",
"dependency-injection",
""
] |
I am designing a class where some methods won't cause any harm if they are exposed as public. But they can be private as well, since they will be used only from the same class in my project.
Making them public has the following advantages:
1. Unit Testable without the need of accessors.
2. Flexibility.
Making them private has the following advantages:
1. Public documentation simplification.
2. Some unknown bugs aren't exposed.
What are the general guidelines in this case? | Oh, please please read Ch. 06 of Code Complete 2 by Steve McConnell, if you have access to it. That will answer your question perfectly.
In general, if the method fits in the overall "persona" of the class, make it public. More technically, try not to break the abstraction. I would have given an example, but I do not know the context of your work, so examples might be irrelevant.
If you do not need it, there is no need to make a method public.
For testing, +1 to John Sanders.
But I really cannot explain it here the way Steve has explained it in CC2.
I hope it's OK to post book references on Stack Overflow? (Please comment.) | Always make everything as private as possible.
For unit testing, I sometimes make a member *internal*, then use the `InternalsVisibleTo` attribute in the AssemblyInfo.cs to permit the unit test assembly access to the internal members. | How to decide if a method will be private, protected, internal or public? | [
"",
"c#",
".net",
"oop",
""
] |
Suppose I have an ADO.NET DataTable that I was to 'persist' by saving it to a ***new*** table in a SQL Server database - is there a fast way of doing this?
I realise I could write code generating the DDL for the 'CREATE TABLE' statement by looping through the DataColumns collection and working out the right type mappings and so on ... but I'm wondering if there is an existing method to do this, or a framework someone has written?
(NB: I need to be able to handle arbitrary columns, nothing too fancy like blobs; just common column types like strings, numbers, guids and dates. The program won't know what the columns in the DataTable are until run-time so they can't be hard-coded.) | ADO.NET cannot create tables in SQL Server directly; however, SMO can do this with the .Create method of the Table class. Unfortunately, there is no built-in way to use a DataTable to define an SMO Table object.
Fortunately, Nick Tompson wrote just such a DataTable-to-SMO.Table routine back in 2006. It is posted as one of the replies to this MSDN forums topic <http://social.msdn.microsoft.com/forums/en-US/adodotnetdataproviders/thread/4929a0a8-0137-45f6-86e8-d11e220048c3/> (edit: I can make hyperlinks now).
Note also the reply post that shows how to add SqlBulkCopy to it.
"",
"c#",
"sql-server",
"ado.net",
""
] |
What is the best and cleanest way to close a console application on windows mobile?
The application by default is invisible and you cannot see it in the running programs, which is great for running a background process, but sometimes the user might need to close it. | I decided to read a boolean (keep alive) in the config file and have another application set it to false when I want to exit.
It's not that responsive, but at least I can exit cleanly. | Exit Main. Seriously. If you need someone to be able to exit it manually, there needs to be some mechanism like a shell icon and menu, or a program in the Programs folder, or something. How else would the user even know it's running? Any one of those visual cues would then set a named system event, and inside your console app you'd have something listening for the same event (likely a worker). When it gets set, you take the actions required to shut down. | Closing a Windows Mobile Console Application | [
"",
"c#",
".net",
"windows-mobile",
"console-application",
""
] |
This is a simplified example to illustrate the question:
```
class A {};
class B
{
B(A& a) : a(a) {}
A& a;
};
class C
{
C() : b(a) {}
A a;
B b;
};
```
So `B` is responsible for updating a part of `C`. I ran the code through lint and it whinged about the reference member: [lint#1725](http://gimpel-online.com/MsgRef.html#1725).
This talks about taking care over default copy and assignments which is fair enough, but default copy and assignment is also bad with pointers, so there's little advantage there.
I always try to use references where I can, since naked pointers introduce uncertainty about who is responsible for deleting that pointer. I prefer to embed objects by value, but if I need a pointer, I use `std::auto_ptr` as a data member of the class that owns the pointer, and pass the object around as a reference.
I would generally only use a pointer as a data member when the pointer could be null or could change. Are there any other reasons to prefer pointers over references for data members?
Is it true to say that an object containing a reference should not be assignable, since a reference should not be changed once initialized? | Avoid reference members, because they restrict what the implementation of a class can do (including, as you mention, preventing the implementation of an assignment operator) and provide no benefits to what the class can provide.
Example problems:
* you are forced to initialise the reference in each constructor's initialiser list: there's no way to factor out this initialisation into another function (~~until C++0x, anyway~~ **edit:** C++ now has [delegating constructors](https://thenewcpp.wordpress.com/2013/07/25/delegating-constructors))
* the reference cannot be rebound or be null. This can be an advantage, but if the code ever needs changing to allow rebinding or for the member to be null, all uses of the member need to change
* unlike pointer members, references can't easily be replaced by smart pointers or iterators as refactoring might require
* Whenever a reference is used it looks like a value type (`.` operator etc.), but behaves like a pointer (it can dangle) - so e.g. the [Google Style Guide](https://google.github.io/styleguide/cppguide.html#Reference_Arguments) discourages it | My own rule of thumb:
* **Use a reference member when you want the life of your object to depend on the life of other objects**: it's an explicit way to say that you don't allow the object to be alive without a valid instance of another class, because there is no assignment and the reference must be initialized via the constructor. **It's a good way to design your class without assuming anything about its instances being members of another class or not.** You only assume that their lives are directly linked to other instances. It allows you to change later how you use your class instance (with new, as a local instance, as a class member, generated by a memory pool in a manager, etc.)
* **Use a pointer in other cases**: when you want the member to be changed later, use a pointer, or a pointer to const if you want to be sure the pointed-to instance is only read. **If that type is supposed to be copyable, you cannot use references anyway.** Sometimes you also need to initialize the member after a special function call (init() for example), and then you simply have no choice but to use a pointer. **BUT: use asserts in all your member functions to quickly detect a wrong pointer state!**
* **In cases where you want the object lifetime to depend on an external object's lifetime, and you also need that type to be copyable, use pointer members but a reference argument in the constructor.** That way you are indicating on construction that the lifetime of this object depends on the argument's lifetime, BUT the implementation uses pointers so the type is still copyable. As long as these members are only changed by copy, and your type doesn't have a default constructor, the type should fulfill both goals.
"",
"c++",
"reference",
"class-members",
""
] |
Recently I've been working on some embedded devices, where we have some structs and unions that need to be initialized at compile time so that we can keep certain things in flash or ROM that don't need to be modified, and save a little flash or SRAM at a bit of a performance cost. Currently the code compiles as valid C99, but without this adjustment it used to compile as C++ code too, and it would be great to keep supporting that. One of the key things that prevents this is that we're using C99 designated initializers, which do not work within the C subset of C++. I'm not much of a C++ buff, so I'm wondering what simple ways there might be to make this happen in either C++-compatible C, or in C++ that still allows initialization at compile time, so that the structs and unions need not be initialized after program startup in SRAM.
One additional point of note: a key reason for the designated initializer usage is initializing a union via a member other than its first member. Also, sticking with standard C++ or ANSI C is a plus in order to maintain compatibility with other compilers (I know about the GNU extensions that provide something like designated initializers without C99). | I'm not sure you can do it in C++. For the stuff that you need to initialize using designated initializers, you can put those separately in a `.c` file compiled as C99, e.g.:
```
// In common header file
typedef union my_union
{
int i;
float f;
} my_union;
extern const my_union g_var;
// In file compiled as C99
const my_union g_var = { .f = 3.14159f };
// Now any file that #include's the header can access g_var, and it will be
// properly initialized at load time
``` | Building on Shing Yip's answer, and with the benefit of 3 years' time, C++11 can now guarantee compile-time initialization:
```
union Bar
{
constexpr Bar(int a) : a_(a) {}
constexpr Bar(float b) : b_(b) {}
int a_;
float b_;
};
extern constexpr Bar bar1(1);
extern constexpr Bar bar2(1.234f);
```
Assembly:
```
.globl _bar1 ## @bar1
.p2align 2
_bar1:
.long 1 ## 0x1
.globl _bar2 ## @bar2
.p2align 2
_bar2:
.long 1067316150 ## float 1.23399997
``` | C++ Equivalent to Designated Initializers? | [
"",
"c++",
"c",
"designated-initializer",
""
] |
My WCF server needs to go up and down on a regular basis; the client sometimes uses the server, but if it is down the client just ignores it.
So each time I need to use the server services I check the connection state and if it's not open I open it.
The problem is that if I attempt to open while the server is down there is a delay which hits performance.
My question is: is there a way to do some kind of `myClient.CanOpen()`, so I'd know whether there is any point in opening the connection to the server? | There is an implementation of WS-Discovery that would allow you to listen for up/down announcements for your service. This is also a very convenient form of service address resolution because it utilizes UDP multicast messages to find the service, rather than configuring one set address on the client.
[WS-Discovery for WCF](http://www.codeproject.com/KB/WCF/ws-discovery.aspx)
There's also an implementation done by a Microsoft employee:
[WS-Discovery Sample Implementation](http://blogs.msdn.com/vipulmodi/archive/2006/12/21/ws-discovery-sample-implementation.aspx)
.NET 4.0 will include this natively. You can read about .NET 4.0's implementation on Jesus Rodriguez's blog. It has a great chart that details the ad-hoc communication that goes on in WS-Disco [Using WS-Discovery in WCF 4.0](http://weblogs.asp.net/gsusx/archive/2009/02/13/using-ws-discovery-in-wcf-4-0.aspx)
Another thing you might consider, especially if your messages are largely one-way, is a protocol that works natively disconnected, like MSMQ. I don't know what your design for your application looks like, but MSMQ would allow a client to send a message regardless of the state of the service and the service will get it when it comes back up. This way your client doesn't have to block quite so much trying to get confirmation that a service is up before communicating... it'll just fire and forget.
Hope this helps. | If you are doing a synchronous call expecting a server timeout in an application with a user interface, you should be doing it in another thread. I doubt that the performance hit is due to exception overhead.
Is your performance penalty in CPU load, GUI availability, or wall-clock time?
You could investigate to see if you can create a custom binding on TCP, but with faster timeout.
I assume you know that "IsOneWay=true" is faster than request->response in your case because you wouldn't be expecting a response anyway, but then you are not getting confirmation or return values.
You could also implement a two-way communication that is not request->response. | How to check the availability of a net.tcp WCF service | [
"",
"c#",
"wcf",
"net.tcp",
""
] |
I have a regular textbox control. I need to highlight some words with a red color. Is it at all possible to do with JavaScript (jQuery or anything else)? | Most rich text JavaScript editors use an `iframe` with `designMode='on'` since that yields the best cross-browser results:
```
<iframe ID="rtbox"></iframe>
```
To make the `iframe` editable and insert rich text via Javascript you can use the following sample code:
```
var rtbox = document.getElementById('rtbox');
var doc = rtbox.contentDocument ? rtbox.contentDocument : rtbox.contentWindow.document;
doc.designMode = 'on';
doc.body.innerHTML = 'one <span style="color:red">red</span> word';
``` | No, you can't do that. Your only way would be to use a rich text editor component like FCKEditor or similar. | Is it possible to change the color of a word in a textbox? | [
"",
"javascript",
"text",
""
] |
I'm redesigning a class constructor in C++ and need it to catch an unspecified bool. I have used default values for all of the other parameters, but from my understanding bool can only be initialized to true or false. Since both of those cases have meaning in the class, how should I handle checking for a change from the default value? | The reality is that you can't do this. A bool has a value, either true or false, and if you haven't initialized it then it is randomly true or false, possibly different at each run of the program or allocation of that class.
If you need to have a type with more options, define an enum.
```
typedef enum MyBool {
TRUE,
FALSE,
FILENOTFOUND
} MyBool;
``` | *Tristate bool is the path to the dark side. Tristate bool leads to anger. Anger leads to hate. Hate leads to suffering.*
---
**Prefer not to use a tristate bool.**
Instead, use one additional boolean for whether the first boolean is "initialized" (or better, "known") or not.
```
class Prisoner : public Person
{
...
bool is_veredict_known;
bool is_guilty;
}
```
If the veredict is not known yet, you can't tell if the Prisoner is really guilty, but your code can differentiate between the different situations. Of course the Constitution assures that the default value of is\_guilty should be false, but, still... :)
By the way, the class invariant should include:
```
assert(is_veredict_known || !is_guilty);
``` | Default value for bool in C++ | [
"",
"c++",
"boolean",
""
] |
We are using jQuery in our project. We have numerous custom javascript files in our web-app that have UDFs utilizing the jQuery features. We need to reduce the size (as a part of performance improvement activities) and I am looking for a reliable 'minifier' for these files (it would be great if the same tool could minify the CSS files too)
We tried JSLint and JSMin - but JSLint does not complete and throws many exceptions as soon as it encounters jQuery code. | The [YUI Compressor](http://developer.yahoo.com/yui/compressor/) is a tool I use; it compresses both JS and CSS well, and it is written in Java (so you can work it into a build process via Ant).
Someone's even made an [online version](http://www.refresh-sf.com/yui/) of it. | There's also [a .NET port of YUI Compressor](http://yuicompressor.codeplex.com/) which allows you to:-
* integrate the minification/file combining into Visual Studio post-build events
* integrate into a TFS Build (including CI)
* if you wish, just use the DLLs in your own code (e.g. on-the-fly minification).
Because this is a port of the (original) Java version of YUI Compressor, which a few peeps mention above, it should give you the same results - but all in the .NET environment, no need for Java.
HTH. | Minify jQuery based js files | [
"",
"javascript",
"css",
"minify",
""
] |
Got a strange problem in PHP land. Here's a stripped down example:
```
$handle = fopen("file.txt", "r");
while (($line = fgets($handle)) !== FALSE) {
echo $line;
}
fclose($handle);
```
As an example, if I have a file that looks like this:
```
Lucien Frégis
```
Then the above code, run from the command line, outputs the same name, but instead of an e-acute I get:
```
Lucien FrÚgis
```
Looking at a hex dump of the file I see that the byte in question is E9, which is what I would expect for e-acute in PHP's default encoding (ISO-8859-1), confirmed by outputting the current value of default\_charset.
Any thoughts?
EDIT:
As suggested, I've checked the Windows codepage, and apparently it's 850, which is obsolete (but does explain why 0xE9 is being displayed the way it is...) | `0xE9` is the encoding for **é** in iso-8859-1. It's also the Unicode codepoint for the same character. If your console interprets output in a different encoding (such as cp-850), then the same byte will translate to a different codepoint, thus displaying a different character on screen. If you look at [the code page for cp-850](http://en.wikipedia.org/wiki/Code_page_850), you can see that the byte `0xE9` translates to **Ú** (Unicode codepoint `0xDA`). So basically your console interprets the bytes wrongly. I'm not sure how, but you should change the charset of your console to iso-8859-1.
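The mapping is easy to verify; a quick check (shown in Python, which ships both codecs) watches the same byte land on the two different glyphs:

```python
b = bytes([0xE9])
print(b.decode("iso-8859-1"))  # é - what the file encodes
print(b.decode("cp850"))       # Ú - what a cp850 console displays
```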
```
chcp 1252
```
This will change the codepage to one where the accented characters are as you expect.
See the following links for the difference between the 850 and 1252 codepages:
<http://en.wikipedia.org/wiki/Code_page_850>
<http://en.wikipedia.org/wiki/Windows-1252> | Problem reading accented characters in PHP | [
"",
"php",
"encoding",
""
] |
What I'm supposed to do is use a trigger to update the quantity sold in the inventory table when an order is placed in the concessions\_sold table (an insert is used). I admit that I suck at PLSQL, so I'm not sure if I'm going about this the right way. The error I'm getting is:
```
SQL> insert into concessions_sold
2 values(33, 104, '26-Apr-09', 50);
insert into concessions_sold
*
ERROR at line 1:
ORA-01779: cannot modify a column which maps to a non key-preserved table
```
My code:
```
create or replace trigger LabEx5_1 after insert on Concessions_sold
for each row
begin
if inserting then
update
(
select i.quantity
from inventory i, concessions_sold cs, concession c
where i.inventory_id = c.inventory_id and c.concession_id = cs.concession_id
)
set quantity = :new.quantity;
end if;
end LabEx5_1;
/
``` | First of all if you use a "For Each Row" trigger then you MUST NOT operate on the whole table, just one row, so
```
select i.quantity
from inventory i, concessions_sold cs, concession c
where i.inventory_id = c.inventory_id and c.concession_id = cs.concession_id
```
should be changed to
```
select i.quantity
from inventory i, concession c
where i.inventory_id = c.inventory_id and c.concession_id = :new.concession_id
```
Second, the update should be something like:
```
update inventory
set quantity = :new.quantity
where inventory_id = (
select inventory_id from concession c where concession_id = :new.concession_id
) ;
```
So the trigger should look like this:
```
create or replace
trigger LabEx5_1 after insert on Concessions_sold
for each row
begin
if inserting then
update inventory
set quantity = :new.quantity
where inventory_id = (
select inventory_id from concession c
where concession_id = :new.concession_id
) ;
end if;
end LabEx5_1;
``` | You're trying to update a join view in your trigger, which has several constraints on when this can be done; see the [Oracle documentation](http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/views001.htm#insertedID5) for more details.
This UPDATE should do what you're trying to achieve:
```
UPDATE inventory i
SET i.quantity = :new.quantity
WHERE i.inventory_id =
(SELECT c.inventory_id
FROM concessions c
WHERE c.concession_id = :new.concession_id)
``` | PLSQL - Triggers: Cannot modify a column which maps to a non key-preserved table | [
"",
"sql",
"plsql",
"triggers",
""
] |
I am dealing with a race condition, I believe, in my Java GUI.
I have some methods that create an "anonymous method" inside an anonymous class like this:
```
synchronized void foo()
{
    someMethod(new TimerTask()
    {
        public synchronized void run()
        {
            //stuff
        }
    });
}
```
QUESTION: is that run method synchronized on the TimerTask object or the class that foo is in?
QUESTION2: if I got rid of the "synchronized" in the run() declaration, and instead have a synchronized(this) {} block inside the run() body, would "this" refer to the TimerTask object or to an instance of the class that contains foo()?
Please help me out here.
Thanks,
jbu | The `run` method is synchronized on the `TimerTask` itself. Synchronized [instance methods are *always* synchronized on `this`](http://java.sun.com/docs/books/jls/third_edition/html/classes.html#8.4.3.6) object. (Class methods are synchronized on the `Class` object.)
If you want to synchronize on the object of which `foo` is a member, you need to [qualify the `this` keyword.](http://java.sun.com/docs/books/jls/third_edition/html/expressions.html#15.8.4) Suppose `foo()` is a member of the `Bar` class, inside the `run()` method of `TimerTask`, you can use
```
public void run() {
synchronized(Bar.this) {
...
}
}
``` | I'm pretty sure of these answers, but I can't dig up a good source atm.
The first question:
synchronized will lock on the TimerTask.
Second question:
this refers to the TimerTask; if you wanted to lock on the containing object you'd use MyContainingObject.this | java anonymous classes and synchronization and "this" | [
"",
"java",
"synchronization",
"class",
"this",
"anonymous",
""
] |
I just came across a weird error:
```
private bool GetBoolValue()
{
//Do some logic and return true or false
}
```
Then, in another method, something like this:
```
int? x = GetBoolValue() ? 10 : null;
```
Simple: if the method returns true, assign 10 to the nullable `int` x. Otherwise, assign null to the **nullable** int. However, the compiler complains:
> Error 1 Type of conditional expression cannot be determined because there is no implicit conversion between `int` and `<null>`.
Am I going nuts? | The compiler first tries to evaluate the right-hand expression:
```
GetBoolValue() ? 10 : null
```
The `10` is an `int` literal (not `int?`) and `null` is, well, `null`. There's no implicit conversion between those two, hence the error message.
If you change the right-hand expression to one of the following then it compiles because there is an implicit conversion between `int?` and `null` (#1) and between `int` and `int?` (#2, #3).
```
GetBoolValue() ? (int?)10 : null // #1
GetBoolValue() ? 10 : (int?)null // #2
GetBoolValue() ? 10 : default(int?) // #3
``` | Try this:
```
int? x = GetBoolValue() ? 10 : (int?)null;
```
Basically what is happening is that the conditional operator is unable to determine the "return type" of the expression. Since the compiler implicitly decides that `10` is an `int`, it then decides that the return type of this expression shall be an `int` as well. Since an `int` cannot be `null` (the third operand of the conditional operator), it complains.
By casting the `null` to a `Nullable<int>` we are telling the compiler explicitly that the return type of this expression shall be a `Nullable<int>`. You could just as easily have cast the `10` to `int?` instead and had the same effect.
"",
"c#",
".net",
"nullable",
"conditional-operator",
""
] |
I currently have a need for a custom `ListViewItem` class - let's call it `MyListViewItem`. It needs to have some additional data associated with each item, and perform some operations when the Checked property is changed. I've tried several things, but currently the relevant code looks like this:
```
class MyListViewItem : ListViewItem {
new public bool Checked {
get {
return base.Checked;
}
set {
base.Checked = value;
// do some other things here based on value
}
}
public MyListViewItem(Object otherData) {
// ...
}
}
```
The problem I'm having is that when I click on the item's checkbox in the ListView, my setter is never called. Does anyone know what I am doing wrong? I'm aware that I could use the ItemChecked event of the parent ListView, but that seems like a much less clean solution. (Also I'm not actually passing an Object to the constructor, but that part isn't important here). | It's not working because the "new" keyword doesn't override; it just "hides".
This means that if you call Checked on an instance referenced through the type definition of MyListViewItem, you *will* run your code. However, the ListView references this object via the type definition of ListViewItem and therefore will not call your "new" member.
"new" is *not* override. The better solution is probably to handle the code in a custom list view. It isn't really that ugly. | [`new`](http://msdn.microsoft.com/en-us/library/aa645625(VS.71).aspx) does not `override` the `base` member. It declares a new method with the same name. In VB.NET it's called `Shadows`.
Indeed, `new` doesn't do anything except turn off a compiler warning. The member you do not declare as `override` (and you can only do this if the `base` member is `virtual` or `override`) will be completely unrelated to the inheritance tree of the `base` member. | C# inheritance and overriding base properties | [
"",
"c#",
"inheritance",
"properties",
""
] |
I see some frameworks like Django using unicode all over the place so it seems like it might be a good idea.
On the other hand, it seems like a big pain to have all these extra 'u's floating around everywhere.
What will be a problem if I don't do this?
Are there any issues that will come up if I do do this?
I'm using Pylons right now as my framework. | In Python 3, all strings are Unicode. So, you can prepare for this by using `u''` strings everywhere you need to, and then when you eventually upgrade to Python 3 using the `2to3` tool all the `u`s will disappear. And you'll be in a better position because you will have already tested your code with Unicode strings.
See [Text Vs. Data Instead Of Unicode Vs. 8-bit](http://docs.python.org/3.0/whatsnew/3.0.html#text-vs-data-instead-of-unicode-vs-8-bit) for more information. | You can avoid the `u''` in python 2.6 by doing:
```
from __future__ import unicode_literals
```
That will make `'string literals'` unicode objects, just as in Python 3. | Should my python web app use unicode for all strings? | [
"",
"python",
"django",
"web-applications",
"unicode",
"pylons",
""
] |
I have a little experience developing small command-line applications with Python. I want to move on to developing GUIs with Python. From the available GUI toolkits for Python, the ones I feel the most inclined to are *wxPython* and *Tkinter*; but I don't want to code all of the GUI by myself all of the time.
Are there any good GUI IDEs for any of these toolkits? It doesn't need to be free or open source. | I will talk only about wxPython because it's the only toolkit I have experience with. Tkinter is nice for writing small programs (it doesn't really require a GUI designer then), but it is not really appropriate for large application development.
* [wxFormBuilder](http://wxformbuilder.org) is really good: it generates `.XRC` files you load in your program, and it can also generate `.py` files which you subclass when you use them.
* [DialogBlocks](http://www.dialogblocks.com) and [wxDesigner](https://wiki.python.org/moin/WxDesigner) are two **commercial** tools which can generate Python code directly. I didn't test these much because of their price.
* [wxGlade](http://wxglade.sourceforge.net) is (I think) not yet mature enough for large programs, but it's worth a try.
After trying all these, I realized they all had *flaws* and that nothing is better than just writing the GUI in an editor. The problem is the steeper learning curve, but then you will be much faster, and your code will be much more flexible, than when using a GUI designer.
Have a look at this [list of major applications](http://wiki.wxpython.org/wxPythonPit%20Apps) written with wxPython. You will probably see that none of these use a GUI Designer, there must be a reason for this.
You then understand **gs** is right when saying that either you switch to PyQt or you write your application by hand. I had a look at Qt Designer in the past and thought this was what I needed. Unfortunately PyQt has some license restrictions. | This may not answer your question directly, but I chose [PyQt](http://www.riverbankcomputing.co.uk/software/pyqt/intro) because there were no good UI designers for wxPython.
Apparently you either write your GUIs by hand or switch to PyQt.
Because Nokia and Riverbankcomputing couldn't agree on a LGPL solution, Nokia decided to build its own bindings: [PySide](https://wiki.qt.io/Category:LanguageBindings::PySide). | Nice IDE with GUI designer for wxPython or Tkinter | [
"",
"python",
"user-interface",
"ide",
"wxpython",
"tkinter",
""
] |
In the course of programming we encounter large JavaScript files which are open source and written in an object-oriented manner (like jQuery).
If we need to modify these files, we have to have a basic knowledge of the members and the flow. If we have multiple files, the task is much more difficult.
Where do I start to get the flow of this? | First of all, I think you have to understand how JavaScript object orientation works: JavaScript OO is [Prototype-based](http://en.wikipedia.org/wiki/Prototype-based_programming), in which *classes* are not present, and behavior reuse is implemented by prototyping.
I've seen that this can be hard to grasp at first for programmers who have been working with conventional class-based object-oriented languages (like C++, C#, Java, etc.).
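As a rough sketch of what prototype-based reuse looks like (illustrative names):

```javascript
// Behavior lives on prototype objects, not on classes.
function Animal(name) {
  this.name = name;
}
Animal.prototype.speak = function () {
  return this.name + ' makes a sound';
};

function Dog(name) {
  Animal.call(this, name);                          // reuse the Animal constructor
}
Dog.prototype = Object.create(Animal.prototype);    // inherit through the prototype chain
Dog.prototype.constructor = Dog;
Dog.prototype.speak = function () {
  return this.name + ' barks';                      // shadow the inherited method
};

var rex = new Dog('Rex');
// rex.speak() returns 'Rex barks'; rex instanceof Animal is true
```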
Recommended articles:
* [Introduction to Object-Oriented JavaScript](https://developer.mozilla.org/en/Introduction_to_Object-Oriented_JavaScript)
* [JavaScript: The World's Most Misunderstood Programming Language](http://javascript.crockford.com/javascript.html)
* [Classical Inheritance in JavaScript](http://javascript.crockford.com/inheritance.html)
* [Private Members in JavaScript](http://www.crockford.com/javascript/private.html)
* [Class-Based vs. Prototype-Based Languages](https://developer.mozilla.org/En/Core_JavaScript_1.5_Guide:Class-Based_vs._Prototype-Based_Languages) | There are two things I would do:
1. Read. If there's documentation files, read those. If there's comments, read those. If neither of those help you, then go to the source and read that.
2. When you talk about open source Javascript, I assume you mean this JS is collected into some kind of project; all client-side JS is open source :P. In that case, the authors may be willing to tell you about their code. Locate their email on the project page, and ask them to give you a high-level overview of the code so you can start reading it and understanding it yourself. They probably won't be willing to hold your hand through the entire thing, but having that as a starting point would probably help. | Object Oriented Javascript | [
"",
"javascript",
""
] |
So I have this nice spiffy MVC-architected application in Java Swing, and now I want to add a progress bar, and I'm confused about Good Design Methods to incorporate a JProgressBar into my view. Should I:
* add a DefaultBoundedRangeModel to my controller's state, and export it?
```
class Model {
final private DefaultBoundedRangeModel progress
= new DefaultBoundedRangeModel();
    public DefaultBoundedRangeModel getProgressModel() { return progress; }
public void setProgressCount(int i) { progress.setValue(i); }
}
class Controller {
Model model;
int progressCount;
void doSomething()
{
model.setProgressCount(++progressCount);
}
}
class View {
void setup(Model m)
{
JProgressBar progressBar = /* get or create progress bar */ ;
progressBar.setModel(m.getProgressModel());
}
}
/* dilemma: Model allows progress to be exported so technically
all of the progress state could be set by someone else; should it be put
into a read-only wrapper? */
```
* use JGoodies Binding to try to connect the JProgressBar's visual state to my model's state?
```
class Model {
private int progress;
    public int getProgressCount() { return progress; }
public void setProgressCount(int i) { progress = i; }
}
class View {
void setup(Model m)
{
        JProgressBar progressBar = /* get or create progress bar */ ;
CallSomeMagicMethodToConnect(m, "progressCount", progressBar, "value");
// is there something that works like the above?
// how do I get it to automatically update???
}
}
```
* or something else???
**edit:** more specifically: could someone point me to a Good Example of realistic source for an application in Java that has a status bar that includes a progress bar, and has a decent MVC implementation of it? | No (to 1) and NOOOO (to 2). At least in my opinion.
No (to 1): First, DefaultBoundedRangeModel is a javax.swing class. In my opinion, these classes have no place in models. For example, think about the model living on the server, being accessed via RMI - All of the sudden putting a javax.swing class there seems "not right".
However, the real problem is that you're giving a part of your model (the bounded model) to someone else, with no control over events fired or queries made.
No (to 2): Ugh. Binding is fun but (at least in my opinion) should be used to synchronize between UI model and UI components, not between data model and UI model. Again, think what would happen if your data model lived on a remote server, accessed by RMI.
So what? Well, this is only a suggestion, but I'd add an event listener interface and add the standard event listener subscription methods (addListener(...), removeListener(...)). I'd call these listeners from within my model when I have updates going on. Of course, I'd make sure to document the calling thread (or say it cannot be determined) in order for the client (the UI in this case) to be able to synchronize correctly (invokeLater and friends). Since the listener service will be exposed by the controller, this will allow the model to live anywhere (even allowing for listeners to be remotely invoked or pooled). Also, this would decouple the model from the UI, making it possible to build more models containing it (translators / decorators / dependent models).
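A minimal sketch of that suggestion (hypothetical names; the Swing view would forward notifications to the JProgressBar via invokeLater):

```java
import java.util.ArrayList;
import java.util.List;

interface ProgressListener {
    void progressChanged(int value); // document which thread calls this
}

class TaskModel {
    private final List<ProgressListener> listeners = new ArrayList<>();
    private int progress;

    public void addListener(ProgressListener l) { listeners.add(l); }
    public void removeListener(ProgressListener l) { listeners.remove(l); }

    public void setProgress(int value) {
        progress = value;
        for (ProgressListener l : listeners) {
            l.progressChanged(value); // no javax.swing types anywhere in the model
        }
    }

    public int getProgress() { return progress; }
}
```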
Hope this helps. | I would say, something else.
The problem I have had with MVC, is to define the level of abstraction of the model.
* Model could be some sort of objects for the UI components
* Model could also be some other sort of objects for the program it self.
and
* Model could be as high as business models.
In this case I would have separated model/component pairs for the progress bar and handle them in a separate controller class.
This **[article](http://java.sun.com/products/jfc/tsc/articles/architecture/)** describes swing architecture and might clarify the way it uses models inside. | progress bars + MVC in Java =? | [
"",
"java",
"model-view-controller",
"swing",
"progress-bar",
""
] |
I have an odd issue with some JavaScript code (again, I hate debugging JS code). I am working on a regular table - which I fill up from a JSON call - and have added in support for some paging (sort of 2x paging, I guess you could call it), sorting and some selecting of rows. Everything is working nicely - BUT when a row is DESELECTED (and deselected only) my add\_navigate event gets fired twice, which results in some unneeded reloading of data - and an even more unneeded loading indicator.
First here's my JS code:
```
var customerType;
var selYear;
var selMonth;
var sdir;
var sort;
var page;
var noteId;
var hasDoneCall;
var customerId;
var customerIdChanged = false;
function initValues() {
customerType = "Publisher";
selYear = new Date().getFullYear();
selMonth = new Date().getMonth()+1;
sdir = false;
sort = "CustomerName";
page = 1;
noteId = false;
customerId = 0;
hasDoneCall = location.href.indexOf('#') > 0;
}
function flash(elm, color, duration) {
var current = elm.css('backgroundColor');
elm.animate({ backgroundColor: 'rgb(' + color + ')' }, duration / 2).animate({ backgroundColor: current }, duration / 2);
}
function createNotes(elm) {
var btn = jQuery(elm);
btn.attr('disabled', 'disabled');
bulkCreditOption('true', '', function(changeSet) {
var i = 0;
while (i < changeSet.length) {
var selector = "input[type=checkbox][value=" + changeSet[i] + "].check:checked";
var row = jQuery(selector).parent().parent();
var cell = row.find("td:nth-child(2)");
cell.html("<a href=\"javascript:showNotes('" + changeSet[i] + "')\">" + cell.html() + "</a>");
flash(row, '60, 130, 200', 500);
i++;
}
btn.removeAttr('disabled');
});
}
function deleteNotes(elm) {
var btn = jQuery(elm);
btn.attr('disabled', 'disabled');
bulkCreditOption('', 'true', function(changeSet) {
var i = 0;
while (i < changeSet.length) {
var selector = "input[type=checkbox][value=" + changeSet[i] + "].check:checked";
var row = jQuery(selector).parent().parent();
var cell = row.find("td:nth-child(2)");
cell.html(cell.text());
flash(row, '60, 130, 200', 500);
i++;
}
btn.removeAttr('disabled');
});
}
function bulkCreditOption(createNotes, deleteNotes, callback) {
var path = "/BulkCredit";
var data = "";
var checked = jQuery("input[type=checkbox].check:checked");
checked.each(function(chk) {
data += "&ids=" + urlencode(jQuery(this).val());
});
jQuery.ajax({
type: 'POST',
url: path,
dataType: 'json',
data: "createNotes=" + urlencode(createNotes) + data + "&deleteNotes=" + urlencode(deleteNotes),
success: function(msg) {
callback(msg);
}
});
}
initValues();
Sys.Application.add_init(function() {
Sys.Application.add_navigate(function(sender, e) {
var reinstate = e.get_state();
if (typeof (reinstate) != 'undefined' && typeof (reinstate.customerType) != 'undefined') {
customerType = reinstate.customerType;
selYear = reinstate.selYear;
selMonth = reinstate.selMonth;
sdir = reinstate.sdir;
sort = reinstate.sort;
page = reinstate.page;
noteId = reinstate.noteId;
customerId = reinstate.customerId;
} else {
initValues();
}
if (!customerIdChanged) {
jQuery("#customerTypeChanger").val(customerType);
jQuery("#customerFilter").val(customerId);
jQuery("#monthPicker").empty();
makeMonthPicker();
if (noteId != false && noteId != 'false') {
doShowNotes();
} else {
jQuery("#notesContent").hide();
jQuery("#tableContent").show();
doAjaxCall();
}
} else {
//logic to fetch customer specific stuff here, TODO
customerIdChanged = false;
}
});
Sys.Application.set_enableHistory(true);
jQuery(document).ready(function() {
origColor = jQuery("#dataTable > thead > tr > th").css('backgroundColor');
makeMonthPicker();
jQuery("#customerTypeChanger").val(customerType);
jQuery("#customerTypeChanger").change(function() {
customerType = jQuery(this).val();
iqSetHistory();
});
jQuery("#customerFilter").change(function() {
customerId = jQuery(this).val();
var tableBody = jQuery("#dataTable > tbody");
tableBody.find("tr").removeClass("selected");
tableBody.find("tr[rel=" + customerId + "]").addClass("selected");
customerIdChanged = true;
iqSetHistory();
});
jQuery(".checkAll").click(function() {
var elm = jQuery(this);
if (elm.is(':checked')) {
jQuery(".check").attr('checked', 'checked');
} else {
jQuery(".check").removeAttr('checked');
}
});
if (!hasDoneCall) {
if (noteId == false) {
doAjaxCall();
} else {
doShowNotes();
}
}
});
});
function makeMonthPicker() {
var selDate = new Date();
selDate.setFullYear(selYear);
selDate.setMonth(selMonth-1);
jQuery("#monthPicker").monthPicker(function(year, month) {
selYear = year;
selMonth = month;
iqSetHistory();
}, selDate);
}
var origColor;
var notesPath = "/ShowNotes";
function fadeOut(elm) {
elm.animate({ backgroundColor: 'rgb(180, 180, 180)' }, 250);
}
function fadeIn(elm) {
elm.animate({ backgroundColor: origColor }, 250);
}
function iqSetHistory() {
var state = { 'customerType': customerType, 'selYear': selYear, 'selMonth': selMonth, 'sdir': sdir, 'sort': sort, 'page': page, 'noteId': noteId, 'customerId':customerId };
Sys.Application.addHistoryPoint(state);
}
var ajaxPath = "/GetCreditListMonth";
function doAjaxCall() {
fadeOut(jQuery("#dataTable > thead > tr > th"));
jQuery.ajax({
type: "POST",
url: ajaxPath,
dataType: "json",
data: "month=" + selMonth + "&year=" + selYear + "&custType=" + customerType + "&sort=" + sort + "&sdir=" + sdir + "&page=" + page + "&asCsv=false",
success: function(msg) {
var table = jQuery("#dataTable");
var tableBody = table.find("tbody");
tableBody.empty();
var i = 0;
while (i < msg.Rows.length) {
var data = msg.Rows[i];
var row = jQuery("<tr rel=\"" + data.CustomerId + "\"></tr>");
if (data.CustomerId == customerId) {
row.addClass("selected");
}
if (i % 2 == 1) {
row.addClass("alternatetablerow");
}
var custName = data.CustomerName;
if (data.PaymentCreated) {
custName = "<a href=\"javascript:showNotes('" + getCreditId(data.CustomerId) + "')\">" + custName + "</a>";
}
row.append("<td><input type=\"checkbox\" class=\"check\" name=\"ids\" value=\"" + getCreditId(data.CustomerId) + "\" /></td>");
row.append("<td>" + custName + "</td>");
row.append("<td>" + data.AmountExcludingTaxes + "</td>");
row.append("<td>" + data.BonusAmount + "</td>");
row.append("<td>" + data.Amount + "</td>");
row.appendTo(tableBody);
i++;
}
tableBody.find("input, a").click(function(event){ //Stop clicks from falling through to the table row event
event.stopPropagation();
return true;
});
tableBody.find("tr").click(function(event){
var row = jQuery(this);
if (row.hasClass("selected")) { //Deselect
jQuery("#customerFilter").val(0);
} else {
jQuery("#customerFilter").val(jQuery(this).attr('rel'));
}
jQuery("#customerFilter").triggerHandler("change");
});
createPager(msg.Pages, jQuery("#pager"));
jQuery(".checkAll").triggerHandler('click');
fadeIn(table.find('thead > tr > th'));
}
});
}
function downloadListAsCsv() {
window.location.href = ajaxPath + "?month=" + selMonth + "&year=" + selYear + "&custType=" + customerType + "&sort=" + sort + "&sdir=" + sdir + "&page=0&asCsv=true";
}
function doShowNotes(){
jQuery.ajax({
type: "GET",
url: notesPath + "/" + noteId,
success: function(msg) {
jQuery("#tableContent").hide();
jQuery("#notesContent").html(msg).show();
}
});
}
function showNotes(id) {
noteId = id;
iqSetHistory();
}
function showTable() {
noteId = false;
iqSetHistory();
}
function getCreditId(custId) {
return selYear + "-" + selMonth + "-" + custId;
}
function sortDataTable(col) {
if (col == sort) {
sdir = !sdir;
} else {
sdir = false;
}
page = 1;
sort = col;
iqSetHistory();
}
function createPager(totalPages, elm) {
elm.empty();
if (totalPages > 1)
{
var builder = "";
var numDirections = 2;
if (page > 1)
{
if (page - numDirections - 1 > 0)
{
builder += CreatePageLinkStatic(1, "«");
builder += " ";
}
builder += CreatePageLinkStatic(page - 1, "<");
builder += " ";
}
var n = page - numDirections;
while (n < page)
{
if (n > 0)
{
builder += CreatePageLinkStatic(n, n);
builder += " ";
}
n++;
}
builder += page;
builder += " ";
n = page + 1;
while (n <= page + numDirections && n <= totalPages)
{
builder += CreatePageLinkStatic(n, n);
builder +=" ";
n++;
}
if (page < totalPages)
{
builder += CreatePageLinkStatic(page + 1, ">");
builder += " ";
if (page + numDirections < totalPages)
{
builder += CreatePageLinkStatic(totalPages, "»");
}
}
builder;
elm.append(builder);
}
}
function CreatePageLinkStatic(page, str){
return "<a href=\"javascript:pageDataTable(" + page + ")\">" + str + "</a>";
}
function pageDataTable(newPage){
page = newPage;
iqSetHistory();
}
```
And the markup:
```
<div id="tableContent">
<select id="customerTypeChanger">
<option selected="selected" value="Publisher">Publisher</option>
<option value="Advertiser">Advertiser</option>
</select>
<select id="customerFilter"><option value="0">Choose Customer</option><option value="1">Customer 1</option><option value="1">Customer 2</option>...</select>
<div id="monthPicker"></div>
<div><a href="javascript:downloadListAsCsv()">DownloadAsCSV</a></div>
<table id="dataTable" class="grid">
<thead>
<tr>
<th style="text-align: left"><input type="checkbox" name="toggleCheckBox" class="checkAll" value="dummy" /></th>
<th><a href="javascript:sortDataTable('CustomerName')">Customer name</a></th>
<th><a href="javascript:sortDataTable('AmountExcludingTaxes')">Amount</th>
<th><a href="javascript:sortDataTable('BonusAmount')">Bonus amount</a></th>
<th><a href="javascript:sortDataTable('Amount')">Amount including VAT</a></th>
</tr>
</thead>
<tbody></tbody>
</table>
<div class="pagination" id="pager"></div>
<div>With the selected rows</div>
<input id="createNotes" type="button" value="Create notes" onclick="javascript:createNotes(this)" /> <input id="deleteNotes" value="Delete notes" type="submit" onclick="javascript:deleteNotes(this)" />
</div>
<div id="notesContent"></div>
```
If needed as well, here's the code I did for the monthpicker (it is a very basic datepicker thing that just lets you flick back and forth between months and gives an output like
**< April 2009** *May 2009* **June 2009 >**
(where bold is clickable links taking you to see just that month period, and italic is the already selected one, obviously actual html markup differs)
It utilizes the datepicker from jQuery UI to get the localized names of the months
```
(function($) {
var selDate;
$.fn.monthPicker = function(callback, selectedDate) {
selDate = selectedDate;
var elm = this;
this.html("<span class=\"prevMonthButton\"><a href=\"\"><</a></span><span class=\"prevMonth\"><a href=\"\"></a></span><span class=\"curMonth\"></span><span class=\"nextMonth\"><a href=\"\"></a></span><span class=\"nextMonthButton\"><a href=\"\">></a></span>");
populateDates(this);
var prevMonthFunc = function() {
var month = selDate.getMonth() - 1;
if (month < 0) {
month = 11;
selDate.setFullYear(selDate.getFullYear() - 1);
}
selDate.setMonth(month);
populateDates(elm);
callback(selDate.getFullYear(), selDate.getMonth() + 1);
return false;
}
var nextMonthFunc = function() {
var month = selDate.getMonth() + 1;
if (month > 11) {
month = 0;
selDate.setFullYear(selDate.getFullYear() + 1);
}
selDate.setMonth(month);
populateDates(elm);
callback(selDate.getFullYear(), selDate.getMonth() + 1);
return false;
};
this.find(".prevMonth > a").click(prevMonthFunc);
this.find(".prevMonthButton > a").click(prevMonthFunc);
this.find(".nextMonth > a").click(nextMonthFunc);
this.find(".nextMonthButton > a").click(nextMonthFunc);
}
function populateDates(elm) {
var months = jQuery.datepicker._defaults.monthNames;
var selYear = selDate.getFullYear();
var selMonth = selDate.getMonth();
elm.find(".curMonth").text(months[selMonth] + " " + selYear);
var prevMonth = selMonth - 1;
var prevYear = selYear;
if (prevMonth < 0) {
prevMonth = 11;
prevYear = prevYear - 1;
}
elm.find(".prevMonth > a").text(months[prevMonth] + " " + prevYear);
var nextMonth = selMonth + 1;
var nextYear = selYear;
if (nextMonth > 11) {
nextMonth = 0;
nextYear = nextYear + 1;
}
elm.find(".nextMonth > a").text(months[nextMonth] + " " + nextYear);
}
})(jQuery);
```
I know most of this JavaScript code sucks - but for the most part it seems to do the job quite nicely. Yet, as I said: click a row to select it, then click it to deselect, and boom - a double call to add\_navigate, which results in an extra call to my JSON service and a visual flicker on the client side - and I cannot work out why it happens (and, even stranger, why it only happens on deselection and never on selection). | I would try doing
```
.unbind('click').click(function()
```
instead of
```
.click(function()
```
Just to make sure click events are not getting bound twice. | I think the issue is with double binding of events to the same element unconsciously.
This is usually the scenario,
```
(function($){
var MyDocument = new Object({
prepareBody : function(){
//addClick Event
$('div#updatedElement').click(MyDocument.ajaxCall());
//adjusting the height of an updatedElement to an-otherElement
$('div#updatedElement').css('height', $('div#otherElement').height());
},
ajaxCall : function(){
//do your ajax Call
$.getJSON('index.php',{param:1, param:2},function(response){
//do something with your response
$('div#updatedElement').html(response)
//say if after your call you decide to update the body again
MyDocument.prepareBody();
//what that does is you will double bind click to the updateElement div.
//The next time that it is click, the AjaxCall function will run twice
//The next time it is clicked the MyDocument.ajaxCall function will be run four times
//8 - 16 - 32 and by now, firefox would have crashed!
},'json');
}
});
$(document).ready(function(){
MyDocument.prepareBody()
});
})(jQuery);
```
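The runaway doubling those comments describe can be sketched without jQuery at all (a toy handler registry, illustrative only):

```javascript
// Toy stand-ins for .click(fn), .unbind('click') and a dispatched click event.
const handlers = [];
function bind(fn) { handlers.push(fn); }
function unbindAll() { handlers.length = 0; }
function fire() { handlers.slice().forEach(function (fn) { fn(); }); }

let calls = 0;
function setup() { bind(function () { calls += 1; }); }

setup();
setup();       // accidental re-binding, e.g. page setup run a second time
fire();        // one "click" now runs the handler twice (calls === 2)

unbindAll();   // the equivalent of .unbind('click') before re-binding
setup();
calls = 0;
fire();        // back to exactly one call per event (calls === 1)
```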
So as advised by KClough, unbind events before binding them so that they run only once! Sadly, jQuery does not overwrite them as some other frameworks do.
Hope this helps someone else | Double loading issue in Javascript / jQuery/Microsoft-ajax hybrid | [
"",
"javascript",
"jquery",
"microsoft-ajax",
""
] |
I have an XSLT transform issue:
```
style="width:{Data/PercentSpaceUsed}%;"
```
And the value of Data/PercentSpaceUsed is integer 3.
And it outputs:
```
style="width:
 3
 %;"
```
instead of what I expected:
```
style="width:3%;"
```
Here's the code that does the transform: `xslt_xslt` is the transform xml, `sw.ToString()` contains the `&#xD;` and `&#xA;` which I did not expect.
```
var xslTransObj = new XslCompiledTransform();
var reader = new XmlTextReader(new StringReader(xslt_xslt));
xslTransObj.Load(reader);
var sw = new StringWriter();
var writer = new XmlTextWriter(sw);
xslTransObj.Transform(new XmlTextReader(new StringReader(xslt_data)), writer);
ResultLiteral.Text = sw.ToString();
``` | The `&#xD;&#xA;` are carriage returns and line feeds either within your XML or your XSLT. Make sure the XML is like
```
<Value>3</Value>
```
Rather than
```
<Value>
3
</Value>
```
I believe there is a way to stop whitespace being used within your transformation, although I don't know it off the top of my head. | You're getting whitespace from the source document. Use
```
style="width:{normalize-space(Data/PercentSpaceUsed)}%;"
```
to strip out the whitespace. The other option in your case would be to use
```
style="width:{number(Data/PercentSpaceUsed)}%;"
``` | C# XSLT transform adding &#xD; and &#xA; to the output | [
"",
"c#",
"xslt",
"whitespace",
""
] |
In server management studio 2008 you can right mouse click on a table and then hit the select the first 1000 rows. Is there a button or a quick way to edit one of the returned rows instead of having to right mouse click on the table again and click edit first 200 rows. | Here's the way I normally go about this:
1. Right-Click table and select "Edit Top 200 Rows"
2. Right-Click anywhere on the results, navigate to Pane -> SQL
You'll see a SELECT statement that begins with
```
SELECT TOP(200) .....
```
Change the "200" to a larger value, or add a WHERE clause at the bottom. | In Options - SQL Server Object Explorer, Commands you can edit the Values for the
Edit Top Rows command
Select Top Rows command
You could change the edit to be 1000 rows instead of 200 | SQL Server 2008 - Go From Select to Edit Quickly | [
"",
"sql",
"sql-server-2008",
"ssms",
""
] |
If I have a LinkedList of Employee objects...
Each employee has a Name and an ID field.
I have a LinkedList called list....
If I want to see if the list contains an employee I do:
```
list.contains(someEmployeeObject)
```
How about if I want to see if the list contains an employee based on the employee ID?
let's say I have the following method:
```
public boolean containsEmployeeByID(int id)
```
How can I know if the list contains the employee object with the parameter id? | Just walk the list and look for matches. If you do this often and change the list infrequently, build a Map index first.
```
List<Employee> list = ...
for (Employee e : list)
if (e.getID() == id)
return true;
return false;
```
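If the lookup happens often, the Map index mentioned above could look like this (a hypothetical sketch; `getID` follows the snippet's naming):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class Employee {
    private final int id;
    Employee(int id) { this.id = id; }
    int getID() { return id; }
}

class EmployeeIndex {
    private final Map<Integer, Employee> byId = new HashMap<>();

    EmployeeIndex(List<Employee> employees) {
        for (Employee e : employees) {
            byId.put(e.getID(), e);   // built once, O(n)
        }
    }

    boolean containsEmployeeByID(int id) {
        return byId.containsKey(id);  // O(1) per lookup vs an O(n) list walk
    }
}
```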
That said, saving employees in a LinkedList?? What a strange example problem... | Maybe you should be using a map with the key being an ID and value being the employee name or the employee object? | Finding element in LinkedList | [
"",
"java",
"search",
"linked-list",
""
] |
If I `GROUP BY` on a unique key, and apply a `LIMIT` clause to the query, will all the groups be calculated before the limit is applied?
If I have a hundred records in the table (each with a unique key), will I have `100` records in the temporary table created (for the `GROUP BY`) before a `LIMIT` is applied?
A case study why I need this:
Take `Stack Overflow` for example.
Each query you run to show a list of questions, also shows the user who asked this question, and the number of badges he has.
So, while user<->question is one-to-one, user<->badges is one-to-many.
The only way to do it in one query (and not one on questions and another one on users and then combine results), is to group the query by the primary key (question\_id) and join+group\_concat to the user\_badges table.
**The same goes for the questions TAGS.**
```
Code example:
Table Questions:
question_id (int)(pk)| question_body(varchar)
Table tag_question:
question_id (int) | tag_id (int)
SELECT:
SELECT questions.question_id,
       questions.question_body,
       GROUP_CONCAT(tag_id, ' ') AS tags_ids
FROM
       questions
JOIN
       tag_question
ON
       questions.question_id = tag_question.question_id
GROUP BY
       questions.question_id
LIMIT 15
``` | `LIMIT` does get applied after `GROUP BY`.
Will the temporary table be created or not, depends on how your indexes are built.
If you have an index on the grouping field and don't order by the aggregate results, then an `INDEX SCAN FOR GROUP BY` is applied, and each aggregate is counted on the fly.
That means that if you don't select an aggregate due to the `LIMIT`, it won't ever be calculated.
But if you order by an aggregate, then, of course, all of them need to be calculated before they can be sorted.
That's why they are calculated first and then the `filesort` is applied.
**Update:**
As for your query, see what `EXPLAIN EXTENDED` says for it.
Most probably, `question_id` is a `PRIMARY KEY` for your table, and most probably, it will be used in a scan.
That means no `filesort` will be applied, and the join itself will never happen past the `15th` row.
To make sure, rewrite your query as following:
```
SELECT question_id,
question_body,
(
SELECT GROUP_CONCAT(tag_id, ' ')
FROM tag_question t
WHERE t.question_id = q.question_id
)
FROM questions q
ORDER BY
question_id
LIMIT 15
```
* First, it is more readable,
* Second, it is more efficient, and
* Third, it will return even untagged questions (which your current query doesn't). | Yes, the order the query executes is:
* FROM
* WHERE
* GROUP
* HAVING
* SORT
* SELECT
* LIMIT
LIMIT is the last thing calculated, so your grouping will be just fine.
Now, looking at your rephrased question, you're not having just one row per group, but many: in the case of Stack Overflow, you'll have just one user per row, but many badges - i.e.
```
(uid, badge_id, etc.)
(1, 2, ...)
(1, 3, ...)
(1, 12, ...)
```
all those would be grouped together.
To avoid full table scan all you need are indexes. Besides that, if you need to SUM, for example, you cannot avoid a full scan.
**EDIT:**
You'll need something like this (look at the WHERE clause):
```
SELECT
quesuestions.question_id,
questions.question_body,
GROUP_CONCAT(tag_id,' ') AS 'tags_ids'
FROM
questions q1
JOIN tag_question tq
    ON q1.question_id = tq.question_id
WHERE
q1.question_id IN (
SELECT
tq2.question_id
FROM
            tag_question tq2
        JOIN tag t
            ON tq2.tag_id = t.tag_id
WHERE
            t.name = 'the-mysterious-tag'
)
GROUP BY
q1.question_id
LIMIT 15
``` | Is a GROUP BY on UNIQUE key calculates all the groups before applying LIMIT clause? | [
"",
"sql",
"mysql",
"grouping",
"limit",
""
] |
What is the difference between the JavaScript functions `decodeURIComponent` and `decodeURI`? | To explain the difference between these two let me explain the difference between `encodeURI` and `encodeURIComponent`.
The main difference is that:
* The `encodeURI` function is intended for use on the full URI.
* The `encodeURIComponent` function is intended to be used on... well... URI components, that is, any part that lies between separators (; / ? : @ & = + $ , #).
So, in `encodeURIComponent` these separators are encoded also because they are regarded as text and not special characters.
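A small sketch of the consequence (return values follow the ECMAScript definitions):

```javascript
// encodeURIComponent escapes separators, so a query value survives intact:
const component = encodeURIComponent('a&b=c');   // 'a%26b%3Dc'
const back = decodeURIComponent(component);      // 'a&b=c'

// decodeURI deliberately leaves escapes for reserved separators alone,
// so it cannot unpack individual component values:
const kept = decodeURI('a%26b');                 // still 'a%26b'
const unpacked = decodeURIComponent('a%26b');    // 'a&b'
```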
Now back to the difference between the decode functions: each function decodes strings generated by its corresponding encode counterpart, taking care of the semantics of the special characters and their handling. | encodeURIComponent()/decodeURIComponent() is almost always the pair you want to use for concatenating together and splitting apart text strings in URI parts.
encodeURI is less common, and misleadingly named: it should really be called fixBrokenURI. It takes something that's nearly a URI, but has invalid characters such as spaces in it, and turns it into a real URI. It has a valid use in fixing up invalid URIs from user input, and it can also be used to turn an IRI (URI with bare Unicode characters in) into a plain URI (using %-escaped UTF-8 to encode the non-ASCII).
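A small sketch of that fix-up use (the path is made up):

```javascript
// Spaces are not legal in a URI; encodeURI percent-escapes them
// while leaving structural characters (:, /, ?, =) alone:
const fixed = encodeURI("http://example.com/my docs/report 1.html?q=a b");
// fixed is "http://example.com/my%20docs/report%201.html?q=a%20b"
```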
Where encodeURI should really be named fixBrokenURI(), decodeURI() could equally be called potentiallyBreakMyPreviouslyWorkingURI(). I can think of no valid use for it anywhere; avoid. | What is the difference between decodeURIComponent and decodeURI? | [
"",
"javascript",
""
] |
I hope you can help me.
I have a web service that needs to log transactions. Since there are many hits, the log statements appear disjoint/fragmented in the log file.
I've considered passing a StringBuilder instance throughout the layers, appending the statements to it, and then logging its contents once at the very end (in a finally clause in the main controller) before returning a response to the client.
This seems unnecessary and I'm sure there's a cleaner way to do it with Log4J.
Can anyone shed light on the issue?
Thanks | The good thing is that you are already using Log4j. The library itself implements the concepts of *nested diagnostic context* (`org.apache.log4j.NDC`) and *mapped diagnostic context* (`org.apache.log4j.MDC`). I reckon you should definitely have a look at both, because they allow you to build the sort of context you can use in your log output layout. | You might consider turning the problem around: instead of consolidating all of the pieces of an entry when logging at runtime, consolidate them later during the analysis of the data.
For example, we have an application that logs a lot of data, where fast response is important. We recently implemented a (custom-built) system that logs to what is effectively a relational database. However, the logging portion of it is optimized for logging; we simply append tuple data to the logfile in the order that the application generates it. We have tools that can do queries against this format, and also plan to generate new versions of the database stored in a different format when we start getting annoyed that every query against the log format effectively requires a table scan of every table in the database.
Even if you can't make direct use of this technique, just considering it may give you ideas for doing something better within Log4J. | structured/combined logging with Log4J | [
"",
"java",
"logging",
"log4j",
""
] |
In the code below, given "where T : WsgTypes.RouteRestriction", can I add multiple classes so that T is restricted to only the few class types I am interested in?
```
public static T GetDetails<T>(string code) where T : WsgTypes.RouteRestriction
{
T details;
if (typeof(T) == typeof(WsgTypes.TicketType))
{
details = TicketTypeDetail.GetDetails(code) as T;
}
else if (typeof(T) == typeof(WsgTypes.RouteRestriction))
{
details = RouteRestrictionDetail.GetDetails(code) as T;
}
else
{
throw new NotSupportedException("");
}
return details;
throw new NotImplementedException();
}
``` | In a generic constraint you can specify at most one class, but you can combine it with multiple interfaces.
```
public static T GetDetails<T>(string code) where T : WsgTypes.RouteRestriction, IComparable
{
}
```
Instead you can have an interface and have multiple classes implementing it.
```
public interface IInterface
{}
public class Class1: IInterface
{}
public class Class2: IInterface
{}
public static T GetDetails<T>(string code) where T:IInterface
{
T instance;
// ...
return instance;
}
``` | It seems to me that this isn't a proper use of generics. It would be better if `TicketType` and `RouteRestriction` implemented some `IDetailed`. | Generics c#.net | [
"",
"c#",
"generics",
""
] |
I have successfully implemented interop between a Win32 application and a managed .NET dll as described [here](https://stackoverflow.com/questions/787303/how-to-use-net-assembly-from-win32-without-registration). But I also read [here](https://stackoverflow.com/questions/258875/hosting-the-net-runtime-in-a-delphi-program) that it is possible to host the entire CLR inside an unmanaged process.
So my question is: why would you do that? It is somewhat more complex than just using an object - what benefits do you gain for this price of increased complexity?
Edit: what I understood from the first 2 answers is that you get the possibility to customize the CLR for your needs - meaning if you're writing a simple business app, you'll never need to host. Hosting is for system-heavy stuff, like a browser or SQL Server. | Hosting the CLR is generally not something you do to interop between managed code and Win32. There are generally 3 methods of interop:
* Runtime Callable Wrapper (RCW) - call a COM object from .NET
* COM Callable Wrapper (CCW) - make a .NET object appear as a COM object
* P/Invoke
These have been supported since the first version of .NET. The whole point of hosting the CLR is to allow you to deeply embed .NET code inside an unmanaged application. For example, there is a module that can host .NET in Apache on Win32, allowing it to run .aspx pages.
Similarly, the SQL Server team wanted a way for people to write extended stored procedures and functions with managed code. In the past you could write these in C/C++, but by hosting the CLR they could allow people to write them in C#. The work to get the CLR into a state where it could be safely embedded really pushed out the timelines, and so things like control over memory and security were born. SQL Server has some serious stability requirements and you can't have .NET rocking the boat.
The hosting API changed significantly from .NET 1.x to 2.x but has been more stable since; the 2.0 CLR has lived through .NET 3.0, 3.5, etc. | You may have a legacy application and want to allow 3rd parties to use the facilities of .NET from within your application, but in a controlled manner, ***such as*** controlling where assemblies are loaded from. [Here](http://msdn.microsoft.com/en-us/magazine/cc163567.aspx) is an example.
"",
"c#",
".net",
"delphi",
"interop",
""
] |
by simple I mean, having buttons:
1. bold,
2. italic,
3. numbered list
4. bullet point list
5. indent left
6. indent right
7. spell check (obviously supported by ready made js component)
by custom I mean: having custom icons - so really just custom design
no frameworks, written from scratch, lightweight, compatible with major browsers
this is one of the main components of the webapp, so it has to be super lightweight; that's why I don't want frameworks | Writing an editor that works cross-platform can be difficult; since it is a large project, you will effectively end up creating your own small framework as you build it.
If you just want custom icons, the schedule will depend on how long it takes you to make them; getting some basic functionality isn't that hard, probably less than 40 hrs of work if you know what you are doing.
In Unix writing your own shell used to be a rite of passage, in javascript it may be writing your own editor. :)
Where it gets tricky is if I have
```
<b>some text</b><i>more text</i>
```
and I decide to remove the tags from this text, then how to fix it will get tricky.
If you want to use only css then it gets to be more of a problem as you are grouping text from span tags, and fixing css classes, while the user is continuing to make changes.
I am dealing with this currently as I want an editor that works in XHTML2.0, and it is not a trivial issue, much harder than it is to do in a desktop application.
I would suggest getting it to work on Firefox 3 and Safari first, then, once it is working, go back and add in the code to get it to work on IE8 and, if you want, IE7, since MS is pushing IE8 out as a critical update now. | Unless you are targeting one browser, editors are immensely complicated components to get to work cross-browser. There's no reason to do it yourself, unless you want to learn.
Use one of the many available that allow customization:
[tinymce](http://tinymce.moxiecode.com/),
[fckeditor](http://www.fckeditor.net/),
[wysihat](http://github.com/37signals/wysihat/tree/master),
[others](http://www.google.com/search?hl=en&client=safari&rls=en-us&num=20&q=javascript+wysiwyg+editor&btnG=Search) | How fast does it take to write a simple, custom editor? | [
"",
"javascript",
"editor",
""
] |
In VB.NET, you can surround a variable name with brackets and use keywords as variable names, like this:
```
Dim [goto] As String = ""
```
Is there a C# equivalent to doing this? | ```
string @string = "";
``` | Yes, prefix it with a @
```
String @goto = "";
``` | C# keywords as a variable | [
"",
"c#",
""
] |
Building an SMTP client in Python which can send mail, and also show that the mail has been received through a mail service, for example Gmail. | Create mail messages (possibly with multipart attachments) with [email](http://docs.python.org/library/email.html).
> The `email` package is a library for managing email messages, including MIME and other RFC 2822-based message documents.
Send mail using [smtplib](http://docs.python.org/library/smtplib.html)
> The `smtplib` module defines an SMTP client session object that can be used to send mail to any Internet machine with an SMTP or ESMTP listener daemon.
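Putting the two together, a minimal sketch (the addresses and SMTP host are placeholders; nothing is sent until you point it at a real server):

```python
import smtplib
from email.mime.text import MIMEText

# Build an RFC 2822 message with the email package.
msg = MIMEText("Hello from Python!")
msg["Subject"] = "Test message"
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"

def send(message, host="smtp.example.com", port=25):
    """Deliver the message through an SMTP server (placeholder host)."""
    server = smtplib.SMTP(host, port)
    try:
        server.sendmail(message["From"], [message["To"]], message.as_string())
    finally:
        server.quit()

# send(msg)  # uncomment once `host` points at a reachable SMTP server
```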
If you are interested in browsing a remote mailbox (for example, to see if the messages you sent have arrived), you need a mail service accessible via a known protocol. A popular example is the [`imaplib`](http://docs.python.org/library/imaplib.html) module, implementing the [`IMAP4` protocol](http://en.wikipedia.org/wiki/IMAP4). `IMAP` is [supported by `gmail`](http://mail.google.com/support/bin/topic.py?hl=en&topic=12806).
> This (`imaplib`) module defines three classes, IMAP4, IMAP4\_SSL and IMAP4\_stream, which encapsulate a connection to an IMAP4 server and implement a large subset of the IMAP4rev1 client protocol as defined in RFC 2060. It is backward compatible with IMAP4 (RFC 1730) servers, but note that the STATUS command is not supported in IMAP4. | If you want the Python standard library to do the work for you (recommended!), use [smtplib](http://docs.python.org/library/smtplib.html). To see whether sending the mail worked, just open your inbox ;)
If you want to implement the protocol yourself (is this homework?), then read up on the [SMTP protocol](http://www.ietf.org/rfc/rfc0821.txt) and use e.g. the [socket](http://docs.python.org/library/socket.html) module. | How would one build an smtp client in python? | [
"",
"python",
"smtp",
""
] |
I'm wondering whether the way I retrieve the id of the last row inserted into a PostgreSQL table is efficient.
It works, obviously, but relying on the serial sequence's currval value could be problematic when many users are adding rows to the same table at the same time.
My current way is:
```
$pgConnection = pg_connect('host=127.0.0.1 dbname=test user=myuser password=xxxxx')or die('cant connect');
$insert = pg_query("INSERT INTO customer (name) VALUES ('blabla')");
$last_id_query = pg_query("SELECT currval('customer_id_seq')");
$last_id_results = pg_fetch_assoc($last_id_query);
print_r($last_id_results);
pg_close($pgConnection);
```
Well, it's just a test atm.
But anyway, I can see 3 issues with this approach:
1. Referencing customer\_id\_seq: if two users do the same thing at the same time, could it happen that they both get the same id that way... or not?
2. I *have to know* the table's sequence name, because pg\_get\_serial\_sequence doesn't work for me (I'm a newbie with PostgreSQL; it's probably a configuration issue).
Any suggestion/better ways?
P.S.: I can't use PDO because it seems to be lacking a bit with transaction savepoints; I won't use Zend and, in the end, I'd prefer to use the PHP pg\_\* functions (maybe I'll build up my own classes eventually)
**EDIT:**
@SpliFF (who deleted his answer): would this work better?
```
$pgConnection = pg_connect('host=127.0.0.1 dbname=test user=myuser password=xxxxx')or die('cant connect');
pg_query("BEGIN");
$insert = pg_query("INSERT INTO customer (name) VALUES ('blabla')");
$last_id_query = pg_query("SELECT currval('customer_id_seq')");
$last_id_results = pg_fetch_assoc($last_id_query);
print_r($last_id_results);
//do somethings with the new customer id
pg_query("COMMIT");
pg_close($pgConnection);
``` | If you use a newer version of PostgreSQL (> 8.1) you should use the RETURNING clause of INSERT (and UPDATE) command.
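A quick sketch of the RETURNING form (the generated-key column is assumed to be `id`):

```sql
-- The generated key comes back with the INSERT itself, with no race:
INSERT INTO customer (name) VALUES ('blabla') RETURNING id;
```

In PHP the result can be fetched exactly like the result of a SELECT.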
OTOH if you insist on using one of the sequence manipulation functions, please read the [fine manual](http://www.postgresql.org/docs/current/static/functions-sequence.html). A pointer: "Notice that because this is returning a session-local value, it gives a predictable answer whether or not other sessions have executed nextval since the current session did." | Insert and check currval(seq) inside one transaction. Before committing the transaction you'll see the currval(seq) for your own insert, no matter who else inserted at the same time.
I don't remember the syntax exactly (I last used PostgreSQL about 3 years ago; read the manual), but in general it looks like this:
```
BEGIN TRANSACTION;
INSERT ...;
SELECT currval(seq);
COMMIT;
``` | PostgreSQL and PHP: is currval an efficient way to retrieve the last inserted row id in a multiuser application? | [
"",
"php",
"postgresql",
"insert-id",
""
] |
This is more a question regarding generics than subsonic:
Imagine I have the following code:
```
List<int> result =
DB.Select(Product.Columns.Id)
.From<Product>()
.ExecuteTypedList<int>();
```
That works great and returns a generic list with the ids from my Product table.
But if I want to get a list of the ProductName:
```
List<String> result =
DB.Select(Product.Columns.ProductName)
.From<Product>()
.ExecuteTypedList<String>();
```
it throws a compiler error (translated from German):
> "string" has to be a non-abstract type
> with a public Constructor without
> parameter, in order to be used as a
> generic type or in the generic method
> "SubSonic.SqlQuery.ExecuteTypedList()"
> as param "T".
Cause: String has no parameterless constructor:
```
int i = new int(); // works
String s = new String(); // compiler error: 'string' does not contain a constructor that takes '0' arguments
```
If I use a `List<Object>` instead it works, but is there a more elegant way, where I can use `List<String>` ?
Update: `List<Object>` does not work. I indeed get a list of objects, but they seem to be "empty" objects that don't contain my ProductNames (object.ToString() returns `{Object}`) | With a little bit of .NET magic it is possible without patching the SubSonic code.
1. Create a new class SubsonicSqlQueryExtensionMethods and drop in this code:
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using SubSonic;
namespace MyUtil.ExtensionMethods
{
public static class SubSonicSqlQueryExtensionMethods
{
public static List<String> ExecuteTypedList(this SqlQuery qry)
{
List<String> list = new List<String>();
foreach (System.Data.DataRow row in qry.ExecuteDataSet().Tables[0].Rows)
{
list.Add((String)row[0]);
}
return list;
}
}
}
```
2. Now add a using directive for MyUtil.ExtensionMethods to your class:
```
using MyUtil.ExtensionMethods;
```
And finally this works:
```
List<String> result = DB.Select(User.Columns.Name).From<User>().ExecuteTypedList();
```
Please note that the above extension method overloads the ExecuteTypedList() method with no type-argument (unfortunately this snippet requires dotnet 3.5, but for me it works) | I know I am late to this party but I found a neat way of 'tricking' this problem.
```
List<String> result =
DB.Select()
.From<Product>()
.ExecuteTypedList<Product>().Select(p => p.ProductName).ToList<String>();
```
This works like a charm for me.
Hope it helps someone somewhere, as I am sure you are far past the issue. | Use the Subsonic.Select() ExecuteTypedList Method with String | [
"",
"c#",
"generics",
"subsonic",
""
] |
I'm having a major performance issue with LINQ2SQL and transactions. My code does the following using IDE generated LINQ2SQL code:
1. Run a stored proc checking for an existing record
2. Create the record if it doesn't exist
3. Run a stored proc that wraps its own code in a transaction
When I run the code with no transaction scope, I get 20 iterations per second. As soon as I wrap the code in a transaction scope, it drops to 3-4 iterations per second. I don't understand why the addition of a transaction at the top level reduces the performance by so much. Please help?
Pseudo stored proc with transaction:
```
begin transaction
update some_table_1;
insert into some_table_2;
commit transaction;
select some, return, values
```
Pseudo LINQ code without transaction:
```
var db = new SomeDbContext();
var exists = db.RecordExists(some arguments);
if (!exists) {
var record = new SomeRecord
{
// Assign property values
};
db.RecordsTable.InsertOnSubmit(record);
db.SubmitChanges();
var result = db.SomeStoredProcWithTransactions();
}
```
Pseudo LINQ code with transaction:
```
var db = new SomeDbContext();
var exists = db.RecordExists(some arguments);
if (!exists) {
using (var ts = new TransactionScope())
{
var record = new SomeRecord
{
// Assign property values
};
db.RecordsTable.InsertOnSubmit(record);
db.SubmitChanges();
var result = db.SomeStoredProcWithTransactions();
ts.Complete();
}
}
```
I know the transaction isn't being escalated to the DTC because I've disabled the DTC. SQL Profiler shows that several of the queries take much longer with the TransactionScope enabled, but I'm not sure why. The queries involved are very short-lived and I've got indexes that I have verified are being used. I'm unable to determine why the addition of a parent transaction causes so much degradation in performance.
Any ideas?
**EDIT:**
I've traced the problem to the following query within the final stored procedure:
```
if exists
(
select * from entries where
ProfileID = @ProfileID and
Created >= @PeriodStart and
Created < @PeriodEnd
) set @Exists = 1;
```
If I add with(nolock) as shown below, the problem disappears.
```
if exists
(
select * from entries with(nolock) where
ProfileID = @ProfileID and
Created >= @PeriodStart and
Created < @PeriodEnd
) set @Exists = 1;
```
However, I'm concerned that doing so may cause problems down the road. Any advice? | One big thing that changes as soon as you get a transaction - the [isolation level](http://msdn.microsoft.com/en-us/library/system.transactions.isolationlevel.aspx). Is your database under heavy contention? If so: by default a `TransactionScope` is at the highest "serializable" isolation level, which involves read locks, key-range locks, etc. If it can't acquire those locks immediately it will slow down while it is blocked. You could investigate by reducing the isolation level of the transaction (via the constructor). For example (but pick your own isolation-level):
```
using(var tran = new TransactionScope(TransactionScopeOption.Required,
new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot })) {
// code
tran.Complete();
}
```
However, picking an isolation level is... tricky; serializable is the safest (hence the default). You can also use granular hints (but not via LINQ-to-SQL) such as `NOLOCK` and `UPDLOCK` to help control locking of specific tables.
---
You could also investigate whether the slowdown is due to *trying* to talk to DTC. Enable DTC and see if it speeds up. The LTM is good, but I've seen composite operations to a single database escalate to DTC before... | Does the Stored Procedure you call participate in the ambient (parent) transaction? - that is the question.
It's likely that the Stored Procedure participates in the ambient transaction, which is causing the degradation. There's an [MSDN article here](http://msdn.microsoft.com/en-us/library/ms172152.aspx) discussing how they interrelate.
From the article:
"When a TransactionScope object joins an existing ambient transaction, disposing of the scope object may not end the transaction, unless the scope aborts the transaction. If the ambient transaction was created by a root scope, only when the root scope is disposed of, does Commit get called on the transaction. If the transaction was created manually, the transaction ends when it is either aborted, or committed by its creator."
There's also a serious-looking document on nested transactions which looks like it is directly applicable, located on [MSDN here](http://msdn.microsoft.com/en-us/library/ms189336.aspx).
Note:
"If TransProc is called when a transaction is active, the nested transaction in TransProc is largely ignored, and its INSERT statements are committed or rolled back based on the final action taken for the outer transaction."
I think that explains the difference in performance - it's essentially the cost of maintaining the parent transaction. Kristofer's suggestion may help to reduce the overhead. | LINQ2SQL performance with transactions | [
"",
"c#",
"linq-to-sql",
""
] |
Most of us write conditionals like:
```
if (resultIndex == 0)
```
...but occaisionally I come across someone writing them like:
```
if (0 == resultIndex)
```
...and interestingly those people have been authors and seemingly pretty hot coders.
So why do some people choose the 'backwards' style? Is there some history behind it? Readabililty?
---
Duplicate: [Why does one often see “null != variable” instead of “variable != null” in C#?](https://stackoverflow.com/questions/271561/why-does-one-often-see-null-variable-instead-of-variable-null-in-c). | This has been asked numerous times before though I can't seem to find the Dup.
Essentially, this style is a hangover from C where a common mistake for
```
if (c == 5) //Comparison
```
was
```
if (c = 5) //Assignment
```
In the latter case, the compiler would not complain, so people wrote it the other way around to reduce the likelihood of this happening:
```
if (5 == c)
``` | It is a legacy from C, where a common bug would be to write
```
if (x = 0) {...}
```
If you taught yourself to write these tests the other way around, then the compiler would complain when you made the == typo, instead of silently adding a bug. | Conditional styles: if (0 == resultIndex) vs if (resultIndex ==0) | [
"",
"c#",
""
] |
What is the difference between developing applications using the .NET Framework / ASP.NET and developing applications in SharePoint (MOSS or WSS)? | I highly recommend checking out this thread for various pain points:
<https://stackoverflow.com/questions/256407>
Since SharePoint is built on ASP.NET, the argument can be made that anything you can do with ASP.NET you can do with SharePoint, but the reality is that developing applications for SharePoint is not for the faint-hearted and you should expect a much longer development cycle, particularly if you are new to the platform. I suggest becoming very familiar with Google, StackOverflow, and .NET Reflector because a lot of what you need to know is not in the documentation or is hard to find.
It's not all bad though. You get a lot of infrastructure out of the box like authentication, versioning of data (if you are storing your data in lists), and incoming/outgoing email connectivity to name a few. | Sharepoint is a collaboration tool built on top of .NET and ASP.NET. To develop applications for Sharepoint means you still need familiarity with ASP.NET and the .NET Framework, but also familiarity with the Sharepoint infrastructure/API's. | Difference in Sharepoint and .Net development? | [
"",
"c#",
".net",
"sharepoint",
"project-planning",
""
] |
I'm looking for a way to create delta diff patches of large binary files (VMware virtual disk files). Is there an implementation in C#, or any useful methods in the .NET Framework?
Any help is appreciated. Thanks.
rAyt | There's nothing built into the framework to do this.
You're going to have to look for 3rd party solutions, commercial or free, or write your own.
A common algorithm is the [VCDiff](http://www.faqs.org/rfcs/rfc3284.html) algorithm, which is used by quite a large number of products. | [bsdiff](http://www.daemonology.net/bsdiff/) was designed to create very small patches for binary files.
As stated on its page, it requires `max(17*n,9*n+m)+O(1)` bytes of memory and runs in `O((n+m) log n)` time (where `n` is the size of the old file and `m` is the size of the new file), so it will take a long time and use a huge amount of memory to create diffs for virtual disk files.
The original implementation is in C, but a C# port is described [here](http://code.logos.com/blog/2010/12/binary_patching_with_bsdiff.html) and available [here](https://github.com/LogosBible/bsdiff.net). | Creating Delta Diff Patches of large Binary Files in C# | [
"",
"c#",
".net",
"diff",
""
] |
I'm attempting to split up my WCF web services into a few services instead of 1 giant service. But Visual Studio (Silverlight client) duplicates the common classes shared by both services. Here is a simple example to illustrate my problem.
In this example there are two services. Both return the type "Person". By default VS will create two separate Person proxies under unique namespaces. This means that the "Person" returned by the different services cannot be consumed by the client as the same thing. How do I fix this? Is it possible without writing the proxy classes myself?
## Common
```
[DataContract]
public class Person
{
[DataMember]
public string FirstName { get; set; }
[DataMember]
public string LastName { get; set; }
[DataMember]
public string PrivateData { get; set; }
}
```
## StaffService.svc
```
[ServiceContract(Namespace = "")]
public class StaffService
{
[OperationContract]
public Person GetPerson ()
{
return new Person {"John", "Doe", "secret"};
};
}
```
## PublicService.svc
```
[ServiceContract(Namespace = "")]
public class PublicService
{
[OperationContract]
public Person GetPerson ()
{
return new Person {"John", "Doe", "*****"};
};
}
```
Thanks for your help!
Justin | There is a check box under the Advanced section of "Add Service Reference" named "Reuse types in referenced assemblies". This will hunt for types used in your service and if they already exist in a referenced assembly then they'll be used rather than a proxy class generated.
One caveat here is that it's only "referenced assemblies" that are searched, so it won't pick up proxies generated by other services (and I believe the different namespace would stop it as well).
I usually have a business / domain project in my Silverlight project so I add my shared classes to that project (usually with the "Add Existing Item" > "Add as Link" so the code is shared).
Once that's done you can generate your service references and they should pick up your existing types.
Hope this helps | If you generate the proxies at the same time using svcutil.exe it will only generate one type. I don't know how to do this with adding a service reference to the project.
We run it in a batch file so I have clipped that down and changed the names to protect the innocent. It is really about mapping the service namespaces together and then **including all the URLs together**. It also has the collection type set (for lists) and includes an assembly reference (which some of the other answers reference.
```
@ECHO OFF
SET cmd=C:\"Program Files"\"Microsoft SDKs"\Windows\v6.0a\bin\SvcUtil.exe
SET cmd=%cmd% /out:Traffic.cs /noConfig /collectionType:System.Collections.Generic.List`1
SET cmd=%cmd% /reference:..\..\..\lib\Architecture.Frameworks.dll
REM ######### Service namespace mappings (Service Contracts and Message Contracts)
SET cmd=%cmd% /namespace:"http://services.test.com/app/2005/09/"
SET cmd=%cmd%,"app.ServiceProxies"
REM ######### Schema namespace mappings (Data Contracts)
SET cmd=%cmd% /namespace:"http://schemas.company.com/app/2005/09/"
SET cmd=%cmd%,"Co.ServiceProxies.app.DataContracts"
REM ######### Set all the URLs that have common types
SET cmd=%cmd% http://localhost/Services/MyService1.svc
SET cmd=%cmd% http://localhost/Services/MyService2.svc
%cmd%
PAUSE
```
If all the items are in the same service namespace, you could possibly get away with just having all the URLs and not worry about the namespaces, but I have not tried it that way. | Adding service references to multiple WCF services that shared classes | [
"",
"c#",
"wcf",
"silverlight",
"web-services",
"proxy",
""
] |
Is it possible to make XA-transactional access to the file system in Java?
I want to manipulate files within the boundaries of a transaction, and my transaction must participate in a distributed transaction via JTA (so I guess the file system needs to be accessed as an XAResource). I don't need support for fine-grained read/write file access; treating each file as a record is good enough for my needs.
Does anybody know an open-source project that already does this? I don't feel like implementing this mess just to find out that it's already been done...
I heard some rumors that JBoss Transactions will add support for this (see for example [this discussion](http://jboss.org/community/wiki/TransactionalFileIO)) but couldn't find an official statement about this.
By the way, if you need transactional file access but don't require the transaction to participate in a 2-phase commit I recommend you have a look at [Apache Commons Transaction](http://commons.apache.org/transaction/file/index.html)
A nice article about the complexities involved can be found in [here](http://java.sys-con.com/node/37798). | Recently I solved exactly the same problem. Finally I used [Bitronix](http://docs.codehaus.org/display/BTM/Home) with XADisk.
You can find more details in my blog post: [JTA transaction manager – Atomikos or Bitronix?](http://blog.trixi.cz/2011/11/jta-transaction-manager-atomikos-or-bitronix/) | [XADisk](http://xadisk.java.net/) can get you what you are looking for. It's free and open source. | Is there an open-source solution to XA-transactional file access in Java? | [
"",
"java",
"file",
"transactions",
"filesystems",
"jta",
""
] |
I've read around the subject of temporary tables and scope and all the answers i've seen don't seem to talk about one of my concerns.
I understand that a local temporary table's scope is only valid within the lifetime of a stored procedure or child stored procedures. However, what is the situation with regard to concurrency? I.e. if I have a stored procedure that creates a temporary table and it is called from two different processes but with the same user/connection string, will that temporary table be shared between the two calls to that one stored procedure, or will each call to the stored procedure create a unique temporary table instance?
I would assume that the temporary table belongs to the scope of the call to the stored procedure, but I want to be sure before I go down this path. | Local temporary tables (start with #) are limited to your session; other sessions, even from the same user/connection string, can't see them. The rules for the lifetime depend on whether the local temporary table was created in a stored procedure:
* A local temporary table that is created in a stored procedure is dropped when the procedure ends; other stored procedures, or the calling process, can't see them.
* Other local temporary tables are dropped when the session ends.
Global temporary tables (start with ##) are shared between sessions. They are dropped when:
* The session that created them ends
* AND no other session is referring to them
This command can be handy to see which temporary tables exist:
```
select TABLE_NAME from tempdb.information_schema.tables
```
And this is handy to drop temporary tables if you're not sure they exist:
```
if object_id('tempdb..#SoTest') is not null drop table #SoTest
```
See this [MSDN article](http://msdn.microsoft.com/en-us/library/aa258255(SQL.80).aspx) for more information. | The temporary table will be accessible to the instance of the procedure that creates it.
The following script
```
Exec ('Select 1 as col Into #Temp; Select * From #Temp')
Exec ('Select 2 as col Into #Temp; Select * From #Temp')
```
Returns
```
Col
1
Col
2
```
Not
```
Col
1
2
```
Or an error because the table already exists.
The temporary table will also be accessible by any 'child' procedures that the initial procedure runs as well. | SQL Server 2005 and temporary table scope | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have typed in the following text in a control derived from Richtextbox
"The world is {beautful}".
My main intention is to create a link for the word beautful. I can create this using CFE\_LINK, but that only works when I select the text.
**When I use Select(4, 9), the text within the range 4 to 9 gets deleted.**
Can someone please help me with what I am missing out?
CODE :
I am creating a User Control, derived from Richtextbox.
I am giving the exact code below; I have not done any color change. I think the Select command sets the selected text to blue by default.
```
protected override void OnKeyPress(KeyPressEventArgs e)
{
    String keypressed = e.KeyChar.ToString();
    if (keypressed == "}")
        Select(4, 9);
    base.OnKeyPress(e);
}
At first, when I started messing with this, I was puzzled as well. But then it hit me: it's very possible that the key being pressed is sent to the textbox to render after your handler runs, so the incoming character overwrites the selection you just made. Sure enough, when I changed your code to this it worked:
```
protected override void OnKeyUp(KeyEventArgs e)
{
base.OnKeyUp(e);
if (e.KeyCode == Keys.Oem6)
{
Select(4, 9);
}
}
``` | I suspect that when the '}' key is pressed, your code runs before the character is sent to the textbox.
So you select the text, and then the '}' character is sent to the textbox, overwriting the selection.
**Edit:** Yup, reproduced it.
I'm not sure off the top of my head how to solve it. Perhaps it would be better to implement `OnTextChanged` instead. You could scan the entire textbox for unlinked {words inside braces}. It might be slower if the text is large, but it would automatically handle copy and paste and things like that. | Selecting text in RichTextBox in C# deletes the text | [
"",
"c#",
"richtextbox",
""
] |