| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
What are the established coding standards for JavaScript?
|
Not exactly what you're looking for, but (like Doug Crockford's vids) I think everybody who writes/edits/reviews JS should look at Chris Heilmann's [slides](http://www.slideshare.net/cheilmann/fronteers-maintainability-presentation) and [videos](http://ajaxian.com/archives/maintainable-javascript-videos-are-now-available) on maintainable JavaScript.
In the [slides](http://www.slideshare.net/cheilmann/fronteers-maintainability-presentation), slide 127, he says not to worry so much about style guides/coding standards; instead, reformat all code as it is checked in to version control.
Also these:
[Mozilla JavaScript Tips](http://developer.mozilla.org/en/JavaScript_style_guide)
[JavaScript Style Guide](http://neil.rashbrook.org/Js.htm)
|
Another reason why [Crockford's JavaScript](http://javascript.crockford.com/code.html) coding guidelines are a good idea to follow:
This code (below) actually returns `undefined` because of JavaScript's automatic semicolon insertion - which is a damn good reason to get the whole team of developers singing from the same song sheet:
```
return // a semicolon is inserted here, so the function returns 'undefined'
{
    javascript : "fantastic"
}; // the object literal is constructed anonymously, but nothing happens with it
```
Bit more about that at [Beware of JavaScript semicolon insertion](http://www.robertnyman.com/2008/10/16/beware-of-javascript-semicolon-insertion)
|
Are there any coding standards for JavaScript?
|
[
"",
"javascript",
"coding-style",
""
] |
I was wondering if the C# project setting "Allow unsafe code" applies only to unsafe C# code in the project itself, or is it necessary to set this option when linking in a native C++ DLL? What about linking in a managed DLL that itself links to a native DLL? What does this option really do, under the hood?
|
It has to do with the `unsafe` keyword in C#. `unsafe` turns off all the checks that would normally happen and allows you to access memory directly. It doesn't refer to calling native C++ DLLs or interfaces.
|
It allows you to use an `unsafe` block:
```
unsafe
{
    // pointer operations, fixed buffers, etc. are permitted here
}
```
|
What is the actual function of the C# project setting "Allow unsafe code"
|
[
"",
"c#",
"dll",
"interop",
"native",
"unsafe",
""
] |
I am currently in a course that is using OpenGL and I have been using C for all the programs so far. I have Python installed on Fedora as well as OpenGL, however the minute I call an OpenGL command in my Python code, I get a **segmentation fault**. I have no idea why this is.
Just to avoid the "just use C" comments, here is why I want to use Python:
There are a couple of reasons I want to switch from C to Python, but the main one is that we are about to start writing a raytracer and I would like to use classes to make it easier on me. Since I hate classes in C++ and structs in C seem a little crazy, I thought I would give Python a try. I have also been looking for a reason to use Python again, as it has been a while.
Thanks for any help.
|
You may also want to consider using [Pyglet](http://www.pyglet.org/) instead of PyOpenGL. It's a ctypes-wrapper around the native OpenGL libs on the local platform, along with windowing support (should handle most of the stuff you want to use GLUT for.) The [pyglet-users](http://groups.google.com/group/pyglet-users) list is pretty active and very helpful.
|
Well, I don't know if these are the libs the original poster is using, but I saw identical issues in a pet project I'm working on (a graphics engine using C++ and Python) with PyOpenGL.
PyOpenGL didn't correctly pick up the rendering context if it was created after the python script had been loaded (I was loading the script first, then calling Python methods in it from my C++ code).
The problem doesn't appear if you initialize the display and create the OpenGL rendering context before loading the Python script.
|
OpenGl with Python
|
[
"",
"python",
"opengl",
"fedora",
""
] |
I have over the course of a few projects developed a pattern for creating immutable (readonly) objects and immutable object graphs. Immutable objects carry the benefit of being 100% thread safe and can therefore be reused across threads. In my work I very often use this pattern in Web applications for configuration settings and other objects that I load and cache in memory. Cached objects should always be immutable as you want to guarantee they are not unexpectedly changed.
Now, you can of course easily design immutable objects as in the following example:
```
public class SampleElement
{
private Guid id;
private string name;
public SampleElement(Guid id, string name)
{
this.id = id;
this.name = name;
}
public Guid Id
{
get { return id; }
}
public string Name
{
get { return name; }
}
}
```
This is fine for simple classes - but for more complex classes I do not fancy the concept of passing all values through a constructor. Having setters on the properties is more desirable, and the code constructing a new object becomes easier to read.
So how do you create immutable objects with setters?
Well, in my pattern objects start out as being fully mutable until you freeze them with a single method call. Once an object is frozen it will stay immutable forever - it cannot be turned into a mutable object again. If you need a mutable version of the object, you simply clone it.
Ok, now on to some code. I have in the following code snippets tried to boil the pattern down to its simplest form. The IElement is the base interface that all immutable objects must ultimately implement.
```
public interface IElement : ICloneable
{
bool IsReadOnly { get; }
void MakeReadOnly();
}
```
The Element class is the default implementation of the IElement interface:
```
public abstract class Element : IElement
{
private bool immutable;
public bool IsReadOnly
{
get { return immutable; }
}
public virtual void MakeReadOnly()
{
immutable = true;
}
protected virtual void FailIfImmutable()
{
if (immutable) throw new ImmutableElementException(this);
}
...
}
```
Let's refactor the SampleElement class above to implement the immutable object pattern:
```
public class SampleElement : Element
{
private Guid id;
private string name;
public SampleElement() {}
public Guid Id
{
get
{
return id;
}
set
{
FailIfImmutable();
id = value;
}
}
public string Name
{
get
{
return name;
}
set
{
FailIfImmutable();
name = value;
}
}
}
```
You can now change the Id property and the Name property as long as the object has not been marked as immutable by calling the MakeReadOnly() method. Once it is immutable, calling a setter will yield an ImmutableElementException.
Final note:
The full pattern is more complex than the code snippets shown here. It also contains support for collections of immutable objects and for complete graphs of immutable objects. The full pattern enables you to turn an entire object graph immutable by calling the MakeReadOnly() method on the outermost object. Once you start creating larger object models using this pattern, the risk of leaky objects increases. A leaky object is an object that fails to call the FailIfImmutable() method before making a change to itself. To test for leaks I have also developed a generic leak detector class for use in unit tests. It uses reflection to test that all properties and methods throw the ImmutableElementException in the immutable state.
In other words, TDD is used here.
I have grown to like this pattern a lot and find great benefits in it. So what I would like to know is if any of you are using similar patterns? If yes, do you know of any good resources that document it? I am essentially looking for potential improvements and for any standards that might already exist on this topic.
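The leak-detector idea described above translates to other languages too; here is a rough Python sketch (the names are illustrative, the original pattern is C#) of freezing an object and then using introspection to verify that every public attribute refuses mutation:

```python
class ImmutableElementError(Exception):
    """Raised when a frozen element is mutated."""

class Element:
    _frozen = False  # class-level default; flipped per instance on freeze

    def make_read_only(self):
        object.__setattr__(self, "_frozen", True)

    def __setattr__(self, name, value):
        if self._frozen:
            raise ImmutableElementError(name)
        object.__setattr__(self, name, value)

class SampleElement(Element):
    def __init__(self, id=None, name=None):
        self.id = id
        self.name = name

def detect_leaks(element):
    """Freeze the element, then return any attributes that can still be set."""
    element.make_read_only()
    leaks = []
    for attr in list(vars(element)):
        if attr.startswith("_"):
            continue
        try:
            setattr(element, attr, object())
            leaks.append(attr)  # no exception raised: the attribute leaked
        except ImmutableElementError:
            pass
    return leaks
```

A unit test would then simply assert that `detect_leaks(...)` comes back empty for every element type.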
|
For info, the second approach is called "popsicle immutability".
Eric Lippert has a series of blog entries on immutability starting [here](https://learn.microsoft.com/en-us/archive/blogs/ericlippert/immutability-in-c-part-one-kinds-of-immutability). I'm still getting to grips with the CTP (C# 4.0), but it looks interesting what optional / named parameters (to the .ctor) might do here (when mapped to readonly fields)...
[update: I've blogged on this [here](https://blog.marcgravell.com/2008/11/immutability-and-optional-parameters.html)]
For info, I probably wouldn't make those methods `virtual` - we probably don't want subclasses being able to make it non-freezable. If you want them to be able to add extra code, I'd suggest something like:
```
[public|protected] void Freeze()
{
if(!frozen)
{
frozen = true;
OnFrozen();
}
}
protected virtual void OnFrozen() {} // subclass can add code here.
```
Also - AOP (such as PostSharp) might be a viable option for adding all those ThrowIfFrozen() checks.
(apologies if I have changed terminology / method names - SO doesn't keep the original post visible when composing replies)
|
Another option would be to create some kind of Builder class.
For example, in Java (and C# and many other languages) String is immutable. If you want to perform multiple operations to build up a String, you use a StringBuilder. This is mutable, and once you're done it returns to you the final String object. From then on that String is immutable.
You could do something similar for your other classes: have your immutable Element, and then an ElementBuilder. All the builder would do is store the options you set; when you finalize it, it constructs and returns the immutable Element.
It's a little more code, but I think it's cleaner than having setters on a class that's supposed to be immutable.
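A minimal sketch of that builder idea in Python (the class names are illustrative, not taken from the original pattern):

```python
class Element:
    """Immutable once constructed: options are fixed in __init__."""
    def __init__(self, **options):
        object.__setattr__(self, "_options", dict(options))

    def __getattr__(self, name):
        try:
            return self._options[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        raise AttributeError("Element is immutable")

class ElementBuilder:
    """Mutable staging area; build() hands back a frozen Element."""
    def __init__(self):
        self._options = {}

    def set(self, name, value):
        self._options[name] = value
        return self  # allow chaining, StringBuilder-style

    def build(self):
        return Element(**self._options)
```

Usage would look like `ElementBuilder().set("name", "x").build()`, after which the returned object rejects any assignment.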
|
Immutable object pattern in C# - what do you think?
|
[
"",
"c#",
"functional-programming",
"design-patterns",
"immutability",
""
] |
I'm writing an implementation of a virtual machine in C#, and I need to implement the VM's stack, which can contain two types of entry - return entries or backtrack entries. What is the best way of implementing this?
I'm currently using a base type, as follows:
```
class StackEntry { }
class Return : StackEntry { uint pc; }
class Backtrack : StackEntry { uint pc; object backtrack; }
Stack<StackEntry> stack;
```
This works OK, but the resulting type testing and downcasting feels clumsy.
Is there a better way of handling this type of construction?
|
I'm having a hard time imagining how you're going to use this, but the basic answer is that you use a single type with a default operation for post-pop processing:
```
class StackEntry { protected virtual void PostPop() { } }
class Return : StackEntry { protected override void PostPop() { /* ... */ } }
class Backtrack : StackEntry { protected override void PostPop() { /* ... */ } }
```
Does that make sense?
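The same single-type-with-a-virtual-hook shape can be sketched in Python (the state handling here is invented for illustration):

```python
class StackEntry:
    def __init__(self, pc):
        self.pc = pc

    def post_pop(self, vm_state):
        pass  # default: nothing extra happens after popping

class Return(StackEntry):
    def post_pop(self, vm_state):
        vm_state["pc"] = self.pc  # jump back to the saved address

class Backtrack(StackEntry):
    def __init__(self, pc, snapshot):
        super().__init__(pc)
        self.snapshot = snapshot

    def post_pop(self, vm_state):
        vm_state["pc"] = self.pc
        vm_state["data"] = self.snapshot  # restore the saved state
```

The VM just pops an entry and calls `entry.post_pop(state)`, with no isinstance checks or downcasts.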
|
What's wrong with putting the Backtrack object in anyway and having it be null if there is no backtrack? You can add a helpful property like `bool IsBacktrack { get { return _backTrack != null; } }`.
Can the backtrack be validly null? If yes, then use a bool flag for it.
|
What is the best way of implementing a stack of more than one type of object in C#?
|
[
"",
"c#",
"oop",
""
] |
My new boss went to a talk where Deliverance, a kind of proxy that lets you apply a skin to any HTML output on the fly, was presented. He decided to use it right after that, no matter how young the project is.
More here: <http://www.openplans.org/projects/deliverance/introduction>
In theory, the system sounds great when you want a newbie to tweak your Plone theme without having to teach him all the complex mechanisms behind Zope products, and to apply the same theme to a Drupal web site in one go.
But I don't believe in theory, and would like to know if anybody has tried this out in the real world :-)
|
Having used Plone professionally for the last 4 years or so, and Deliverance on 4 commercial sites, I would advise all new front end developers (and old hands alike) to use Deliverance to theme Plone sites.
It is *much* easier to learn (a couple of weeks Vs couple of months) and potentially much more powerful than the old, confused, methods - few of which you will still need (and even then at a much later point in the life of the site).
Not only that, but it uses XPath and CSS selectors and can be used on non-Plone sites, so the time invested is easily transferable.
|
Note, plone.org uses xdv, a version of deliverance that compiles down to xslt. The simplest way to try it is with <http://pypi.python.org/pypi/collective.xdv> though plone.org runs the xslt in a (patched) Nginx.
|
Any experience with the Deliverance system?
|
[
"",
"python",
"html",
"deliverance",
""
] |
I took a wsp file, and did my **stsadm -o addsolution** like usual. Then I went into *central administration->solution management* and it showed up just fine. Then I deployed the web part, no problems so far.
The problem is when I go to add it to the web part gallery (*Web Part Gallery: New Web Parts*). Usually the web part is in the list; I check the box next to it and click **Populate Gallery**, but this time it is not showing up in the list. Could I be missing something in my manifest.xml that would cause this? I just wrote and deployed another web part this *exact* same way and it went fine. Also, I wrote a dummy web part that does nothing but print "working" and tried it with that, getting the same results.
Any ideas?
|
wow... turns out that all I was missing was a 'public' declaration on my class!?!
I feel like an idiot. But also, I did have to manually delete it to get it recognized. Thanks everyone!
|
Check that the .webpart file deployed to the wpcatalog folder of your web site. Depending on what directory was specified when provisioning the web application, you should find it in a location similar to this:
c:\Inetpub\wwwroot\wss\VirtualDirectories\80\wpcatalog
|
Deployed Web Part not showing up in 'Web Part Gallery: New Web Parts'
|
[
"",
"c#",
"sharepoint",
"moss",
"web-parts",
""
] |
Is there a way to colorize parts of logs in the Eclipse console? I know I could send output to the error and standard streams and color them differently, but I'm looking for something along the lines of ANSI escape codes (or any other scheme, HTML?) where I could embed the colors in the string to have it colored in the logs.
It sure would help make the important bits stand out without resorting to weird layouts, keeping the layout concerns in the log4j setup instead.
Here is an example of what I am looking for:
[INFO ] The grid is complete ....... **false**
where the bold parts would be in blue. This coloring can be controlled by the application to an extent, like so (the tags are conceptual and arbitrary, but you get the idea):
log.info(String.format("The grid is complete ....... `<blue>`%s`</blue>`", isComplete ));
---
On a more general note, it is the ability to embed meta information in the logs to help the presentation of those logs, much like we tag web page content so that CSS can handle the presentation of the information.
|
Have a try with this Eclipse Plugin: [Grep Console](http://marketplace.eclipse.org/content/grep-console)
**[Update]**:
As pointed out by commenters:
When installing Grep Console in the currently last version of Eclipse, you need to uncheck 'Group items by category' in the Install dialog to see the available items.
**[Update 2]**:
As pointed out by [@azdev](https://stackoverflow.com/questions/233790/colorize-logs-in-eclipse-console#comment9125576_1373290), to get proper highlighting:
> Entering just literal strings doesn't work. To get a line to be colored, you have to enclose the string in
> `.*`
> on either side, like so:
> `.*ERROR.*`
|
Actually the [ANSI Console plugin](https://github.com/mihnita/ansi-econsole) adds ANSI escape code support to Eclipse console.
At present it does have a limitation though, whereby escape codes that span multiple lines leak incorrectly to other lines when scrolling, see [issue #3](https://github.com/mihnita/ansi-econsole/issues/3).
Otherwise some terminal view plugin as explained in [this other question](https://stackoverflow.com/questions/1562600/is-there-an-eclipse-plugin-to-run-system-shell-in-the-console) might be an option for some.
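For reference, the escape codes such plugins interpret are ordinary strings you embed in the log output; a Python sketch (the codes are standard ANSI SGR sequences, the helper name is made up):

```python
BLUE = "\x1b[34m"   # ANSI SGR: blue foreground
BOLD = "\x1b[1m"    # ANSI SGR: bold
RESET = "\x1b[0m"   # ANSI SGR: reset all attributes

def emphasize(text):
    """Wrap a fragment in bold-blue codes and reset afterwards."""
    return f"{BOLD}{BLUE}{text}{RESET}"

line = "[INFO ] The grid is complete ....... " + emphasize("false")
```

Any console that understands ANSI codes will render the wrapped fragment in bold blue; consoles that don't will show the raw escapes.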
|
Colorize logs in eclipse console
|
[
"",
"java",
"eclipse",
"console",
"escaping",
"metadata",
""
] |
I'm looking to set up a lightweight, developer-only web stack on Windows (and possibly OS X). Ideally, I'd be working with the Zend framework and MySQL, but I'm open to other APIs that facilitate creating RESTful (or pseudo-RESTful) web services.
I've seen some tools, like QuickPHP, but it might have been too lightweight as I couldn't get everything working that I wanted.
I'm not opposed to installing Apache and all that, but was just curious if there are other tools I'm not seeing that would get me up to speed quickly.
Keep in mind that this is for local development only.
Thank you.
|
The thing is, you want your development environment to behave the same way as your production environment, so I would suggest installing whatever you are going to deploy to. I run a LAMP stack on my server, so I run [WAMP](http://www.wampserver.com/en/) on Windows for development. It is very easy to install and I don't notice it slowing down my laptop. You can also look at [XAMPP](http://www.apachefriends.org/en/xampp.html), but after trying both, I prefer WAMP.
|
For my local OSX development I've used [MAMP](http://www.mamp.info/en/download.html). I highly recommend it.
For Windows I'm sure you already know about a [WAMP](http://en.wikipedia.org/wiki/Comparison_of_WAMPs) and I haven't used anything else.
|
Looking for lightweight PHP stack for development on Windows
|
[
"",
"php",
"windows",
"rest",
""
] |
As many do I have a config.php file in the root of a web app that I want to include in almost every other php file. So most of them have a line like:
```
require_once("config.php");
```
or sometimes
```
require_once("../config.php");
```
or even
```
require_once("../../config.php");
```
But I never get it right the first time. I can't figure out what PHP is going to consider the current working directory when reading one of these files. It is apparently not the directory of the file containing the require\_once() call, because I can have two files in the same directory that need different paths for config.php.
Now I have a situation where one path is correct when refreshing the page, but an ajax call that updates part of the page requires a different path to config.php in the require\_once() statement.
What's the secret? From where is that path evaluated?
Shoot, I was afraid this wouldn't be a common problem. This is occurring under Apache 2.2.8 and PHP 5.2.6 running on Windows.
|
The current working directory for PHP is the directory in which the called script file is located. If your files looked like this:
```
/A
foo.php
tar.php
B/
bar.php
```
If you call foo.php (ex: <http://example.com/foo.php>), the working directory will be /A/. If you call bar.php (ex: <http://example.com/B/bar.php>), the working directory will be /A/B/.
This is where it gets tricky. Let us say that foo.php is such:
```
<?php
require_once 'B/bar.php';
?>
```
And bar.php is:
```
<?php
require_once 'tar.php';
?>
```
If we call foo.php, then bar.php will successfully call tar.php because tar.php and foo.php are in the same directory which happens to be the working directory. If you instead call bar.php, it will fail.
Generally you will see either in all files:
```
require_once realpath( dirname( __FILE__ ) ).'/../../path/to/file.php';
```
or with the config file:
```
// config file
define( "APP_ROOT", realpath( dirname( __FILE__ ) ).'/' );
```
with the rest of the files using:
```
require_once APP_ROOT.'path/to/file.php';
```
|
I like to do this:
```
require_once(dirname(__FILE__)."/../_include/header.inc");
```
That way your paths can always be relative to the current file location.
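The same trick exists outside PHP; for instance, a Python analog of `dirname(__FILE__)` that anchors paths to the script's own location rather than the working directory:

```python
import os

# The directory containing this file, regardless of the process's cwd.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

def project_path(*parts):
    """Build an absolute path relative to this file's location."""
    return os.path.join(BASE_DIR, *parts)
```

However the script is reached, `project_path("config.php")` resolves to the same file.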
|
How do you know the correct path to use in a PHP require_once() statement
|
[
"",
"php",
""
] |
I think most C++ programmers here would agree that polluting the global namespace is a bad idea, but are there times when this rule can be ignored?
For example, I have a type that I need to use all over a particular application - should I define it thus:
```
mytypes.h
typedef int MY_TYPE;
foo.cpp
MY_TYPE myType;
```
Or use a namespace:
```
mytypes.h
namespace ns {
typedef int MY_TYPE;
}
foo.cpp
ns::MY_TYPE myType;
...
using namespace ns;
MY_TYPE myType;
```
Which do you prefer? Are there times when it is acceptable to use the first method?
|
I use namespaces for partitioning library code from application-specific code, and in a big project to partition the various modules that make up the project.
The global namespace is thus useful for application-specific types and functions that are used across multiple modules in the application.
So, if your `MY_TYPE` is used throughout your application, put it in the global namespace, otherwise put it in a named namespace.
|
You can define your type in a separate namespace, and use
```
using ns::MY_TYPE;
```
|
Polluting the global namespace
|
[
"",
"c++",
"namespaces",
"typedef",
""
] |
When using XmlDocument.Load , I am finding that if the document refers to a DTD, a connection is made to the provided URI. Is there any way to prevent this from happening?
|
After some more digging, maybe you should set the [XmlResolver](http://msdn.microsoft.com/en-us/library/system.xml.xmlreadersettings.xmlresolver.aspx) property of the XmlReaderSettings object to null.
> 'The XmlResolver is used to locate and
> open an XML instance document, or to
> locate and open any external resources
> referenced by the XML instance
> document. This can include entities,
> DTD, or schemas.'
So the code would look like this:
```
XmlReaderSettings settings = new XmlReaderSettings();
settings.XmlResolver = null;
settings.DtdProcessing = DtdProcessing.Parse;
XmlDocument doc = new XmlDocument();
using (StringReader sr = new StringReader(xml))
using (XmlReader reader = XmlReader.Create(sr, settings))
{
doc.Load(reader);
}
```
|
The document being loaded HAS a DTD.
With:
```
settings.ProhibitDtd = true;
```
I see the following exception:
> Service cannot be started. System.Xml.XmlException: For security reasons DTD is prohibited in this XML document. To enable DTD processing set the ProhibitDtd property on XmlReaderSettings to false and pass the settings into XmlReader.Create method.
So, it looks like ProhibitDtd MUST be set to **false** in this instance.
It looked like ValidationType would do the trick, but with:
```
settings.ValidationType = ValidationType.None;
```
I'm still seeing a connection to the DTD uri.
|
Prevent DTD download when parsing XML
|
[
"",
"c#",
".net",
"xml",
""
] |
I have a bunch of JUnit 3 classes which extend TestCase and would like to automatically migrate them to be JUnit4 tests with annotations such as `@Before`, `@After`, `@Test`, etc.
Any tool out there to do this in a big batch run?
|
In my opinion, it cannot be that hard. So let's try it:
## 0. Imports
You need to import three annotations:
```
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
```
After you've made the next few changes, you won't need `import junit.framework.TestCase;`.
## 1. Annotate `test*` Methods
All methods beginning with `public void test` must be preceded by the `@Test` annotation.
This task is easy with a regex.
## 2. Annotate SetUp and TearDown methods
Eclipse generates the following `setUp()` method:
```
@Override
protected void setUp() throws Exception { }
```
Must be replaced by:
```
@Before
public void setUp() throws Exception { }
```
Same for `tearDown()`:
```
@Override
protected void tearDown() throws Exception { }
```
replaced by
```
@After
public void tearDown() throws Exception { }
```
## 3. Get rid of `extends TestCase`
Remove exactly one occurrence per file of the string
```
" extends TestCase"
```
## 4. Remove main methods?
It's probably necessary to remove/refactor existing main methods that execute the tests.
## 5. Convert `suite()` method to `@RunWith(Suite.class)`
According to saua's comment, there must be a conversion of the `suite()` method. Thanks, saua!
```
@RunWith(Suite.class)
@Suite.SuiteClasses({
TestDog.class,
TestCat.class,
TestAardvark.class
})
```
## Conclusion
I think it can be done fairly easily via a set of regular expressions, even if it will kill my brain ;)
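As a sketch of the approach (the patterns here are deliberately simplified versions of the steps above, not the full set), a small Python script could drive the batch run:

```python
import re

# Ordered (pattern, replacement) pairs; as with the full recipe, order matters.
RULES = [
    # 1. put @Test in front of every public void test* method
    (re.compile(r"^([ \t]*)(public void test)", re.M), r"\1@Test\n\1\2"),
    # 2. turn JUnit 3 setUp/tearDown overrides into annotated public methods
    (re.compile(r"protected void setUp\(\)"), "@Before\npublic void setUp()"),
    (re.compile(r"protected void tearDown\(\)"), "@After\npublic void tearDown()"),
    # 3. drop the TestCase superclass
    (re.compile(r"\s+extends\s+TestCase"), ""),
]

def migrate(source):
    """Apply each migration rule in order to one file's source text."""
    for pattern, replacement in RULES:
        source = pattern.sub(replacement, source)
    return source
```

A real run would walk the test tree, call `migrate()` on each `*Test.java`, and leave the import rewriting and `suite()` conversion for a manual pass.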
|
Here are the actual regular expressions I used to execute furtelwart's suggestions:
```
// Add @Test
Replace:
^[ \t]+(public +void +test)
With:
@Test\n $1
Regular Expression: on
Case sensitive: on
File name filter:
*Test.java
// Remove double @Test's on already @Test annotated files
Replace:
^[ \t]+@Test\n[ \t]+@Test
With:
@Test
Regular Expression: on
Case sensitive: on
File name filter:
*Test.java
// Remove all empty setUp's
Replace:
^[ \*]+((public|protected) +)?void +setUp\(\)[^\{]*\{\s*(super\.setUp\(\);)?\s*\}\n([ \t]*\n)?
With nothing
Regular Expression: on
Case sensitive: on
File name filter:
*Test.java
// Add @Before to all setUp's
Replace:
^([ \t]+@Override\n)?[ \t]+((public|protected) +)?(void +setUp\(\))
With:
@Before\n public void setUp()
Regular Expression: on
Case sensitive: on
File name filter:
*Test.java
// Remove double @Before's on already @Before annotated files
Replace:
^[ \t]+@Before\n[ \t]+@Before
With:
@Before
Regular Expression: on
Case sensitive: on
File name filter:
*Test.java
// Remove all empty tearDown's
Replace:
^[ \*]+((public|protected) +)?void +tearDown\(\)[^\{]*\{\s*(super\.tearDown\(\);)?\s*\}\n([ \t]*\n)?
With nothing
Regular Expression: on
Case sensitive: on
File name filter:
*Test.java
// Add @After to all tearDown's
Replace:
^([ \t]+@Override\n)?[ \t]+((public|protected) +)?(void +tearDown\(\))
With:
@After\n public void tearDown()
Regular Expression: on
Case sensitive: on
File name filter:
*Test.java
// Remove double @After's on already @After annotated files
Replace:
^[ \t]+@After\n[ \t]+@After
With:
@After
Regular Expression: on
Case sensitive: on
File name filter:
*Test.java
// Remove old imports, add new imports
Replace:
^([ \t]*import[ \t]+junit\.framework\.Assert;\n)?[ \t]*import[ \t]+junit\.framework\.TestCase;
With:
import org.junit.After;\nimport org.junit.Before;\nimport org.junit.Test;\nimport static org.junit.Assert.*;
Regular Expression: on
Case sensitive: on
File name filter:
*Test.java
// Remove all extends TestCase
Replace:
[ \t]+extends[ \t]+TestCase[ \t]+\{
With:
{
Regular Expression: on
Case sensitive: on
File name filter:
*Test.java
// Look for import junit.framework;
Find:
import junit\.framework
Manually fix
Regular Expression: on
Case sensitive: on
// Look for ignored tests (FIXME, disabled, ...)
Find:
public[ \t]+void[ \t]+\w+test
Manually fix
Regular Expression: on
Case sensitive: on
// Look for dummy/empty tests
Find:
public[ \t]+void[ \t]+test[\w\d]*\(\s*\)\s*\{\s*(//[^\n]*)?\s*\}
Manually fix
Regular Expression: on
Case sensitive: on
```
Note: it's important to do them in the order shown above.
|
Best way to automagically migrate tests from JUnit 3 to JUnit 4?
|
[
"",
"java",
"junit",
"migration",
""
] |
We have a Java program that runs as **root** on Unix and can therefore read, for example, the content of the folders `/home/user1` and `/home/user2`. However, if the Unix user "user1" is logged in to our application, he should not be able to access the `/home/user2` data.
We would like to use directly the Unix rights and not recreate all the permissions in our application !
So, could we...
1. try to change the UID of our program depending on the user logged in? Sounds difficult, and each file access happens in a different thread, so the UID would need to differ on each thread of our program...
2. use JNI to read the permissions of `/home/user2`... and then determine if user1 has sufficient permissions on `/home/user2`? (how?)
|
**Use SecurityManager!**
1. Put current unix user id into ThreadLocal
2. Create your own SecurityManager that checks unix user permissions on checkRead() and checkWrite()
3. System.setSecurityManager(new MySecurityManager())
4. Enjoy
**Update**
There is, of course, no standard library to read unix file permissions. It's not WORA.
But I have tried briefly to find a ready to use library, and found this one:
<http://jan.newmarch.name/java/posix/> It uses JNI, but you don't need to write your own JNI code, which is a big relief. :) I'm sure there must also be others.
Class Stat from there gives you all required access information:
<http://jan.newmarch.name/java/posix/posix.Stat.html>
**Update 2**
As folks mentioned, this approach fails to check for "non-standard" unix security features, such as ACLs or POSIX capabilities (maybe; not sure if they apply to files). But if the goal is to be totally in sync with host OS security, then we need SecurityManager all the more, because it's a JVM-wide protection mechanism! Yes, we can start a child SUID process to verify the permissions (and keep it running while the user is logged in, talking to it via a pipe), **but we need to do so from SecurityManager**!
|
The simplest and most portable way would be to spawn a child process, have it exec a wrapper written in C which changes the UID, drops all the privileges (be careful, writing a wrapper to do that is tricky - it is as hard as writing a setuid wrapper), and execs another java instance to which you talk via RMI. That java instance would do all the filesystem manipulation on behalf of the user.
For single-threaded Linux programs, you could instead use `setfsuid()`/`setfsgid()`, but that is not an option for portable or multithreaded programs.
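To illustrate what checking the classic mode bits involves (deliberately ignoring ACLs, capabilities and supplementary groups, exactly the gaps mentioned above), here is a rough Python sketch:

```python
import os
import stat

def can_read(path, uid, gid):
    """Check the traditional owner/group/other read bits for a uid/gid pair.
    Ignores ACLs, POSIX capabilities and supplementary groups."""
    st = os.stat(path)
    if st.st_uid == uid:
        return bool(st.st_mode & stat.S_IRUSR)
    if st.st_gid == gid:
        return bool(st.st_mode & stat.S_IRGRP)
    return bool(st.st_mode & stat.S_IROTH)
```

The JNI-based Stat class linked above exposes the same st_mode/st_uid/st_gid fields, so an equivalent Java check would follow the same shape.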
|
In a Java thread running as root, how can we apply Unix rights specific to a logged-in user?
|
[
"",
"java",
"unix",
"permissions",
""
] |
Some background info;
* LanguageResource is the base class
* LanguageTranslatorResource and LanguageEditorResource inherit from LanguageResource
* LanguageEditorResource defines an IsDirty property
* LanguageResourceCollection is a collection of LanguageResource
* LanguageResourceCollection internally holds LanguageResources in `Dictionary<string, LanguageResource> _dict`
* LanguageResourceCollection.GetEnumerator() returns `_dict.Values.GetEnumerator()`
I have a LanguageResourceCollection \_resources that contains only LanguageEditorResource objects and want to use LINQ to enumerate those that are dirty so I have tried the following. My specific questions are in bold.
1. `_resources.Where(r => (r as LanguageEditorResource).IsDirty)`
neither `Where` nor other LINQ methods are displayed by IntelliSense, but I code it anyway and am told "LanguageResourceCollection does not contain a definition for 'Where' and no extension method...".
**Why does the way that LanguageResourceCollection implements IEnumerable preclude it from supporting LINQ?**
2. If I change the query to
`(_resources as IEnumerable<LanguageEditorResource>).Where(r => r.IsDirty)`
Intellisense displays the LINQ methods and the solution compiles. But at runtime I get an ArgumentNullException "Value cannot be null. Parameter name: source".
**Is this a problem in my LINQ code?
Is it a problem with the general design of the classes?
How can I dig into what LINQ generates to try and see what the problem is?**
My aim with this question is not to get a solution for the specific problem, as I will have to solve it now using other (non LINQ) means, but rather to try and improve my understanding of LINQ and learn how I can improve the design of my classes to work better with LINQ.
|
It sounds like your collection implements `IEnumerable`, not `IEnumerable<T>`, hence you need:
```
_resources.Cast<LanguageEditorResource>().Where(r => r.IsDirty)
```
Note that `Enumerable.Where` is defined on `IEnumerable<T>`, not `IEnumerable` - if you have the non-generic type, you need to use `Cast<T>` (or `OfType<T>`) to get the right type. The difference being that `Cast<T>` will throw an exception if it finds something that isn't a `T`, where-as `OfType<T>` simply ignores anything that isn't a `T`. Since you've stated that your collection *only* contains `LanguageEditorResource`, it is reasonable to check that assumption using `Cast<T>`, rather than silently drop data.
Check also that you have "using System.Linq" (and are referencing System.Core (.NET 3.5; else LINQBridge with .NET 2.0) to get the `Where` extension method(s).
Actually, it would be worth having your collection implement `IEnumerable<LanguageResource>` - which you could do quite simply using either the `Cast<T>` method, or an iterator block (`yield return`).
[edit]
To build on Richard Poole's note - you could write your *own* generic container here, presumably with `T : LanguageResource` (and using that `T` in the `Dictionary<string,T>`, and implementing `IEnumerable<T>` or `ICollection<T>`). Just a thought.
|
> How can I dig into what LINQ generates to try and see what the problem is?
Linq isn't generating anything here. You can step through with the debugger.
> to try and improve my understanding of LINQ and learn how I can improve the design of my classes to work better with LINQ.
System.Linq.Enumerable methods rely heavily on the IEnumerable<T> contract. You need to understand how your class can produce targets that support this contract. The type that T represents is important!
You could add this method to LanguageResourceCollection:
```
public IEnumerable<T> ParticularResources<T>()
{
return _dict.Values.OfType<T>();
}
```
and call it by:
```
_resources
.ParticularResources<LanguageEditorResource>()
.Where(r => r.IsDirty)
```
This example would make more sense if the collection class didn't implement IEnumerable<T> against that same \_dict.Values. The point is to understand IEnumerable<T> and generic typing.
|
Having some confusion with LINQ
|
[
"",
"c#",
".net",
"linq",
"linq-to-objects",
""
] |
I am just getting started with expression trees so I hope this makes sense. I am trying to create an expression tree to represent:
```
t => t.SomeProperty.Contains("stringValue");
```
So far I have got:
```
private static Expression.Lambda<Func<string, bool>> GetContainsExpression<T>(string propertyName, string propertyValue)
{
var parameterExp = Expression.Parameter(typeof(T), "type");
var propertyExp = Expression.Property(parameter, propertyName);
var containsMethodExp = Expression.*SomeMemberReferenceFunction*("Contains", propertyExp) //this is where I got lost, obviously :)
...
return Expression.Lambda<Func<string, bool>>(containsMethodExp, parameterExp); //then something like this
}
```
I just don't know how to reference the String.Contains() method.
Help appreciated.
|
Something like:
```
class Foo
{
public string Bar { get; set; }
}
static void Main()
{
var lambda = GetExpression<Foo>("Bar", "abc");
Foo foo = new Foo { Bar = "aabca" };
bool test = lambda.Compile()(foo);
}
static Expression<Func<T, bool>> GetExpression<T>(string propertyName, string propertyValue)
{
var parameterExp = Expression.Parameter(typeof(T), "type");
var propertyExp = Expression.Property(parameterExp, propertyName);
MethodInfo method = typeof(string).GetMethod("Contains", new[] { typeof(string) });
var someValue = Expression.Constant(propertyValue, typeof(string));
var containsMethodExp = Expression.Call(propertyExp, method, someValue);
return Expression.Lambda<Func<T, bool>>(containsMethodExp, parameterExp);
}
```
You might find [this](http://marcgravell.blogspot.com/2008/10/express-yourself.html) helpful.
|
To perform a search like:
```
ef.Entities.Where(entity => arr.Contains(entity.Name)).ToArray();
```
which the trace string will be:
```
SELECT .... From Entities ... Where Name In ("abc", "def", "qaz")
```
I use the method I created below:
```
ef.Entities.Where(ContainsPredicate<Entity, string>(arr, "Name")).ToArray();
public Expression<Func<TEntity, bool>> ContainsPredicate<TEntity, T>(T[] arr, string fieldname) where TEntity : class {
ParameterExpression entity = Expression.Parameter(typeof(TEntity), "entity");
MemberExpression member = Expression.Property(entity, fieldname);
var containsMethods = typeof(Enumerable).GetMethods(BindingFlags.Static | BindingFlags.Public)
.Where(m => m.Name == "Contains");
MethodInfo method = null;
foreach (var m in containsMethods) {
if (m.GetParameters().Count() == 2) {
method = m;
break;
}
}
method = method.MakeGenericMethod(member.Type);
var exprContains = Expression.Call(method, new Expression[] { Expression.Constant(arr), member });
return Expression.Lambda<Func<TEntity, bool>>(exprContains, entity);
}
```
|
How do I create an expression tree to represent 'String.Contains("term")' in C#?
|
[
"",
"c#",
".net",
"lambda",
"expression-trees",
""
] |
Have you ever tried this before?
```
static void Main(string[] args)
{
int x = 10;
x = x++;
Console.WriteLine(x);
}
```
Output: 10.
but for
```
static void Main(string[] args)
{
int x = 10;
x++;
Console.WriteLine(x);
}
```
Output: 11.
Could anyone explain why this happens?
|
X++ will increment the value, but then return its old value.
So in this case:
```
static void Main(string[] args)
{
int x = 10;
x = x++;
Console.WriteLine(x);
}
```
You have X at 11 just for a moment, then it gets back to 10 because 10 is the return value of (x++).
You could instead do this for the same result:
```
static int plusplus(ref int x)
{
int xOld = x;
x++;
return xOld;
}
static void Main(string[] args)
{
int x = 10;
x = plusplus(ref x);
Console.WriteLine(x);
}
```
It is also worth mentioning that you would get your expected result of 11 if you had done:
```
static void Main(string[] args)
{
int x = 10;
x = ++x;
Console.WriteLine(x);
}
```
|
In the assignment `x = x++` you *first* extract the old value of `x` to use in evaluating the right-hand side expression, in this case 'x'; then, you increment `x` by 1. Last, you assign the results of the expression evaluation (10) to `x` via the assignment statement.
Perhaps an equivalent code would make the predicament clear:
```
var tmp = x;
x++;
x = tmp;
```
This is the equivalent of your `x = x++` code in C#.
|
What's the difference between X = X++; vs X++;?
|
[
"",
"c#",
""
] |
In my current 32-bit application, I check (very occasionally) for overflow by doing operations on 64-bit integers.
However, on 64-bit systems there does not seem to be a standard 128-bit integer. Is there a simple way of checking for overflow, or a way of getting 128-bit integers, which works on all OSes and compilers?
I tried using GMP as a more generic solution, but it is a little heavyweight for my requirements.
Efficiency is not too important, and no processor-specific ASM is required.
|
Much of the discussion in this question applies:
[How to detect integer overflow?](https://stackoverflow.com/questions/199333/best-way-to-detect-integer-overflow-in-cc)
Many of the techniques used for 32-bit overflow checking apply to 64-bits as well (not all of the techniques discussed use the next larger integer type to handle the overflow).
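One of those techniques — range-checking the operands against the type's bounds *before* performing the operation, so no wider intermediate type is needed — can be sketched as follows. Java is used here only to keep the example self-contained and runnable; the comparisons translate one-to-one to C++ with `int64_t` and `std::numeric_limits<int64_t>::max()`/`min()`:

```java
public class OverflowCheck {
    // Returns true if a + b would overflow a signed 64-bit integer.
    // No 128-bit intermediate is needed: check against the bounds first.
    public static boolean addWouldOverflow(long a, long b) {
        if (b > 0) {
            return a > Long.MAX_VALUE - b; // would exceed the upper bound
        } else {
            return a < Long.MIN_VALUE - b; // would exceed the lower bound
        }
    }

    public static void main(String[] args) {
        System.out.println(addWouldOverflow(Long.MAX_VALUE, 1)); // true
        System.out.println(addWouldOverflow(1, 2));              // false
    }
}
```

The same pattern (pre-checking bounds) works for subtraction and, with a little more care around zero and -1, for multiplication and division.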
|
[this document](http://www.fefe.de/intof.html) talks about catching overflows (in C) in detail. I don't know if there are better ways of doing this in C++.
|
Undefined behavior when exceeding 64 bits
|
[
"",
"c++",
"64-bit",
""
] |
I am building a multithreaded system that works like this:
While there are entities:
1. Gets an entity from nHibernate (using the current session)
2. Starts a new thread that will work with this entity\*
When I start this new thread, it is required to have a new Session, because nHibernate is not thread-safe. I create it, but the entity retrieved before doesn't work for this session.
Today I'm resolving this situation by retrieving a new entity from nHibernate, passing the id. But this is expensive, and I'm trying to save some time to achieve my SLA.
Is there any way to link this object to the new session without needing another database call? The other sessions will not be closed; they all stay open until the end of the application.
|
If you're working with detached objects, you will have to reattach them to the session. You can do that if you have the correct Hibernate ids of the objects you're working with, calling a get, and then merging your copy with the one Hibernate just put into session. Make sure you use merge, though, because saveOrUpdate() will not delete any children that are missing from the detached object, just add the new children and save changes to existing children.
|
Besides Evict + Lock you could make use of the second-level cache to reconstruct entities without going to the database. I don't know if it fits your application, but I also think it's possible to pass the session to the other thread as long as the first thread stops making changes to it.
|
How to maintain an object for two nHibernate sessions?
|
[
"",
"c#",
"multithreading",
"nhibernate",
""
] |
What is a `StackOverflowError`, what causes it, and how should I deal with them?
|
Parameters and local variables are allocated on the **stack** (with reference types, the object lives on the **heap** and a variable in the stack references that object on the heap). The stack typically lives at the **upper** end of your address space and as it is used up it heads towards the **bottom** of the address space (i.e. towards zero).
Your process also has a **heap**, which lives at the **bottom** end of your process. As you allocate memory, this heap can grow towards the upper end of your address space. As you can see, there is a potential for the heap to **"collide"** with the stack (a bit like tectonic plates!!!).
The common cause for a stack overflow is a **bad recursive call**. Typically, this is caused when your recursive function doesn't have the correct termination condition, so it ends up calling itself forever. Or when the termination condition is fine, it can be caused by requiring too many recursive calls before fulfilling it.
However, with GUI programming, it's possible to generate **indirect recursion**. For example, your app may be handling paint messages, and, whilst processing them, it may call a function that causes the system to send another paint message. Here you've not explicitly called yourself, but the OS/VM has done it for you.
To deal with them, you'll need to examine your code. If you've got functions that call themselves then check that you've got a terminating condition. If you have, then check that when calling the function you have at least modified one of the arguments, otherwise there'll be no visible change for the recursively called function and the terminating condition is useless. Also mind that your stack space can run out of memory before reaching a valid terminating condition, thus make sure your method can handle input values requiring more recursive calls.
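To make the terminating-condition point concrete, here is a minimal runnable sketch (the names are invented for illustration): a recursion whose argument shrinks toward a base case on every call, with the broken variant shown only as a comment:

```java
public class Countdown {
    // Correct: the argument shrinks on every call, so the
    // base case (n <= 0) is eventually reached.
    public static int sum(int n) {
        if (n <= 0) {
            return 0;          // terminating condition
        }
        return n + sum(n - 1); // argument is modified on each call
    }

    // Broken variant (throws StackOverflowError for any n > 0):
    //   return n + sum(n);   // argument never changes
    // Note: even the correct version can overflow the stack for a
    // very large n, because each call still consumes a stack frame.

    public static void main(String[] args) {
        System.out.println(sum(5)); // 15
    }
}
```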
If you've got no obvious recursive functions then check to see if you're calling any library functions that **indirectly** will cause your function to be called (like the implicit case above).
|
If you have a function like:
```
int foo()
{
    // more stuff
    return foo();
}
```
Then foo() will keep calling itself, getting deeper and deeper, and when the space used to keep track of what functions you're in is filled up, you get the stack overflow error.
|
What is a StackOverflowError?
|
[
"",
"java",
"exception",
"memory-leaks",
"out-of-memory",
"stack-overflow",
""
] |
I'd like to remove all "unchecked" warnings from this general utility method (part of a larger class with a number of similar methods). In a pinch, I can use @SuppressWarnings("unchecked") but I'm wondering if I can use generics properly to avoid the warning.
The method is intended to be allow callers to compare two objects by passing through to compareTo, with the exception that if the object is a strings it does it in a case insensitive manner.
```
public static int compareObject(Comparable o1, Comparable o2)
{
if ((o1 instanceof String) && (o2 instanceof String))
return ((String) o1).toUpperCase().compareTo(((String) o2).toUpperCase());
else
return o1.compareTo(o2);
}
```
This was my first (incorrect) attempt at a solution. The parameters work fine, but the line o1.compareTo(o2) has a compile error "The method compareTo(capture#15-of ?) in the type Comparable is not applicable for the arguments (Comparable".
```
public static int compareObject(Comparable<?> o1, Comparable<?> o2)
{
if ((o1 instanceof String) && (o2 instanceof String))
return ((String) o1).toUpperCase().compareTo(((String) o2).toUpperCase());
else
return o1.compareTo(o2);
}
```
Any suggestions?
|
I just tried this:
```
public static <T extends Comparable> int compareObject(T o1, T o2) {
if ((o1 instanceof String) && (o2 instanceof String))
return ((String) o1).toUpperCase().compareTo(((String) o2).toUpperCase());
else
return o1.compareTo(o2);
}
```
It compiles, but gives a unchecked cast warning on the compareTo() call.
I tried changing it to
```
public static <T extends Comparable<T>> int compareObject(T o1, T o2) {
```
and the String checks failed to compile ("inconvertible types: found: T, required: String"). I think this must be close, though.
---
EDIT: As pointed out in the comments, this is a [bug in javac](https://bugs.java.com/bugdatabase/view_bug?bug_id=6548436). The second form is indeed correct, but will not compile currently. Crazy as it may look, this is the code that works with no warnings:
```
public static <T extends Comparable<T>> int compareObject(T o1, T o2) {
if (((Object) o1 instanceof String) && ((Object) o2 instanceof String))
return ((String) (Object)o1).toUpperCase().compareTo(((String) (Object)o2).toUpperCase());
else
return o1.compareTo(o2);
}
```
As you can see, the only difference is all the redundant casts to `Object`.
|
I hope you are aware that many of the approaches here do change the semantics of the method. With the original method you could compare objects of different types if they allow this, but with
```
public static <T extends Comparable<T>> int compareObject(T o1, T o2)
```
you cannot do this comparison anymore. A variant that allows this would be
```
public static int compareObject2(Comparable<Object> o1, Comparable<Object> o2) {
if (((Object) o1 instanceof String) && ((Object) o2 instanceof String))
return ((String) (Object)o1).toUpperCase().compareTo(((String) (Object)o2).toUpperCase());
else
return o1.compareTo(o2);
}
```
(I inserted the workaround for the mentioned javac bug.) But this does not enhance type safety or anything, so in this case it is probably better to use the more understandable non generic method and live with a `@SuppressWarnings("unchecked")`. There is such a thing as overuse of generics.
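For comparison, here is a self-contained, warning-free version of the bounded-generic form, including the redundant `Object` casts that work around the javac bug mentioned above. `CompareDemo` is an invented name, and the commented-out line shows the mixed-type call that no longer compiles:

```java
public class CompareDemo {
    // Generic version: the compiler guarantees o1 and o2 share one type T,
    // so no unchecked warning is needed on compareTo.
    public static <T extends Comparable<T>> int compare(T o1, T o2) {
        if (((Object) o1 instanceof String) && ((Object) o2 instanceof String)) {
            return ((String) (Object) o1).toUpperCase()
                    .compareTo(((String) (Object) o2).toUpperCase());
        }
        return o1.compareTo(o2);
    }

    public static void main(String[] args) {
        System.out.println(compare("abc", "ABC")); // 0: case-insensitive
        System.out.println(compare(1, 2) < 0);     // true (autoboxed Integers)
        // compare("abc", 1); // does not compile: no common T
    }
}
```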
|
Using generic parameters with static compareObject method
|
[
"",
"java",
"generics",
""
] |
I have a string column in a database table which maps to an Enum in code. In my dbml file when I set the "Type" to `MyTypes.EnumType` I get the following error:
> Error 1 DBML1005: Mapping between DbType 'VarChar(50) NOT NULL' and
> Type 'MyTypes.EnumType' in Column 'EnumCol' of Type 'Table1' is not
> supported.
This question:
[LINQ to SQL strings to enums](https://stackoverflow.com/questions/4939/linq-to-sql-strings-to-enums)
indicates that what I am trying to do is possible, but how is it done?
|
Curious - it should work IIRC; I'll see if I can do a quick example - however, you might want to check that you have the fully-qualified enum name (i.e. including the namespace).
[update] From [here](http://blog.rolpdog.com/2007/07/linq-to-sql-enum-mapping.html) it seems that the RTM version shipped with a bug when resolving the enum. One workaround suggested (on that page) was to add the `global::` prefix. It works fine for me without this workaround, so maybe it is fixed in 3.5 SP1? It also allegedly works fine in 3.5 if you use the unqualified name if the enum is in the same namespace.
[example] Yup, worked fine: with Northwind, I defined an enum for the shipping country:
```
namespace Foo.Bar
{
public enum MyEnum
{
France,
Belgium,
Brazil,
Switzerland
}
}
```
I then edited the dbml to have:
```
<Column Name="ShipCountry" Type="Foo.Bar.MyEnum" DbType="NVarChar(15)" CanBeNull="true" />
```
This generated:
```
private Foo.Bar.MyEnum _ShipCountry;
//...
[Column(Storage="_ShipCountry", DbType="NVarChar(15)", CanBeNull=true)]
public Foo.Bar.MyEnum ShipCountry
{ get {...} set {...} }
```
And finally wrote a query:
```
using (DataClasses1DataContext ctx = new DataClasses1DataContext())
{
var qry = from order in ctx.Orders
where order.ShipCountry == Foo.Bar.MyEnum.Brazil
|| order.ShipCountry == Foo.Bar.MyEnum.Belgium
select order;
foreach (var order in qry.Take(10))
{
Console.WriteLine("{0}, {1}", order.OrderID, order.ShipCountry);
}
}
```
Worked fine; results:
```
10250, Brazil
10252, Belgium
10253, Brazil
10256, Brazil
10261, Brazil
10287, Brazil
10290, Brazil
10291, Brazil
10292, Brazil
10299, Brazil
```
|
I know this has been answered, but I'm still getting this error also. Very weird.
Anyway, I found a solution. You need to *PREPEND* the full namespace of the enum with `global::`. I know it sounds very weird.
Anyway, I didn't figure this out. Some dude called [Matt](http://blog.rolpdog.com/2007/07/linq-to-sql-enum-mapping.html), did. And he posted a bug report on MS Connect and they can't reproduce it so it's not fixed, I guess.
|
Mapping Enum from String
|
[
"",
"c#",
"linq-to-sql",
""
] |
When a script runs under Apache, I insert `$_SERVER['SERVER_NAME']` value into an error reporting e-mail message.
However, if a Web script forks a "worker" job with `nohup php ...`, `$_SERVER['SERVER_NAME']` appears to be empty there. Thus, if an error occurs, it's reported without a host name.
Can I reliably get the host name by means of PHP, without calling Unix `hostname` command?
|
[php\_uname("n")](http://uk.php.net/manual/en/function.php-uname.php)
> (PHP 4 >= 4.0.2, PHP 5)
> php\_uname — Returns information about the
> operating system PHP is running on
>
> php\_uname() returns a description of the operating system PHP is
> running on. This is the same string you see at the very top of the
> phpinfo() output. For the name of just the operating system, consider
> using the PHP\_OS constant, but keep in mind this constant will contain
> the operating system PHP was built on.
>
> On some older UNIX platforms, it may not be able to determine the
> current OS information in which case it will revert to displaying the
> OS PHP was built on. This will only happen if your uname() library
> call either doesn't exist or doesn't work.
|
For [PHP >= 5.3.0 use this](http://www.php.net/manual/en/function.gethostname.php):
`$hostname = gethostname();`
For [PHP < 5.3.0 but >= 4.2.0 use this](http://www.php.net/manual/en/function.php-uname.php):
`$hostname = php_uname('n');`
For PHP < 4.2.0 you can try one of these:
```
$hostname = getenv('HOSTNAME');
$hostname = trim(`hostname`);
$hostname = preg_replace('#^\w+\s+(\w+).*$#', '$1', exec('uname -a'));
```
|
Is there a PHP function or variable giving the local host name?
|
[
"",
"php",
"hostname",
""
] |
We need to write unit tests for a *wxWidgets* application using *Google Test Framework*.
The problem is that *wxWidgets* uses the macro **IMPLEMENT\_APP(MyApp)** to initialize and enter the application main loop. This macro creates several functions including **int main()**. The google test framework also uses macro definitions for each test.
One of the problems is that it is not possible to call the wxWidgets macro from within the test macro, because the first one creates functions. So, we found that we could replace the macro with the following code:
```
wxApp* pApp = new MyApp();
wxApp::SetInstance(pApp);
wxEntry(argc, argv);
```
That's a good replacement, but wxEntry() call enters the original application loop. If we don't call wxEntry() there are still some parts of the application not initialized.
The question is how to initialize everything required for a wxApp to run, without actually running it, so we are able to unit test portions of it?
|
You want to use the function:
```
bool wxEntryStart(int& argc, wxChar **argv)
```
instead of wxEntry. It doesn't call your app's OnInit() or run the main loop.
You can call `wxTheApp->CallOnInit()` to invoke OnInit() when needed in your tests.
You'll need to use
```
void wxEntryCleanup()
```
when you're done.
|
Just been through this myself with 2.8.10. The magic is this:
```
// MyWxApp derives from wxApp
wxApp::SetInstance( new MyWxApp() );
wxEntryStart( argc, argv );
wxTheApp->CallOnInit();
// you can create top-level windows here or in OnInit()
...
// do your testing here
wxTheApp->OnRun();
wxTheApp->OnExit();
wxEntryCleanup();
```
You can just create a wxApp instance rather than deriving your own class using the technique above.
I'm not sure how you expect to do unit testing of your application without entering the mainloop as many wxWidgets components require the delivery of events to function. The usual approach would be to run unit tests after entering the main loop.
|
wxWidgets: How to initialize wxApp without using macros and without entering the main application loop?
|
[
"",
"c++",
"unit-testing",
"wxwidgets",
"googletest",
""
] |
What is the difference between ***anonymous methods*** of C# 2.0 and ***lambda expressions*** of C# 3.0.?
|
[The MSDN page on anonymous methods explains it](http://msdn.microsoft.com/en-us/library/0yw3tz5k.aspx)
> In versions of C# before 2.0, the only
> way to declare a delegate was to use
> named methods. C# 2.0 introduced
> anonymous methods and in C# 3.0 and
> later, lambda expressions supersede
> anonymous methods as the preferred way
> to write inline code. However, the
> information about anonymous methods in
> this topic also applies to lambda
> expressions. There is one case in
> which an anonymous method provides
> functionality not found in lambda
> expressions. Anonymous methods enable
> you to omit the parameter list, and
> this means that an anonymous method
> can be converted to delegates with a
> variety of signatures. This is not
> possible with lambda expressions. For
> more information specifically about
> lambda expressions, see Lambda
> Expressions (C# Programming Guide).
[And regarding lambda expressions](http://msdn.microsoft.com/en-us/library/bb397687.aspx):
> A lambda expression is an anonymous function that can contain expressions and statements, and can be used to create delegates or expression tree types.
> All lambda expressions use the lambda operator =>, which is read as "goes to". The left side of the lambda operator specifies the input parameters (if any) and the right side holds the expression or statement block. The lambda expression x => x \* x is read "x goes to x times x." This expression can be assigned to a delegate type as follows:
|
1. Lambda expressions can be converted to delegates or expression trees (with some restrictions); anonymous methods can only be converted to delegates
2. Lambda expressions allow type inference on parameters:
3. Lambda expressions allow the body to be truncated to just an expression (to return a value) or single statement (in other cases) without braces.
4. Lambda expressions allow the parameter list to be shortened to just the parameter name when the type can be inferred and when there's only a single parameter
5. Anonymous methods allow the parameter list to be omitted entirely when it's not used within the body and it doesn't lead to ambiguity
The last point is the only benefit of anonymous methods over lambdas, I believe. It's useful to create a field-like event with a no-op subscription though:
```
public event EventHandler Click = delegate{};
```
|
What's the difference between anonymous methods (C# 2.0) and lambda expressions (C# 3.0)?
|
[
"",
"c#",
"methods",
"expression",
""
] |
How do I embed a tag within a [url templatetag](http://docs.djangoproject.com/en/dev/ref/templates/builtins/#url "url templatetag") in a django template?
Django 1.0 , Python 2.5.2
In views.py
```
def home_page_view(request):
NUP={"HOMEPAGE": "named-url-pattern-string-for-my-home-page-view"}
variables = RequestContext(request, {'NUP':NUP})
return render_to_response('home_page.html', variables)
```
In home\_page.html, the following
```
NUP.HOMEPAGE = {{ NUP.HOMEPAGE }}
```
is displayed as
```
NUP.HOMEPAGE = named-url-pattern-string-for-my-home-page-view
```
and the following url named pattern works ( as expected ),
```
url template tag for NUP.HOMEPAGE = {% url named-url-pattern-string-for-my-home-page-view %}
```
and is displayed as
```
url template tag for NUP.HOMEPAGE = /myhomepage/
```
but when `{{ NUP.HOMEPAGE }}` is embedded within a `{% url ... %}` as follows
```
url template tag for NUP.HOMEPAGE = {% url {{ NUP.HOMEPAGE }} %}
```
this results in a template syntax error
```
TemplateSyntaxError at /myhomepage/
Could not parse the remainder: '}}' from '}}'
Request Method: GET
Request URL: http://localhost:8000/myhomepage/
Exception Type: TemplateSyntaxError
Exception Value:
Could not parse the remainder: '}}' from '}}'
Exception Location: C:\Python25\Lib\site-packages\django\template\__init__.py in __init__, line 529
Python Executable: C:\Python25\python.exe
Python Version: 2.5.2
```
I was expecting `{% url {{ NUP.HOMEPAGE }} %}` to resolve to `{% url named-url-pattern-string-for-my-home-page-view %}` at runtime and be displayed as `/myhomepage/`.
Are embedded tags not supported in django?
is it possible to write a custom url template tag with embedded tags support to make this work?
`{% url {{ NUP.HOMEPAGE }} %}`
|
Maybe you could try passing the final URL to the template, instead?
Something like this:
```
from django.core.urlresolvers import reverse
def home_page_view(request):
NUP={"HOMEPAGE": reverse('named-url-pattern-string-for-my-home-page-view')}
variables = RequestContext(request, {'NUP':NUP})
return render_to_response('home_page.html', variables)
```
Then in the template, `NUP.HOMEPAGE` should be the URL itself.
|
That's seems way too dynamic. You're supposed to do
```
{% url named-url-pattern-string-for-my-home-page-view %}
```
And leave it at that. Dynamically filling in the name of the URL tag is -- frankly -- a little odd.
If you want to use any of a large number of different URL tags, you'd have to do something like this
```
{% if tagoption1 %}<a href="{% url named-url-1 %}">Text</a>{% endif %}
```
Which seems long-winded because, again, the dynamic thing you're trying to achieve seems a little odd.
If you have something like a "families" or "clusters" of pages, perhaps separate template directories would be a way to manage this better. Each of the clusters of pages can inherit from a base templates and override small things like this navigation feature to keep all of the pages in the cluster looking similar but having one navigation difference for a "local home".
|
How to embed a tag within a url templatetag in a django template?
|
[
"",
"python",
"django",
"url",
"templates",
"templatetag",
""
] |
In any (non-web) .net project, the compiler automatically declares the DEBUG and TRACE constants, so I can use conditional compiling to, for example, handle exceptions differently in debug vs release mode.
For example:
```
#if DEBUG
/* re-throw the exception... */
#else
/* write something in the event log... */
#endif
```
How do I obtain the same behavior in an ASP.net project?
It looks like the system.web/compilation section in the web.config could be what I need, but how do I check it programmatically?
Or am I better off declaring a DEBUG constant myself and comment it out in release builds?
EDIT: I'm on VS 2008
|
Look at [ConfigurationManager.GetSection()](http://msdn.microsoft.com/en-us/library/system.configuration.configurationmanager.getsection.aspx) - this should get you most of the way there. However, I think you're better off just switching between debug and release builds and letting the compiler determine whether the "#if DEBUG" enclosed statements are compiled in.
```
#if DEBUG
/* re-throw the exception... */
#else
/* write something in the event log... */
#endif
```
the above will work just fine, just make sure you have at least two build configurations (right-click the project you're working on and go to "Properties" there's a section in there on Builds) - make sure that one of those builds has the "define DEBUG" checked and the other does not.
|
To add on top of Andrew's answer, you could wrap it in a property as well:
```
public bool IsDebugMode
{
get
{
#if DEBUG
return true;
#else
return false;
#endif
}
}
```
|
How can I check whether I am in a debug or release build in a web app?
|
[
"",
"c#",
"asp.net",
"debugging",
""
] |
Is it possible to modify/write data to an XML file without any server-side proxy(e.g. a php or asp script)? Can this be done via javascript? XSLT?
|
You can load and modify xml in browser, but writing the file back is a different thing.
I don't know of any feasible way of writing data back to a server without some kind of server side mechanism to write the data to disk.
|
Using the XMLHTTPRequest object you can modify an XML document using XSLT. Here's a [sample](http://aspalliance.com/1067_Creating_a_Menu_Using_XSLT_XML_and_JavaScript) article for getting started.
|
Modify XML document inside the browser
|
[
"",
"javascript",
"xml",
""
] |
Here's something I know is probably possible but I've never managed to do:
In VS2005 (C++), while debugging, invoke a function from the code I'm debugging.
This feature is sometimes essential when debugging complex data structures which can't be explored easily using just the normal capabilities of the watch window.
The watch window seems to allow writing function calls, but every time I try it, it gives me one error or another.
`Error: symbol "func" not found`
`Error: argument list does not match function`
`Error: member function not present`
Did anyone ever succeed in making this work properly?
What am I missing here?
**Edit:** clearly, the function called should be a symbol that exists in the current scope the debugger is in.
|
Ok, Here's what I found
CXX0040 means that "`The C expression evaluator does not support implicit conversions involving constructor calls.`"
CXX0047 means that "`Overloaded functions can be called only if there is an exact parameter match or a match that does not require the construction of an object.`"
So combined it means that If I want to call a function none of the arguments should have an implicit conversion and none of the arguments should need a construction.
"implicit conversion" in this context seem to include trivial things like converting `'String'` to `'const String&'`.
"construction" seem to include trivial copy-construction. so passing by value anything that is not a primitive type will result in an error.
**So this basically leaves functions that take only primitive types or pointers.**
I have just tested this theory successfully.
So if you want to be able to call a method from the watch window, add an overload which takes only pointers and primitives and in the watch window pass the arguments appropriately. To pass an object that is not a primitive pass its address.
|
The watch window is limited by the context wherein your current code is, e.g., when your code enters a function and you try to access another function that is hidden from the scope of your current function, it won't work.
If you invoke a function in the watch window, make sure that it is visible and accessible from the current scope.
|
Invoking functions while debugging with Visual Studio 2005?
|
[
"",
"c++",
"visual-studio",
"debugging",
"visual-c++-2005",
"watch",
""
] |
Is there any way in JavaScript to create a "weak reference" to another object? [Here is the wiki page describing what a weak reference is.](http://en.wikipedia.org/wiki/Weak_reference) [Here is another article that describes them in Java.](https://community.oracle.com/blogs/enicholas/2006/05/04/understanding-weak-references) Can anyone think of a way to implement this behavior in JavaScript?
|
**Update: Since July, 2020 some implementations (Chrome, Edge, Firefox and Node.js) has had support for [`WeakRef`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WeakRef)s as defined in the [WeakRefs proposal](https://tc39.es/proposal-weakrefs/#sec-weak-ref-objects), which is a "Stage 3 Draft" as of December 16, 2020.**
There is no language support for weakrefs in JavaScript. You can roll your own using manual reference counting, but not especially smoothly. You can't make a proxy wrapper object, because in JavaScript objects never know when they're about to be garbage-collected.
So your ‘weak reference’ becomes a key (e.g. an integer) in a simple lookup table, with an add-reference and a remove-reference method. When there are no manually-tracked references left, the entry can be deleted, leaving future lookups on that key to return null.
This is not really a weakref, but it can solve some of the same problems. It's typically done in complex web applications to prevent memory leakage from browsers (typically IE, especially older versions) when there is a reference loop between a DOM Node or event handler, and an object associated with it such as a closure. In these cases a full reference-counting scheme may not even be necessary.
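For comparison, here is a minimal runnable sketch of true weak-reference semantics using Python's standard `weakref` module (the `Resource` class is just an illustrative stand-in); JavaScript historically offered no equivalent of `weakref.ref`, which is why the manual scheme above was needed:

```python
import weakref

class Resource:
    """A plain object we want to observe without keeping it alive."""

strong = Resource()
weak = weakref.ref(strong)   # does not increase the reference count

assert weak() is strong      # dereferencing yields the object while it lives

del strong                   # drop the last strong reference
# In CPython the object is reclaimed immediately (reference counting),
# so the weak reference now dereferences to None.
assert weak() is None
```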
|
When running JS on NodeJS, you may consider <https://github.com/TooTallNate/node-weak>.
|
Is it possible to create a "weak reference" in JavaScript?
|
[
"",
"javascript",
"weak-references",
""
] |
In Java, we can always use an array to store object reference. Then we have an ArrayList or HashTable which is automatically expandable to store objects. But does anyone know a native way to have an auto-expandable array of object references?
Edit: What I mean is I want to know if the Java API has some class with the ability to store references to objects (but not storing the actual object like XXXList or HashTable do) AND the ability of auto-expansion.
|
Java arrays are, by their definition, fixed size. If you need auto-growth, you use XXXList classes.
EDIT - question has been clarified a bit
When I was first starting to learn Java (coming from a C and C++ background), this was probably one of the first things that tripped me up. Hopefully I can shed some light.
Unlike C++, Object arrays in Java do *not* store objects. They store object references.
In C++, if you declared something similar to:
```
String myStrings[10];
```
You would get 10 String objects. At this point, it would be perfectly legal to do something like println(myStrings[5].length); - you'd get '0' - the default constructor for String creates an empty string with length 0.
In Java, when you construct a new array, you get an empty container that can hold 10 String references. So the call:
```
String[] myStrings = new String[10];
println(myStrings[5].length);
```
would throw a null pointer exception, because you haven't actually placed a String reference into the array yet.
If you are coming from a C++ background, think of `new String[10]` as being equivalent to `new (String*)[10]` from C++.
So, with that in mind, it should be fairly clear why ArrayList *is* the solution for an auto expanding array of objects (and in fact, ArrayList is implemented using simple arrays, with a growth algorithm built in that allocates new expanded arrays as needed and copies the content from the old to the new).
In practice, there are actually relatively few situations where we use arrays. If you are writing a container (something akin to ArrayList, or a BTree), then they are useful, or if you are doing a lot of low level byte manipulation - but at the level that most development occurs, using one of the Collections classes is by far the preferred technique.
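To make the growth algorithm concrete, here is a toy sketch in Python (purely illustrative; the names and the doubling factor are assumptions, not the actual ArrayList source):

```python
class GrowableArray:
    """Toy ArrayList: fixed-size backing array, doubled when full."""

    def __init__(self, capacity=2):
        self._data = [None] * capacity   # simulated fixed-size backing store
        self._size = 0

    def add(self, item):
        if self._size == len(self._data):         # full: allocate a bigger array
            bigger = [None] * (len(self._data) * 2)
            bigger[:self._size] = self._data       # copy old contents over
            self._data = bigger
        self._data[self._size] = item              # store the reference
        self._size += 1

    def get(self, index):
        if not 0 <= index < self._size:
            raise IndexError(index)
        return self._data[index]

arr = GrowableArray()
for i in range(10):
    arr.add(i)
assert [arr.get(i) for i in range(10)] == list(range(10))
```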
|
All the classes implementing Collection are expandable and store only references: you don't store objects, you create them in some data space and only manipulate references to them, until they go out of scope without reference on them.
You can put a reference to an object in two or more Collections. That's how you can have sorted hash tables and such...
|
Is there an expandable list of object references in Java?
|
[
"",
"java",
"arrays",
"object-reference",
""
] |
I'm using jQuery to post a form to a php file, simple script to verify user details.
```
var emailval = $("#email").val();
var invoiceIdval = $("#invoiceId").val();
$.post("includes/verify.php",
{invoiceId:invoiceIdval , email:emailval },
function(data) {
//stuff here.
});
```
PHP Code:
```
<?php
print_r($_POST);
?>
```
I look at the response in firebug, it is an empty array. The array should have at least some value.
I can not work out why the `$_POST` isn't working in the php file. Firebug shows the post to contain the contents posted, email and invoice id, just nothing is actually received in the php file.
The form:
```
<form method="post" action="<?=$_SERVER['PHP_SELF']; ?>" enctype="application/x-www-form-urlencoded">
```
Anyone know what it's doing?
Thanks
---
found this - <http://www.bradino.com/php/empty-post-array/>
that a sensible route to go?
|
`$.post()` passes data to the underlying `$.ajax()` call, which sets `application/x-www-form-urlencoded` by default, so I don't think it's that.
can you try this:
```
var post = $('#myForm').serialize();
$.post("includes/verify.php", post, function(data) {
alert(data);
});
```
the `serialize()` call will grab all the current data in `form.myForm`.
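For reference, `serialize()` produces an `application/x-www-form-urlencoded` string. What gets sent on the wire, and what PHP's `$_POST` reconstructs on the other side, can be sketched with Python's standard library (the field values here are made up):

```python
from urllib.parse import urlencode, parse_qs

# What jQuery's serialize()/$.post effectively send as the request body:
body = urlencode({"invoiceId": "INV-42", "email": "user@example.com"})
assert body == "invoiceId=INV-42&email=user%40example.com"

# And what PHP's $_POST reconstructs from that body:
decoded = parse_qs(body)
assert decoded == {"invoiceId": ["INV-42"], "email": ["user@example.com"]}
```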
|
I got bitten by the same issue, and I don't find the solution Owen gives appropriate. You're serializing the object yourself, while jQuery should do that for you. You might as well do a $.get() in that case.
I found out that in my case it was actually a server redirect from /mydir to /mydir/ (with slash) that invalidated the POST array. The request got sent to an index.php within /mydir
This was on a local machine, so I couldn't check the HTTP traffic. I would have found out earlier if I would have done that.
|
jquery $.post empty array
|
[
"",
"php",
"jquery",
"ajax",
"post",
""
] |
I've been a C# programmer for about 2 years total, and professionally for a little more than 1. I work at a company as a developer on an application that began before the days of .NET 2. My question is this:
What is the benefit of using databinding from SQL queries directly to a control, over querying and manually adding items to the control? Is this approach generally considered cleaner and simpler in terms of deployment? Will there still be cases where manually adding gives more fine-grained control over the display / value of the data?
I'm asking because I've been assigned some bugs that deal with some controls that populate themselves based on query results, and I would love to clean up unnecessary logic and push that off to the libraries to handle instead of me.
Note: We are using .NET 2.0
|
I personally find that using the
```
control.DataSource = YourSource;
control.DataBind();
```
process is much easier, you don't have to do the iteration, and overall reduces LOC.
If working with DropDownLists and other controls you will most likely set the DataValueField and DataTextField properties as well.
|
Data binding is much easier to set up, less error prone overall, reduces LOC significantly (as Mitchel Sellers said), and, a few minor glitches aside, works fairly reliably.
In my experience, you only actually need full manual control if you need to specify the exact update order or timing for data bound controls.
|
Benefits of DataBinding over Manually Querying / Adding to Control
|
[
"",
"c#",
".net",
"sql-server",
"model-view-controller",
"data-binding",
""
] |
```
$array = explode(".", $row[copy]);
$a = $array.length -1;
```
I want to return the last element of this array, but all I get from this is -1.
|
You can also use:
```
$a = end($array);
```
This also sets the arrays internal pointer to the end of the array, but it does get you the last element easily.
|
Try [count](http://php.net/count):
```
$array = explode(".", $row['copy']);
$a = count($array) - 1;
$array[$a]; // last element
```
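As a side-by-side sanity check, the same last-element logic in another language, here a Python sketch (illustrative only):

```python
parts = "www.example.com".split(".")   # analogous to explode(".", ...)

last = parts[len(parts) - 1]   # index arithmetic, like count($array) - 1
assert last == "com"

assert parts[-1] == "com"      # negative indexing: the idiomatic shortcut
```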
|
How to access the last element in an array?
|
[
"",
"php",
"arrays",
"element",
""
] |
I am trying to get Silverlight to work with a quick sample application and am calling a rest service on a another computer. The server that has the rest service has a clientaccesspolicy.xml which looks like:
```
<access-policy>
<cross-domain-access>
<policy>
<allow-from http-request-headers="*">
<domain uri="*"/>
</allow-from>
<grant-to>
<resource path="/" include-subpaths="true"/>
</grant-to>
</policy>
</cross-domain-access>
</access-policy>
```
And is being picked up (at least according to the the network traces I have run), and there is no request for crossdomain.xml. The C# code looks like:
```
public Page()
{
InitializeComponent();
string restUrl = "http://example.com/rest_service.html?action=test_result";
WebClient testService = new WebClient();
testService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(testService_DownloadStringCompleted);
testService.DownloadStringAsync(new Uri(restUrl, UriKind.Absolute));
}
void testService_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
{
if (e.Error == null)
{
LoadTreeViewWithData(e.Result);
}
}
```
However, I always get the following Security Error back:
```
{System.Security.SecurityException ---> System.Security.SecurityException: Security error.
at System.Net.BrowserHttpWebRequest.InternalEndGetResponse(IAsyncResult asyncResult)
at System.Net.BrowserHttpWebRequest.c__DisplayClass5.b__4(Object sendState)
at System.Net.AsyncHelper.c__DisplayClass2.b__0(Object sendState)
--- End of inner exception stack trace ---
at System.Net.AsyncHelper.BeginOnUI(SendOrPostCallback beginMethod, Object state)
at System.Net.BrowserHttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
at System.Net.WebClient.GetWebResponse(WebRequest request, IAsyncResult result)
at System.Net.WebClient.DownloadBitsResponseCallback(IAsyncResult result)}
```
What am I doing wrong? And why doesn't the security error tell me some more useful information?
|
If you haven't already done so, I'd first try changing the restUrl to something simpler like a static HTML page on the same server (or if need be on your own server) just to verify your main code works.
Assuming the security exception is specific to that REST URL (or site), you might take a look at the [URL Access Restrictions in Silverlight 2](http://msdn.microsoft.com/en-us/library/cc189008(VS.95).aspx) article. There are some non-obvious security rules involving file types and "internet zones" in addition to the more well-known cross domain rules.
I second the complaint about many exception messages in Silverlight not being very helpful. The above referenced MSDN article contains an amusing note:
> When users get an error that results from one of these access policies being violated, the error may not indicate the exact cause.
|
I couldn't do cross domain REST HTTP deletes without adding http-methods="\*" to the allow-from element in the clientaccesspolicy.xml. When I added the http-methods attribute, then everything worked and the SecurityException stopped happening.
|
Silverlight Rest Service, Security Exception
|
[
"",
"c#",
"silverlight",
"rest",
""
] |
Does anybody know how I could get the TWO largest values from the third column of the following array?
```
$ar = array(array(1, 1, 7.50, 'Hello'),
array(1, 2, 18.90, 'Hello'),
array(3, 5, 11.50, 'Hello'),
array(2, 4, 15.90, 'Hello'));
```
Output should be:
```
15.90
18.90
```
Thanks in advance
|
If you're sure that the value (two) will never change, just iterate over the array and keep track of the two largest numbers. If not, sort the arrays using [`usort`()](http://www.php.net/manual/en/function.usort.php) and providing an appropriate callback. Then take the first two values:
```
function cmp($a, $b) {
$a = $a[2];
$b = $b[2];
    return $a == $b ? 0 : ($a < $b ? 1 : -1); // parentheses required: PHP's ternary is left-associative
}
usort($ar, 'cmp');
```
|
Sorting is O(n log n), but you can actually accomplish this in O(n) (that is, *faster*, if the array is big). Pseudocode follows:
```
first = array[0][2]
second = array[1][2]
if second > first
first, second = second, first
for tuple in array[2:n]
if tuple[2] > second
second = tuple[2]
if second > first
first, second = second, first
```
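Here is the pseudocode above written out as a runnable Python sketch, using the data from the question:

```python
def two_largest_by_third(rows):
    """Return (largest, second largest) of rows[i][2] in a single O(n) pass."""
    first, second = rows[0][2], rows[1][2]
    if second > first:
        first, second = second, first
    for row in rows[2:]:
        value = row[2]
        if value > second:
            second = value
            if second > first:          # new value beat the current maximum
                first, second = second, first
    return first, second

data = [(1, 1, 7.50, 'Hello'),
        (1, 2, 18.90, 'Hello'),
        (3, 5, 11.50, 'Hello'),
        (2, 4, 15.90, 'Hello')]
assert two_largest_by_third(data) == (18.90, 15.90)
```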
|
Largest values on array
|
[
"",
"php",
"arrays",
""
] |
Into some view data i have put the result of an anonymous type:
```
var projectData = from p in db.Projects
orderby p.title
select new
{
Title = p.title,
DevURL = p.devURL ?? "N/A",
QAURL = p.qaURL ?? "N/A",
LiveURL = p.liveURL ?? "N/A",
Users = p.GetUsers().MakeUserList()
};
ViewData["ProjectSummary"] = projectData;
```
How do I iterate through this view data in the MVC view on the front end to say, make a table of results?
|
In your case it would be much simpler to create a model to hold your data rather than using an anonymous type.
The issue you're having is that your anonymous type is cast to an object when its stored within the ViewData. On the UI side, when you get that object out, the only way to access its properties is to use reflection. You do NOT want to do this in your UI. It will be highly ugly. Instead, just add the following class to your Models:
```
public class Project{
public string Title {get;set;}
public string DevUrl {get;set;}
public string QAUrl {get;set;}
public string LiveUrl {get;set;}
public IEnumerable<User> Users {get;set;}
public static IEnumerable<Project> RetrieveAllProjects()
{
return from p in db.Projects
orderby p.title
select new Project
{
Title = p.title,
DevURL = p.devURL ?? "N/A",
QAURL = p.qaURL ?? "N/A",
LiveURL = p.liveURL ?? "N/A",
Users = p.GetUsers().MakeUserList()
};
    }
}
```
In your controller do this:
```
public ActionResult Index()
{
return View("Index", Project.RetrieveAllProjects());
}
```
and in your view's codebehind, strongly type it thusly:
```
//snip
public partial class Index : ViewPage<IEnumerable<Project>>
{
//snip
```
You might think its a bit wasteful to have all these models laying around, but its much easier to understand, and makes your UI code much slimmer, if you use your models wisely.
Also, a model is a great place (and, in fact, should be where you do it) to place the logic for loading your data and constructing the models themselves. Think ActiveRecord. And, while you're coding all this, realize that projects like SubSonic create your models for you without any muss or fuss.
|
I've not tried this with an anonymous type, but this is how I do it, by passing a `List<T>` object to `ViewData`:
```
<% foreach (Project p in (IEnumerable<Project>)ViewData["ProjectSummary"]) { %>
<%= Html.Encode(p.Title) %>
<% } %>
```
Hope this is what you're looking for.
Mark
|
Iterating through anonymous typed data in MVC view
|
[
"",
"c#",
"asp.net-mvc",
"anonymous-types",
""
] |
I know the statement:
```
create table xyz_new as select * from xyz;
```
Which copies the structure and the data, but what if I just want the structure?
|
Just use a where clause that won't select any rows:
```
create table xyz_new as select * from xyz where 1=0;
```
### Limitations
The following things will not be copied to the new table:
* sequences
* triggers
* indexes
* some constraints may not be copied
* materialized view logs
This also does not handle partitions
---
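The `where 1=0` trick is not Oracle-specific. As a quick sanity check, the same pattern can be tried in SQLite via Python's built-in driver (illustrative; like the Oracle CTAS above, it copies only column definitions, not constraints):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table xyz (id integer primary key, name text)")
con.execute("insert into xyz (name) values ('a'), ('b')")

# Copy the structure only: the impossible predicate selects zero rows.
con.execute("create table xyz_new as select * from xyz where 1=0")

cols = [row[1] for row in con.execute("pragma table_info(xyz_new)")]
assert cols == ["id", "name"]                                          # same columns
assert con.execute("select count(*) from xyz_new").fetchone()[0] == 0  # no data
```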
|
I've used the method you accepted a lot, but as someone pointed out, it doesn't duplicate constraints (except for NOT NULL, I think).
A more advanced method if you want to duplicate the full structure is:
```
SET LONG 5000
SELECT dbms_metadata.get_ddl( 'TABLE', 'MY_TABLE_NAME' ) FROM DUAL;
```
This will give you the full create statement text which you can modify as you wish for creating the new table. You would have to change the names of the table and all constraints of course.
(You could also do this in older versions using EXP/IMP, but it's much easier now.)
**Edited to add**
If the table you are after is in a different schema:
```
SELECT dbms_metadata.get_ddl( 'TABLE', 'MY_TABLE_NAME', 'OTHER_SCHEMA_NAME' ) FROM DUAL;
```
|
How can I create a copy of an Oracle table without copying the data?
|
[
"",
"sql",
"oracle",
"copy",
"database-table",
""
] |
Just about every piece of example code everywhere omits error handling (because it "confuses the issue" that the example code is addressing). My programming knowledge comes primarily from books and web sites, and you seldom see any error handling in use at all there, let alone good stuff.
Where are some places to see good examples of C++ error handling code? Specific books, specific open-source projects (preferably with the files and functions to look at), and specific web pages or sites will all be gratefully accepted.
|
Herb Sutter's and Andrei Alexandrescu's book [C++ Coding Standards](http://www.gotw.ca/publications/c++cs.htm) comes with a whole chapter on *Error Handling and Exceptions* including
* Assert liberally to document internal assumptions and invariants
* Establish a rational error handling policy, and follow it strictly
* Distinguish between errors and non-errors
* Design and write error-safe code
* Prefer to use exceptions to report errors
* Throw by value, catch by reference
* Report, handle, and translate errors appropriately
* Avoid exception specifications
Every topic also includes an example and I found it to be a very valuable resource.
|
*"Use exceptions"* vs. *"Use error codes"* is never as clear-cut as examples suggest.
Use error codes for program flow. If you have an error that is expected, do not throw an exception. E.g. you're reading a file, you may throw an exception for *"file not found"*, *"file locked"*; but never throw one for *"end of file"*.
If you do, you can never write simple loops; you'll always be wrapping code in exception handlers. And don't forget exceptions are very slow, which is especially important in big multi-threaded servers (not so important in your desktop application).
Secondly, be very careful with exception hierarchies. You may think it's OK to have an `Exception` class, then derive a `NetException` from it, then `SMTPException` for your SMTP class. But unless you hold generic data in the base class, you will always have to catch every type of exception in that hierarchy. E.g. if you put the reason for the SMTP error in your `SMTPException` class, you must catch it - if you only catch `Exception` types, you will not have access to the `SMTPException` members. A good workaround for this problem is to have a string and an int member in the base exception class and only use them, even for the derived types. Unfortunately `std::exception` only offers a string :(
Some people say that doing this means you might as well only have a single exception type, especially as you will always catch the base class type anyway.
If you do use exceptions you must take the trouble to populate them with more data than you would with an error code. With errors, you must handle them immediately or they get lost in the code. With an exception, it may get caught many levels away from where it was thrown - like in Roddy's example. `DoC` is called, and gets an exception 2 levels in from `DoA`. Unless you specify the error to be specific to the code in `DoA`, you may think it was thrown from the `DoB` function. (simple example, but I've seen code where an exception was handled many levels down the call stack. It was a b*st*rd to debug. This especially applies to OO programs)
So hopefully, I've given you enough to think about. The simple truth of the matter is that style means nothing in error handling; practicality is everything. If you have to put log statements everywhere an error can occur, then do so. It matters a lot more that you can see where the code went wrong (and what data was being worked with) than that you have an elegant exception hierarchy or have littered your code with exception handlers. **If you cannot easily trace the error, your error handling code is useless.**
Exceptions are good, use them. But think about what you're doing, do not misuse or overuse them. A misused exception is worse than no error handling at all (as you can grab a crash dump and view the unhandled exception to find the error in seconds. With an exception that is eaten and ignored, you're stuffed).
I've found over the years that the biggest assistant to debugging is logging. Write logs, write lots of logs.
|
C++ Error Handling -- Good Sources of Example Code?
|
[
"",
"c++",
"error-handling",
""
] |
I want to write an Add-In for Visual Studio that provides instant search for the solution explorer. So you press a key combination and while you are typing a list first containing all files of the solution explorer gets narrowed down.
But how can I get access to the solution explorer using C#? Does anyone have some good resources?
---
Edit: I edited the title, so that it more relates to the content, since all of you posted already made plug-ins.
|
[Sonic File Finder](http://jens-schaller.de/sonictools/sonicfilefinder/index.htm) it's free
When you hit the shortcut you have a search box with autocomplete:
|
Visual Studio is already perfectly capable of doing that; just drag the "Edit.GoToFindCombo" to the toolbar, if it's not already there. (Press Ctrl+D if it is, to activate) and type ">of s" for all files starting with the letter 's'.
(source: [aaronlerch.com](http://www.aaronlerch.com/files/blog/TheTalentedMr.GoToFindCombo_F0A5/image_thumb_5.png))
The GoToFindCombo is more powerful than this though, you can set aliases ">alias $aliasHere $SomeCommand" for virtually every command in Visual Studio. For example, to make an alias for find in files you type ">alias ff Edit.FindinFiles"
|
Plug-in for Visual Studio for quick-searching files in solution
|
[
"",
"c#",
"visual-studio",
"resources",
"solution-explorer",
""
] |
Correct me if I am wrong,
int is 4 bytes, with a range of values from -2,147,483,648 to 2,147,483,647 (2^31)
long is 4 bytes, with a range of values from -2,147,483,648 to 2,147,483,647 (2^31)
What is the difference in C++? Can they be used interchangeably?
|
It is implementation dependent.
For example, under Windows they are the same, but on Alpha systems a long was 64 bits whereas an int was 32 bits. This [article](http://software.intel.com/en-us/articles/size-of-long-integer-type-on-different-architecture-and-os) covers the rules for the Intel C++ compiler on various platforms. To summarize:
```
OS arch size
Windows IA-32 4 bytes
Windows Intel 64 4 bytes
Windows IA-64 4 bytes
Linux IA-32 4 bytes
Linux Intel 64 8 bytes
Linux IA-64 8 bytes
Mac OS X IA-32 4 bytes
Mac OS X Intel 64 8 bytes
```
|
The only guarantee you have are:
```
sizeof(char) == 1
sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)
// FROM @KTC. The C++ standard also has:
sizeof(signed char) == 1
sizeof(unsigned char) == 1
// NOTE: These size are not specified explicitly in the standard.
// They are implied by the minimum/maximum values that MUST be supported
// for the type. These limits are defined in limits.h
sizeof(short) * CHAR_BIT >= 16
sizeof(int) * CHAR_BIT >= 16
sizeof(long) * CHAR_BIT >= 32
sizeof(long long) * CHAR_BIT >= 64
CHAR_BIT >= 8 // Number of bits in a byte
```
Also see: [Is `long` guaranteed to be at least 32 bits?](https://stackoverflow.com/q/4329777/14065)
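You can verify these guarantees for the platform you are on from Python via `ctypes`, which maps onto the platform's C ABI:

```python
import ctypes

sizes = {
    "char":      ctypes.sizeof(ctypes.c_char),
    "short":     ctypes.sizeof(ctypes.c_short),
    "int":       ctypes.sizeof(ctypes.c_int),
    "long":      ctypes.sizeof(ctypes.c_long),
    "long long": ctypes.sizeof(ctypes.c_longlong),
}

# The only portable guarantees: char is 1 byte and sizes are non-decreasing.
assert sizes["char"] == 1
assert sizes["short"] <= sizes["int"] <= sizes["long"] <= sizes["long long"]

print(sizes)  # e.g. on 64-bit Linux, int is typically 4 bytes but long is 8
```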
|
What is the difference between an int and a long in C++?
|
[
"",
"c++",
"variables",
""
] |
In my views.py, I'm building a list of two-tuples, where the second item in the tuple is another list, like this:
```
[ Product_Type_1, [ product_1, product_2 ],
Product_Type_2, [ product_3, product_4 ]]
```
In plain old Python, I could iterate over the list like this:
```
for product_type, products in list:
print product_type
for product in products:
print product
```
I can't seem to do the same thing in my Django template:
```
{% for product_type, products in product_list %}
print product_type
{% for product in products %}
print product
{% endfor %}
{% endfor %}
```
I get this error from Django:
**Caught an exception while rendering: zip argument #2 must support iteration**
Of course, there is some HTML markup in the template, not print statements. Is tuple unpacking not supported in the Django template language? Or am I going about this the wrong way? All I am trying to do is display a simple hierarchy of objects - there are several product types, each with several products (in models.py, Product has a foreign key to Product\_type, a simple one-to-many relationship).
Obviously, I am quite new to Django, so any input would be appreciated.
|
It would be best if you construct your data like this (note that '(' and ')' denote tuples, while '[' and ']' denote lists):
```
[ (Product_Type_1, ( product_1, product_2 )),
(Product_Type_2, ( product_3, product_4 )) ]
```
and have the template do this:
```
{% for product_type, products in product_type_list %}
{{ product_type }}
{% for product in products %}
{{ product }}
{% endfor %}
{% endfor %}
```
The way tuples/lists are unpacked in for loops is based on the item returned by the list iterator; each iteration returns only one item. With your original flat list, the first time around the loop you'd get Product_Type_1, the second time your list of products, and so on, so there is nothing to unpack.
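The difference between the two shapes is easy to demonstrate in plain Python; the flat list from the question cannot be unpacked pairwise, while the list of 2-tuples can (names shortened for brevity):

```python
# Flat structure from the question: items alternate type / product-list.
flat = ["Type1", ["p1", "p2"], "Type2", ["p3", "p4"]]

failed = False
try:
    for product_type, products in flat:   # tries to unpack each single item
        pass
except ValueError:
    failed = True   # "Type1" is a 5-char string: it cannot unpack into 2 names
assert failed

# Paired structure: each item IS a (type, products) 2-tuple.
paired = [("Type1", ["p1", "p2"]), ("Type2", ["p3", "p4"])]
seen = []
for product_type, products in paired:
    for product in products:
        seen.append((product_type, product))
assert seen == [("Type1", "p1"), ("Type1", "p2"),
                ("Type2", "p3"), ("Type2", "p4")]
```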
|
Another way is as follows.
If one has a list of tuples say:
```
mylst = [(a, b, c), (x, y, z), (l, m, n)]
```
then one can unpack this list in the template file in the following manner.
In my case I had a list of tuples which contained the URL, title, and summary of a document.
```
{% for item in mylst %}
{{ item.0 }} {{ item.1}} {{ item.2 }}
{% endfor %}
```
|
Django - How to do tuple unpacking in a template 'for' loop
|
[
"",
"python",
"django",
"templates",
"tuples",
"iterable-unpacking",
""
] |
The [MSDN documentation](https://learn.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2012/c5kehkcz(v=vs.110)) says that
```
public class SomeObject
{
public void SomeOperation()
{
lock(this)
{
//Access instance variables
}
}
}
```
is "a problem if the instance can be accessed publicly". I'm wondering why? Is it because the lock will be held longer than necessary? Or is there some more insidious reason?
|
It is bad form to use `this` in lock statements because it is generally out of your control who else might be locking on that object.
In order to properly plan parallel operations, special care should be taken to consider possible deadlock situations, and having an unknown number of lock entry points hinders this. For example, any one with a reference to the object can lock on it without the object designer/creator knowing about it. This increases the complexity of multi-threaded solutions and might affect their correctness.
A private field is usually a better option as the compiler will enforce access restrictions to it, and it will encapsulate the locking mechanism. Using `this` violates encapsulation by exposing part of your locking implementation to the public. It is also not clear that you will be acquiring a lock on `this` unless it has been documented. Even then, relying on documentation to prevent a problem is sub-optimal.
Finally, there is the common misconception that `lock(this)` actually modifies the object passed as a parameter, and in some way makes it read-only or inaccessible. This is **false**. The object passed as a parameter to `lock` merely serves as a **key**. If a lock is already being held on that key, the lock cannot be acquired; otherwise, it is granted.
This is why it's bad to use strings as the keys in `lock` statements, since they are immutable and are shared/accessible across parts of the application. You should use a private variable instead, an `Object` instance will do nicely.
Run the following C# code as an example.
```
public class Person
{
public int Age { get; set; }
public string Name { get; set; }
public void LockThis()
{
lock (this)
{
System.Threading.Thread.Sleep(10000);
}
}
}
class Program
{
static void Main(string[] args)
{
var nancy = new Person {Name = "Nancy Drew", Age = 15};
var a = new Thread(nancy.LockThis);
a.Start();
var b = new Thread(Timewarp);
b.Start(nancy);
Thread.Sleep(10);
var anotherNancy = new Person { Name = "Nancy Drew", Age = 50 };
var c = new Thread(NameChange);
c.Start(anotherNancy);
a.Join();
Console.ReadLine();
}
static void Timewarp(object subject)
{
var person = subject as Person;
if (person == null) throw new ArgumentNullException("subject");
// A lock does not make the object read-only.
lock (person.Name)
{
while (person.Age <= 23)
{
// There will be a lock on 'person' due to the LockThis method running in another thread
if (Monitor.TryEnter(person, 10) == false)
{
Console.WriteLine("'this' person is locked!");
}
else Monitor.Exit(person);
person.Age++;
if(person.Age == 18)
{
// Changing the 'person.Name' value doesn't change the lock...
person.Name = "Nancy Smith";
}
Console.WriteLine("{0} is {1} years old.", person.Name, person.Age);
}
}
}
static void NameChange(object subject)
{
var person = subject as Person;
if (person == null) throw new ArgumentNullException("subject");
// You should avoid locking on strings, since they are immutable.
if (Monitor.TryEnter(person.Name, 30) == false)
{
Console.WriteLine("Failed to obtain lock on 50 year old Nancy, because Timewarp(object) locked on string \"Nancy Drew\".");
}
else Monitor.Exit(person.Name);
if (Monitor.TryEnter("Nancy Drew", 30) == false)
{
Console.WriteLine("Failed to obtain lock using 'Nancy Drew' literal, locked by 'person.Name' since both are the same object thanks to inlining!");
}
else Monitor.Exit("Nancy Drew");
if (Monitor.TryEnter(person.Name, 10000))
{
string oldName = person.Name;
person.Name = "Nancy Callahan";
Console.WriteLine("Name changed from '{0}' to '{1}'.", oldName, person.Name);
}
else Monitor.Exit(person.Name);
}
}
```
Console output
```
'this' person is locked!
Nancy Drew is 16 years old.
'this' person is locked!
Nancy Drew is 17 years old.
Failed to obtain lock on 50 year old Nancy, because Timewarp(object) locked on string "Nancy Drew".
'this' person is locked!
Nancy Smith is 18 years old.
'this' person is locked!
Nancy Smith is 19 years old.
'this' person is locked!
Nancy Smith is 20 years old.
Failed to obtain lock using 'Nancy Drew' literal, locked by 'person.Name' since both are the same object thanks to inlining!
'this' person is locked!
Nancy Smith is 21 years old.
'this' person is locked!
Nancy Smith is 22 years old.
'this' person is locked!
Nancy Smith is 23 years old.
'this' person is locked!
Nancy Smith is 24 years old.
Name changed from 'Nancy Drew' to 'Nancy Callahan'.
```
|
Because if people can get at your object instance (ie: your `this`) pointer, then they can also try to lock that same object. Now they might not be aware that you're locking on `this` internally, so this may cause problems (possibly a deadlock)
In addition to this, it's also bad practice, because it's locking "too much"
For example, you might have a member variable of `List<int>`, and the only thing you actually need to lock is that member variable. If you lock the entire object in your functions, then other things which call those functions will be blocked waiting for the lock. If those functions don't need to access the member list, you'll be causing other code to wait and slow down your application for no reason at all.
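The "lock on a private object, never on yourself" advice is language-neutral; here is a minimal Python sketch of the pattern (names are illustrative):

```python
import threading

class Counter:
    def __init__(self):
        self._lock = threading.Lock()   # private key: outside code can't lock on it
        self.value = 0

    def increment(self):
        with self._lock:                # analogous to lock(_privateLockObject) { ... }
            self.value += 1

counter = Counter()
threads = [threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter.value == 4000            # no lost updates
```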
|
Why is lock(this) {...} bad?
|
[
"",
"c#",
"multithreading",
"locking",
""
] |
Given that these two examples are equivalent, which do you think is preferable?
**Without explicit modifier**
```
public class MyClass
{
string name = "james";
public string Name {
get { return name; }
set { name = value; }
}
void SomeMethod() { ... }
}
```
**With explicit modifier**
```
public class MyClass
{
private string name = "james";
public string Name {
get { return name; }
set { name = value; }
}
private void SomeMethod() { ... }
}
```
---
I've always used the latter, but recently I've started adopting the former style. The private is redundant as that's the default accessor modifier, so doesn't it make sense to exclude it?
|
I think explicitly stating private helps readability. It leaves no room for a programmer to misinterpret the member's visibility.
|
It looks that we are the only one, but personally, **I support** the let's remove private campaign.
My concern is that public and private look so similar (6-7 characters long, colored the same, both starting with 'p') that it's much harder to spot a public method among 10 explicit private ones than among 10 that have no access modifier at all.
Also, it's an advantage that lazy people on your team tend to skip the modifier, making the method private, which is actually a good thing. Otherwise you end up with everything public.
I usually prefer explicit over implicit, but that matters more in tricky language corner cases than in a widespread feature like this. Here I think long-run maintainability is more important.
Also, I usually like code that is simple and clear in a *mathematical* way over code that is explicit just to preserve a future coder's ignorance. That's the VB way, not C#...
|
Should you use the private access modifier if it's redundant?
|
[
"",
"c#",
"coding-style",
""
] |
Are C# enums typesafe?
If not, what are the implications?
|
To give a slightly different answer... while the values are type-safe from the casting perspective, they are still unchecked once they have been cast - i.e.
```
enum Foo { A = 1, B = 2, C = 3 }
static void Main()
{
Foo foo = (Foo)500; // works fine
Console.WriteLine(foo); // also fine - shows 500
}
```
For this reason, you should take care to check the values - for example with a `default` in a `switch` that throws an exception.
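For example, a sketch of that `default` check, reusing the hypothetical `Foo` enum from above (the method name and messages are mine):

```csharp
using System;

enum Foo { A = 1, B = 2, C = 3 }

static class FooHandler
{
    public static string Describe(Foo foo)
    {
        switch (foo)
        {
            case Foo.A: return "first";
            case Foo.B: return "second";
            case Foo.C: return "third";
            default:
                // (Foo)500 would land here instead of silently passing through
                throw new ArgumentOutOfRangeException("foo");
        }
    }
}
```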
You can also check the (for non-`[Flags]` values) via:
```
bool isValid = Enum.IsDefined(typeof(Foo), foo);
```
|
Yes they are.
The following is from <http://www.csharp-station.com/Tutorials/Lesson17.aspx>
Enums are strongly typed constants. They are essentially unique types that allow you to assign symbolic names to integral values. In the C# tradition, they are strongly typed, meaning that an enum of one type may not be implicitly assigned to an enum of another type even though the underlying value of their members are the same. Along the same lines, integral types and enums are not implicitly interchangeable. All assignments between different enum types and integral types require an explicit cast.
|
Are C# enums typesafe?
|
[
"",
"c#",
"enums",
""
] |
The `java.net.InetAddress.getByName(String host)` method can only return `A` records, so to look up other record types I need to be able to send DNS queries using the `dnsjava` library.
However that normally relies on being able to parse `/etc/resolv.conf` or similar to find the DNS server addresses and that doesn't work on Android.
The current DNS settings on Android can apparently only be obtained from within a shell by using the `getprop` command.
Can anyone tell me how to get those settings from Java other than by spawning a shell with `Runtime.exec()` and parsing the output from `getprop`?
|
The [DNS protocol](http://tools.ietf.org/rfc/rfc882.txt) is not that complex - can't you just do the DNS accesses using raw sockets (either TCP or UDP)? After a quick look at the dnsjava doco it seems to provide low level DNS support to assist with this.
The other possible direction is, starting with dnsjava, to remove the dependence on `/etc/resolv.conf`. I would think about using getprop in your launch script to set properties in the JVM, or to create a localized `resolv.conf` file in your app's directory from which you can read the information needed. In other words, use getprop to inject information into the JVM instead of attempting to pull it in once the JVM is going. Surely creating a file that dnsjava can use directly should be doable.
---
## Edit - android.net
It looks like `android.net.ConnectivityManager` will deliver you an array of NetworkInfo's using `getAllNetworkInfo()`. Then use `android.net.NetworkUtils.runDhcp()` to get a **DhcpInfo** for any given network interface - the **DhcpInfo** structure has the IP address for dns1 and dns2 for that interface. Surprised that the DNS's are int, therefore implying IP4 only, though.
|
I don't think it's possible for general case. For WiFi I found this:
```
WiFiManager wifi = (WifiManager) getSystemService(WIFI_SERVICE);
DhcpInfo info = wifi.getDhcpInfo();
```
|
How do I find the DNS servers in Android from a Java program?
|
[
"",
"android",
"dns",
"java",
""
] |
I'm trying to change user input in wildcard form `("*word*")` to a regular expression format. To that end, I'm using the code below to strip off the `'*'` at the beginning and end of the input so that I can add the regular expression characters on either end:
```
string::iterator iter_begin = expressionBuilder.begin();
string::iterator iter_end = expressionBuilder.end();
iter_end--;
if ((char)*iter_begin == '*' && (char)*iter_end == '*')
{
expressionBuilder.erase(iter_begin);
expressionBuilder.erase(iter_end);
expressionBuilder = "\\b\\w*" + expressionBuilder + "\\w*\\b";
}
```
However, the call to `"expressionBuilder.erase(iter_end)"` does *not* erase the trailing `'*'` from the input string so I wind up with an incorrect regular expression. What am I doing wrong here? `"(char)*iter_end == '*'"` must be true for the code inside the if statment to run (which it does), so why doesn't the same iterator work when passed to erase()?
|
Try erasing them in the opposite order:
```
expressionBuilder.erase(iter_end);
expressionBuilder.erase(iter_begin);
```
After erasing the first \*, iter\_end refers to one character past the end of the string in your example. The [STL documentation](http://www.sgi.com/tech/stl/basic_string.html) indicates that iterators are invalidated by `erase()`, so technically my example is wrong too but I believe it will work in practice.
|
Your original code and the proposed solutions so far have a couple of problems in addition to the obvious problem you posted about:
* use of invalidated iterators after the string is modified
* dereferencing possibly invalid iterators even before the string is modified (if the string is empty, for example)
* a bug if the expressionBuilder string contains only a single '\*' character
Now, the last two items might not really be a problem if the code that uses the snippet/routine is already validating that the string has at least 2 characters, but in case that's not the situation, I believe the following to be more robust in the face of arbitrary values for expressionBuilder:
```
// using the reverse iterator rbegin() is a nice easy way
// to get the last character of a string
if ( (expressionBuilder.size() >= 2) &&
(*expressionBuilder.begin() == '*') &&
(*expressionBuilder.rbegin() == '*') ) {
expressionBuilder.erase(expressionBuilder.begin());
// can't nicely use rbegin() here because erase() wont take a reverse
// iterator, and converting reverse iterators to regular iterators
// results in rather ugly, non-intuitive code
expressionBuilder.erase(expressionBuilder.end() - 1); // note - not invalid since we're getting it anew
expressionBuilder = "\\b\\w*" + expressionBuilder + "\\w*\\b";
}
```
Note that this code will work when `expressionBuilder` is `""`, `"*"`, or `"**"` in that it does not perform any undefined actions. However, it might not produce the results you want in those cases (that's because I don't know exactly what you do want in those cases). Modify to suit your needs.
|
std::string erase last character fails?
|
[
"",
"c++",
"string",
""
] |
I have a simple question: is there a way (using reflection, I suppose) to iterate all the static values of a class?
For instance
```
class Any {
static int one = 1;
static int two = 2;
static int three = 3;
public static void main( String [] args ) {
for( int i : magicMethod( Any.class ) ){
System.out.println( i );
}
}
}
```
Output
```
1
2
3
```
Thanks.
|
```
import java.util.*;
import java.lang.reflect.*;
class Any {
static int one = 1;
static int two = 2;
static int three = 3;
public static void main( String [] args ) {
for( int i : magicMethod( Any.class ) ){
System.out.println( i );
}
}
public static Integer[] magicMethod(Class<Any> c) {
List<Integer> list = new ArrayList<Integer>();
Field[] fields = c.getDeclaredFields();
for (Field field : fields) {
try {
if (field.getType().equals(int.class) && Modifier.isStatic(field.getModifiers())) {
list.add(field.getInt(null));
}
}
catch (IllegalAccessException e) {
// Handle exception here
}
}
return list.toArray(new Integer[list.size()]);
}
}
```
|
Hey.. it was very easy. :P
```
Field [] constants = Main.class.getFields();
Object some = new Main();
for( Field field : constants ){
if(Modifier.isStatic(field.getModifiers() ) &&
field.getType() == int.class ) {
System.out.println( field.getInt( some ) );
}
}
```
|
iterate static int values in java
|
[
"",
"java",
"reflection",
"static",
""
] |
I've just wasted the past two hours of my life trying to create a table with an auto-incrementing primary key based on [this tutorial](http://www.lifeaftercoffee.com/2006/02/17/how-to-create-auto-increment-columns-in-oracle/). The tutorial is great; the issue I've been encountering is that the CREATE TRIGGER fails if I have a column of type TIMESTAMP and a column called timestamp in the same table...
Why doesn't oracle flag this as being an issue when I create the table?
Here is the Sequence of commands I enter:
1. Creating the Table:
```
CREATE TABLE myTable
(id NUMBER PRIMARY KEY,
field1 TIMESTAMP(6),
timeStamp NUMBER
);
```
2. Creating the Sequence:
```
CREATE SEQUENCE test_sequence
START WITH 1
INCREMENT BY 1;
```
3. Creating the trigger:
```
CREATE OR REPLACE TRIGGER test_trigger
BEFORE INSERT
ON myTable
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT test_sequence.nextval INTO :NEW.ID FROM dual;
END;
/
```
Here is the error message I get:
```
ORA-06552: PL/SQL: Compilation unit analysis terminated
ORA-06553: PLS-320: the declaration of the type of this expression is incomplete or malformed
```
Any combination that does not have the two lines with the word "timestamp" in them works fine. I would have thought the syntax would be enough to differentiate between the keyword and a column name.
As I've said I don't understand why the table is created fine but oracle falls over when I try to create the trigger...
**CLARIFICATION**
I know that the issue is that there is a column called timestamp, which may or may not be a keyword. My issue is why it barfed when I tried to create a trigger and not when I created the table; I would have at least expected a warning.
That said, having used Oracle for a few hours, it seems a lot less verbose in its error reporting. Maybe that's just because I'm using the express version, though.
If this is a bug in Oracle how would one who doesn't have a support contract go about reporting it? I'm just playing around with the express version because I have to migrate some code from MySQL to Oracle.
|
There is a note on metalink about this (227615.1) extract below:
```
# symptom: Creating Trigger fails
# symptom: Compiling a procedure fails
# symptom: ORA-06552: PL/SQL: %s
# symptom: ORA-06553: PLS-%s: %s
# symptom: PLS-320: the declaration of the type of this expression is incomplete or malformed
# cause: One of the tables being referenced was created with a column name that is one of the datatypes (reserved key word). Even though the field is not referenced in the PL/SQL SQL statements, this error will still be produced.
fix:
Workaround:
1. Rename the column to a non-reserved word.
2. Create a view and alias the column to a different name.
```
|
TIMESTAMP is not listed in the Oracle docs as a reserved word (which is surprising).
It is listed in the V$RESERVED\_WORDS data dictionary view, but its RESERVED flag is set to 'N'.
It might be a bug in the trigger processing. I would say this is a good one for Oracle support.
|
Oracle why does creating trigger fail when there is a field called timestamp?
|
[
"",
"sql",
"oracle",
"ora-06553",
""
] |
If I create a recursive list of of lists:
```
class myList
{
List<myList> childLists;
List<string> things;
//...
}
List<myList> tempList = new List<myList>();
```
And then later call tempList.Clear(), will it destroy all the childLists in memory, or should I create a recursive method to clear all the childLists first?
|
If no *other* references exist to the child lists, they will be garbage collected as normal. The trick is to watch for any dangling references to the child items (databinding especially tends to go unnoticed once done).
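A small sketch of that point (names are mine): `Clear()` only drops the parent's references; anything else still holding a child keeps it alive.

```csharp
using System.Collections.Generic;

class Demo
{
    public static bool ChildStillReachable()
    {
        var child = new List<string> { "x" };
        var parent = new List<List<string>> { child };

        var dangling = child;   // e.g. a databinding still holding the child

        parent.Clear();         // parent no longer references child...

        // ...but 'dangling' does, so the child is not collectable yet.
        return dangling.Count == 1;
    }
}
```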
|
You seem to have come from a C++ background.
A read on [.NET's Garbage Collection](http://msdn.microsoft.com/en-us/library/0xy59wtx.aspx) should clear a lot of things up for you.
In your case, you do not need to "destroy" all the child lists. In fact, you can't even destroy or dispose a generic List object yourself in a normal good-practice .NET way. If you no longer wish to use it, then just remove all references to it. And the actual destruction of the object will be done by the garbage collector (aka GC) when it sees appropriate.
The GC is also very smart: it'll detect circular references and a->b->c->d object trees and most things you could come up with, and clean the whole object graph up properly. So you do not need to create that recursive cleaning routine.
But do note that the GC's behavior is non-deterministic, i.e. you won't know when the actual "cleanup" will happen, so if your list contains important resources that should be freed immediately (e.g. file handles or database connections), you should explicitly "Dispose" of them, as @lassevk recommended.
|
Does the List Clear() method destroy children [C#.NET]?
|
[
"",
"c#",
".net",
"memory-management",
"recursion",
""
] |
I know the standard way of using the [null coalescing operator](https://en.wikipedia.org/wiki/Null_coalescing_operator) in C# is to set default values.
```
string nobody = null;
string somebody = "Bob Saget";
string anybody = "";
anybody = nobody ?? "Mr. T"; // Returns Mr. T
anybody = somebody ?? "Mr. T"; // Returns "Bob Saget"
```
But what else can `??` be used for? It doesn't seem as useful as the [ternary operator](https://en.wikipedia.org/wiki/%3F:), apart from being more concise and easier to read than:
```
nobody = null;
anybody = nobody == null ? "Bob Saget" : nobody; // Returns Bob Saget
```
So given that fewer even know about null coalescing operator...
* Have you used `??` for something else?
* Is `??` necessary, or should you just use the ternary operator (that most are familiar with)?
|
Well, first of all, it's much easier to chain than the standard ternary operator:
```
string anybody = parm1 ?? localDefault ?? globalDefault;
```
vs.
```
string anyboby = (parm1 != null) ? parm1
: ((localDefault != null) ? localDefault
: globalDefault);
```
It also works well if a null-possible object isn't a variable:
```
string anybody = Parameters["Name"]
?? Settings["Name"]
?? GlobalSetting["Name"];
```
vs.
```
string anybody = (Parameters["Name"] != null) ? Parameters["Name"]
                 : ((Settings["Name"] != null) ? Settings["Name"]
                 : GlobalSetting["Name"]);
```
|
I've used it as a lazy load one-liner:
```
public MyClass LazyProp
{
get { return lazyField ?? (lazyField = new MyClass()); }
}
```
Readable? Decide for yourself.
|
Unique ways to use the null coalescing operator
|
[
"",
"c#",
"coding-style",
"null",
"conditional-operator",
"null-coalescing-operator",
""
] |
What's a more elegant way of writing the code below, where I want to return a derived class based on the type of another class?
```
if (option_ is Rectangle)
{
modelInputs = new Foo();
}
else if (option_ is Circle)
{
modelInputs = new Bar();
}
else if (option_ is Triangle)
{
modelInputs = new Bar2();
}
```
|
Have Rectangle, Circle and Triangle implement IHasModelInput:
```
interface IHasModelInput
{
IModelInput GetModelInput();
}
```
then you can do
```
IModelInput modelInputs = option_.GetModelInput();
```
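For illustration, a sketch of one shape supplying its own input (assuming `Foo` implements `IModelInput`; the interface is repeated here for completeness):

```csharp
interface IModelInput { }

interface IHasModelInput
{
    IModelInput GetModelInput();
}

class Foo : IModelInput { }

class Rectangle : IHasModelInput
{
    // The shape knows which model input belongs to it,
    // so the caller needs no type checks at all.
    public IModelInput GetModelInput() { return new Foo(); }
}
```

Circle and Triangle would do the same with `Bar` and `Bar2`, and the if/else chain disappears.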
|
My opinion: your "inelegant" way is fine. It's simple, readable and does the job.
Having the Rectangle, Circle and Triangle implement the necessary factory function via *IHasModelInput* would work, but it has a design cost: you've now coupled this set of classes with the IModelInput set of classes (Foo, Bar and Bar2). They could be in two completely different libraries, and maybe they shouldn't know about one another.
A more complicated method is below. It gives you the advantage of being able to configure your factory logic at runtime.
```
public static class FactoryMethod<T> where T : IModelInput, new()
{
public static IModelInput Create()
{
return new T();
}
}
delegate IModelInput ModelInputCreateFunction();
IModelInput CreateIModelInput(object item)
{
Dictionary<Type, ModelInputCreateFunction> factory = new Dictionary<Type, ModelInputCreateFunction>();
factory.Add(typeof(Rectangle), FactoryMethod<Foo>.Create);
factory.Add(typeof(Circle), FactoryMethod<Bar>.Create);
// Add more type mappings here
IModelInput modelInput = null;
foreach (Type t in factory.Keys)
{
if ( item.GetType().IsSubclassOf(t) || item.GetType().Equals(t))
{
modelInput = factory[t].Invoke();
break;
}
}
return modelInput;
}
```
But then ask the question: which one would you rather read?
|
Factory based on Typeof or is a
|
[
"",
"c#",
"factory",
""
] |
How can I have a view render a partial (user control) from a different folder?
With preview 3 I used to call RenderUserControl with the complete path, but with the upgrade to preview 5 this is not possible anymore.
Instead we got the RenderPartial method, but it's not offering me the functionality I'm looking for.
|
Just include the path to the view, with the file extension.
Razor:
```
@Html.Partial("~/Views/AnotherFolder/Messages.cshtml", ViewData.Model.Successes)
```
ASP.NET engine:
```
<% Html.RenderPartial("~/Views/AnotherFolder/Messages.ascx", ViewData.Model.Successes); %>
```
If that isn't your issue, could you please include your code that used to work with the RenderUserControl?
|
In my case I was using MvcMailer (https://github.com/smsohan/MvcMailer) and wanted to access a partial view from another folder, that wasn't in "Shared." The above solutions didn't work, but using a relative path did.
```
@Html.Partial("../MyViewFolder/Partials/_PartialView", Model.MyObject)
```
|
Render partial from different folder (not shared)
|
[
"",
"c#",
"asp.net-mvc",
"renderpartial",
""
] |
Please bear with me here, I'm a student and new to Java Server Pages.
If I'm being a complete idiot, can someone give me a good link to a tutorial on JSP, since I've been unable to find info on this anywhere.
Okay, here goes...
I'm using Netbeans and trying to pass an object that connects to a database between the pages, otherwise I'd have to reconnect to the database every time a new page is displayed.
Using Netbeans, you can view each page as "jsp", in "design" view, or view the Java code. In the Java code is the class that extends an AbstractPageBean. The problem is that I'd like to pass parameters, but there is no object representing the class and so I can't just access the instance variables.
Can anyone tell me how to do this?
|
You can put it in a session [JSP tutorial, Sessions](http://www.jsptut.com/Sessions.jsp).
But frankly, you don't put database connections in a session. They're a scarce resource. You'd be better off using some pooling mechanism like in [Tomcat JNDI database pooling example](http://www.informit.com/articles/article.aspx?p=336708&seqNum=5).
I personally would put all that java code in a class and use that class:
java:
```
public class FooRepo {
public static Foo getFoo(Long id) {
// Read resultSet into foo
}
}
```
jsp:
```
Foo foo = FooRepo.getFoo( id as stored in JSP );
// display foo
```
If you start playing with JSP I strongly recommend using a book. Creating a working JSP is very, very easy but creating a readable, maintainable JSP is hard. Use JSPs for the view, and not for the logic.
As for what book; go to a bookstore. I personally like the core java series and the Head First series. The last series is *very* accessible but also thorough.
I understand a book is expensive but investing in a book will help you understand the fundamentals which will help you if you move to struts, spring-mvc, wicket, JSF or whatever other framework you will use in the future.
|
<http://java.sun.com/j2ee/1.4/docs/tutorial/doc/index.html> is a J2EE tutorial with parts of it talking about JSP as well
one more JSP tutorial from sun : <http://java.sun.com/j2ee/tutorial/1_3-fcs/doc/JSPIntro.html>
|
Passing parameters between JSPs
|
[
"",
"java",
"database",
"jsp",
"jsf",
"netbeans",
""
] |
A cross join performs a cartesian product on the tuples of the two sets.
```
SELECT *
FROM Table1
CROSS JOIN Table2
```
Which circumstances render such an SQL operation particularly useful?
|
If you have a "grid" that you want to populate completely, like size and color information for a particular article of clothing:
```
select
size,
color
from
sizes CROSS JOIN colors
```
Maybe you want a table that contains a row for every minute in the day, and you want to use it to verify that a procedure has executed each minute, so you might cross two tables:
```
select
hour,
minute
from
hours CROSS JOIN minutes
```
Or you have a set of standard report specs that you want to apply to every month in the year:
```
select
specId,
month
from
reports CROSS JOIN months
```
The problem with maintaining these as views is that in most cases, you don't want a complete product, particularly with respect to clothes. You can add `MINUS` logic to the query to remove certain combinations that you don't carry, but you might find it easier to populate a table some other way and not use a Cartesian product.
Also, you might end up trying the cross join on tables that have perhaps a few more rows than you thought, or perhaps your `WHERE` clause was partially or completely missing. In that case, your DBA will notify you promptly of the omission. Usually he or she will not be happy.
|
Generate data for testing.
|
What are the uses for Cross Join?
|
[
"",
"sql",
"database",
"join",
"relational-database",
""
] |
You launch a java program from a console (maybe using a .bat script).
I don't want the console to remain visible, I want to hide it.
Is there a simple way to do this ? Without JNI ?
|
Use javaw.
<http://java.sun.com/javase/6/docs/tooldocs/windows/java.html>
> The javaw command is identical to java, except that with javaw there is no associated console window. Use javaw when you don't want a command prompt window to appear. The javaw launcher will, however, display a dialog box with error information if a launch fails for some reason.
|
You can start a java application with `start javaw`. It will hide the black console window.
|
Is there a way to hide the win32 launch console from a Java program (if possible without JNI)
|
[
"",
"java",
"winapi",
"console",
""
] |
So I want to trigger an event (pausing/unpausing some media) whenever the user presses spacebar anywhere in the my Swing app.
Since there are so many controls and panels that could have focus, it's not really possible to add key events to them all (not to mention gross).
So I found
```
KeyboardFocusManager.getCurrentKeyboardFocusManager().addKeyEventDispatcher()
```
which is awesome: you can register global key-press pre-handlers. There's a major problem though: spaces will be typed all the time in input fields, table cells, etc., and I obviously don't want to trigger the pause event then!
So any ideas?
Perhaps there is a way to detect globally whether the cursor is focused on something that allows text input, without having to check against a list of all the editable controls (vomit!)?
|
I think you answered that yourself: yes, I think you can find out the current element that has focus, and if it is an instance of a certain field class, you ignore the space for the purpose of the pause event. If it seems heavy-handed, don't worry: instanceof is VERY fast on the JVM (and in any case you are talking about human-scale events, which are an eon to a processor).
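A sketch of that check (the helper names are mine; `KeyboardFocusManager` and `JTextComponent` are the real AWT/Swing APIs):

```java
import java.awt.Component;
import java.awt.KeyboardFocusManager;
import javax.swing.text.JTextComponent;

public class FocusCheck {
    /** True if the component accepts typed text: JTextComponent covers
        JTextField, JTextArea, JEditorPane, table cell editors, etc. */
    public static boolean isTextInput(Component c) {
        return c instanceof JTextComponent && ((JTextComponent) c).isEditable();
    }

    /** Only toggle pause when the focused component is not a text input. */
    public static boolean shouldHandleSpace() {
        Component owner = KeyboardFocusManager
                .getCurrentKeyboardFocusManager().getFocusOwner();
        return !isTextInput(owner);
    }
}
```

Call `shouldHandleSpace()` inside your `KeyEventDispatcher` before firing the pause event.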
|
I'm rusty on my Swing, but I think you should try registering a global listener using [Toolkit.addAWTEventListener](http://java.sun.com/javase/6/docs/api/java/awt/Toolkit.html#addAWTEventListener(java.awt.event.AWTEventListener,%20long)) with a `KEY_EVENT_MASK`. You can then filter the AWTEvent processing based on its type and source.
|
capturing global keypresses in Java
|
[
"",
"java",
"global",
"keypress",
""
] |
I'm trying to start using LINQ and specifically LINQ to SQL but I'm having some difficulties
I've tried this with SqlMetal, and now with the database table designer in Visual Studio, and I keep getting similar errors, like in this code using the data context I created with the database layout designer in VS2008.
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
public partial class _Default : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
if (Page.IsPostBack)
{
string usn = UserNameBox.Text;
string pss = PassBox.Text;
if (usn == "" || pss == "")
return;
DataClassesDataContext dc = new DataClassesDataContext();
var user = from u in User
where u.UserName == usn
select u;
}
}
}
```
I get an error on the where saying: Could not find an implementation of the query pattern for source type 'System.Security.Principal.IPrincipal'. And also: 'Where' not found.
I had something similar to this when I tried to use the results of SqlMetal. I deleted that source and started over using the designer. I must be missing something here but I can't figure out what. Shouldn't the tables implement what I need since I'm using LINQ to SQL, or do I need to do something extra to make that happen?
|
Try
```
var user = from u in dc.User
where u.UserName == usn
select u;
```
|
Try changing `User` to `dc.User`:
```
var user = from u in dc.User
where u.UserName == usn
select u;
```
The `User` is the property [System.Web.UI.Page.User](http://msdn.microsoft.com/en-us/library/system.web.ui.page.user.aspx).
|
LINQ to SQL error message: 'Where' not found
|
[
"",
"c#",
"linq",
"linq-to-sql",
""
] |
How to calculate the length (in pixels) of a string in Java?
Preferably without using Swing.
EDIT:
I would like to draw the string using the drawString() in Java2D
and use the length for word wrapping.
|
If you just want to use AWT, then use [`Graphics.getFontMetrics`](http://docs.oracle.com/javase/6/docs/api/java/awt/Graphics.html#getFontMetrics()) (optionally specifying the font, for a non-default one) to get a `FontMetrics` and then [`FontMetrics.stringWidth`](http://docs.oracle.com/javase/6/docs/api/java/awt/FontMetrics.html#stringWidth(java.lang.String)) to find the width for the specified string.
For example, if you have a `Graphics` variable called `g`, you'd use:
```
int width = g.getFontMetrics().stringWidth(text);
```
For other toolkits, you'll need to give us more information - it's always going to be toolkit-dependent.
|
It doesn't always need to be toolkit-dependent, and one doesn't always need to use the FontMetrics approach, since that requires first obtaining a graphics object, which is absent in a web container or in a headless environment.
I have tested this in a web servlet and it does calculate the text width.
```
import java.awt.Font;
import java.awt.font.FontRenderContext;
import java.awt.geom.AffineTransform;
...
String text = "Hello World";
AffineTransform affinetransform = new AffineTransform();
FontRenderContext frc = new FontRenderContext(affinetransform,true,true);
Font font = new Font("Tahoma", Font.PLAIN, 12);
int textwidth = (int)(font.getStringBounds(text, frc).getWidth());
int textheight = (int)(font.getStringBounds(text, frc).getHeight());
```
---
Add the necessary values to these dimensions to create any required margin.
|
Calculate the display width of a string in Java
|
[
"",
"java",
"string-length",
""
] |
In my Java development I have had great benefit from the [Jad/JadClipse](http://en.wikipedia.org/wiki/JAD_%28JAva_Decompiler%29) decompiler. It made it possible to *know* why a third-party library failed rather than the usual guesswork.
I am looking for a similar setup for C# and Visual Studio. That is, a setup where I can point to any class or variable in my code and get a code view for that particular class.
What is the best setup for this? I want to be able to use the usual "jump to declaration/implementation" that I use to navigate my own code. It doesn't *have* to be free, but it would be a bonus if it was.
It should support Visual Studio 2008 or Visual Studio 2005 and .NET 2 and 3(.5).
|
Here is a good article about [Reflector and how to integrate Reflector into Visual Studio](http://en.csharp-online.net/Visual_Studio_Hacks%E2%80%94Hack_64:_Examine_the_Innards_of_Assemblies).
> Of particular interest is the Reflector.VisualStudio Add-In. This
> add-in, created by Jaime Cansdale, allows for Reflector to be hosted
> within Visual Studio. With this add-in, you can have Reflector
> integrated within the Visual Studio environment.
>
> To get
> started, you will need to have the latest version of Reflector on your
> machine. Once you have downloaded Reflector, download the latest
> version of the Reflector.VisualStudio Add-In from
> <http://www.testdriven.NET/reflector>. The download contains a number of
> files that need to be placed in the same directory as Reflector.exe.
> To install the add-in, drop to the command line and run:
>
> ```
> Reflector.VisualStudio.exe /install
> ```
>
> After the add-in has been installed, you can start using Reflector from Visual Studio. You’ll notice a new menu item, Addins, which has a
> menu option titled Reflector. This option, when selected, displays the
> Reflector window, which can be docked in the IDE. Additionally, the
> add-in provides context menu support.
>
> When you right-click
> in an open code file in Visual Studio, you’ll see a Reflector menu
> item that expands into a submenu with options to disassemble the code
> into C# or Visual Basic, display the call graph or callee graph, and
> other related choices. The context menu also includes a Synchronize
> with Reflector menu item that, when clicked, syncs the object browser
> tree in the Reflector window with the current code file.
|
Try the open-source software <http://ilspy.net/>
|
Best (free?) decompiler for C# with Visual Studio integration?
|
[
"",
"c#",
"visual-studio",
"decompiling",
""
] |
I'm not sure why I'm getting this error, but shouldn't this code compile, since I'm already checking to see if queue is getting initialized?
```
public static void Main(String[] args)
{
Byte maxSize;
Queue queue;
if(args.Length != 0)
{
if(Byte.TryParse(args[0], out maxSize))
queue = new Queue(){MaxSize = maxSize};
else
Environment.Exit(0);
}
else
{
Environment.Exit(0);
}
for(Byte j = 0; j < queue.MaxSize; j++)
queue.Insert(j);
for(Byte j = 0; j < queue.MaxSize; j++)
Console.WriteLine(queue.Remove());
}
```
So if queue is not initialized, then the for loops aren't reachable, right? Since the program already terminates with Environment.Exit(0)?
Hope ya'll can give me some pointers :)
Thanks.
|
The compiler doesn't know that the Environment.Exit() is going to terminate the program; it just sees you executing a static method on a class. Just initialize `queue` to null when you declare it.
```
Queue queue = null;
```
|
The compiler doesn't know that Environment.Exit() does not return. Why not just "return" from Main()?
|
C# error: Use of unassigned local variable
|
[
"",
"c#",
"initialization",
""
] |
What is the right way to perform some static finallization?
There is no static destructor. The `AppDomain.DomainUnload` event is not raised in the default domain. The `AppDomain.ProcessExit` event shares the total time of the three seconds (default settings) between all event handlers, so it's not really usable.
|
Basically, you can't. Design your way around it to the fullest extent possible.
Don't forget that a program can *always* terminate abruptly anyway - someone pulling out the power being the obvious example. So anything you do has to be "best effort" - in which case I'd certainly *hope* that `AppDomain.ProcessExit` would be good enough.
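A best-effort sketch of registering such a handler (names are mine); keep the work well under the shared time budget:

```csharp
using System;

static class Cleanup
{
    public static bool Ran;   // visible for illustration only

    public static void Register()
    {
        AppDomain.CurrentDomain.ProcessExit += delegate
        {
            // Keep this short: all ProcessExit handlers share
            // roughly three seconds by default.
            Ran = true;   // e.g. flush a log file here
        };
    }
}
```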
What do you need to do, in your particular case?
|
> Herfried Wagner has written an [excellent article](http://dotnet.mvps.org/dotnet/articles/sharedfinalizer/) explaining how to implement this – alas, in German (and VB). Still, the code should be understandable.
I've tried it:
```
static readonly Finalizer finalizer = new Finalizer();
sealed class Finalizer {
~Finalizer() {
Thread.Sleep(1000);
Console.WriteLine("one");
Thread.Sleep(1000);
Console.WriteLine("two");
Thread.Sleep(1000);
Console.WriteLine("three");
Thread.Sleep(1000);
Console.WriteLine("four");
Thread.Sleep(1000);
Console.WriteLine("five");
}
}
```
It seems to work exactly the same way as the `AppDomain.ProcessExit` event does: the finalizer gets ca. three seconds...
|
Static Finalizer
|
[
"",
"c#",
".net",
"static",
"destructor",
"finalizer",
""
] |
For work I have to code with an external company's API to deal with their proprietary database solution. Unfortunately the documentation they provide is more of an example guide than proper API docs, so it is very light on nitty-gritty details like error codes, method returns, and exceptions.
So for instance, a class will have a .GetErrorCode() method, but I have no idea what those error numbers mean because they didn't document what number matches up with what error. In many cases a method will return an Object, with no documentation of what type of Object it actually returns. I have asked them repeatedly for proper documentation, but they seem to think details like the ones above are proprietary secrets. So, are there any tools or methods I can use to work around the limited, or in some cases non-existent, documentation?
Please note that I am using Visual Studio 2005 and coding in C# under .NET.
And before anyone answers, "don't use the API", I have to, it is for work.
|
A nasty scenario. I hate to suggest it, but maybe [reflector](http://www.red-gate.com/products/reflector/) is your friend if it isn't obfuscated. There may be some IP issues, but in this case reversing it seems the only viable way of finding out what the API is. However, I suspect (from methods like .GetErrorCode()) that this is a shim on top of P/Invoke - in which case even reflector will just show you lots of external calls...
The main other thing I can say is: write *lots* of unit tests that cover how you are trying to use it... that way if you guess wrong and something changes, you'll know early.
|
If I can't get code samples or talk to an original developer, I usually resort to [Reflector](http://www.red-gate.com/products/reflector/) to look at the underlying code. It's slow and inefficient, but sometimes that's all you can do.
|
How to deal with an Undocumented API/Framework under .NET?
|
[
"",
"c#",
".net",
"api",
"documentation",
"frameworks",
""
] |
I'm sure this problem has been solved before and I'm curious how it's done. I have a program that, when run, should scan the contents of a directory and load in functionality.
Specifically, I am working with a scripting engine that I want to be able to add function calls to. I want the core engine to provide very limited functionality. The user should be able to add additional functions through 3rd party libraries, which I want the engine to scan for and load. How is this done?
|
It depends on the platform. On win32, you call `LoadLibrary` to load a DLL, then get functions from it with `GetProcAddress`. On Unixy platforms, the equivalents are `dlopen` and `dlsym`.
|
You can use the POSIX dlopen/dlsym/dlerror/dlclose functions in Linux/UNIX to dynamically open shared libraries and access the symbols (including functions) they provide, see the [man page](http://linux.die.net/man/3/dlopen) for details.
|
Dynamically Loading External Modules in a C Program?
|
[
"",
"c++",
"c",
"dynamic",
"modularity",
""
] |
For example, I'm trying to convert 2008-09-26T01:51:42.000Z to 09/26/2008. What's the simplest way of accomplishing this?
|
The easiest way is to use [dateutil](http://labix.org/python-dateutil).parser.parse() to parse the date string into a timezone aware datetime object, then use strftime() to get the format you want.
```
import dateutil.parser
d = dateutil.parser.parse('2008-09-26T01:51:42.000Z')
print(d.strftime('%m/%d/%Y')) #==> '09/26/2008'
```
|
I know this is really old question, but you can do it with python datetime.strptime()
```
>>> from datetime import datetime
>>> date_format = "%Y-%m-%dT%H:%M:%S.%fZ"
>>> datetime.strptime('2008-09-26T01:51:42.000Z', date_format)
datetime.datetime(2008, 9, 26, 1, 51, 42)
```
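For completeness, the conversion also works with only the standard library and no trailing-"Z" surgery, assuming Python 3.7+ where the `%z` directive accepts a literal `Z`:

```python
# Stdlib-only variant -- no dateutil needed. Assumes Python 3.7+,
# where strptime's %z directive accepts a literal "Z" for UTC.
from datetime import datetime

def to_mdy(stamp):
    d = datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%S.%f%z")
    return d.strftime("%m/%d/%Y")

print(to_mdy("2008-09-26T01:51:42.000Z"))  # 09/26/2008
```

Unlike the plain strptime example above, this also keeps the timezone information on the resulting datetime object.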
|
How do you convert YYYY-MM-DDTHH:mm:ss.000Z time format to MM/DD/YYYY time format in Python?
|
[
"",
"python",
"datetime",
""
] |
To preface I am using Borland C++ and the VCL.
I need some sort of structured storage object which can be saved to disk as a single file and can contain multiple named blobs of binary data which I can programatically enumerate, access and manipulate.
The [IStorage](http://msdn.microsoft.com/en-us/library/aa380015(VS.85).aspx) interface seems to be close to what I want, but I would prefer a VCL-style solution, as converting TStreams (VCL) to IStreams among other things gets awkward, and there must be a simpler solution using the VCL stuff already available, like a TIniFile but for binary data.
I don't want to use XML, as the binary data blobs are very large and when correctly encoded for use in XML it's completely inefficient.
Thanks in advance for all suggestions.
|
A zip file is de facto a standard container, and it seems you can get a TStream interface to them: <http://www.tek-tips.com/faqs.cfm?fid=6734>
|
SolFS (Solid File System) from Eldos.
<http://www.eldos.com/solfs/>
Very reliable, but might not be the cheapest solution ($372 for one developer).
|
VCL alternative to IStorage
|
[
"",
"c++",
"vcl",
"istorage",
"structured-storage",
""
] |
I'm refactoring a number of classes in an application to use interfaces instead of base classes. Here's the interfaces I created so far:
* ICarryable implemented by all Item objects
* IActable implemented by all Actor objects
* IUseable implemented by some Item sub-classes
* IWieldable implemented by some Item sub-classes
You can see the major base-classes are still Item and Actor. These have a common interface in that they both are located on a Map, so they have a Location property. The Map shouldn't care whether the object is an Actor or an Item, so I want to create an interface for it. Here's what the interface would look like
```
public interface IUnnameable {
event EventHandler<LocationChangedEventArgs> LocationChanged;
Location Location { get; set; }
}
```
That's no problem, but I can't think of what to call this interface. IMappable comes to mind but seems a bit lame. Any ideas?
|
Sounds like an ILocateable. Something whose location you can discover and track.
|
* ILocatable
* IGeo
* IAmSomewhere
* IIsSomewhere
Edit:
* INoun
|
Need help choosing a name for an interface
|
[
"",
"c#",
".net",
"naming",
""
] |
**EDIT: I missed a crucial point: .NET 2.0**
Consider the case where I have a list of unsorted items, for the sake of simplicity of a type like this:
```
class TestClass
{
DateTime SomeTime;
decimal SomePrice;
// constructor
}
```
I need to create a report-like output, where the total prices for each day are accumulated. There should be one line for each item, followed by the appropriate summary lines.
Take this test data:
```
List<TestClass> testList = new List<TestClass> {
new TestClass(new DateTime(2008,01,01), 12),
new TestClass(new DateTime(2007,01,01), 20),
new TestClass(new DateTime(2008,01,01), 18)
};
```
The desired output would be something like this:
```
2007-01-01:
20
Total: 20
2008-01-01:
12
18
Total: 30
```
What's the best way to approach such scenarios? In the case of such a list, I would implement the IComparable interface for TestClass, so that the list can be sorted.
To create the report itself, something like this could be used (let's assume that we have methods for tasks like accumulating the prices, keeping track of the current date etc):
```
for (int i=0;i<testList.Count;i++)
{
if (IsNewDate(testList[i]))
{
CreateSummaryLine();
ResetValuesForNewDate();
}
AddValues(testList[i]);
}
// a final summary line is needed to include the data for the last couple of items.
CreateSummaryLine();
```
This works alright, but I have a strange feeling as far as the second "CreateSummaryLine()" is concerned.
In what ways do you handle such situations (especially considering the fact that we need to work with a List<> of items rather than a pre-categorized Dictionary or something like that)?
|
[edit] Since you are using .NET 2.0 with C# 3.0, you can use [LINQBridge](http://www.albahari.com/nutshell/linqbridge.aspx) to enable this.
LINQ; something like:
```
var groups = from row in testList
group row by row.SomeTime;
foreach (var group in groups.OrderBy(group => group.Key))
{
Console.WriteLine(group.Key);
foreach(var item in group.OrderBy(item => item.SomePrice))
{
Console.WriteLine(item.SomePrice);
}
Console.WriteLine("Total" + group.Sum(x=>x.SomePrice));
}
```
|
Okay, so if you can't use LINQ:
(I'm using var to save space, but it's easy to translate to C# 2.0 if necessary...)
```
var grouped = new SortedDictionary<DateTime, List<TestClass>>();
foreach (TestClass entry in testList) {
DateTime date = entry.SomeTime.Date;
if (!grouped.ContainsKey(date)) {
grouped[date] = new List<TestClass>();
}
grouped[date].Add(entry);
}
foreach (KeyValuePair<DateTime, List<TestClass>> pair in grouped) {
Console.WriteLine("{0}: ", pair.Key);
Console.WriteLine(BuildSummaryLine(pair.Value));
}
```
|
Iterating through list and creating summary lines on the fly
|
[
"",
"c#",
"loops",
""
] |
There are **two different** ways to create an empty object in JavaScript:
```
var objectA = {}
var objectB = new Object()
```
Is there any difference in how the script engine handles them? Is there any reason to use one over the other?
Similarly it is also possible to create an empty array using different syntax:
```
var arrayA = []
var arrayB = new Array()
```
|
## Objects
There is no benefit to using `new Object()`, whereas `{}` can make your code more compact, and more readable.
For defining empty objects they're technically the same. The `{}` syntax is shorter, neater (less Java-ish), and allows you to instantly populate the object inline - like so:
```
var myObject = {
title: 'Frog',
url: '/img/picture.jpg',
width: 300,
height: 200
};
```
## Arrays
For arrays, there's similarly almost no benefit to ever using `new Array()` over `[]` — with one minor exception:
```
var emptyArray = new Array(100);
```
creates a 100 item long array with all slots containing `undefined`, which may be nice/useful in certain situations (such as `(new Array(9)).join('Na-Na ') + 'Batman!'`).
## My recommendation
1. Never use `new Object();` — it's clunkier than `{}` and looks silly.
2. Always use `[]` — except when you need to quickly create an "empty" array with a predefined length.
|
Yes, There is a difference, they're not the same. It's true that you'll get the same results but the engine works in a different way for both of them. One of them is an object literal, and the other one is a constructor, two different ways of creating an object in javascript.
```
var objectA = {} //This is an object literal
var objectB = new Object() //This is the object constructor
```
In JS everything is an object, but you should be aware of the following thing with new Object(): it can receive a parameter, and depending on that parameter, it will create a string, a number, or just an empty object.
For example: `new Object(1)` will return a Number, and `new Object("hello")` will return a String. This means that the object constructor can delegate, depending on the parameter, the object creation to other constructors like String, Number, etc. It's highly important to keep this in mind when you're managing dynamic data to create objects.
Many authors recommend not to use the object constructor when you can use a certain literal notation instead, where you will be sure that what you're creating is what you're expecting to have in your code.
I suggest further reading on the differences between literal notation and constructors in JavaScript to find more details.
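A quick demonstration of that delegation, runnable in any modern engine:

```javascript
// The Object constructor delegates based on its argument, while the
// literal always gives a plain object.
const plain = {};                 // plain empty object
const num   = new Object(1);      // Number wrapper object
const str   = new Object("hi");   // String wrapper object

console.log(num instanceof Number);   // true
console.log(str instanceof String);   // true
console.log(typeof plain, Object.keys(plain).length);  // "object" 0
```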
|
Create an empty object in JavaScript with {} or new Object()?
|
[
"",
"javascript",
"arrays",
"object",
"javascript-objects",
"new-operator",
""
] |
I want to have a map that has a homogeneous key type but heterogeneous data types.
I want to be able to do something like (pseudo-code):
```
boost::map<std::string, magic_goes_here> m;
m.add<int>("a", 2);
m.add<std::string>("b", "black sheep");
int i = m.get<int>("a");
int j = m.get<int>("b"); // error!
```
I could have a pointer to a base class as the data type but would rather not.
I've never used boost before but have looked at the fusion library but can't figure out what I need to do.
Thanks for your help.
|
```
#include <map>
#include <string>
#include <iostream>
#include <boost/any.hpp>
int main()
{
try
{
std::map<std::string, boost::any> m;
m["a"] = 2;
m["b"] = static_cast<char const *>("black sheep");
int i = boost::any_cast<int>(m["a"]);
std::cout << "I(" << i << ")\n";
int j = boost::any_cast<int>(m["b"]); // throws exception
std::cout << "J(" << j << ")\n";
}
catch(...)
{
std::cout << "Exception\n";
}
}
```
|
[How can I build a <favorite container> of objects of different types?](http://www.parashift.com/c++-faq/heterogeneous-list.html)
> You can't, but you can fake it pretty well. In C/C++ all arrays are homogeneous (i.e., the elements are all the same type). However, with an extra layer of indirection you can give the appearance of a heterogeneous container (a heterogeneous container is a container where the contained objects are of different types).
>
> There are two cases with heterogeneous containers.
>
> The first case occurs when all objects you want to store in a container are publicly derived from a common base class. [...]
>
> The second case occurs when the object types are disjoint — they do not share a common base class.
> The approach here is to use a handle class. The container is a container of handle objects (by value or by pointer, your choice; by value is easier). Each handle object knows how to "hold on to" (i.e., maintain a pointer to) one of the objects you want to put in the container. You can use either a single handle class with several different types of pointers as instance data, or a hierarchy of handle classes that shadow the various types you wish to contain (requires the container be of handle base class pointers). The downside of this approach is that it opens up the handle class(es) to maintenance every time you change the set of types that can be contained. The benefit is that you can use the handle class(es) to encapsulate most of the ugliness of memory management and object lifetime. Thus using handle objects may be beneficial even in the first case.
|
how do you make a heterogeneous boost::map?
|
[
"",
"c++",
"boost",
"dictionary",
""
] |
A Java version of this question was just answered, and, well, I don't know how to do this in .net.
So how do you calculate the display width of a string in C# / .net?
|
You've got the same problem in this question as was present in the Java question - not enough information! It will differ between WinForms and WPF.
For WinForms: [Graphics.MeasureString](http://msdn.microsoft.com/en-us/library/6xe5hazb.aspx)
For WPF I'm not sure, but I suspect it will depend on the exact way you're drawing the text...
|
An alternative for Windows Forms is the static TextRenderer.MeasureText method.
Although restricted to integer sizes, this (in tandem with TextRenderer.DrawText) renders more accurate and much higher quality ClearType text than the Graphics.MeasureString/DrawString duo.
|
Calculate the display width of a string in C#?
|
[
"",
"c#",
".net",
""
] |
Is there any Ruby equivalent for Python's builtin `zip` function? If not, what is a concise way of doing the same thing?
A bit of context: this came up when I was trying to find a clean way of doing a check involving two arrays. If I had `zip`, I could have written something like:
```
zip(a, b).all? {|pair| pair[0] === pair[1]}
```
I'd also accept a clean way of doing this without anything resembling `zip` (where "clean" means "without an explicit loop").
|
Ruby has a zip function:
```
[1,2].zip([3,4]) => [[1,3],[2,4]]
```
so your code example is actually:
```
a.zip(b).all? {|pair| pair[0] === pair[1]}
```
or perhaps more succinctly:
```
a.zip(b).all? {|a,b| a === b }
```
|
Could you not do:
```
a.eql?(b)
```
Edited to add an example:
```
a = %w[a b c]
b = %w[1 2 3]
c = ['a', 'b', 'c']
a.eql?(b) # => false
a.eql?(c) # => true
a.eql?(c.reverse) # => false
```
|
What is a Ruby equivalent for Python's "zip" builtin?
|
[
"",
"python",
"ruby",
"translation",
""
] |
Is it better to do a joined query like this:
```
var employer = (from person in db.People
join employer in db.Employers
on person.EmployerID equals employer.EmployerID
where person.PersonID == idPerson
select employer).FirstOrDefault();
```
Or is it just as good to do the easy thing and do this (with null checks):
```
var employer = (from person in db.People
where person.PersonID == idPerson
select person).FirstOrDefault().Employer;
```
Obviously, in this one I would actually have to do it in 2 statements to get in the null check.
Is there any sort of best practice here for either readability or performance issues?
|
I'd use this:
```
var employer = (from person in db.People
where person.PersonID == idPerson
select person.Employer).FirstOrDefault();
```
It's got the simplicity of the first version but still only fetches the data for the employer (rather than the person *and* the employer).
|
The second one could evaluate to null which would result in an error.
I like the first one better because if it is null then you can deal with it without an exception being thrown.
|
Query with a join or use LINQ magic?
|
[
"",
"c#",
".net",
".net-3.5",
""
] |
I am writing an application which blocks on input from two `istreams`.
Reading from either `istream` is a synchronous (blocking) call, so, I decided to create two `Boost::thread`s to do the reading.
Either one of these threads can get to the "end" (based on some input received), and once the "end" is reached, both input streams stop receiving. Unfortunately, I cannot know which will do so.
Thus, I cannot `join()` on both threads, because only one thread (cannot be predetermined which one) will actually return (unblock).
I must somehow force the other to exit, but it is blocked waiting for input, so it cannot itself decide it is time to return (condition variables or what not).
Is there a way to either:
* Send a signal a boost::thread, or
* Force an `istream` to "fail", or
* Kill a Boost::thread?
Note:
* One of the `istreams` is `cin`
* I am trying to restart the process, so I cannot close the input streams in a way that prohibits reseting them.
Edit:
* I do know when the "end" is reached, and I do know which thread has successfully finished, and which needs to be killed. It's the killing I need to figure out (or a different strategy for reading from an istream).
* I need both threads to exit and cleanup properly :(
Thanks!
|
I don't think there is a way to do it cross platform, but pthread\_cancel should be what you are looking for. With a boost thread you can get the [native\_handle](http://www.boost.org/doc/libs/1_37_0/doc/html/thread/thread_management.html#thread.thread_management.thread.nativehandle) from a thread, and call pthread\_cancel on it.
In addition a better way might be to use the boost [asio](http://www.boost.org/doc/libs/1_36_0/doc/html/boost_asio.html) equivalent of a select call on multiple files. That way one thread will be blocked waiting for the input, but it could come from either input stream. I don't know how easy it is to do something like this with iostreams though.
|
Yes there is!
`boost::thread::interrupt()` will do the job to your specifications.
It will cause the targeted thread to throw an exception. Assuming it's uncaught, the stack will unwind properly destroying all resources and terminating thread execution.
The termination isn't instant. (The wrong thread is running at that moment, anyway.)
It happens under predefined conditions - the most convenient for you would probably be when calling `boost::this_thread::sleep();`, which you could have that thread do periodically.
|
Kill a blocked Boost::Thread
|
[
"",
"c++",
"iostream",
"boost-thread",
""
] |
In developing search for a site I am building, I decided to go the cheap and quick way and use Microsoft Sql Server's Full Text Search engine instead of something more robust like Lucene.Net.
One of the features I would like to have, though, is google-esque relevant document snippets. I quickly found determining "relevant" snippets is more difficult than I realized.
I want to choose snippets based on search term density in the found text. So, essentially, I need to find the most search term dense passage in the text. Where a passage is some arbitrary number of characters (say 200 -- but it really doesn't matter).
My first thought is to use .IndexOf() in a loop and build an array of term distances (subtract the index of the found term from the previously found term), then ... what? Add up any two, any three, any four, any five, sequential array elements and use the one with the smallest sum (hence, the smallest distance between search terms).
That seems messy.
Is there an established, better, or more obvious way to do this than what I have come up with?
|
Although it is implemented in Java, you can see one approach for that problem here:
<http://rcrezende.blogspot.com/2010/08/smallest-relevant-text-snippet-for.html>
|
I know this thread is way old, but I gave this a try last week and it was a pain in the back side. This is far from perfect, but this is what I came up with.
The snippet generator:
```
public static string SelectKeywordSnippets(string StringToSnip, string[] Keywords, int SnippetLength)
{
string snippedString = "";
List<int> keywordLocations = new List<int>();
//Get the locations of all keywords
for (int i = 0; i < Keywords.Count(); i++)
keywordLocations.AddRange(SharedTools.IndexOfAll(StringToSnip, Keywords[i], StringComparison.CurrentCultureIgnoreCase));
//Sort locations
keywordLocations.Sort();
//Remove locations which are closer to each other than the SnippetLength
if (keywordLocations.Count > 1)
{
bool found = true;
while (found)
{
found = false;
for (int i = keywordLocations.Count - 1; i > 0; i--)
if (keywordLocations[i] - keywordLocations[i - 1] < SnippetLength / 2)
{
keywordLocations[i - 1] = (keywordLocations[i] + keywordLocations[i - 1]) / 2;
keywordLocations.RemoveAt(i);
found = true;
}
}
}
//Make the snippets
if (keywordLocations.Count > 0 && keywordLocations[0] - SnippetLength / 2 > 0)
snippedString = "... ";
foreach (int i in keywordLocations)
{
int stringStart = Math.Max(0, i - SnippetLength / 2);
int stringEnd = Math.Min(i + SnippetLength / 2, StringToSnip.Length);
int stringLength = Math.Min(stringEnd - stringStart, StringToSnip.Length - stringStart);
snippedString += StringToSnip.Substring(stringStart, stringLength);
if (stringEnd < StringToSnip.Length) snippedString += " ... ";
if (snippedString.Length > 200) break;
}
return snippedString;
}
```
The function which will find the index of all keywords in the sample text
```
private static List<int> IndexOfAll(string haystack, string needle, StringComparison Comparison)
{
int pos;
int offset = 0;
int length = needle.Length;
List<int> positions = new List<int>();
while ((pos = haystack.IndexOf(needle, offset, Comparison)) != -1)
{
positions.Add(pos);
offset = pos + length;
}
return positions;
}
```
It's a bit clumsy in its execution. The way it works is by finding the position of all keywords in the string. Then checking that no keywords are closer to each other than the desired snippet length, so that snippets won't overlap (that's where it's a bit iffy...). And then grabs substrings of the desired length centered around the position of the keywords and stitches the whole thing together.
I know this is years late, but posting just in case it might help somebody coming across this question.
|
C# Finding relevant document snippets for search result display
|
[
"",
"c#",
"algorithm",
"search",
"relevance",
"significance",
""
] |
I'm writing a custom blog engine and would like to have trackbacks similar to Wordpress. I could look at the Wordpress source, but I'd really prefer a tutorial of some sort and so far I haven't been able to find one. Are there any good tutorials for implementing trackbacks or pingbacks in PHP5?
|
Trackbacks are fine, but they're very prone to spam, since there's no verification of their origin. You use a simple discovery method to find the trackback entrypoint; look for RDF in the target site's source. Then it's simply a RESTful POST request to the destination site's trackback entrypoint passing the requisite trackback information. The specification linked by Sebastian Hörl should give you plenty of information for that.
Pingbacks are a bit more complex, because they require both XMLRPC and a linkback. So while you can just use REST calls to send anonymous trackbacks to any site, sites that you pingback will look at your site to determine whether you're actually linking to the pinged page.
Also, implementing Pingback on the receiving end is a little more complicated, because you have to verify the ping. I suppose you don't have to, but it would be foolish not to, otherwise it would be susceptible to the same spamming techniques as Trackbacks.
The [Pingback specification](http://hixie.ch/specs/pingback/pingback-1.0) has all the information you need to get that implementation off the ground.
Another recommendation is rather than wade through the inevitably dense and spaghetti-ish WordPress Pingback code, have a look at [Habari](http://habariproject.org/)'s self-contained [Pingback implementation](https://trac.habariproject.org/habari/browser/trunk/htdocs/system/plugins/pingback/pingback.plugin.php), which isolates some of the key features of the task into more easily digestible units.
|
Implementing trackbacks isn't that hard at all. [Here](http://www.sixapart.com/pronet/docs/trackback_spec) you can find the official specification and an example at the bottom.
|
Trackbacks in PHP
|
[
"",
"php",
"trackback",
"pingback",
""
] |
Any python libs for parsing Bind zone files?
Basically something that will aid in adding/removing zones and records.
This needs to work even if someone modifies the zone file by hand so overwriting the zone files every time is not a solution.
|
I was unable to use bicop for classical zone files like these:
```
$TTL 86400
@ IN SOA ns1.first-ns.de. postmaster.robot.first-ns.de. (
2006040800 ; serial
14400 ; refresh
1800 ; retry
604800 ; expire
86400 ) ; minimum
@
IN NS ns1.first-ns.de.
```
I will have a look at [dnspython](http://www.dnspython.org/)
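Part of what trips simple parsers up on files like the one above is that `;` comments must be stripped and parenthesized records (like that SOA) folded into one logical line before tokenizing. A toy illustration of just that pre-processing step; a real tool such as dnspython handles this properly, plus $-directives, quoted strings, and much more:

```python
# Toy pre-processing pass for classical zone files: strip ';' comments and
# fold multi-line parenthesized records into single logical lines. Only an
# illustration -- it ignores $ directives inside records, quoting, etc.
def logical_lines(text):
    records, buf, depth = [], [], 0
    for line in text.splitlines():
        line = line.split(";", 1)[0]              # drop comments
        depth += line.count("(") - line.count(")")
        buf.append(line.replace("(", " ").replace(")", " "))
        if depth == 0:                            # record is complete
            tokens = " ".join(buf).split()
            if tokens:
                records.append(tokens)
            buf = []
    return records

zone = """$TTL 86400
@ IN SOA ns1.first-ns.de. postmaster.robot.first-ns.de. (
2006040800 ; serial
14400 ; refresh
1800 ; retry
604800 ; expire
86400 ) ; minimum
"""
for rec in logical_lines(zone):
    print(rec)
```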
|
[easyzone](http://pypi.python.org/pypi/easyzone) is a nice layer over dnspython
[Zoner](http://pypi.python.org/pypi/zoner/1.4.1) provides a web-interface for editing zone files and makes use of easyzone.
|
Any python libs for parsing Bind zone files?
|
[
"",
"python",
"bind",
""
] |
I have a method that creates a MessageDigest (a hash) from a file, and I need to do this to a lot of files (>= 100,000). How big should I make the buffer used to read from the files to maximize performance?
Most everyone is familiar with the basic code (which I'll repeat here just in case):
```
MessageDigest md = MessageDigest.getInstance( "SHA" );
FileInputStream ios = new FileInputStream( "myfile.bmp" );
byte[] buffer = new byte[4 * 1024]; // what should this value be?
int read = 0;
while( ( read = ios.read( buffer ) ) > 0 )
md.update( buffer, 0, read );
ios.close();
md.digest();
```
What is the ideal size of the buffer to maximize throughput? I know this is system dependent, and I'm pretty sure it's OS, file system, *and* HDD dependent, and there may be other hardware/software in the mix.
(I should point out that I'm somewhat new to Java, so this may just be some Java API call I don't know about.)
**Edit:** I do not know ahead of time the kinds of systems this will be used on, so I can't assume a whole lot. (I'm using Java for that reason.)
**Edit:** The code above is missing things like try..catch to make the post smaller
|
Optimum buffer size is related to a number of things: file system block size, CPU cache size and cache latency.
Most file systems are configured to use block sizes of 4096 or 8192. In theory, if you configure your buffer size so you are reading a few bytes more than the disk block, the operations with the file system can be extremely inefficient (i.e. if you configured your buffer to read 4100 bytes at a time, each read would require 2 block reads by the file system). If the blocks are already in cache, then you wind up paying the price of RAM -> L3/L2 cache latency. If you are unlucky and the blocks are not in cache yet, then you pay the price of the disk->RAM latency as well.
This is why you see most buffers sized as a power of 2, and generally larger than (or equal to) the disk block size. This means that one of your stream reads could result in multiple disk block reads - but those reads will always use a full block - no wasted reads.
Now, this is offset quite a bit in a typical streaming scenario because the block that is read from disk is going to still be in memory when you hit the next read (we are doing sequential reads here, after all) - so you wind up paying the RAM -> L3/L2 cache latency price on the next read, but not the disk->RAM latency. In terms of order of magnitude, disk->RAM latency is so slow that it pretty much swamps any other latency you might be dealing with.
So, I suspect that if you ran a test with different cache sizes (haven't done this myself), you will probably find a big impact of cache size up to the size of the file system block. Above that, I suspect that things would level out pretty quickly.
There are a *ton* of conditions and exceptions here - the complexities of the system are actually quite staggering (just getting a handle on L3 -> L2 cache transfers is mind bogglingly complex, and it changes with every CPU type).
This leads to the 'real world' answer: If your app is like 99% out there, set the cache size to 8192 and move on (even better, choose encapsulation over performance and use BufferedInputStream to hide the details). If you are in the 1% of apps that are highly dependent on disk throughput, craft your implementation so you can swap out different disk interaction strategies, and provide the knobs and dials to allow your users to test and optimize (or come up with some self optimizing system).
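A sketch of that last recommendation applied to the question's digest loop, assuming Java 7+ for try-with-resources (the original snippet omits the try/finally it would need): an 8192-byte BufferedInputStream hides the block-size tuning behind the stream API.

```java
// Sketch of the "set the buffer to 8192 and move on" recommendation.
// "SHA" resolves to SHA-1, whose digest is always 20 bytes.
import java.io.*;
import java.security.MessageDigest;

public class DigestDemo {
    static byte[] sha(File file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA");
        try (InputStream in =
                 new BufferedInputStream(new FileInputStream(file), 8192)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) > 0) {
                md.update(buffer, 0, read);
            }
        }
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        File tmp = File.createTempFile("digest", ".bin");
        try (OutputStream out = new FileOutputStream(tmp)) {
            out.write(new byte[] { 1, 2, 3 });
        }
        System.out.println(sha(tmp).length);  // 20
        tmp.delete();
    }
}
```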
|
Yes, it's probably dependent on various things - but I doubt it will make very much difference. I tend to opt for 16K or 32K as a good balance between memory usage and performance.
Note that you should have a try/finally block in the code to make sure the stream is closed even if an exception is thrown.
|
How do you determine the ideal buffer size when using FileInputStream?
|
[
"",
"java",
"performance",
"file-io",
"filesystems",
"buffer",
""
] |
I'm currently using IE as an ActiveX COM control in wxWidgets and want to know if there is an easy way to change the user agent that will always work.
At the moment I'm changing the header, but this only works when I manually load the link (i.e. call setUrl).
|
The only way that will "always work," so far as I've been able to find, is [changing the user-agent string in the registry](http://www.walkernews.net/2007/07/05/how-to-change-user-agent-string/). That will, of course, affect *every* web browser instance running on that machine.
You might also try a Google search on `DISPID_AMBIENT_USERAGENT`. From [this Microsoft page](http://support.microsoft.com/kb/q183412/):
> MSHTML will also ask for a new user
> agent via `DISPID_AMBIENT_USERAGENT`
> when navigating to clicked hyperlinks.
> This ambient property can be
> overridden, but it is not used when
> programmatically calling the Navigate
> method; it will also not cause the
> userAgent property of the DOM's
> navigator object or clientInformation
> behavior to be altered - this property
> will always reflect Internet
> Explorer's own UserAgent string.
I'm not familiar with the MSHTML component, so I'm not certain that's helpful.
I hope that at least gives you a place to start. :-)
|
I did a bit of googling today with the hint you provided, Head Geek, and I worked out how to do it.
wxWidgets uses an ActiveX wrapper class called FrameSite that handles the invoke requests. What I did was make a new class that inherits from this, handles the `DISPID_AMBIENT_USERAGENT` event, and passes all others on. Thus now I can return a different user agent.
Thanks for the help.
|
ie useragent wxWidgets
|
[
"",
"c++",
"internet-explorer",
"com",
"wxwidgets",
"user-agent",
""
] |
If you provide `0` as the `dayValue` in `Date.setFullYear` you get the last day of the previous month:
```
d = new Date(); d.setFullYear(2008, 11, 0); // Sun Nov 30 2008
```
There is reference to this behaviour at [mozilla](http://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/Date/setFullYear). Is this a reliable cross-browser feature or should I look at alternative methods?
|
```
var month = 0; // January
var d = new Date(2008, month + 1, 0);
console.log(d.toString()); // last day in January
```
```
IE 6: Thu Jan 31 00:00:00 CST 2008
IE 7: Thu Jan 31 00:00:00 CST 2008
IE 8: Beta 2: Thu Jan 31 00:00:00 CST 2008
Opera 8.54: Thu, 31 Jan 2008 00:00:00 GMT-0600
Opera 9.27: Thu, 31 Jan 2008 00:00:00 GMT-0600
Opera 9.60: Thu Jan 31 2008 00:00:00 GMT-0600
Firefox 2.0.0.17: Thu Jan 31 2008 00:00:00 GMT-0600 (Canada Central Standard Time)
Firefox 3.0.3: Thu Jan 31 2008 00:00:00 GMT-0600 (Canada Central Standard Time)
Google Chrome 0.2.149.30: Thu Jan 31 2008 00:00:00 GMT-0600 (Canada Central Standard Time)
Safari for Windows 3.1.2: Thu Jan 31 2008 00:00:00 GMT-0600 (Canada Central Standard Time)
```
Output differences are due to differences in the `toString()` implementation, not because the dates are different.
Of course, just because the browsers identified above use 0 as the last day of the previous month does not mean they will continue to do so, or that browsers not listed will do so, but it lends credibility to the belief that it should work the same way in every browser.
|
I find this to be the best solution for me. Let the Date object calculate it for you.
```
var today = new Date();
var lastDayOfMonth = new Date(today.getFullYear(), today.getMonth()+1, 0);
```
Setting the day parameter to 0 means one day less than the first day of the month, which is the last day of the previous month.
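If what you actually want is the day count as a number, a small helper can be built on the same trick (the function name here is just illustrative):

```javascript
// Day 0 of month (month + 1) is the last day of `month` (months are 0-based),
// so getDate() on that Date gives the number of days in the month.
function lastDayOfMonth(year, month) {
  return new Date(year, month + 1, 0).getDate();
}

console.log(lastDayOfMonth(2008, 0)); // 31 (January 2008)
console.log(lastDayOfMonth(2008, 1)); // 29 (February 2008, a leap year)
```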
|
Calculate last day of month
|
[
"",
"javascript",
"date",
""
] |
So my code is below. I'm not getting any errors and it places everything in the node just fine. But based on my debug statements, every time anything is inserted it's finding the root. I'm not sure if that is right. But according to the output file for the assignment, my answers are different when it comes to the height of the tree and the traversals, and I am still flat-out having trouble with my leaf count function. Another story though.
Based on the debug statements it looks like everything is going right where it should. But I figure I might need fresh eyes. I don't see how my traversals could change at all, since it is really only a matter of where I'm processing the node that should affect the inorder, preorder, and postorder.
```
template <class T>
void BT<T>::insert(const T& item)
{
Node<T>* newNode;
newNode = new Node<T>(item);
insert(root, newNode);
}
template <class T>
void BT<T>::insert(struct Node<T> *&root, struct Node<T> *newNode)
{
if (root == NULL)
{
cout << "Root Found" << newNode->data << endl;
root = newNode;
}
else
{
if (newNode->data < root->data)
{
insert(root->left, newNode);
cout << "Inserting Left" << newNode-> data << endl;
}
else
{
insert(root->right, newNode);
cout << "Inserting Right" << newNode->data << endl;
}
}
}
```
My height function is as follows just in case my insert is actually fine.
```
template <class T>
int BT<T>::height() const
{
return height(root);
}
template <class T>
int BT<T>::height(Node<T>* root) const
{
if (root == NULL)
return 0;
else
{
if (height(root->right) > height(root->left))
return 1 + height(root-> right);
return 1 + height(root->left);
}
}
```
|
You need to change the wording of your debug statements.
Really it should read (not "Root Found"):
```
cout << "Leaf Node Found" << newNode->data << endl;
```
It is only the root when insert is first called; after that, any call with node->left or node->right makes it an intermediate or leaf node.
To write height() I would do this:
```
template <class T>
int BT<T>::height(Node<T>* root) const
{
if (root == NULL) {return 0;}
return 1 + max(height(root->left),height(root->right));
}
```
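To illustrate the point, here is a stripped-down, non-template sketch (not the asker's full `BT` class) of the same recursive insert; note that taking the child pointer by reference (`Node*&`) is what lets the assignment in the base case link the new node into the tree:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

// Minimal illustrative node type; the real assignment's Node<T> is templated.
struct Node {
    int data;
    Node* left;
    Node* right;
    explicit Node(int d) : data(d), left(nullptr), right(nullptr) {}
};

// Passing Node*& means assigning to `root` rewires the parent's
// left/right pointer (or the tree's root pointer on the first call).
void insert(Node*& root, int value) {
    if (root == nullptr) {
        root = new Node(value); // links the new leaf into the tree
    } else if (value < root->data) {
        insert(root->left, value);
    } else {
        insert(root->right, value);
    }
}

int height(const Node* root) {
    if (root == nullptr) return 0;
    return 1 + std::max(height(root->left), height(root->right));
}
```

Calling `insert(r, v)` on a `Node* r = nullptr;` grows the tree in place; `height` then counts nodes along the longest root-to-leaf path.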
|
You need to start off with your root init'd to null. Also, you are passing \*&node in; it should be \*node. Otherwise you're passing a pointer to the address (or a reference, I'm not sure which in this context, but neither is going to be right). You should be passing a pointer to Node in, not a reference.
```
template <class T>
BT<T>::BT()
{ root = 0;}
template <class T>
void BT<T>::insert(const T& item)
{
Node<T>* newNode;
newNode = new Node<T>(item);
insert(root, newNode);
}
template <class T>
void BT<T>::insert(struct Node<T> *root, struct Node<T> *newNode)
{
/*stuff*/
}
```
|
C++ Binary Search Tree Insert via Recursion
|
[
"",
"c++",
"insert",
"binary-tree",
""
] |
I have been working with T-SQL in MS SQL for some time now and somehow whenever I have to insert data into a table I tend to use syntax:
```
INSERT INTO myTable <something here>
```
I understand that keyword `INTO` is optional here and I do not have to use it but somehow it grew into habit in my case.
My question is:
* Are there any implications of using `INSERT` syntax versus `INSERT INTO`?
* Which one complies fully with the standard?
* Are they both valid in other implementations of SQL standard?
|
`INSERT INTO` is the standard. Even though `INTO` is optional in most implementations, it's required in a few, so it's a good idea to include it if you want your code to be portable.
You can find links to several versions of the SQL standard [here](http://en.wikipedia.org/wiki/SQL#Standardization). I found an HTML version of an older standard [here](http://www.contrib.andrew.cmu.edu/%7Eshadow/sql/sql1992.txt).
|
They are the same thing, `INTO` is completely optional in T-SQL (other SQL dialects may differ).
Contrary to the other answers, I think it impairs readability to use `INTO`.
I think it is a conceptional thing: In my perception, I am not inserting a *row* into a table named "Customer", but I am inserting a *Customer*. (This is connected to the fact that I use to name my tables in singular, not plural).
If you follow the first concept, `INSERT INTO Customer` would most likely "feel right" for you.
If you follow the second concept, it would most likely be `INSERT Customer` for you.
|
INSERT vs INSERT INTO
|
[
"",
"sql",
"sql-server",
"database",
"t-sql",
"sql-insert",
""
] |
Eclipse issues warnings when a `serialVersionUID` is missing.
> The serializable class Foo does not declare a static final
> serialVersionUID field of type long
What is `serialVersionUID` and why is it important? Please show an example where missing `serialVersionUID` will cause a problem.
|
The docs for [`java.io.Serializable`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/io/Serializable.html) are probably about as good an explanation as you'll get:
> The serialization runtime associates with each serializable class a version number, called a `serialVersionUID`, which is used during deserialization to verify that the sender and receiver of a serialized object have loaded classes for that object that are compatible with respect to serialization. If the receiver has loaded a class for the object that has a different `serialVersionUID` than that of the corresponding sender's class, then deserialization will result in an
> [`InvalidClassException`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/io/InvalidClassException.html). A serializable class can declare its own `serialVersionUID` explicitly by declaring a field named `serialVersionUID` that must be static, final, and of type `long`:
> ```
> ANY-ACCESS-MODIFIER static final long serialVersionUID = 42L;
> ```
> If a serializable class does not explicitly declare a `serialVersionUID`, then the serialization runtime will calculate a default `serialVersionUID` value for that class based on various aspects of the class, as described in the Java(TM) Object Serialization Specification. However, it is *strongly recommended* that all serializable classes explicitly declare `serialVersionUID` values, since the default `serialVersionUID` computation is highly sensitive to class details that may vary depending on compiler implementations, and can thus result in unexpected `InvalidClassExceptions` during deserialization. Therefore, to guarantee a consistent `serialVersionUID` value across different java compiler implementations, a serializable class must declare an explicit `serialVersionUID` value. It is also strongly advised that explicit `serialVersionUID` declarations use the private modifier where possible, since such declarations apply only to the immediately declaring class — `serialVersionUID` fields are not useful as inherited members.
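As a minimal illustration (the class and values below are invented for this sketch), a serializable class that declares its own `serialVersionUID` can be round-tripped through the serialization machinery; if the declared value ever stops matching the one recorded in an old stream, `readObject` throws `InvalidClassException`:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Minimal sketch: a serializable class that declares its serialVersionUID
// explicitly, so the stream format is tied to a version we control.
class Point implements Serializable {
    private static final long serialVersionUID = 1L;

    final int x;
    final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    // Serialize to a byte array and deserialize it again.
    static Point roundTrip(Point p) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(p);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (Point) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Point copy = roundTrip(new Point(2, 3));
        System.out.println(copy.x + "," + copy.y); // prints 2,3
    }
}
```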
|
If you're serializing just because you have to serialize for the implementation's sake (who cares if you serialize for an `HTTPSession`, for instance... whether it's stored or not, you probably don't care about deserializing a form object), then you can ignore this.
If you're actually using serialization, it only matters if you plan on storing and retrieving objects using serialization directly. The `serialVersionUID` represents your class version, and you should increment it if the current version of your class is not backwards compatible with its previous version.
Most of the time, you will probably not use serialization directly. If this is the case, generate a default `SerialVersionUID` by clicking the quick fix option and don't worry about it.
|
What is a serialVersionUID and why should I use it?
|
[
"",
"java",
"serialization",
"serialversionuid",
""
] |
In Visual Studio, I often use objects only for RAII purposes. For example:
```
ScopeGuard close_guard = MakeGuard( &close_file, file );
```
The whole purpose of *close\_guard* is to make sure that the file will be close on function exit, it is not used anywhere else. However, Visual Studio gives me a warning that a "*local variable is initialized but not referenced*". I want to turn this warning off for this specific case.
How do you deal with this kind of situation? Visual Studio thinks that this object is useless, but this is wrong since it has a non-trivial destructor.
I wouldn't want to use a *#pragma warning* directive for this since it would turn off this warning even for legitimate reasons.
|
**Method 1:** Use the `#pragma warning` directive.
`#pragma warning` allows selective modification of the behavior of compiler warning messages.
```
#pragma warning( push )
#pragma warning( disable : 4705 ) // replace 4705 with warning number
ScopeGuard close_guard = MakeGuard( &close_file, file );
#pragma warning( pop )
```
This code saves the current warning state, then it disables the warning for a specific warning code and then restores the last saved warning state.
**Method 2:** Use a workaround like the following. Visual Studio will be happy and so will you. This workaround is used in many Microsoft samples and also in other projects.
```
ScopeGuard close_guard = MakeGuard( &close_file, file );
close_guard;
```
Or you can create a `#define` to workaround the warning.
```
#define UNUSED_VAR(VAR) VAR
...
ScopeGuard close_guard = MakeGuard( &close_file, file );
UNUSED_VAR(close_guard);
```
---
Some users stated that the code presented will not work because ScopeGuard is a typedef. This assumption is wrong.
<http://www.ddj.com/cpp/184403758>
> According to the C++ Standard, a
> reference initialized with a temporary
> value makes that temporary value live
> for the lifetime of the reference
> itself.
|
If your object has a non-trivial destructor, Visual Studio should *not* be giving you that warning. The following code does not generate any warnings in VS2005 with warnings turned all the way up (/W4):
```
class Test
{
public:
~Test(void) { printf("destructor\n"); }
};
Test foo(void) { return Test(); }
int main(void)
{
Test t = foo();
printf("moo\n");
return 0;
}
```
Commenting out the destructor gives a warning; the code as-is does not.
|
Dealing with C++ "initialized but not referenced" warning for destruction of scope helpers?
|
[
"",
"c++",
"visual-studio-2005",
"warnings",
""
] |
I'm teaching Java EE at the university, and this was a question a student asked. I said "no", but I wasn't really sure, so I thought I might ask you mighty developers. :)
Basically, what I'd like to do is to use entities as if they were in my context: call getters, setters, etc., like normal POJOs. If I use an EJB through its remote interface, the entities get decoupled from the core infrastructure, so that's a no-go.
I thought about writing a layer such as this in my MSc thesis. If it's a dead idea, feel free to tell me. If it's not, tell me if you'd like one.
Or if there is such a tool out there, let me know!
|
A basic, modern Java EE application is broken into various layers; you have 4 basic layers:
```
+--------------------+
| Presentation |
+--------------------+
| Controller/Actions |
+--------------------+
| Business Delegate |
| (Service) |
+--------------------+
| Data Access Layer |
+--------------------+
| Database |
+--------------------+
```
Your applications should be split into these layers right from the beginning, such that you can at any given point in time replace any layer without affecting any of its sibling layers.
For example, if you used JDBC for the Data Access layer, you should be able to replace it with Hibernate without affecting the business delegate or Database layer. The benefit of using such an architecture is that it allows collaboration between multiple technologies. Your business delegate (service layer) should be able to talk to a web service and handle the application processing without even going to a browser!
Regarding using JSP as the presentation layer, there are other technologies available, like [velocity](http://velocity.apache.org/) and [freemarker](http://freemarker.sourceforge.net/); as iberck mentioned above, Tapestry also has its own rendering engine. You can also use XML + XSLT to render the UI. There are UI management apps available too, like [Tiles](http://tiles.apache.org/) and [sitemesh](http://www.opensymphony.com/sitemesh/), that help you integrate various technologies as different components of the page and show them as one.
You can also use lightweight Swing components combined with [JNLP](http://java.sun.com/javase/technologies/desktop/javawebstart/1.2/docs/developersguide.html) and develop a desktop-style enterprise application. All we need is a little imagination and the client requirements, and we can use literally anything as the presentation layer.
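As a rough sketch of that separation (all names below are invented for illustration), the business delegate can depend only on a data-access interface, so swapping the implementation never touches the layers above:

```java
// Data Access Layer contract; implementations could be JDBC, Hibernate, ...
interface CustomerDao {
    String findName(int id);
}

// One concrete implementation; replacing it does not touch CustomerService.
class InMemoryCustomerDao implements CustomerDao {
    public String findName(int id) {
        return "customer-" + id;
    }
}

// Business delegate / service layer: knows only the interface.
class CustomerService {
    private final CustomerDao dao;

    CustomerService(CustomerDao dao) {
        this.dao = dao;
    }

    String greet(int id) {
        return "Hello, " + dao.findName(id);
    }
}

class LayersDemo {
    public static void main(String[] args) {
        CustomerService service = new CustomerService(new InMemoryCustomerDao());
        System.out.println(service.greet(7)); // prints Hello, customer-7
    }
}
```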
|
I've never tried it, but JSF is supposed to work better with [Facelets](https://facelets.dev.java.net/) than with JSP.
IBM has [an article](http://www.ibm.com/developerworks/java/library/j-facelets/) about it.
|
Is there any other strongly-integrated presentation layer tool other than JSF/JSP for Java EE?
|
[
"",
"java",
"jakarta-ee",
"presentation-layer",
""
] |
Very basic question: how do I write a `short` literal in C++?
I know the following:
* `2` is an `int`
* `2U` is an `unsigned int`
* `2L` is a `long`
* `2LL` is a `long long`
* `2.0f` is a `float`
* `2.0` is a `double`
* `'\2'` is a `char`.
But how would I write a `short` literal? I tried `2S` but that gives a compiler warning.
|
```
((short)2)
```
Yeah, it's not strictly a short literal, more of a casted-int, but the behaviour is the same and I think there isn't a direct way of doing it.
> That's what I've been doing because I couldn't find anything about it. I would guess that the compiler would be smart enough to compile this as if it's a short literal (i.e. it wouldn't actually allocate an int and then cast it every time).
The following illustrates how much you should worry about this:
```
a = 2L;
b = 2.0;
c = (short)2;
d = '\2';
```
Compile -> disassemble ->
```
movl $2, _a
movl $2, _b
movl $2, _c
movl $2, _d
```
|
C++11 gives you pretty close to what you want. *(Search for "user-defined literals" to learn more.)*
```
#include <cstdint>
inline std::uint16_t operator "" _u(unsigned long long value)
{
return static_cast<std::uint16_t>(value);
}
void func(std::uint32_t value); // 1
void func(std::uint16_t value); // 2
func(0x1234U); // calls 1
func(0x1234_u); // calls 2
// also
inline std::int16_t operator "" _s(unsigned long long value)
{
return static_cast<std::int16_t>(value);
}
```
|
How do I write a short literal in C++?
|
[
"",
"c++",
"literals",
""
] |
I'm writing a PHP script and the script outputs a simple text file log of the operations it performs. How would I use PHP to delete the first several lines from this file when it reaches a certain file size?
Ideally, I would like it to keep the first two lines (date/time created and blank) and start deleting from line 3 and delete X amount of lines. I already know about the `filesize()` function, so I'll be using that to check the file size.
Example log text:
```
*** LOG FILE CREATED ON 2008-10-18 AT 03:06:29 ***
2008-10-18 @ 03:06:29 CREATED: gallery/thumbs
2008-10-18 @ 03:08:03 RENAMED: gallery/IMG_9423.JPG to gallery/IMG_9423.jpg
2008-10-18 @ 03:08:03 RENAMED: gallery/IMG_9188.JPG to gallery/IMG_9188.jpg
2008-10-18 @ 03:08:03 RENAMED: gallery/IMG_9236.JPG to gallery/IMG_9236.jpg
2008-10-18 @ 03:08:03 RENAMED: gallery/IMG_9228.JPG to gallery/IMG_9228.jpg
2008-10-18 @ 03:08:03 RENAMED: gallery/IMG_3104.JPG to gallery/IMG_3104.jpg
2008-10-18 @ 03:08:03 RENAMED: gallery/First dance02.JPG to gallery/First dance02.jpg
2008-10-18 @ 03:08:03 RENAMED: gallery/BandG02.JPG to gallery/BandG02.jpg
2008-10-18 @ 03:08:03 RENAMED: gallery/official03.JPG to gallery/official03.jpg
2008-10-18 @ 03:08:03 RENAMED: gallery/Wedding32.JPG to gallery/Wedding32.jpg
2008-10-18 @ 03:08:03 RENAMED: gallery/Gettaway car16.JPG to gallery/Gettaway car16.jpg
2008-10-18 @ 03:08:04 CREATED: gallery/thumbs/Afterparty05.jpg
2008-10-18 @ 03:08:04 CREATED: gallery/thumbs/IMG_9254.jpg
2008-10-18 @ 03:08:04 CREATED: gallery/thumbs/IMG_9175.jpg
2008-10-18 @ 03:08:04 CREATED: gallery/thumbs/official05.jpg
2008-10-18 @ 03:08:04 CREATED: gallery/thumbs/First dance01.jpg
2008-10-18 @ 03:08:04 CREATED: gallery/thumbs/Wedding29.jpg
2008-10-18 @ 03:08:04 CREATED: gallery/thumbs/men walking.jpg
```
|
```
$x_amount_of_lines = 30;
$log = 'path/to/log.txt';
if (filesize($log) >= $max_size) {
    $file = file($log);
    // array_splice() edits $file in place: remove $x_amount_of_lines
    // lines starting at line 3 (index 2); the first two lines are kept
    array_splice($file, 2, $x_amount_of_lines);
...
}
```
edit:
with correction from rcar and saving the first line.
|
## Use the SPL, Luke
PHP 5 comes with plenty of iterators goodness :
```
<?php
$line_to_strip = 5;
$new_file = new SplFileObject('test2.log', 'w');
foreach (new LimitIterator(new SplFileObject('test.log'), $line_to_strip) as $line)
$new_file->fwrite($line);
?>
```
It's cleaner than what you can do while messing with fopen; it does not hold the entire file in memory, only one line at a time, and you can plug it in and reuse the pattern anywhere since it's fully OO.
|
How to use PHP to delete X number of lines from the beginning of a text file?
|
[
"",
"php",
"file",
"logging",
""
] |
I am looking to set full trust for a single web part. Is this possible? manifest.xml, maybe?
|
Have you tried registering the assembly in the GAC? This is the preferred approach to giving any assembly full trust on your machine:
```
gacutil.exe /i C:\Path\To\Dll.dll
```
Hope that helps. Let me know if I misunderstood your question.
|
As far as I recall manifest.xml is correct, and you specify the CodeAccessSecurity.
This article has a detailed description about it
<http://www.dotnetmafia.com/blogs/dotnettipoftheday/archive/2007/07/05/how-to-configure-code-access-security-for-a-web-part.aspx>
When you deploy your solution, deploy it with the -allowCasPolicies flag on.
I'm not sure I'd want to put a web part into the GAC.
|
How do I set full trust for a single Web Part in SharePoint?
|
[
"",
"c#",
"sharepoint",
"web-parts",
"full-trust",
""
] |
I have an importer process which is running as a windows service (debug mode as an application) and it processes various xml documents and csv's and imports them into an SQL database. All has been well until I had to process a large amount of data (120k rows) from another table (as I do the xml documents).
I am now finding that the SQL server's memory usage is hitting a point where it just hangs. My application never receives a time out from the server and everything just goes STOP.
I am still able to make calls to the database server separately but that application thread is just stuck with no obvious thread in SQL Activity Monitor and no activity in Profiler.
Any ideas on where to begin solving this problem would be greatly appreciated as we have been struggling with it for over a week now.
The basic architecture is c# 2.0 using NHibernate as an ORM data is being pulled into the actual c# logic and processed then spat back into the same database along with logs into other tables.
---
The only other problem which sometimes happens instead is that for some reason a cursor is being opened on this massive table, which I can only assume is being generated from ADO.NET. A statement like `exec sp_cursorfetch 180153005,16,113602,100` is being called thousands of times according to Profiler.
|
I found out that it was NHibernate creating the cursor on the large table. I have yet to understand why, but in the meantime I have replaced the large-table data access model with straightforward ADO.NET calls.
|
When are you `COMMIT`ting the data? Are there any locks or deadlocks (sp\_who)? If 120,000 rows is considered large, how much RAM is SQL Server using? When the application hangs, is there anything about the point where it hangs (is it an `INSERT`, a lookup `SELECT`, or what?)?
It seems to me that that commit size is way too small. Usually in SSIS ETL tasks, I will use a batch size of 100,000 for narrow rows with sources over 1,000,000 in cardinality, but I never go below 10,000 even for very wide rows.
I would not use an ORM for large ETL, unless the transformations are extremely complex with a lot of business rules. Even still, with a large number of relatively simple business transforms, I would consider loading the data into simple staging tables and using T-SQL to do all the inserts, lookups etc.
|
Import Process maxing SQL memory
|
[
"",
"c#",
"sql-server-2005",
"nhibernate",
""
] |
I've seen some very good questions on Stack Overflow concerning delegates, events, and the .NET implementation of these two features. One question in particular, "[How do C# Events work behind the scenes?](https://stackoverflow.com/questions/213638/how-do-c-events-work-behind-the-scenes#213651)", produced a great answer that explains some subtle points very well.
The answer to the above question makes this point:
> When you declare a field-like event
> ... the compiler generates the methods
> and a private field (of the same type
> as the delegate). Within the class,
> when you refer to ElementAddedEvent
> you're referring to the field. Outside
> the class, you're referring to the
> field
An MSDN article linked from the same question ("[Field-like events](http://msdn.microsoft.com/en-us/library/aa664455.aspx)") adds:
> The notion of raising an event is
> precisely equivalent to invoking the
> delegate represented by the event —
> thus, there are no special language
> constructs for raising events.
Wanting to examine further, I built a test project in order to view the IL that an event and a delegate are compiled to:
```
public class TestClass
{
public EventHandler handler;
public event EventHandler FooEvent;
public TestClass()
{ }
}
```
I expected the delegate field `handler` and the event `FooEvent` to compile to roughly the same IL code, with some additional methods to wrap access to the compiler-generated `FooEvent` field. But the IL generated wasn't quite what I expected:
```
.class public auto ansi beforefieldinit TestClass
extends [mscorlib]System.Object
{
.event [mscorlib]System.EventHandler FooEvent
{
.addon instance void TestClass::add_FooEvent(class [mscorlib]System.EventHandler)
.removeon instance void TestClass::remove_FooEvent(class [mscorlib]System.EventHandler)
}
.method public hidebysig specialname rtspecialname instance void .ctor() cil managed
{
// Constructor IL hidden
}
.field private class [mscorlib]System.EventHandler FooEvent
.field public class [mscorlib]System.EventHandler handler
}
```
Since events are nothing more than delegates with compiler-generated `add` and `remove` methods, I didn't expect to see events treated as anything more than that in IL. But the add and remove methods are defined in a section that begins `.event`, not `.method` as normal methods are.
My ultimate questions are: if events are implemented simply as delegates with accessor methods, what is the point of having a `.event` IL section? Couldn't they be implemented in IL without this by using `.method` sections? Is `.event` equivalent to `.method`?
|
I'm not sure that is surprising... compare to the same for properties vs fields (since properties serve the same function as events: encapsulation via accessors):
```
.field public string Foo // public field
.property instance string Bar // public property
{
.get instance string MyType::get_Bar()
.set instance void MyType::set_Bar(string)
}
```
Also - events *do not* mention anything about fields; they *only* define the accessors (add/remove). The delegate backer is an implementation detail; it just so happens that field-like events declare a field as a backing member - in the same way that auto-implemented properties declare a field as a backing member. Other implementations are possible (and very common, especially in Forms, etc.).
Other common implementations:
Sparse-events (Controls, etc) - EventHandlerList (or similar):
```
// only one instance field no matter how many events;
// very useful if we expect most events to be unsubscribed
private EventHandlerList events = new EventHandlerList();
protected EventHandlerList Events {
get { return events; } // usually lazy
}
// this code repeated per event
private static readonly object FooEvent = new object();
public event EventHandler Foo
{
add { Events.AddHandler(FooEvent, value); }
remove { Events.RemoveHandler(FooEvent, value); }
}
protected virtual void OnFoo()
{
EventHandler handler = Events[FooEvent] as EventHandler;
if (handler != null) handler(this, EventArgs.Empty);
}
```
(the above is pretty-much the backbone of win-forms events)
Facade (although this confuses the "sender" a little; some intermediary code is often helpful):
```
private Bar wrappedObject; // via ctor
public event EventHandler SomeEvent
{
add { wrappedObject.SomeOtherEvent += value; }
remove { wrappedObject.SomeOtherEvent -= value; }
}
```
(the above can also be used to effectively rename an event)
|
Events aren't the same as delegates. Events encapsulate adding/removing a handler for an event. The handler is represented with a delegate.
You *could* just write AddClickHandler/RemoveClickHandler etc for every event - but it would be relatively painful, and wouldn't make it easy for tools like VS to separate out events from anything else.
This is just like properties really - you could write GetSize/SetSize etc (as you do in Java) but by separating out properties, there are syntactical shortcuts available and better tool support.
|
If events are implemented as delegates in .NET, what is the point of the .event IL section?
|
[
"",
"c#",
".net",
"events",
"delegates",
"cil",
""
] |
I am trying to fill a form in a PHP application from a C# client (an Outlook add-in). I used Fiddler to see the original request from within the PHP application, and the form is transmitted as a multipart form. Unfortunately .NET does not come with native support for this type of form (WebClient has only a method for uploading a file).
Does anybody know a library or has some code to achieve this? I want to post different values and additionally (but only sometimes) a file.
Thanks for your help,
Sebastian
|
This is cut and pasted from some sample code I wrote; hopefully it gives the basics. It only supports file data and form-data at the moment.
```
public class PostData
{
private List<PostDataParam> m_Params;
public List<PostDataParam> Params
{
get { return m_Params; }
set { m_Params = value; }
}
public PostData()
{
m_Params = new List<PostDataParam>();
// Add sample param
m_Params.Add(new PostDataParam("email", "MyEmail", PostDataParamType.Field));
}
/// <summary>
/// Returns the parameters array formatted for multi-part/form data
/// </summary>
/// <returns></returns>
public string GetPostData()
{
// Get boundary, default is --AaB03x
string boundary = ConfigurationManager.AppSettings["ContentBoundary"].ToString();
StringBuilder sb = new StringBuilder();
foreach (PostDataParam p in m_Params)
{
sb.AppendLine(boundary);
if (p.Type == PostDataParamType.File)
{
sb.AppendLine(string.Format("Content-Disposition: file; name=\"{0}\"; filename=\"{1}\"", p.Name, p.FileName));
sb.AppendLine("Content-Type: text/plain");
sb.AppendLine();
sb.AppendLine(p.Value);
}
else
{
sb.AppendLine(string.Format("Content-Disposition: form-data; name=\"{0}\"", p.Name));
sb.AppendLine();
sb.AppendLine(p.Value);
}
}
sb.AppendLine(boundary);
return sb.ToString();
}
}
public enum PostDataParamType
{
Field,
File
}
public class PostDataParam
{
public PostDataParam(string name, string value, PostDataParamType type)
{
Name = name;
Value = value;
Type = type;
}
public string Name;
public string FileName;
public string Value;
public PostDataParamType Type;
}
```
To send the data you then need to:
```
HttpWebRequest oRequest = null;
oRequest = (HttpWebRequest)HttpWebRequest.Create(oURL.URL);
oRequest.ContentType = "multipart/form-data";
oRequest.Method = "POST";
PostData pData = new PostData();
byte[] buffer = encoding.GetBytes(pData.GetPostData());
// Set content length of our data
oRequest.ContentLength = buffer.Length;
// Dump our buffered postdata to the stream, booyah
oStream = oRequest.GetRequestStream();
oStream.Write(buffer, 0, buffer.Length);
oStream.Close();
// get the response
oResponse = (HttpWebResponse)oRequest.GetResponse();
```
Hope that's clear; I've cut and pasted from a few sources to make it tidier.
|
Thanks for the answers, everybody! I recently had to get this to work, and used your suggestions heavily. However, there were a couple of tricky parts that did not work as expected, mostly having to do with actually including the file (which was an important part of the question). There are a lot of answers here already, but I think this may be useful to someone in the future (I could not find many clear examples of this online). I [wrote a blog post](http://www.briangrinstead.com/blog/multipart-form-post-in-c) that explains it a little more.
Basically, I first tried to pass in the file data as a UTF8 encoded string, but I was having problems with encoding files (it worked fine for a plain text file, but when uploading a Word document, for example, if I tried to save the file that was passed through to the posted form using Request.Files[0].SaveAs(), opening the file in Word did not work properly). I found that if you write the file data directly using a Stream (rather than a StringBuilder), it worked as expected. Also, I made a couple of modifications that made it easier for me to understand.
By the way, the [Multipart Forms Request for Comments](http://www.ietf.org/rfc/rfc2388.txt) and the [W3C Recommendation for multipart/form-data](http://www.w3.org/TR/html401/interact/forms.html#h-17.13.4.2) are a couple of useful resources in case anyone needs a reference for the specification.
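As a language-neutral illustration of the wire format those two specs describe, here is a minimal Python sketch (the helper name `build_multipart` is my own, not from any library) that assembles a multipart/form-data body the same way the C# helper does:

```python
import uuid

def build_multipart(fields, files):
    """fields: {name: str}; files: {name: (filename, content_type, bytes)}."""
    boundary = "----------" + uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        # One part per plain form field: boundary line, headers, blank line, value.
        parts.append(("--%s\r\nContent-Disposition: form-data; "
                      'name="%s"\r\n\r\n%s\r\n' % (boundary, name, value)).encode())
    for name, (filename, ctype, data) in files.items():
        # File parts carry a filename and a Content-Type header; the raw bytes
        # go directly after the blank line, never through a text encoding.
        header = ("--%s\r\nContent-Disposition: form-data; "
                  'name="%s"; filename="%s"\r\nContent-Type: %s\r\n\r\n'
                  % (boundary, name, filename, ctype))
        parts.append(header.encode() + data + b"\r\n")
    # The closing boundary has two trailing dashes.
    body = b"".join(parts) + ("--%s--\r\n" % boundary).encode()
    return body, "multipart/form-data; boundary=" + boundary
```

Note how the boundary appears both in the body and in the Content-Type header, which is exactly the detail that trips people up in hand-rolled implementations.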
I changed the WebHelpers class to be a bit smaller and have simpler interfaces, it is now called `FormUpload`. If you pass a `FormUpload.FileParameter` you can pass the byte[] contents along with a file name and content type, and if you pass a string, it will treat it as a standard name/value combination.
**Here is the FormUpload class:**
```
// Implements multipart/form-data POST in C# http://www.ietf.org/rfc/rfc2388.txt
// http://www.briangrinstead.com/blog/multipart-form-post-in-c
public static class FormUpload
{
private static readonly Encoding encoding = Encoding.UTF8;
public static HttpWebResponse MultipartFormDataPost(string postUrl, string userAgent, Dictionary<string, object> postParameters)
{
string formDataBoundary = String.Format("----------{0:N}", Guid.NewGuid());
string contentType = "multipart/form-data; boundary=" + formDataBoundary;
byte[] formData = GetMultipartFormData(postParameters, formDataBoundary);
return PostForm(postUrl, userAgent, contentType, formData);
}
private static HttpWebResponse PostForm(string postUrl, string userAgent, string contentType, byte[] formData)
{
HttpWebRequest request = WebRequest.Create(postUrl) as HttpWebRequest;
if (request == null)
{
throw new NullReferenceException("request is not a http request");
}
// Set up the request properties.
request.Method = "POST";
request.ContentType = contentType;
request.UserAgent = userAgent;
request.CookieContainer = new CookieContainer();
request.ContentLength = formData.Length;
// You could add authentication here as well if needed:
// request.PreAuthenticate = true;
// request.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;
// request.Headers.Add("Authorization", "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes("username" + ":" + "password")));
// Send the form data to the request.
using (Stream requestStream = request.GetRequestStream())
{
requestStream.Write(formData, 0, formData.Length);
requestStream.Close();
}
return request.GetResponse() as HttpWebResponse;
}
private static byte[] GetMultipartFormData(Dictionary<string, object> postParameters, string boundary)
{
Stream formDataStream = new System.IO.MemoryStream();
bool needsCLRF = false;
foreach (var param in postParameters)
{
// Thanks to feedback from commenters, add a CRLF to allow multiple parameters to be added.
// Skip it on the first parameter, add it to subsequent parameters.
if (needsCLRF)
formDataStream.Write(encoding.GetBytes("\r\n"), 0, encoding.GetByteCount("\r\n"));
needsCLRF = true;
if (param.Value is FileParameter)
{
FileParameter fileToUpload = (FileParameter)param.Value;
// Add just the first part of this param, since we will write the file data directly to the Stream
string header = string.Format("--{0}\r\nContent-Disposition: form-data; name=\"{1}\"; filename=\"{2}\";\r\nContent-Type: {3}\r\n\r\n",
boundary,
param.Key,
fileToUpload.FileName ?? param.Key,
fileToUpload.ContentType ?? "application/octet-stream");
formDataStream.Write(encoding.GetBytes(header), 0, encoding.GetByteCount(header));
// Write the file data directly to the Stream, rather than serializing it to a string.
formDataStream.Write(fileToUpload.File, 0, fileToUpload.File.Length);
}
else
{
string postData = string.Format("--{0}\r\nContent-Disposition: form-data; name=\"{1}\"\r\n\r\n{2}",
boundary,
param.Key,
param.Value);
formDataStream.Write(encoding.GetBytes(postData), 0, encoding.GetByteCount(postData));
}
}
// Add the end of the request. Start with a newline
string footer = "\r\n--" + boundary + "--\r\n";
formDataStream.Write(encoding.GetBytes(footer), 0, encoding.GetByteCount(footer));
// Dump the Stream into a byte[]
formDataStream.Position = 0;
byte[] formData = new byte[formDataStream.Length];
formDataStream.Read(formData, 0, formData.Length);
formDataStream.Close();
return formData;
}
public class FileParameter
{
public byte[] File { get; set; }
public string FileName { get; set; }
public string ContentType { get; set; }
public FileParameter(byte[] file) : this(file, null) { }
public FileParameter(byte[] file, string filename) : this(file, filename, null) { }
public FileParameter(byte[] file, string filename, string contenttype)
{
File = file;
FileName = filename;
ContentType = contenttype;
}
}
}
```
**Here is the calling code, which uploads a file and a few normal post parameters:**
```
// Read file data
FileStream fs = new FileStream("c:\\people.doc", FileMode.Open, FileAccess.Read);
byte[] data = new byte[fs.Length];
fs.Read(data, 0, data.Length);
fs.Close();
// Generate post objects
Dictionary<string, object> postParameters = new Dictionary<string, object>();
postParameters.Add("filename", "People.doc");
postParameters.Add("fileformat", "doc");
postParameters.Add("file", new FormUpload.FileParameter(data, "People.doc", "application/msword"));
// Create request and receive response
string postURL = "http://localhost";
string userAgent = "Someone";
HttpWebResponse webResponse = FormUpload.MultipartFormDataPost(postURL, userAgent, postParameters);
// Process response
StreamReader responseReader = new StreamReader(webResponse.GetResponseStream());
string fullResponse = responseReader.ReadToEnd();
webResponse.Close();
Response.Write(fullResponse);
```
|
Multipart forms from C# client
|
[
"",
"c#",
"http",
"multipartform-data",
""
] |
Does anyone have a simple, efficient way of checking that a string doesn't contain HTML? Basically, I want to check that certain fields only contain plain text. I thought about looking for the < character, but that can easily be used in plain text. Another way might be to create a new System.Xml.Linq.XElement using:
```
XElement.Parse("<wrapper>" + MyString + "</wrapper>")
```
and check that the XElement contains no child elements, but this seems a little heavyweight for what I need.
|
I just tried my XElement.Parse solution. I created an extension method on the string class so I can reuse the code easily:
```
public static bool ContainsXHTML(this string input)
{
try
{
XElement x = XElement.Parse("<wrapper>" + input + "</wrapper>");
return !(x.DescendantNodes().Count() == 1 && x.DescendantNodes().First().NodeType == XmlNodeType.Text);
}
catch (XmlException ex)
{
return true;
}
}
```
One problem I found was that plain text ampersand and less than characters cause an XmlException and indicate that the field contains HTML (which is wrong). To fix this, the input string passed in first needs to have the ampersands and less than characters converted to their equivalent XHTML entities. I wrote another extension method to do that:
```
public static string ConvertXHTMLEntities(this string input)
{
// Convert all ampersands to the ampersand entity.
string output = input;
output = output.Replace("&", "amp_token");
output = output.Replace("&", "&");
output = output.Replace("amp_token", "&");
// Convert less than to the less than entity (without messing up tags).
output = output.Replace("< ", "< ");
return output;
}
```
Now I can take a user submitted string and check that it doesn't contain HTML using the following code:
```
bool ContainsHTML = UserEnteredString.ConvertXHTMLEntities().ContainsXHTML();
```
I'm not sure if this is bullet proof, but I think it's good enough for my situation.
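The same wrapper-element idea can be sketched with Python's standard-library XML parser, for anyone wanting to try the approach outside C# (a rough equivalent, not a translation; a bare `&` or `<` still raises a parse error here, mirroring the caveat just discussed):

```python
import xml.etree.ElementTree as ET

def contains_markup(text):
    try:
        # Wrap the input so plain text still forms a valid document.
        root = ET.fromstring("<wrapper>" + text + "</wrapper>")
    except ET.ParseError:
        return True  # unparseable (e.g. a bare '&' or '<'): flag it, possibly falsely
    # Any child element under the wrapper means a tag was present.
    return len(list(root)) > 0
```

As with the C# version, bare ampersands would need to be escaped first if you want to avoid false positives.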
|
The following will match any matching set of tags. i.e. <b>this</b>
```
Regex tagRegex = new Regex(@"<\s*([^ >]+)[^>]*>.*?<\s*/\s*\1\s*>");
```
The following will match any single tag. i.e. <b> (it doesn't have to be closed).
```
Regex tagRegex = new Regex(@"<[^>]+>");
```
You can then use it like so
```
bool hasTags = tagRegex.IsMatch(myString);
```
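The same two patterns behave identically in Python, if you want to experiment with them quickly (a sketch; the regexes are copied verbatim from above):

```python
import re

# Matched open/close pair: \1 back-references the captured tag name.
pair_re = re.compile(r"<\s*([^ >]+)[^>]*>.*?<\s*/\s*\1\s*>")
# Any single tag, closed or not.
tag_re = re.compile(r"<[^>]+>")

assert pair_re.search("some <b>bold</b> text")
assert pair_re.search("a < b and c > d") is None  # loose '<'/'>' don't pair up
assert tag_re.search("just <br> once")
assert tag_re.search("plain text") is None
```

Note that the single-tag pattern can still false-positive on text like `a < b and c > d`, since `< b and c >` matches it, which is why the paired-tag variant is the safer of the two.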
|
How to validate that a string doesn't contain HTML using C#
|
[
"",
"c#",
"html",
"validation",
""
] |
I'm [listening to a talk](http://channel9.msdn.com/shows/Going+Deep/Inside-C-40-dynamic-type-optional-parameters-more-COM-friendly/) about **C#4**'s `dynamic` keyword and I'm wondering... Will this feature be orthogonal to other .NET features, for example will it support extension methods?
```
public static class StrExtension {
public static string twice(this string str) { return str + str; }
}
...
dynamic x = "Yo";
x.twice(); // will this work?
```
---
**Note:** This question was asked before C#4 was shipped which is why it's phrased in the future tense.
|
From the ["New Features in C# 4" word doc](http://code.msdn.microsoft.com/Project/Download/FileDownload.aspx?ProjectName=csharpfuture&DownloadId=3550):
> Dynamic lookup will not be able to
> find extension methods. Whether
> extension methods apply or not depends
> on the static context of the call
> (i.e. which using clauses occur), and
> this context information is not
> currently kept as part of the payload.
|
This works, which I find interesting, at least...
```
public static class StrExtension
{
public static string twice(this string str) { return str + str; }
}
...
dynamic x = "Yo";
StrExtension.twice(x);
```
Still, if the compiler can find the correct extension method at compile time then I don't see why it can't package up a set of extension methods to be looked up at runtime? It would be like a v-table for non-member methods.
EDIT:
This is cool... <http://www2.research.att.com/~bs/multimethods.pdf>
|
Will the dynamic keyword in C#4 support extension methods?
|
[
"",
"c#",
"dynamic",
"extension-methods",
"c#-4.0",
""
] |
I have some code doing this:
```
var changes = document.getElementsByName(from);
for (var c=0; c<changes.length; c++) {
var ch = changes[c];
var current = new String(ch.innerHTML);
etc.
}
```
This works fine in FF and Chrome but not in IE7. Presumably because getElementsByName isn't working in IE. What's the best workaround?
|
In case you don't know why this isn't working in IE, here is [the MSDN documentation on that function](http://msdn.microsoft.com/en-us/library/ms536438(VS.85).aspx):
> When you use the getElementsByName method, all elements in the document that have the specified NAME attribute or ID attribute value are returned.
>
> Elements that support both the NAME attribute and the ID attribute are included in the collection returned by the getElementsByName method, but elements with a NAME expando are not included in the collection; therefore, this method cannot be used to retrieve custom tags by name.
Firefox allows [`getElementsByName()`](http://msdn.microsoft.com/en-us/library/ms536438(VS.85).aspx) to retrieve elements that use a NAME expando, which is why it works. Whether or not that is a Good Thing™ may be up for debate, but that is the reality of it.
So, one option is to use the [`getAttribute()`](http://msdn.microsoft.com/en-us/library/ms536429(VS.85).aspx) DOM method to ask for the NAME attribute and then test the value to see if it is what you want, and if so, add it to an array. This would require, however, that you iterate over all of the nodes in the page or at least within a subsection, which wouldn't be the most efficient. You could constrain that list beforehand by using something like [`getElementsByTagName()`](http://msdn.microsoft.com/en-us/library/ms536439(VS.85).aspx) perhaps.
Another way to do this, if you are in control of the HTML of the page, is to give all of the elements of interest an Id that varies only by number, e.g.:
```
<div id="Change0">...</div>
<div id="Change1">...</div>
<div id="Change2">...</div>
<div id="Change3">...</div>
```
And then have JavaScript like this:
```
// assumes consecutive numbering, starting at 0
function getElementsByModifiedId(baseIdentifier) {
var allWantedElements = [];
var idMod = 0;
while(document.getElementById(baseIdentifier + idMod)) { // will stop when it can't find any more
allWantedElements.push(document.getElementById(baseIdentifier + idMod++));
}
return allWantedElements;
}
// call it like so:
var changes = getElementsByModifiedId("Change");
```
That is a hack, of course, but it would do the job you need and not be too inefficient compared to some other hacks.
If you are using a JavaScript framework/toolkit of some kind, your options are much better, but I don't have time to get into those specifics unless you indicate you are using one. Personally, I don't know how people live without one; they save so much time, effort and frustration that you can't afford *not* to use one.
|
There are a couple of problems:
1. IE is indeed confusing `id=""` with `name=""`
2. `name=""` isn't allowed on `<span>`
To fix, I suggest:
1. Change all the `name=""` to `class=""`
2. Change your code like this:
```
var changes = document.getElementById('text').getElementsByTagName('span');
for (var c=0; c<changes.length; c++) {
var ch = changes[c];
if (ch.className != from)
continue;
var current = new String(ch.innerHTML);
```
|
getElementsByName in IE7
|
[
"",
"javascript",
"dom",
"internet-explorer-7",
""
] |
> **Edit:** The code here still has some bugs in it, and it could do better in the performance department, but instead of trying to fix this, for the record I took the problem over to the Intel discussion groups and got lots of great feedback, and if all goes well a polished version of Atomic float will be included in a near future release of Intel's Threading Building Blocks
OK, here's a tough one: I want an atomic float, not for super-fast graphics performance, but to use routinely as a data member of classes. And I don't want to pay the price of using locks on these classes, because it provides no additional benefit for my needs.
Now, with Intel's TBB and the other atomic libraries I've seen, integer types are supported, but not floating point. So I went on and implemented one, and it works... but I'm not sure if it REALLY works, or I'm just very lucky that it works.
Does anyone here know whether this is some form of threading heresy?
```
typedef unsigned int uint_32;
struct AtomicFloat
{
private:
tbb::atomic<uint_32> atomic_value_;
public:
template<memory_semantics M>
float fetch_and_store( float value )
{
const uint_32 value_ = atomic_value_.tbb::atomic<uint_32>::fetch_and_store<M>((uint_32&)value);
return reinterpret_cast<const float&>(value_);
}
float fetch_and_store( float value )
{
const uint_32 value_ = atomic_value_.tbb::atomic<uint_32>::fetch_and_store((uint_32&)value);
return reinterpret_cast<const float&>(value_);
}
template<memory_semantics M>
float compare_and_swap( float value, float comparand )
{
const uint_32 value_ = atomic_value_.tbb::atomic<uint_32>::compare_and_swap<M>((uint_32&)value,(uint_32&)compare);
return reinterpret_cast<const float&>(value_);
}
float compare_and_swap(float value, float compare)
{
const uint_32 value_ = atomic_value_.tbb::atomic<uint_32>::compare_and_swap((uint_32&)value,(uint_32&)compare);
return reinterpret_cast<const float&>(value_);
}
operator float() const volatile // volatile qualifier here for backwards compatibility
{
const uint_32 value_ = atomic_value_;
return reinterpret_cast<const float&>(value_);
}
float operator=(float value)
{
const uint_32 value_ = atomic_value_.tbb::atomic<uint_32>::operator =((uint_32&)value);
return reinterpret_cast<const float&>(value_);
}
float operator+=(float value)
{
volatile float old_value_, new_value_;
do
{
old_value_ = reinterpret_cast<float&>(atomic_value_);
new_value_ = old_value_ + value;
} while(compare_and_swap(new_value_,old_value_) != old_value_);
return (new_value_);
}
float operator*=(float value)
{
volatile float old_value_, new_value_;
do
{
old_value_ = reinterpret_cast<float&>(atomic_value_);
new_value_ = old_value_ * value;
} while(compare_and_swap(new_value_,old_value_) != old_value_);
return (new_value_);
}
float operator/=(float value)
{
volatile float old_value_, new_value_;
do
{
old_value_ = reinterpret_cast<float&>(atomic_value_);
new_value_ = old_value_ / value;
} while(compare_and_swap(new_value_,old_value_) != old_value_);
return (new_value_);
}
float operator-=(float value)
{
return this->operator+=(-value);
}
float operator++()
{
return this->operator+=(1);
}
float operator--()
{
return this->operator+=(-1);
}
float fetch_and_add( float addend )
{
return this->operator+=(-addend);
}
float fetch_and_increment()
{
return this->operator+=(1);
}
float fetch_and_decrement()
{
return this->operator+=(-1);
}
};
```
Thanks!
**Edit:** changed size\_t to uint32\_t as Greg Rogers suggested; that way it's more portable
**Edit:** added listing for the entire thing, with some fixes.
**More Edits:** Performance-wise, using a locked float for 5,000,000 += operations with 100 threads on my machine takes 3.6s, while my atomic float, even with its silly do-while, takes 0.2s to do the same work. So the >30x performance boost means it's worth it, (and this is the catch) if it's correct.
**Even More Edits:** As Awgn pointed out, my `fetch_and_xxxx` parts were all wrong. Fixed that, removed parts of the API I'm not sure about (templated memory models), and implemented the other operations in terms of operator += to avoid code repetition.
**Added:** Added operator \*= and operator /=, since floats wouldn't be floats without them. Thanks to Peterchen's comment for pointing this out.
**Edit:** Latest version of the code follows (I'll leave the old version for reference though)
```
#include <tbb/atomic.h>
typedef unsigned int uint_32;
typedef __TBB_LONG_LONG uint_64;
template<typename FLOATING_POINT,typename MEMORY_BLOCK>
struct atomic_float_
{
/* CRC Card -----------------------------------------------------
| Class: atomic float template class
|
| Responsibility: handle integral atomic memory as if it were a float,
| but partially bypassing the FPU and SSE/MMX, so it is
| slower than a true float, but faster and smaller
| than a locked float.
| *Warning* If your float usage is thwarted by
| the A-B-A problem this class isn't for you
| *Warning* The atomic specification says we return
| values, not l-values. So (i = j) = k doesn't work.
|
| Collaborators: Intel's tbb::atomic handles memory atomicity
----------------------------------------------------------------*/
typedef atomic_float_<FLOATING_POINT,MEMORY_BLOCK> self_t;
tbb::atomic<MEMORY_BLOCK> atomic_value_;
template<memory_semantics M>
FLOATING_POINT fetch_and_store( FLOATING_POINT value )
{
const MEMORY_BLOCK value_ =
atomic_value_.tbb::atomic<MEMORY_BLOCK>::fetch_and_store<M>((MEMORY_BLOCK&)value);
//atomic specification requires returning old value, not new one
return reinterpret_cast<const FLOATING_POINT&>(value_);
}
FLOATING_POINT fetch_and_store( FLOATING_POINT value )
{
const MEMORY_BLOCK value_ =
atomic_value_.tbb::atomic<MEMORY_BLOCK>::fetch_and_store((MEMORY_BLOCK&)value);
//atomic specification requires returning old value, not new one
return reinterpret_cast<const FLOATING_POINT&>(value_);
}
template<memory_semantics M>
FLOATING_POINT compare_and_swap( FLOATING_POINT value, FLOATING_POINT comparand )
{
const MEMORY_BLOCK value_ =
atomic_value_.tbb::atomic<MEMORY_BLOCK>::compare_and_swap<M>((MEMORY_BLOCK&)value,(MEMORY_BLOCK&)comparand);
//atomic specification requires returning old value, not new one
return reinterpret_cast<const FLOATING_POINT&>(value_);
}
FLOATING_POINT compare_and_swap(FLOATING_POINT value, FLOATING_POINT compare)
{
const MEMORY_BLOCK value_ =
atomic_value_.tbb::atomic<MEMORY_BLOCK>::compare_and_swap((MEMORY_BLOCK&)value,(MEMORY_BLOCK&)compare);
//atomic specification requires returning old value, not new one
return reinterpret_cast<const FLOATING_POINT&>(value_);
}
operator FLOATING_POINT() const volatile // volatile qualifier here for backwards compatibility
{
const MEMORY_BLOCK value_ = atomic_value_;
return reinterpret_cast<const FLOATING_POINT&>(value_);
}
//Note: atomic specification says we return a copy of the base value, not an l-value
FLOATING_POINT operator=(FLOATING_POINT rhs)
{
const MEMORY_BLOCK value_ = atomic_value_.tbb::atomic<MEMORY_BLOCK>::operator =((MEMORY_BLOCK&)rhs);
return reinterpret_cast<const FLOATING_POINT&>(value_);
}
//Note: atomic specification says we return an l-value when operating among atomics
self_t& operator=(self_t& rhs)
{
const MEMORY_BLOCK value_ = atomic_value_.tbb::atomic<MEMORY_BLOCK>::operator =((MEMORY_BLOCK&)rhs);
return *this;
}
FLOATING_POINT& _internal_reference() const
{
return reinterpret_cast<FLOATING_POINT&>(atomic_value_.tbb::atomic<MEMORY_BLOCK>::_internal_reference());
}
FLOATING_POINT operator+=(FLOATING_POINT value)
{
FLOATING_POINT old_value_, new_value_;
do
{
old_value_ = reinterpret_cast<FLOATING_POINT&>(atomic_value_);
new_value_ = old_value_ + value;
//floating point binary representation is not an issue because
//we are using our own compare_and_swap, thus comparing floats with floats
} while(self_t::compare_and_swap(new_value_,old_value_) != old_value_);
return (new_value_); //return resulting value
}
FLOATING_POINT operator*=(FLOATING_POINT value)
{
FLOATING_POINT old_value_, new_value_;
do
{
old_value_ = reinterpret_cast<FLOATING_POINT&>(atomic_value_);
new_value_ = old_value_ * value;
//floating point binary representation is not an issue because
//we are using our own compare_and_swap, thus comparing floats with floats
} while(self_t::compare_and_swap(new_value_,old_value_) != old_value_);
return (new_value_); //return resulting value
}
FLOATING_POINT operator/=(FLOATING_POINT value)
{
FLOATING_POINT old_value_, new_value_;
do
{
old_value_ = reinterpret_cast<FLOATING_POINT&>(atomic_value_);
new_value_ = old_value_ / value;
//floating point binary representation is not an issue because
//we are using our own compare_and_swap, thus comparing floats with floats
} while(self_t::compare_and_swap(new_value_,old_value_) != old_value_);
return (new_value_); //return resulting value
}
FLOATING_POINT operator-=(FLOATING_POINT value)
{
return this->operator+=(-value); //return resulting value
}
//Prefix operator
FLOATING_POINT operator++()
{
return this->operator+=(1); //return resulting value
}
//Prefix operator
FLOATING_POINT operator--()
{
return this->operator+=(-1); //return resulting value
}
//Postfix operator
FLOATING_POINT operator++(int)
{
const FLOATING_POINT temp = *this;
this->operator+=(1);
return temp; //return the value held before incrementing
}
//Postfix operator
FLOATING_POINT operator--(int)
{
const FLOATING_POINT temp = *this;
this->operator+=(-1);
return temp; //return the value held before decrementing
}
FLOATING_POINT fetch_and_add( FLOATING_POINT addend )
{
const FLOATING_POINT old_value_ = *this; //read via operator FLOATING_POINT() so the bits are reinterpreted, not numerically converted
this->operator+=(addend);
//atomic specification requires returning old value, not new one as in operator x=
return old_value_;
}
FLOATING_POINT fetch_and_increment()
{
const FLOATING_POINT old_value_ = *this; //read via operator FLOATING_POINT() so the bits are reinterpreted, not numerically converted
this->operator+=(+1);
//atomic specification requires returning old value, not new one as in operator x=
return old_value_;
}
FLOATING_POINT fetch_and_decrement()
{
const FLOATING_POINT old_value_ = *this; //read via operator FLOATING_POINT() so the bits are reinterpreted, not numerically converted
this->operator+=(-1);
//atomic specification requires returning old value, not new one as in operator x=
return old_value_;
}
};
typedef atomic_float_<float,uint_32> AtomicFloat;
typedef atomic_float_<double,uint_64> AtomicDouble;
```
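For readers who just want the retry-loop idea without the C++/TBB machinery, here is a hypothetical Python sketch; a lock stands in for the hardware compare-and-swap primitive, and `struct` does the float-to-bits reinterpretation that the `reinterpret_cast`s do in the C++ code:

```python
import struct
import threading

def _bits(f):   # reinterpret a float's 32-bit pattern as an unsigned int
    return struct.unpack("<I", struct.pack("<f", f))[0]

def _float(b):  # and back again
    return struct.unpack("<f", struct.pack("<I", b))[0]

class AtomicFloat:
    def __init__(self, value=0.0):
        self._lock = threading.Lock()  # stand-in for the hardware CAS instruction
        self._value_bits = _bits(value)

    def load(self):
        return _float(self._value_bits)

    def compare_and_swap(self, new, expected):
        """Swap in `new` iff the stored bits equal `expected`'s bits; return the old value."""
        with self._lock:
            old = self._value_bits
            if old == _bits(expected):   # compare raw bit patterns, as the C++ does
                self._value_bits = _bits(new)
            return _float(old)

    def add(self, delta):
        while True:  # same do/while retry loop as operator+= above
            old = self.load()
            new = old + delta
            if self.compare_and_swap(new, old) == old:  # nobody raced us: done
                return new
```

The comparison happening on the raw bit pattern, not on float values, is the essence of the trick being debated in this question.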
|
I would seriously advise against public inheritance. I don't know what the atomic implementation is like, but I'm assuming it has overloaded operators that use it as the integral type, which means that those promotions would be used instead of your float in many (maybe most?) cases.
I don't see any reason why that wouldn't work, but like you I have no way to prove that...
One note: your `operator float()` routine does not have load-acquire semantics, and shouldn't it be marked const volatile (or definitely at least const)?
EDIT: If you are going to provide operator--() you should provide both prefix/postfix forms.
|
It looks like your implementation assumes that `sizeof(size_t) == sizeof(float)`. Will that always be true for your target platforms?
And I wouldn't say *threading* heresy so much as *casting* heresy. :)
|
Is this C++ implementation for an Atomic float safe?
|
[
"",
"c++",
"thread-safety",
"atomic",
"tbb",
""
] |
This is a question I always wonder about when programming: what should we use when writing code:
```
var myFiles = Directory.GetFiles(fullPath);
```
or
```
string[] myFiles = Directory.GetFiles(fullPath);
```
***var*** is new and is an *implicitly typed local variable*, so we can only use it locally, and it has rules like it can't be null, etc., but I wonder if we get any advantage from using it "normally".
By "normally" I mean outside of **Anonymous Types**, **Object and Collection Initializers** and **Query Expressions**, where anonymous objects were the intent behind var; so what I mean is... just like the example above.
What are your thoughts?
|
Beyond the obvious use of `var` with LINQ, I also use it to abbreviate hairy variable declarations for readability, e.g.:
```
var d = new Dictionary<string, Dictionary<string, Queue<SomeClass>>>();
```
In general, I get a kind of comfort (for want of a better word) from static typing that makes me reluctant to give it up. I like the feeling that I know what I'm doing when I'm declaring a variable. Declaring a variable isn't just telling the compiler something, it's telling the person reading your code something.
Let me give you an example. Suppose I have a method that returns a `List<string>`. This code is certainly correct, and I think it's how 90% of C# developers would probably write it:
```
List<string> list = MyMethod();
```
Obviously, right? In fact, here's a place you could just as easily use `var`.
True enough. But *this* version of the code isn't just declaring a variable, it's telling me what the person who wrote it is intending to do:
```
IEnumerable<string> list = MyMethod();
```
The developer who wrote that code is telling me "I'm not going to be changing this list, nor am I going to use an index to access its members. All I'm going to do is iterate across it." That's a lot of information to get across in a single line of code. It's something you give up if you use `var`.
Of course, you're not giving it up if you weren't using it in the first place. If you're the kind of developer who would write that line of code, you already know that you wouldn't use `var` there.
**Edit:**
I just reread Jon Skeet's post, and this quote from Eric Lippert jumped out at me:
> Implicitly typed locals are just one small way in which you can deemphasize the how and thereby emphasize the what.
I think that actually in a lot of cases using implicit typing is leaving the what implicit. It's just OK to not dwell on the what. For instance, I'll casually write a LINQ query like:
```
var rows = from DataRow r in parentRow.GetChildRows(myRelation)
where r.Field<bool>("Flag")
orderby r.Field<int>("SortKey")
select r;
```
When I read that code, one of the things I think when I'm reading it is "`rows` is an `IEnumerable<DataRow>`." Because I know that what LINQ queries return is `IEnumerable<T>`, and I can see the type of the object being selected right there.
That's a case where the what *hasn't* been made explicit. It's been left for me to infer.
Now, in about 90% of the cases where I use LINQ, this doesn't matter one tiny little bit. Because 90% of the time, the next line of code is:
```
foreach (DataRow r in rows)
```
But it's not hard to envision code in which it would be very useful to declare `rows` as `IEnumerable<DataRow>` - code where a lot of different kinds of objects were being queried, it wasn't feasible to put the query declaration next to the iteration, and it would be useful to be able to inspect `rows` with IntelliSense. And that's a what thing, not a how thing.
|
You'll get a huge variety of opinions on this one - from "use var everywhere" to "only use var with anonymous types, where you basically have to." I like [Eric Lippert's take on it](http://csharpindepth.com/ViewNote.aspx?NoteID=61):
> All code is an abstraction. Is what
> the code is “really” doing is
> manipulating data? No. Numbers? Bits?
> No. Voltages? No. Electrons? Yes, but
> understanding the code at the level of
> electrons is a bad idea! The art of
> coding is figuring out what the right
> level of abstraction is for the
> audience.
>
> In a high level language there is
> always this tension between WHAT the
> code does (semantically) and HOW the
> code accomplishes it. Maintenance
> programmers need to understand both
> the what and the how if they’re going
> to be successful in making changes.
>
> The whole point of LINQ is that it
> massively de-emphasizes the "how" and
> massively emphasizes the "what". By
> using a query comprehension, the
> programmer is saying to the future
> audience "I believe that you should
> neither know nor care exactly how this
> result set is being computed, but you
> should care very much about what the
> semantics of the resulting set are."
> They make the code closer to the
> business process being implemented and
> farther from the bits and electrons
> that make it go.
>
> Implicitly typed locals are just one
> small way in which you can deemphasize
> the how and thereby emphasize the
> what. Whether that is the right thing
> to do in a particular case is a
> judgment call. So I tell people that
> if knowledge of the type is relevant
> and its choice is crucial to the
> continued operation of the method,
> then do not use implicit typing.
> Explicit typing says "I am telling you
> how this works for a reason, pay
> attention". Implicit typing says "it
> doesn’t matter a bit whether this
> thing is a List or a
> Customer[], what matters is that it is
> a collection of customers."
Personally I don't *tend* to use it if the type isn't reasonably obvious - where I include LINQ queries as being "reasonably obvious". I wouldn't do it for `Directory.GetFiles` for instance, as it's not really obvious that that returns a `string[]` instead of (say) a `FileInfo[]` (or something else entirely) - and that makes a big difference to what you do later.
If there's a constructor call on the right hand side of the assignment operator, I'm much more likely to go with `var`: it's blatantly obvious what the type will be. This is particularly handy with complex generic types, e.g. `Dictionary<string,List<int>>`.
|
What to use: var or object name type?
|
[
"",
"c#",
".net-3.5",
"c#-3.0",
"anonymous-objects",
""
] |
I'm looking for a PHP library/function/class which can create [Identicon](http://en.wikipedia.org/wiki/Identicon)s.
|
How about [this](http://scott.sherrillmix.com/blog/blogger/wp_identicon/)?
It's how Scott did the identicons for WordPress; you can download the code and see for yourself.
Hope it helps.
|
I use this:
```
class Gravatar
{
static public function GetGravatarUrl( $email, $size = 128, $type = 'identicon', $rating = 'pg' )
{
$gravatar = sprintf( 'http://www.gravatar.com/avatar/%s?d=%s&s=%d&r=%s',
md5( $email ), $type, $size, $rating );
return $gravatar;
}
}
```
Which is basically the same thing SO uses. It supports everything [gravatar.com](http://en.gravatar.com/site/implement) supports.
|
Is there identicon library for PHP
|
[
"",
"php",
"identicon",
""
] |
I ran into an interesting issue with C#. I have code like below.
```
List<Func<int>> actions = new List<Func<int>>();
int variable = 0;
while (variable < 5)
{
actions.Add(() => variable * 2);
++ variable;
}
foreach (var act in actions)
{
Console.WriteLine(act.Invoke());
}
```
I expect it to output 0, 2, 4, 6, 8. However, it actually outputs five 10s.
It seems that it is due to all actions referring to one captured variable. As a result, when they get invoked, they all have same output.
Is there a way to work around this limitation so that each action instance has its own captured variable?
|
Yes - take a copy of the variable inside the loop:
```
while (variable < 5)
{
int copy = variable;
actions.Add(() => copy * 2);
++ variable;
}
```
You can think of it as if the C# compiler creates a "new" local variable every time it hits the variable declaration. In fact it'll create appropriate new closure objects, and it gets complicated (in terms of implementation) if you refer to variables in multiple scopes, but it works :)
Note that a more common occurrence of this problem is using `for` or `foreach`:
```
for (int i=0; i < 10; i++) // Just one variable
foreach (string x in foo) // And again, despite how it reads out loud
```
See section 7.14.4.2 of the C# 3.0 spec for more details of this, and my [article on closures](http://csharpindepth.com/Articles/Chapter5/Closures.aspx) has more examples too.
Note that as of the C# 5 compiler and beyond (even when specifying an earlier version of C#), the behavior of `foreach` changed so you no longer need to make local copy. See [this answer](https://stackoverflow.com/questions/8898925/is-there-a-reason-for-cs-reuse-of-the-variable-in-a-foreach/8899347#8899347) for more details.
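For comparison, the same late-binding pitfall, and the take-a-copy fix, show up in any language with closures; here is a minimal Python sketch (illustrative only, not the C# in question):

```python
# All lambdas close over the same loop variable, so they see its final value
actions = []
for i in range(5):
    actions.append(lambda: i * 2)
print([f() for f in actions])  # every call uses i == 4

# Fix: bind the current value as a default argument (a per-iteration copy)
fixed = [lambda i=i: i * 2 for i in range(5)]
print([f() for f in fixed])
```

The `i=i` default plays the same role as the `int copy = variable;` line in the C# answer: it snapshots the value at each iteration.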
|
I believe what you are experiencing is something known as a closure (<http://en.wikipedia.org/wiki/Closure_(computer_science)>). Your lambda has a reference to a variable which is scoped outside the function itself. Your lambda is not interpreted until you invoke it, and once it is, it will get the value the variable has at execution time.
|
Captured variable in a loop in C#
|
[
"",
"c#",
"closures",
"captured-variable",
""
] |
I've noticed that plenty of open-source projects don't use bitwise flags anymore, even though they're fully supported by the programming environments common on the web (PHP/MySQL).
Is this a "lost practice" because of some real problem, or is it just that a lot of PHP programmers don't know how to handle this kind of implementation?
Nothing too important, just very curious :)
Thanks to you all
|
I'll stick my neck out and say that every technical position requires a sound understanding of bitwise operations.
And I have an anecdote that indirectly addresses the topic.
January 2007 I was in Cochin, India, recruiting for permanent development staff. Since I wasn't involved in the preliminary screening of candidates I had no idea what standard to expect, so I had prepared a range of questions and topics ranging from simple understanding of binary and hexadecimal through to architecture, design, and project management.
When I discussed my approach with the Indian HR guy I was (gently) chided for pitching too low. He made it clear that my questions about hex would possibly be construed as an insult to the candidates experience or education.
But my experience of interviewing hundreds of candidates in the UK had fixed in me a conviction that it wasn't possible to pitch too low. My opinion was and still is that if it becomes obvious a candidate is well qualified then it's simple and easy to adjust the level of discussion. I've never had anyone express feelings of being insulted, on the contrary I think a well qualified candidate might feel relieved at a flying start to the interview. It also helps to break the ice and build a rapport needed for a meaningful interview. On the other hand, unqualified candidates usually fall at these lower hurdles.
But not wanting to completely ignore local advice I cautiously decided to include my basic interview topics, and was quite prepared to abandon them if they didn't work.
As the interviews progressed I was glad that I started at that level. It didn't offend anyone, and unsuitable candidates were easily identified.
This is not to say that I expect candidates to deal with bit-twiddling day to day, but whatever the language a sound understanding of the fundamentals of programming is essential. Even developers at the higher levels of abstraction are exposed to hex on a regular basis (RGB values, for example). Parroting [stuff you find on the net](http://www.codeproject.com/) will only help to the extent that things work perfectly first time.
But for developers starting out in the past five years I believe it's all too easy to gloss over the fundamentals, cosseted by well-intentioned IDEs and the meme of "codeless" programming. The Visual Studio installation splash screens boast about developing without writing code. Indeed, [does Visual Studio rot the mind](http://www.charlespetzold.com/etc/DoesVisualStudioRotTheMind.html)?
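For readers who haven't met them, the bitwise flags the question asks about take only a few lines; Python is used here purely for illustration, and the flag names are made up:

```python
# Each flag is a distinct bit, so one integer can hold many booleans
READ, WRITE, EXECUTE = 0b001, 0b010, 0b100

perms = READ | WRITE        # set two flags at once
assert perms & WRITE        # test a flag (non-zero means it's set)
perms &= ~WRITE             # clear a flag
perms |= EXECUTE            # set another
print(bin(perms))           # READ and EXECUTE remain set
```

Stored in a single integer column, such flags are also cheap to query in MySQL with the same `&` and `|` operators.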
|
A lot of programmers these days seem to have their heads filled with just enough knowledge to brute-force code out, and are then sent into the workforce without being taught what words like "bitwise" even mean.
It's a dying art, I tell you...
|
Bitwise Flags abandoned?
|
[
"",
"php",
"mysql",
"open-source",
""
] |
I'm trying to figure out how to pass arguments to an anonymous function in JavaScript.
Check out this sample code and I think you will see what I mean:
```
<input type="button" value="Click me" id="myButton" />
<script type="text/javascript">
var myButton = document.getElementById("myButton");
var myMessage = "it's working";
myButton.onclick = function(myMessage) { alert(myMessage); };
</script>
```
When clicking the button the message: `it's working` should appear. However the `myMessage` variable inside the anonymous function is null.
jQuery uses a lot of anonymous functions, what is the best way to pass that argument?
|
Your specific case can be fixed simply by removing the parameter:
```
<script type="text/javascript">
var myButton = document.getElementById("myButton");
var myMessage = "it's working";
myButton.onclick = function() { alert(myMessage); };
</script>
```
This example works because the anonymous function created and assigned as a handler to the element has access to the variables defined in the context where it was created (a closure).
For the record, a handler assigned by setting an `onxxx` property receives a single argument, the event object passed by the DOM, and you cannot force other arguments to be passed in there.
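The closure idea this relies on can be sketched outside the browser too; this Python analogue (the names are made up for illustration) shows a factory binding the message so the zero-argument handler still sees it at call time:

```python
def make_handler(message):
    def handler():                  # takes no arguments, like the DOM handler
        return f"alert: {message}"  # but still sees the captured message
    return handler

on_click = make_handler("it's working")
print(on_click())
```

Binding the data when the handler is created, rather than when it is called, is exactly what the jQuery-style anonymous-function pattern does.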
|
What you've done doesn't work because you're binding an event to a function. As such, it's the event which defines the parameters that will be passed when the event is raised (i.e. JavaScript doesn't know about the parameter in the function you've bound to onclick, so it can't pass anything into it).
You could do this however:
```
<input type="button" value="Click me" id="myButton"/>
<script type="text/javascript">
var myButton = document.getElementById("myButton");
var myMessage = "it's working";
var myDelegate = function(message) {
alert(message);
}
myButton.onclick = function() {
myDelegate(myMessage);
};
</script>
```
|
How can I pass arguments to anonymous functions in JavaScript?
|
[
"",
"javascript",
"jquery",
""
] |