| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I'm developing an MFC application and I've recently been searching for a good method to remove the icon from a Windows title bar, but retain the close button. The two obvious candidate solutions are turning off the system menu style or using the tool window style, but I would prefer not to disable the system menu or use the shrunken tool window title bar. Many MFC applications have this functionality, so I wonder: am I missing some standard way of doing this?
|
You can use `WM_NCRBUTTONDOWN` to detect if the user has right-clicked on your caption and then bring up the system menu.
|
Set `WS_EX_DLGMODALFRAME` extended style.
|
Removing icon from Windows title bars without ditching close button or system menu?
|
[
"",
"c++",
"windows",
"user-interface",
"mfc",
""
] |
I am developing a GPL-licensed application in Python and need to know if the GPL allows my program to use proprietary plug-ins. This is [what the FSF has to say](http://www.gnu.org/licenses/gpl-faq.html) on the issue:
> **If a program released under the GPL uses plug-ins, what are the requirements for the licenses of a plug-in?**
>
> It depends on how the program invokes its plug-ins. If the program uses fork and exec to invoke plug-ins, then the plug-ins are separate programs, so the license for the main program makes no requirements for them.
>
> If the program dynamically links plug-ins, and they make function calls to each other and share data structures, we believe they form a single program, which must be treated as an extension of both the main program and the plug-ins. This means the plug-ins must be released under the GPL or a GPL-compatible free software license, and that the terms of the GPL must be followed when those plug-ins are distributed.
>
> If the program dynamically links plug-ins, but the communication between them is limited to invoking the ‘main’ function of the plug-in with some options and waiting for it to return, that is a borderline case.
The distinction between fork/exec and dynamic linking, besides being kind of artificial, doesn't carry over to interpreted languages: what about a Python/Perl/Ruby plugin, which gets loaded via `import` or `execfile`?
(edit: I understand why the distinction between fork/exec and dynamic linking, but it seems like someone who wanted to comply with the GPL but go against the "spirit" --I don't-- could just use fork/exec and interprocess communication to do pretty much anything).
The best solution would be to add an exception to my license to explicitly allow the use of proprietary plugins, but I am unable to do so since I'm using [Qt](http://trolltech.com/products/qt)/[PyQt](http://www.riverbankcomputing.co.uk/software/pyqt/intro) which is GPL.
|
> The distinction between fork/exec and dynamic linking, besides being kind of artificial,
I don't think it's artificial at all. Basically they are just drawing the line based upon the level of integration. If the program has "plugins" which are essentially fire-and-forget with no API-level integration, then the resulting work is unlikely to be considered a derived work. Generally speaking, a plugin which is merely forked/exec'ed would fit this criterion, though there may be cases where it does not. This especially applies if the "plugin" code would work independently of your code as well.
If, on the other hand, the code is deeply dependent upon the GPL'ed work, such as extensively calling APIs or tight data structure integration, then things are more likely to be considered a derived work. I.e., the "plugin" cannot exist on its own without the GPL'ed product, and a product with this plugin installed is essentially a derived work of the GPL'ed product.
So to make it a little more clear, the same principles could apply to your interpreted code. If the interpreted code relies heavily upon your APIs (or vice-versa) then it would be considered a derived work. If it is just a script that executes on its own with extremely little integration, then it may not.
Does that make more sense?
|
@Daniel
> The distinction between fork/exec and dynamic linking, besides being kind of artificial, doesn't carry over to interpreted languages: what about a Python/Perl/Ruby plugin, which gets loaded via import or execfile?
I'm not sure that the distinction **is** artificial. After a dynamic load the plugin code shares an execution context with the GPLed code. After a fork/exec it does not.
In any case, I would guess that `import`ing causes the new code to run in the same execution context as the GPL'ed bit, and you should treat it like the dynamic-link case. No?
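For what it's worth, this is easy to check empirically. A small Python sketch (the "plugin" and its function name are invented for illustration) showing that dynamically loaded code runs in the importing process, unlike a fork/exec'd program:

```python
import importlib.util
import os
import tempfile

# A hypothetical "plugin" written to disk and loaded at runtime, the way
# a Python host program might load third-party plugin code.
plugin_source = (
    "import os\n"
    "def report_pid():\n"
    "    # Runs in the importing process -- no fork/exec involved.\n"
    "    return os.getpid()\n"
)

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "plugin.py")
    with open(path, "w") as f:
        f.write(plugin_source)

    spec = importlib.util.spec_from_file_location("plugin", path)
    plugin = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(plugin)

    # The loaded code shares the host's process and address space,
    # much like the dynamic-link case.
    same_process = plugin.report_pid() == os.getpid()
```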
|
Proprietary plug-ins for GPL programs: what about interpreted languages?
|
[
"",
"python",
"plugins",
"open-source",
"licensing",
"interpreted-language",
""
] |
Using regular expressions, what is the simplest way to fetch a website's HTML and find the value inside this tag (or any attribute's value, for that matter)?
```
<html>
<head>
[snip]
<meta name="generator" value="thevalue i'm looking for" />
[snip]
```
|
It depends on how sophisticated an HTTP request you need to build (authentication, etc.). Here's one simple way I've seen used in the past.
```
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

StringBuilder html = new StringBuilder();
URL url = new URL("http://www.google.com/");
BufferedReader input = null;
try {
    input = new BufferedReader(
        new InputStreamReader(url.openStream()));
    String htmlLine;
    while ((htmlLine = input.readLine()) != null) {
        html.append(htmlLine);
    }
}
finally {
    if (input != null) {
        input.close();
    }
}
Pattern exp = Pattern.compile(
    "<meta name=\"generator\" value=\"([^\"]*)\" />");
Matcher matcher = exp.matcher(html.toString());
if (matcher.find()) {
    System.out.println("Generator: " + matcher.group(1));
}
```
*Probably plenty of typos here to be found when compiled.
(hope this wasn't homework)*
|
It's amazing how no one, when addressing the problem of using RegEx with HTML, confronts the fact that HTML is often **NOT** well-formed, which renders a lot of HTML parsers completely useless.
If you are developing tools to analyze web pages and it's a fact that these are not well-formed HTML, the statement "Regex should never be used to parse HTML" or "use an HTML parser" is just completely bogus. The fact is that in the real world, people create HTML as they feel like, not necessarily in a form suited for parsers.
RegEx *is* a completely valid way to find elements in text, and thus in HTML. If there is another reasonable way to confront the problems the original poster has, then post it instead of falling back on a "use a parser" or "RTFM" statement.
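As a small illustration of that point, here is roughly how the original poster's extraction might look in Python (the sample HTML is taken from the question; the pattern is deliberately loose about whitespace, since real-world markup is rarely tidy):

```python
import re

html = '<html><head><meta name="generator" value="thevalue i\'m looking for" /></head>'

# Tolerant of extra whitespace between attributes; captures the value
# of the "value" attribute on the generator meta tag.
match = re.search(r'<meta\s+name="generator"\s+value="([^"]*)"', html)
value = match.group(1) if match else None
```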
|
Quick way to find a value in HTML (Java)
|
[
"",
"java",
"html",
"regex",
""
] |
What code analysis tools do you use on your Java projects?
I am interested in all kinds:
* static code analysis tools (FindBugs, PMD, and any others)
* code coverage tools (Cobertura, Emma, and any others)
* any other instrumentation-based tools
* anything else, if I'm missing something
If applicable, also state what build tools you use and how well these tools integrate with both your IDEs and build tools.
If a tool is only available in a specific way (as an IDE plugin, or, say, a build tool plugin), that information is also worth noting.
|
For static analysis tools I often use CPD, [PMD](http://pmd.sourceforge.net), [FindBugs](http://findbugs.sourceforge.net), and [Checkstyle](http://checkstyle.sourceforge.net).
CPD is the PMD "Copy/Paste Detector" tool. I was using PMD for a little while before I noticed the ["Finding Duplicated Code" link](http://pmd.sourceforge.net/cpd.html) on the [PMD web page](http://pmd.sourceforge.net).
I'd like to point out that these tools can sometimes be extended beyond their "out-of-the-box" set of rules. And not just because they're open source so that you can rewrite them. Some of these tools come with applications or "hooks" that allow them to be extended. For example, PMD comes with the ["designer" tool](http://pmd.sourceforge.net/howtowritearule.html) that allows you to create new rules. Also, Checkstyle has the [DescendantToken](http://checkstyle.sourceforge.net/config_misc.html#DescendantToken) check that has properties that allow for substantial customization.
I integrate these tools with [an Ant-based build](http://virtualteamtls.svn.sourceforge.net/viewvc/virtualteamtls/trunk/scm/common.xml?view=markup). You can follow the link to see my commented configuration.
In addition to the simple integration into the build, I find it helpful to configure the tools to be somewhat "integrated" in a couple of other ways. Namely, report generation and warning suppression uniformity. I'd like to add these aspects to this discussion (which should probably have the "static-analysis" tag also): how are folks configuring these tools to create a "unified" solution? (I've asked this question separately [here](https://stackoverflow.com/questions/79918/configuring-static-analysis-tools-for-uniformity))
First, for warning reports, I transform the output so that each warning has the simple format:
```
/absolute-path/filename:line-number:column-number: warning(tool-name): message
```
This is often called the "Emacs format," but even if you aren't using Emacs, it's a reasonable format for homogenizing reports. For example:
```
/project/src/com/example/Foo.java:425:9: warning(Checkstyle):Missing a Javadoc comment.
```
My warning format transformations are done by my Ant script with Ant [filterchains](http://ant.apache.org/manual/Types/filterchain.html).
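A minimal sketch of that transformation, written here in Python rather than as an Ant filterchain (the function and field names are invented for illustration):

```python
# Normalize a warning from any tool into the "Emacs format" shown above:
# /absolute-path/filename:line-number:column-number: warning(tool-name): message
def to_emacs_format(tool, path, line, col, message):
    return f"{path}:{line}:{col}: warning({tool}): {message}"

record = to_emacs_format(
    "Checkstyle", "/project/src/com/example/Foo.java",
    425, 9, "Missing a Javadoc comment.")
```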
The second "integration" that I do is for warning suppression. By default, each tool supports comments or an annotation (or both) that you can place in your code to silence a warning that you want to ignore. But these various warning suppression requests do not have a consistent look, which seems somewhat silly. When you're suppressing a warning, you're suppressing a warning, so why not always write "`SuppressWarnings`"?
For example, PMD's default configuration suppresses warning generation on lines of code with the string "`NOPMD`" in a comment. Also, PMD supports Java's `@SuppressWarnings` annotation. I configure PMD to use comments containing "`SuppressWarning(PMD.`" instead of `NOPMD` so that PMD suppressions look alike. I fill in the particular rule that is violated when using the comment style suppression:
```
// SuppressWarnings(PMD.PreserveStackTrace) justification: (false positive) exceptions are chained
```
Only the "`SuppressWarnings(PMD.`" part is significant for a comment, but it is consistent with PMD's support for the `@SuppressWarnings` annotation, which does recognize individual rule violations by name:
```
@SuppressWarnings("PMD.CompareObjectsWithEquals") // justification: identity comparison intended
```
Similarly, Checkstyle suppresses warning generation between pairs of comments (no annotation support is provided). By default, comments to turn Checkstyle off and on contain the strings `CHECKSTYLE:OFF` and `CHECKSTYLE:ON`, respectively. Changing this configuration (with Checkstyle's "SuppressionCommentFilter") to use the strings "`BEGIN SuppressWarnings(CheckStyle.`" and "`END SuppressWarnings(CheckStyle.`" makes the controls look more like PMD:
```
// BEGIN SuppressWarnings(Checkstyle.HiddenField) justification: "Effective Java," 2nd ed., Bloch, Item 2
// END SuppressWarnings(Checkstyle.HiddenField)
```
With Checkstyle comments, the particular check violation (`HiddenField`) *is* significant because each check has its own "`BEGIN/END`" comment pair.
FindBugs also supports warning generation suppression with a `@SuppressWarnings` annotation, so no further configuration is required to achieve some level of uniformity with other tools. Unfortunately, Findbugs has to support a custom `@SuppressWarnings` annotation because the built-in Java `@SuppressWarnings` annotation has a `SOURCE` retention policy which is not strong enough to retain the annotation in the class file where FindBugs needs it. I fully qualify FindBugs warnings suppressions to avoid clashing with Java's `@SuppressWarnings` annotation:
```
@edu.umd.cs.findbugs.annotations.SuppressWarnings("UWF_FIELD_NOT_INITIALIZED_IN_CONSTRUCTOR")
```
These techniques make things look reasonably consistent across tools. Note that having each warning suppression contain the string "`SuppressWarnings`" makes it easy to run a simple search to find all instances for all tools over an entire code base.
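As a sketch of why the uniformity pays off, one loose pattern (an illustration only, not a precise grammar) can pull the suppressions for every tool out of a code base at once:

```python
import re

# Sample lines in the unified styles described above.
source = '''
// SuppressWarnings(PMD.PreserveStackTrace) justification: exceptions are chained
@SuppressWarnings("PMD.CompareObjectsWithEquals") // identity comparison intended
// BEGIN SuppressWarnings(Checkstyle.HiddenField) justification: Bloch, Item 2
'''

# One pattern matches the comment style, the annotation style, and the
# Checkstyle BEGIN/END style alike, capturing the tool-qualified rule name.
hits = re.findall(r'SuppressWarnings\(?"?([A-Za-z]+\.[A-Za-z]+)', source)
```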
|
I use a combination of Cobertura, Checkstyle, (Ecl)Emma and Findbugs.
[EclEmma](http://www.eclemma.org/) is an *awesome* Eclipse plugin that shows the code coverage by coloring the java source in the editor ([screenshot](http://www.eclemma.org/images/screen.png)) - the coverage is generated by running a JUnit test. This is really useful when you are trying to figure out which lines are covered in a particular class, or if you want to see just which lines are covered by a single test. This is much more user friendly and useful than generating a report and then looking through the report to see which classes have low coverage.
The Checkstyle and Findbugs Eclipse plugins are also useful, they generate warnings in the editor as you type.
Maven2 has report plugins that work with the above tools to generate reports at build time. We use this to get overall project reports, which are more useful when you want aggregate numbers. These are generated by our CI builds, which run using [Continuum](http://continuum.apache.org/).
|
What code analysis tools do you use for your Java projects?
|
[
"",
"java",
"code-coverage",
"static-analysis",
""
] |
With .NET, what is the best way to interact with a service (i.e. how do most tray apps communicate with their services)? It would be preferable if this method were cross-platform as well (working in Mono, so I guess Remoting is out?).
---
### Edit:
Forgot to mention, we still have to support Windows 2000 machines in the field, so WCF and anything above .Net 2.0 won't fly.
|
Be aware that if you are planning to eventually deploy on Windows Vista or Windows Server 2008, many ways that this can be done today will not work. This is because of the introduction of a new security feature called "Session 0 Isolation".
Most Windows services have been moved to run in Session 0 now in order to properly isolate them from the rest of the system. An extension of this is that the first user to log in to the system is no longer placed in Session 0; they are placed in Session 1. And hence, the isolation will break code that does certain types of communication between services and desktop applications.
The best way to write code today that will work on Vista and Server 2008 going forward when doing communication between services and applications is to use a proper cross-process API like RPC, Named Pipes, etc. Do not use SendMessage/PostMessage as that will fail under Session 0 Isolation.
<http://www.microsoft.com/whdc/system/vista/services.mspx>
Now, given your requirements, you are going to be in a bit of a pickle. For the cross-platform concerns, I'm not sure if Remoting would be supported. You may have to drop down and go all the way back to sockets: <http://msdn.microsoft.com/en-us/library/system.net.sockets.aspx>
|
If this is a tray app, and not a true service, be wary of how you set up your communications if using pipes or TCP/IP. If multiple users are logged into a machine (Citrix, Remote Desktop) and each user launches a tray-app "service", then you can run into a situation where you have multiple processes trying to use the same well-known port or pipe. Of course this isn't a problem if you don't plan on supporting multiple users, or if you have a true service as opposed to a tray app that runs in each user's shell.
|
How to communicate with a windows service from an application that interacts with the desktop?
|
[
"",
"c#",
".net",
"windows-services",
"service",
"mono",
""
] |
I constantly hear how bad reflection is to use. While I generally avoid reflection and rarely find situations where it is impossible to solve my problem without it, I was wondering...
For those who have used reflection in applications, have you measured performance hits and, is it really so bad?
|
It is. But that depends on what you're trying to do.
I use reflection to dynamically load assemblies (plugins) and its performance "penalty" is not a problem, since the operation is something I do during startup of the application.
However, if you're reflecting inside a series of nested loops with reflection calls on each, I'd say you should revisit your code :)
For operations you perform only a couple of times, reflection is perfectly acceptable and you won't notice any delay or problem with it. It's a very powerful mechanism and it is even used by .NET itself, so I don't see why you shouldn't give it a try.
|
In his talk [The Performance of Everyday Things](http://weblogs.asp.net/craigshoemaker/archive/2008/06/17/the-performance-of-everyday-things.aspx), Jeff Richter shows that calling a method by reflection is about **1000 times slower** than calling it normally.
Jeff's tip: if you need to call the method multiple times, use reflection once to find it, then assign it to a **delegate**, and then call the delegate.
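The look-up-once-then-reuse idea carries over to other dynamic dispatch as well; a Python analogue (illustrative only, not .NET delegates) of resolving a member by name once and then calling through the saved reference:

```python
class Widget:
    def ping(self):
        return 42

w = Widget()

# Resolve the method by name once (the "reflection" step), then keep the
# bound reference around, like assigning the result to a delegate.
bound = getattr(w, "ping")

# Hot path: call through the prebound reference instead of looking the
# method up by name on every iteration.
results = [bound() for _ in range(3)]
```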
|
How costly is .NET reflection?
|
[
"",
"c#",
".net",
"performance",
"reflection",
""
] |
Is it possible to call a stored procedure from another stored procedure asynchronously?
**Edit:** Specifically I'm working with a DB2 database.
|
Executive summary: Yes, if your database has a message queue service.
You can push a message onto a queue and the queue processor will consume it asynchronously.
* Oracle: queues
* Sql Server: service broker
* DB2: event broker
For "pure" stored procedure languages (PL/Sql or T-Sql) the answer is no, since it works against the fundamental transaction model most databases have.
However, if your database has a queuing mechanism, you can use that to get the same result.
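The shape of the queue pattern, sketched in Python (the job name is invented; a real system would use the database's own broker, as listed above):

```python
import queue
import threading

jobs = queue.Queue()
results = []

def consumer():
    # Background worker playing the role of the queue processor.
    while True:
        job = jobs.get()
        if job is None:
            break
        results.append("ran " + job)
        jobs.task_done()

worker = threading.Thread(target=consumer)
worker.start()

# The "caller" returns immediately after enqueueing, like SEND on a
# broker queue; the work itself happens asynchronously.
jobs.put("update_stats_proc")
jobs.put(None)  # sentinel to stop the worker
worker.join()
```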
|
With MS Sql Server 2005, try the Service Broker and/or CLR stored procedures. I don't think there's anything built directly into TSQL.
|
Asynchronous Stored Procedure Calls
|
[
"",
"sql",
"stored-procedures",
"db2",
""
] |
In a project I am interfacing between C++ and a C library that uses a `stdbool.h` defined as follows:
```
#ifndef _STDBOOL_H
#define _STDBOOL_H
/* C99 Boolean types for compilers without C99 support */
/* http://www.opengroup.org/onlinepubs/009695399/basedefs/stdbool.h.html */
#if !defined(__cplusplus)
#if !defined(__GNUC__)
/* _Bool builtin type is included in GCC */
typedef enum { _Bool_must_promote_to_int = -1, false = 0, true = 1 } _Bool;
#endif
#define bool _Bool
#define true 1
#define false 0
#define __bool_true_false_are_defined 1
#endif
#endif
```
Some structures have `bool` members. So if I have one of these structures defined as a local variable within a C++ function and pass it to a C function, the sizes are inconsistent between C++ and C, as `bool` is one byte in C++ and four bytes in C.
Does anyone have any advice on how to overcome this without resorting to my current solution, which is
```
//#define bool _Bool
#define bool unsigned char
```
This is against the C99 standard for [stdbool.h](http://www.opengroup.org/onlinepubs/000095399/basedefs/stdbool.h.html).
|
I found the answer to my own question by finding a more compatible implementation of `stdbool.h` that is compliant with the C99 standard.
```
#ifndef _STDBOOL_H
#define _STDBOOL_H
#include <stdint.h>
/* C99 Boolean types for compilers without C99 support */
/* http://www.opengroup.org/onlinepubs/009695399/basedefs/stdbool.h.html */
#if !defined(__cplusplus)
#if !defined(__GNUC__)
/* _Bool builtin type is included in GCC */
/* ISO C Standard: 5.2.5 An object declared as
type _Bool is large enough to store
the values 0 and 1. */
/* We choose 8 bit to match C++ */
/* It must also promote to integer */
typedef int8_t _Bool;
#endif
/* ISO C Standard: 7.16 Boolean type */
#define bool _Bool
#define true 1
#define false 0
#define __bool_true_false_are_defined 1
#endif
#endif
```
This is taken from the [Ada Class Library](http://sourceforge.net/projects/adacl) project.
|
Size is not the only thing that will be inconsistent here. In C++ bool is a keyword, and C++ guarantees that a bool can hold a value of either 1 or 0 and nothing else. C doesn't give you this guarantee.
That said, if interoperability between C and C++ is important, you can emulate C's hand-rolled boolean by defining an identical one for C++ and using that instead of the built-in `bool`. That is a tradeoff: you accept a more error-prone boolean in exchange for identical behaviour between the C and C++ booleans.
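For a concrete look at the size mismatch in the question, one can probe the ABI with Python's ctypes (sizes assume a mainstream platform):

```python
import ctypes

# c_bool corresponds to C99 _Bool / C++ bool: one byte on common ABIs.
# An enum-typed fallback "bool", as in the question's stdbool.h, takes
# the size of an int instead.
bool_size = ctypes.sizeof(ctypes.c_bool)
enum_bool_size = ctypes.sizeof(ctypes.c_int)
```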
|
interfacing with stdbool.h C++
|
[
"",
"c++",
"c",
"boolean",
"standards",
""
] |
I want to do something like:
```
MyObject myObj = GetMyObj(); // Create and fill a new object
MyObject newObj = myObj.Clone();
```
And then make changes to the new object that are not reflected in the original object.
I don't often need this functionality, so when it's been necessary, I've resorted to creating a new object and then copying each property individually, but it always leaves me with the feeling that there is a better or more elegant way of handling the situation.
How can I clone or deep copy an object so that the cloned object can be modified without any changes being reflected in the original object?
|
Whereas one approach is to implement the [`ICloneable`](http://msdn.microsoft.com/en-us/library/system.icloneable.aspx) interface (described [here](https://stackoverflow.com/questions/78536/cloning-objects-in-c/78568#78568), so I won't regurgitate), here's a nice deep clone object copier I found on [The Code Project](http://www.codeproject.com/Articles/23832/Implementing-Deep-Cloning-via-Serializing-objects) a while ago and incorporated it into our code.
As mentioned elsewhere, it requires your objects to be serializable.
```
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;
/// <summary>
/// Reference Article http://www.codeproject.com/KB/tips/SerializedObjectCloner.aspx
/// Provides a method for performing a deep copy of an object.
/// Binary Serialization is used to perform the copy.
/// </summary>
public static class ObjectCopier
{
/// <summary>
/// Perform a deep copy of the object via serialization.
/// </summary>
/// <typeparam name="T">The type of object being copied.</typeparam>
/// <param name="source">The object instance to copy.</param>
/// <returns>A deep copy of the object.</returns>
public static T Clone<T>(T source)
{
if (!typeof(T).IsSerializable)
{
throw new ArgumentException("The type must be serializable.", nameof(source));
}
// Don't serialize a null object, simply return the default for that object
if (ReferenceEquals(source, null)) return default;
using var stream = new MemoryStream();
IFormatter formatter = new BinaryFormatter();
formatter.Serialize(stream, source);
stream.Seek(0, SeekOrigin.Begin);
return (T)formatter.Deserialize(stream);
}
}
```
The idea is that it serializes your object and then deserializes it into a fresh object. The benefit is that you don't have to concern yourself about cloning everything when an object gets too complex.
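The same serialize-then-deserialize idea, restated in Python terms (a sketch using pickle, not the author's C# code), shows why nested references come along for free:

```python
import pickle

def clone(source):
    # Round-trip through a byte stream so nested objects are copied too.
    return pickle.loads(pickle.dumps(source))

original = {"name": "widget", "tags": ["a", "b"]}
copied = clone(original)
copied["tags"].append("c")   # does not touch the original's list
```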
In case you prefer to use the new [extension methods](http://en.wikipedia.org/wiki/Extension_method) of C# 3.0, change the method to have the following signature:
```
public static T Clone<T>(this T source)
{
// ...
}
```
Now the method call simply becomes `objectBeingCloned.Clone();`.
**EDIT** (January 10 2015) Thought I'd revisit this, to mention I recently started using (Newtonsoft) Json to do this, it [should be](http://maxondev.com/serialization-performance-comparison-c-net-formats-frameworks-xmldatacontractserializer-xmlserializer-binaryformatter-json-newtonsoft-servicestack-text/) lighter, and avoids the overhead of [Serializable] tags. (**NB** @atconway has pointed out in the comments that private members are not cloned using the JSON method)
```
/// <summary>
/// Perform a deep Copy of the object, using Json as a serialization method. NOTE: Private members are not cloned using this method.
/// </summary>
/// <typeparam name="T">The type of object being copied.</typeparam>
/// <param name="source">The object instance to copy.</param>
/// <returns>The copied object.</returns>
public static T CloneJson<T>(this T source)
{
// Don't serialize a null object, simply return the default for that object
if (ReferenceEquals(source, null)) return default;
// initialize inner objects individually
// for example in default constructor some list property initialized with some values,
// but in 'source' these items are cleaned -
// without ObjectCreationHandling.Replace default constructor values will be added to result
var deserializeSettings = new JsonSerializerSettings {ObjectCreationHandling = ObjectCreationHandling.Replace};
return JsonConvert.DeserializeObject<T>(JsonConvert.SerializeObject(source), deserializeSettings);
}
```
|
I wanted a cloner for very simple objects of mostly primitives and lists. If your object is out of the box JSON serializable then this method will do the trick. This requires no modification or implementation of interfaces on the cloned class, just a JSON serializer like JSON.NET.
```
public static T Clone<T>(T source)
{
var serialized = JsonConvert.SerializeObject(source);
return JsonConvert.DeserializeObject<T>(serialized);
}
```
Also, you can use this extension method
```
public static class SystemExtension
{
public static T Clone<T>(this T source)
{
var serialized = JsonConvert.SerializeObject(source);
return JsonConvert.DeserializeObject<T>(serialized);
}
}
```
|
Deep cloning objects
|
[
"",
"c#",
".net",
"clone",
"deep-copy",
""
] |
The Python docs say:
> re.MULTILINE: When specified, the pattern character '^' matches at the beginning of the string and at the beginning of each line (immediately following each newline)... By default, '^' matches only at the beginning of the string...
So what's going on when I get the following unexpected result?
```
>>> import re
>>> s = """// The quick brown fox.
... // Jumped over the lazy dog."""
>>> re.sub('^//', '', s, re.MULTILINE)
' The quick brown fox.\n// Jumped over the lazy dog.'
```
|
Look at the definition of [`re.sub`](http://docs.python.org/library/re.html#re.sub):
```
re.sub(pattern, repl, string[, count, flags])
```
The fourth argument is the count; you are using `re.MULTILINE` (which is 8) as the count, not as a flag.
Either use a named argument:
```
re.sub('^//', '', s, flags=re.MULTILINE)
```
Or compile the regex first:
```
re.sub(re.compile('^//', re.MULTILINE), '', s)
```
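Both fixes can be checked side by side against the misuse (a small illustration using the string from the question):

```python
import re

s = "// The quick brown fox.\n// Jumped over the lazy dog."

# Positional 4th argument is *count*: re.MULTILINE (the int 8) silently
# means "at most 8 replacements" with no multiline matching at all.
wrong = re.sub('^//', '', s, re.MULTILINE)
right = re.sub('^//', '', s, flags=re.MULTILINE)
```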
|
```
re.sub('(?m)^//', '', s)
```
|
Python re.sub with a flag does not replace all occurrences
|
[
"",
"python",
"regex",
""
] |
I've been tasked (by my wife) with creating a program to allow her to track the family trees on both sides of our family.
Does anyone know of a cost-effective (free) control to represent this type of information?
What I'm looking for is a modified org-chart type chart/tree. The modification is that any node should have 2 parent nodes (e.g. a child should have a mother/father).
The solution I've come up with so far is to have 2 trees, an ancestor tree and a descendants tree, with the individual being inspected as the root node for each tree. It works but is sort of clunky.
I'm working primarily in C# WinForms, so .NET-type controls or source code are preferable.
|
[Geni](http://www.geni.com/) is probably what you're looking for.
|
I actually spotted [GRAMPS](http://gramps-project.org/wiki/index.php?title=Main_Page) just the other day.
|
Genealogy Tree Control
|
[
"",
"c#",
"controls",
"tree",
"genealogy",
""
] |
I'm working on an app that grabs and installs a bunch of updates off an external server, and need some help with threading. The user follows this process:
* Clicks button
* Method checks for updates, count is returned.
* If greater than 0, then ask the user if they want to install using MessageBox.Show().
* If yes, it runs through a loop and call BeginInvoke() on the run() method of each update to run it in the background.
* My update class has some events that are used to update a progress bar etc.
The progress bar updates are fine, but the MessageBox is not fully cleared from the screen because the update loop starts right after the user clicks yes (see screenshot below).
* What should I do to make the messagebox disappear instantly before the update loop starts?
* Should I be using Threads instead of BeginInvoke()?
* Should I be doing the initial update check on a separate thread and calling MessageBox.Show() from that thread?
**Code**
```
// Button clicked event handler code...
DialogResult dlgRes = MessageBox.Show(
string.Format("There are {0} updates available.\n\nInstall these now?",
um2.Updates.Count), "Updates Available",
MessageBoxButtons.YesNo,
MessageBoxIcon.Question,
MessageBoxDefaultButton.Button2
);
if (dlgRes == DialogResult.Yes)
{
ProcessAllUpdates(um2);
}
// Processes a bunch of items in a loop
private void ProcessAllUpdates(UpdateManager2 um2)
{
for (int i = 0; i < um2.Updates.Count; i++)
{
Update2 update = um2.Updates[i];
ProcessSingleUpdate(update);
int percentComplete = Utilities.CalculatePercentCompleted(i, um2.Updates.Count);
UpdateOverallProgress(percentComplete);
}
}
// Process a single update with IAsyncResult
private void ProcessSingleUpdate(Update2 update)
{
update.Action.OnStart += Action_OnStart;
update.Action.OnProgress += Action_OnProgress;
update.Action.OnCompletion += Action_OnCompletion;
//synchronous
//update.Action.Run();
// async
IAsyncResult ar = this.BeginInvoke((MethodInvoker)delegate() { update.Action.Run(); });
}
```
**Screenshot**
[](http://www.flickr.com/photos/brianly/2760435228/ "Windows Mobile Bug by BrianLy, on Flickr")
|
Your UI isn't updating because all the work is happening in the user interface thread.
Your call to:
```
this.BeginInvoke((MethodInvoker)delegate() {update.Action.Run(); })
```
is saying invoke update.Action.Run() on the thread that created "this" (your form), which is the user interface thread.
```
Application.DoEvents()
```
will indeed give the UI thread the chance to redraw the screen, but I'd be tempted to create a new delegate and call BeginInvoke on that.
This will execute the update.Action.Run() function on a separate thread allocated from the thread pool. You can then keep checking the IAsyncResult until the update is complete, querying the update object for its progress after every check (because you can't have the other thread update the progress bar/UI), then calling Application.DoEvents().
You are also supposed to call EndInvoke() afterwards, otherwise you may end up leaking resources.
I would also be tempted to put a cancel button on the progress dialog and add a timeout; otherwise, if the update gets stuck (or takes too long), your application will be locked up forever.
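The poll-until-done shape of that suggestion, sketched in Python with a thread-pool future standing in for the IAsyncResult (the names are invented for illustration):

```python
import concurrent.futures
import time

def run_update():
    # Stand-in for update.Action.Run() executing on a worker thread.
    time.sleep(0.05)
    return "installed"

with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(run_update)   # like BeginInvoke on a worker delegate
    while not future.done():
        # The UI thread would repaint the progress bar here
        # (the Application.DoEvents() step).
        time.sleep(0.01)
    result = future.result()           # like EndInvoke
```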
|
@ John Sibly
[You can get away with *not* calling EndInvoke when dealing with WinForms without any negative consequences.](http://www.interact-sw.co.uk/iangblog/2005/05/16/endinvokerequired)
> The only documented exception to the rule that I'm aware of is in Windows Forms, where you are officially allowed to call Control.BeginInvoke without bothering to call Control.EndInvoke.
However in all other cases when dealing with the Begin/End Async pattern you should assume it will leak, as you stated.
|
Compact Framework/Threading - MessageBox displays over other controls after option is chosen
|
[
"",
"c#",
"winforms",
"multithreading",
"compact-framework",
""
] |
What is your way of passing data to Master Page (using ASP.NET MVC) without breaking MVC rules?
Personally, I prefer to code abstract controller (base controller) or base class which is passed to all views.
|
If you prefer your views to have strongly typed view data classes this might work for you. Other solutions are probably more *correct* but this is a nice balance between design and practicality IMHO.
The master page takes a strongly typed view data class containing only information relevant to it:
```
public class MasterViewData
{
public ICollection<string> Navigation { get; set; }
}
```
Each view using that master page takes a strongly typed view data class containing its information and deriving from the master page's view data:
```
public class IndexViewData : MasterViewData
{
public string Name { get; set; }
public float Price { get; set; }
}
```
Since I don't want individual controllers to know anything about putting together the master page's data, I encapsulate that logic in a factory which is passed to each controller:
```
public interface IViewDataFactory
{
T Create<T>()
where T : MasterViewData, new();
}
public class ProductController : Controller
{
public ProductController(IViewDataFactory viewDataFactory)
...
public ActionResult Index()
{
var viewData = viewDataFactory.Create<IndexViewData>();
viewData.Name = "My product";
viewData.Price = 9.95f;
return View("Index", viewData);
}
}
```
Inheritance matches the master-to-view relationship well, but when it comes to rendering partials / user controls I compose their view data into the page's view data, e.g.
```
public class IndexViewData : MasterViewData
{
public string Name { get; set; }
public float Price { get; set; }
public SubViewData SubViewData { get; set; }
}
<% Html.RenderPartial("Sub", Model.SubViewData); %>
```
*This is example code only and is not intended to compile as is. Designed for ASP.Net MVC 1.0.*
|
I prefer breaking off the data-driven pieces of the master view into partials and rendering them using **Html.RenderAction**. This has several distinct advantages over the popular view model inheritance approach:
1. Master view data is completely decoupled from "regular" view models. This is composition over inheritance and results in a more loosely coupled system that's easier to change.
2. Master view models are built up by a completely separate controller action. "Regular" actions don't need to worry about this, and there's no need for a view data factory, which seems overly complicated for my tastes.
3. If you happen to use a tool like [AutoMapper](http://www.lostechies.com/blogs/jimmy_bogard/archive/2009/01/22/automapper-the-object-object-mapper.aspx) to map your domain to your view models, you'll find it easier to configure because your view models will more closely resemble your domain models when they don't inherit master view data.
4. With separate action methods for master data, you can easily apply output caching to certain regions of the page. Typically master views contain data that changes less frequently than the main page content.
|
Passing data to Master Page in ASP.NET MVC
|
[
"",
"c#",
".net",
"asp.net",
"asp.net-mvc",
""
] |
I'm downloading an entire directory from a web server. It works OK, but I can't figure out how to get the file size before downloading, to compare whether the file was updated on the server or not. Can this be done, as if I were downloading the file from an FTP server?
```
import urllib
import re
url = "http://www.someurl.com"
# Download the page locally
f = urllib.urlopen(url)
html = f.read()
f.close()
f = open ("temp.htm", "w")
f.write (html)
f.close()
# List only the .TXT / .ZIP files
fnames = re.findall(r'^.*<a href="(\w+(?:\.txt|\.zip))".*$', html, re.MULTILINE)
for fname in fnames:
print fname, "..."
f = urllib.urlopen(url + "/" + fname)
#### Here I want to check the filesize to download or not ####
file = f.read()
f.close()
f = open (fname, "w")
f.write (file)
f.close()
```
---
@Jon: thanks for your quick answer. It works, but the file size on the web server is slightly less than the file size of the downloaded file.
Examples:
```
Local Size Server Size
2.223.533 2.115.516
664.603 662.121
```
Does it have anything to do with CR/LF conversion?
|
I have reproduced what you are seeing:
```
import urllib, os
link = "http://python.org"
print "opening url:", link
site = urllib.urlopen(link)
meta = site.info()
print "Content-Length:", meta.getheaders("Content-Length")[0]
f = open("out.txt", "r")
print "File on disk:",len(f.read())
f.close()
f = open("out.txt", "w")
f.write(site.read())
site.close()
f.close()
f = open("out.txt", "r")
print "File on disk after download:",len(f.read())
f.close()
print "os.stat().st_size returns:", os.stat("out.txt").st_size
```
Outputs this:
```
opening url: http://python.org
Content-Length: 16535
File on disk: 16535
File on disk after download: 16535
os.stat().st_size returns: 16861
```
What am I doing wrong here? Is `os.stat().st_size` not returning the correct size?
---
Edit:
OK, I figured out what the problem was:
```
import urllib, os
link = "http://python.org"
print "opening url:", link
site = urllib.urlopen(link)
meta = site.info()
print "Content-Length:", meta.getheaders("Content-Length")[0]
f = open("out.txt", "rb")
print "File on disk:",len(f.read())
f.close()
f = open("out.txt", "wb")
f.write(site.read())
site.close()
f.close()
f = open("out.txt", "rb")
print "File on disk after download:",len(f.read())
f.close()
print "os.stat().st_size returns:", os.stat("out.txt").st_size
```
this outputs:
```
$ python test.py
opening url: http://python.org
Content-Length: 16535
File on disk: 16535
File on disk after download: 16535
os.stat().st_size returns: 16535
```
Make sure you are opening both files for binary read/write.
```
# open for binary write
open(filename, "wb")
# open for binary read
open(filename, "rb")
```
|
Using the returned-urllib-object method `info()`, you can get various information on the retrieved document. Example of grabbing the current Google logo:
```
>>> import urllib
>>> d = urllib.urlopen("http://www.google.co.uk/logos/olympics08_opening.gif")
>>> print d.info()
Content-Type: image/gif
Last-Modified: Thu, 07 Aug 2008 16:20:19 GMT
Expires: Sun, 17 Jan 2038 19:14:07 GMT
Cache-Control: public
Date: Fri, 08 Aug 2008 13:40:41 GMT
Server: gws
Content-Length: 20172
Connection: Close
```
It's a dictionary-like object, so to get the size of the remote file, you do `urllibobject.info()['Content-Length']`
```
print f.info()['Content-Length']
```
And to get the size of the local file (for comparison), you can use the os.stat() command:
```
os.stat("/the/local/file.zip").st_size
```
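Putting the two answers together, the download loop from the question can skip files that haven't changed. A minimal sketch (the helper name is mine; remember to open the local file in binary mode, as shown above, or the sizes won't match):

```python
import os

def needs_download(local_path, content_length):
    """True when no local copy exists or its size differs from the server's."""
    if not os.path.exists(local_path):
        return True
    return os.stat(local_path).st_size != int(content_length)

# Inside the loop from the question you would then write something like:
#   f = urllib.urlopen(url + "/" + fname)
#   if needs_download(fname, f.info()['Content-Length']):
#       open(fname, "wb").write(f.read())
```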
|
Get size of a file before downloading in Python
|
[
"",
"python",
"urllib",
""
] |
I'm getting into ASP.NET (C# - I know it doesn't matter for this particular question, but full disclosure and all that), and while I love that the `asp:`-style controls save me a lot of tedious HTML-crafting, I am often frustrated with certain behaviors. I encountered one last night when working with Master Pages: my `<asp:BulletedList ID="nav">`, when converted into HTML, became `<ul id="ctl00_nav">`.
There are other issues--I noticed that when you auto-populate a DataGrid, it adds attributes to the resulting table that I don't necessarily want there.
I know that there is a certain amount of "convention over configuration" that you have to accept when you rely on a framework to take over some of your tedious duties, but the "conventions" in these cases aren't so much established conventions as unnecessary extras. I know *why* the prefix is added to the ID, but I should be able to tweak and turn things like this off, especially since, as a bit of a web standards evangelist, I don't duplicate HTML ids in a single page anyway.
So the question here is for those ASP.NET devs more seasoned than I: in your experiences in developing and deploying apps, how do you leverage these controls? Do you find yourself resorting back to hard-coded HTML? Do you use a blend? I don't want to design my HTML around idiosyncratic quirks in these controls, but, if possible, I'd like to leverage them when possible.
What's a boy to do?
|
Personally,
I think the standard ASP.NET controls are fine for in-house stuff - quick and dirty is good in that scenario. But I once worked with a web developer who was also a designer, and he refused to use the ASP.NET controls, coding only in HTML and adding runat="server" tags when needed. This was more because he wanted to know exactly how his HTML was going to be rendered and, at the time anyway, some of the ASP.NET controls wouldn't render standards-compliant markup.
I sit somewhere in the middle - use HTML where appropriate and not when not. You can get the best of both worlds with the [CSS Control Adapters](http://weblogs.asp.net/scottgu/archive/2006/05/02/444850.aspx).
|
I'm actually quite relieved to see some opinions here agreeing with my own: ASP.NET as a template language is very poor.
I'd just like to rebut a couple of the pro points made here (flamesuit on!):
Dave Ward mentions ID collisions - this is true, but how badly it is handled. I would have preferred to see nodes referenced by XPath or deep CSS selectors than by making the ID effectively useless except by deferring to ASP.NET internals like ClientID - it just makes writing CSS and JS that much harder, pointlessly.
Rob Cooper talks about how the controls are a replacement for HTML so it's all fine (paraphrasing, forgive me Rob) - well it's not fine, because they took an existing and well understood language and said "no, you have to do things our way now", and their way is *very* poorly implemented. e.g. asp:panel renders a table in one browser and a div in another! Without documentation or execution, the markup for a login control (and many others) isn't predictable. How are you going to get a designer to write CSS against that?
Espo writes about how controls give you the benefits of abstraction if the platform changes the html - well this is clearly circular (It's only changing because the platform is changing, and wouldn't need to if I just had my own HTML there instead) and actually creates a problem. If the control is going to change with updates again how is my CSS supposed to cope with that?
Apologists will say "yes, but you can change this in the config" or talk about overriding controls and custom controls. Well, why should I have to? The CSS-friendly controls package meant to fix some of these problems is anything but, with its unsemantic markup, and it doesn't address the ID issue.
It's impossible to implement MVC (the abstract concept, not the 3.5 implementation) out of the box with WebForms apps because these controls so tightly bind the view and controller. There's a barrier to entry for the traditional web designer now, because he has to get involved with server-side code to implement what used to be the separate domains of CSS and JS. I sympathise with these people.
I do strongly agree with Kiwi's point that controls allow for some very rapid development for apps of a certain profile, and I accept that for whatever reason some programmers find HTML unpleasant, and further that the advantages the other parts of ASP.NET give you, which require these controls, *may* be worth the price.
However, *I* resent the loss of control; I feel the model of dealing with things like classes, styles and scripting in the code-behind is a wrongheaded step backwards, and I further feel that there are better models for templating (implementations of microformats and XSLT for this platform), although replacing controls with these is non-trivial.
I think ASP.NET could learn a lot from related tech in LAMP and rails world, until then I hope to work with 3.5 MVC where I can.
(sorry that was so long </rant>)
|
Controls versus standard HTML
|
[
"",
"c#",
"asp.net",
"controls",
""
] |
I just get the beach ball all day long (it's been doing nothing for hours). It's not taking CPU, not reading from disk, not using the network.
I'm using **Java 1.6** on **Mac OS X 10.5.4**. It worked once, now even restarts of the computer won't help. Activity Monitor says it's "(Not Responding)". Only thing that I can do is kill -9 that sucker.
When I sample the process I see this:
```
mach_msg_trap 16620
read 831
semaphore_wait_trap 831
```
An acceptable answer that doesn't fix this would include a URL for a decent free Oracle client for the Mac.
Edit:
@Mark Harrison: sadly, this happens every time I start it up; it's not an old connection. I'd like to avoid running Windows on my laptop. I'm giving some plugins for my IDE a whirl, but still no solution for me.
@Matthew Schinckel: Navicat seems to only have a non-commercial Oracle product... I need a commercial-friendly one (even if it costs money).
|
I get the same problem after there's been an active connection sitting idle for a while. I solve it by restarting SQL Developer every once in a while.
I also have Toad for Oracle running on a vmware XP session, and it works great. If you don't mind the money, try that.
|
The company [Navicat](http://www.navicat.com/) has released an Oracle client for Mac (and they do a Windows version too).
It's not free, but I think you can get a 30 day demo.
|
Oracle SQL Developer not responsive when trying to view tables (or suggest an Oracle Mac client)
|
[
"",
"java",
"oracle",
"macos",
""
] |
What would be the easiest way to detach a specific JPA entity bean that was acquired through an EntityManager? Alternatively, could I have a query return detached objects in the first place, so they would essentially act as 'read only'?
The reason why I want to do this is because I want to modify the data within the bean - within my application only, but never have it persisted to the database. In my program, I eventually have to call flush() on the EntityManager, which would persist all changes from attached entities to the underlying database, but I want to exclude specific objects.
|
Unfortunately, there's no way to disconnect one object from the entity manager in the current JPA implementation, AFAIR.
EntityManager.clear() will disconnect *all* the JPA objects, so that might not be an appropriate solution in all the cases, if you have other objects you do plan to keep connected.
So your best bet would be to clone the objects and pass the clones to the code that changes the objects. Since primitive and immutable object fields are taken care of by the default cloning mechanism in a proper way, you won't have to write a lot of plumbing code (apart from deep cloning any aggregated structures you might have).
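The clone-and-edit workaround looks like this in outline - a Python `copy.deepcopy` sketch for illustration only (in Java you would implement `clone()` or a copy constructor, deep-cloning any aggregated structures):

```python
import copy

# Hypothetical managed entity; in JPA this would be an attached entity bean.
managed = {"id": 42, "name": "original", "tags": ["a", "b"]}

working = copy.deepcopy(managed)   # the "detached" clone handed to callers
working["name"] = "edited locally"
working["tags"].append("c")

# The managed object is untouched, so a later flush() persists no changes.
print(managed["name"], managed["tags"])
```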
|
(may be too late to answer, but can be useful for others)
I'm developing my first system with JPA right now. Unfortunately I'm faced with this problem when this system is almost complete.
Simply put. Use Hibernate, or wait for JPA 2.0.
In Hibernate, you can use 'session.evict(object)' to remove one object from session. In [JPA 2.0, in draft right now,](http://jcp.org/en/jsr/detail?id=317) there is the 'EntityManager.detach(object)' method to detach one object from persistence context.
|
Detach an entity from JPA/EJB3 persistence context
|
[
"",
"java",
"orm",
"jpa",
""
] |
This gets the value of whatever is selected in my dropdown menu.
```
document.getElementById('newSkill').value
```
I cannot, however, find out what property to go after for the text that's currently displayed by the drop-down menu. I tried "text", then looked at [W3Schools](http://w3schools.com/), but that didn't have the answer. Does anybody here know?
For those not sure, here's the HTML for a drop down box.
```
<select name="newSkill" id="newSkill">
<option value="1">A skill</option>
<option value="2">Another skill</option>
<option value="3">Yet another skill</option>
</select>
```
|
Based on your example HTML code, here's one way to get the displayed text of the currently selected option:
```
var skillsSelect = document.getElementById("newSkill");
var selectedText = skillsSelect.options[skillsSelect.selectedIndex].text;
```
|
You can simply use jQuery instead of plain JavaScript:
```
$("#yourdropdownid option:selected").text();
```
Try this.
|
Getting the text from a drop-down box
|
[
"",
"javascript",
"dom",
"browser",
"client-side",
""
] |
In MS SQL Server, I create my scripts to use customizable variables:
```
DECLARE @somevariable int
SELECT @somevariable = -1
INSERT INTO foo VALUES ( @somevariable )
```
I'll then change the value of `@somevariable` at runtime, depending on the value that I want in the particular situation. Since it's at the top of the script it's easy to see and remember.
How do I do the same with the PostgreSQL client `psql`?
|
Postgres variables are created through the \set command, for example ...
```
\set myvariable value
```
... and can then be substituted, for example, as ...
```
SELECT * FROM :myvariable.table1;
```
... or ...
```
SELECT * FROM table1 WHERE :myvariable IS NULL;
```
*edit: As of psql 9.1, variables can be expanded in quotes as in:*
```
\set myvariable value
SELECT * FROM table1 WHERE column1 = :'myvariable';
```
*In older versions of the psql client:*
... If you want to use the variable as the value in a conditional string query, such as ...
```
SELECT * FROM table1 WHERE column1 = ':myvariable';
```
... then you need to include the quotes in the variable itself as the above will not work. Instead define your variable as such ...
```
\set myvariable 'value'
```
However, if, like me, you ran into a situation in which you wanted to make a string from an existing variable, I found the trick to be this ...
```
\set quoted_myvariable '\'' :myvariable '\''
```
Now you have both a quoted and unquoted variable of the same string! And you can do something like this ....
```
INSERT INTO :myvariable.table1 SELECT * FROM table2 WHERE column1 = :quoted_myvariable;
```
|
One final word on PSQL variables:
1. They don't expand if you enclose them in single quotes in the SQL statement.
Thus this doesn't work:
```
SELECT * FROM foo WHERE bar = ':myvariable'
```
2. To expand to a string literal in a SQL statement, you have to include the quotes in the variable set. However, the variable value already has to be enclosed in quotes, which means that you need a *second* set of quotes, and the inner set has to be escaped. Thus you need:
```
\set myvariable '\'somestring\''
SELECT * FROM foo WHERE bar = :myvariable
```
**EDIT**: starting with PostgreSQL 9.1, you may write instead:
```
\set myvariable somestring
SELECT * FROM foo WHERE bar = :'myvariable'
```
|
How do you use script variables in psql?
|
[
"",
"sql",
"postgresql",
"variables",
"psql",
""
] |
I think this is specific to IE 6.0, but...
In JavaScript I add a `div` to the DOM. I assign an `id` attribute. When I later try to pick up the `div` by the `id` all I get is `null`.
Any suggestions?
Example:
```
var newDiv = document.createElement("DIV");
newDiv.setAttribute("ID", "obj_1000");
document.appendChild(newDiv);
alert("Added:" + newDiv.getAttribute("ID") + ":" + newDiv.id + ":" + document.getElementById("obj_1000") );
```
Alert prints `"::null"`
Seems to work fine in Firefox 2.0+
|
In addition to what the other answers suggest (that you need to actually insert the element into the DOM for it to be found via `getElementById()`), you also need to use a lower-case attribute name in order for IE6 to recognize it as the `id`:
```
var newDiv = document.createElement("DIV");
newDiv.setAttribute("id", "obj_1000");
document.body.appendChild(newDiv);
alert("Added:"
+ newDiv.getAttribute("id")
+ ":" + newDiv.id + ":"
+ document.getElementById("obj_1000") );
```
...responds as expected:
```
Added:obj_1000:obj_1000:[object]
```
---
According to the [MSDN documentation](http://msdn.microsoft.com/en-us/library/ms536739(VS.85).aspx) for `setAttribute()`, up to IE8 there is an optional *third parameter* that controls whether or not it is case sensitive with regard to the attribute name. Guess what the default is...
|
The div needs to be added to an element for it to be part of the document.
```
document.body.appendChild(newDiv);
alert( document.getElementById("obj_1000") );
```
|
How do I add a div to DOM and pick it up later
|
[
"",
"javascript",
"css",
"dhtml",
""
] |
I have jQuery, but I'm not sure if it has any built-in sorting helpers. I could make a 2D array of each item's `text`, `value`, and `selected` properties, but I don't think that JavaScript's built-in `Array.sort()` would work correctly.
|
Extract options into a temporary array, sort, then rebuild the list:
```
var my_options = $("#my_select option");
var selected = $("#my_select").val();
my_options.sort(function(a,b) {
if (a.text > b.text) return 1;
if (a.text < b.text) return -1;
return 0;
})
$("#my_select").empty().append( my_options );
$("#my_select").val(selected);
```
[Mozilla's sort documentation](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Array/sort "Mozilla's sort documentation") (specifically the compareFunction) and [Wikipedia's Sorting Algorithm page](http://en.wikipedia.org/wiki/Sorting_algorithm) are relevant.
If you want to make the sort case insensitive, replace `text` with `text.toLowerCase()`
The sort function shown above illustrates how to sort. Sorting non-english languages accurately can be complex (see the [unicode collation algorithm](http://www.unicode.org/reports/tr10/)). Using [localeCompare](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/localeCompare) in the sort function is a good solution, eg:
```
my_options.sort(function(a,b) {
return a.text.localeCompare(b.text);
});
```
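The save-selection/sort/restore idea in the answer is language-agnostic; a Python sketch with made-up `(value, display text)` option pairs shows the shape of it:

```python
# Mirror of <option value="...">text</option> pairs (made-up data).
options = [("3", "cherry"), ("1", "Apple"), ("2", "banana")]
selected_value = "2"                # remember the selection by value, not index

# Case-insensitive sort on display text, like text.toLowerCase() above.
options.sort(key=lambda opt: opt[1].lower())

# After rebuilding the list, re-select by value - its index has changed.
new_index = [v for v, _ in options].index(selected_value)
print(options, new_index)
```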
|
Modified Tom's answer above slightly so that it actually modifies the contents of the select box to be sorted, rather than just returning the sorted elements.
```
$('#your_select_box').sort_select_box();
```
jQuery function:
```
$.fn.sort_select_box = function(){
// Get options from select box
var my_options = $("#" + this.attr('id') + ' option');
// sort alphabetically
my_options.sort(function(a,b) {
if (a.text > b.text) return 1;
else if (a.text < b.text) return -1;
else return 0
})
//replace with sorted my_options;
$(this).empty().append( my_options );
// clearing any selections
$("#"+this.attr('id')+" option").attr('selected', false);
}
```
|
What is the most efficient way to sort an Html Select's Options by value, while preserving the currently selected item?
|
[
"",
"javascript",
"jquery",
"arrays",
"sorting",
"html-select",
""
] |
I'm getting into more of a TDD workflow, and have a mix of MVC and asp.net Web Forms apps.
MOQ is recommended for MVC.
I've used Rhino for Web Forms.
Does anyone have a best practice for having 1 framework mock for both?
|
Rhino's latest release includes much of the sweet sweet 3.5 love that MoQ has. I'm a fan of MoQ, so that's what I'm using. But I also have Rhino, in case it does something that MoQ doesn't do.
TL;DR: MoQ it baby.
|
This is sort of a silly question, but I prefer Rhino Mocks as it represents a more complete understanding of [mocks vs. stubs](http://martinfowler.com/articles/mocksArentStubs.html).
Look deep into TypeMock before committing to the price.
Also, there is no recommended mocking framework for ASP.NET MVC.
Finally - I'd suggest you stick to one mocking framework in your project (and even in your team) - the differences, while not huge, can lead to confusion that is unwarranted on such a "polishing-the-rock" decision. By that I mean the decision should not be a long one, just pick what works and get on with creating value.
|
Best mock framework that can do both WebForms and MVC?
|
[
"",
"c#",
"asp.net",
"asp.net-mvc",
"testing",
""
] |
I need a JavaScript library that supports Ajax and helps me make simple, neat animation effects [in a website](http://www.voidy.com/) I am working on.
Which library do you recommend?
|
I would definitely recommend JQuery as the easiest to use and the one which requires you to write the least code. <http://jquery.com/>
|
<http://script.aculo.us/>
I think it fits your 'neat animation effects' requirement.
|
Which JavaScript library is recommended for neat UI effects?
|
[
"",
"html",
"javascript",
""
] |
This is actually a two-part question. First, does HttpContext.Current correspond to the current System.Web.UI.Page object?
And the second question, which is probably related to the first, is why can't I use the following to see if the current page implements an interface:
```
private IWebBase FindWebBase()
{
if (HttpContext.Current as IWebBase != null)
{
return (IWebBase)HttpContext.Current;
}
throw new NotImplementedException("Crawling for IWebBase not implemented yet");
}
```
The general context is that some controls need to know whether they are executing as a SharePoint web part or as part of the ASP.NET framework.
I have solved the problem by requiring the control to pass a reference to itself, and checking the Page property of the control, but I'm still curious why the above does not work.
The compiler error is:
Cannot convert System.Web.HttpContext to ...IWebBase via a reference conversion, boxing conversion, unboxing conversion, wrapping conversion or null type conversion.
|
No, from MSDN on HttpContext.Current: "Gets or sets the HttpContext object for the current HTTP request."
In other words it is an HttpContext object, not a Page.
You can get to the Page object via HttpContext using:
```
Page page = HttpContext.Current.Handler as Page;
if (page != null)
{
// Use page instance.
}
```
|
You're looking for `HttpContext.Handler`. Since `Page` implements `IHttpHandler`, you'll obtain a reference to the currently executing page. You'll have to cast it, or at least try to cast it, to the particular type you're looking for.
`HttpContext.Current` simply returns the `HttpContext` object for the current request. Therefore, it is not, and can never be, a page.
|
Get current System.Web.UI.Page from HttpContext?
|
[
"",
"c#",
"asp.net",
"httpcontext",
""
] |
Is there a way to get the path for the assembly in which the current code resides? I do not want the path of the calling assembly, just the one containing the code.
Basically my unit test needs to read some xml test files which are located relative to the dll. I want the path to always resolve correctly regardless of whether the testing dll is run from TestDriven.NET, the MbUnit GUI or something else.
**Edit**: People seem to be misunderstanding what I'm asking.
My test library is located in say
> C:\projects\myapplication\daotests\bin\Debug\daotests.dll
and I would like to get this path:
> C:\projects\myapplication\daotests\bin\Debug\
The three suggestions so far fail me when I run from the MbUnit Gui:
* `Environment.CurrentDirectory`
gives *c:\Program Files\MbUnit*
* `System.Reflection.Assembly.GetAssembly(typeof(DaoTests)).Location`
gives *C:\Documents and
Settings\george\Local
Settings\Temp\ ....\DaoTests.dll*
* `System.Reflection.Assembly.GetExecutingAssembly().Location`
gives the same as the previous.
|
**Note**: Assembly.CodeBase is deprecated in .NET Core/.NET 5+: <https://learn.microsoft.com/en-us/dotnet/api/system.reflection.assembly.codebase?view=net-5.0>
**Original answer:**
I've defined the following property as we use this often in unit testing.
```
public static string AssemblyDirectory
{
get
{
string codeBase = Assembly.GetExecutingAssembly().CodeBase;
UriBuilder uri = new UriBuilder(codeBase);
string path = Uri.UnescapeDataString(uri.Path);
return Path.GetDirectoryName(path);
}
}
```
The `Assembly.Location` property sometimes gives you some funny results when using NUnit (where assemblies run from a temporary folder), so I prefer to use `CodeBase`, which gives you the path in URI format; then `Uri.UnescapeDataString` removes the `file://` at the beginning, and `GetDirectoryName` changes it to the normal Windows format.
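What `CodeBase` plus `UriBuilder`/`UnescapeDataString` is doing - turning a `file://` URI back into a plain path - can be sketched in Python (the path here is made up):

```python
from urllib.parse import urlparse, unquote
import posixpath

codebase = "file:///C:/projects/myapplication/daotests/bin/Debug/daotests.dll"
# urlparse strips the file:// scheme; unquote undoes %20-style escaping;
# the leading slash before the drive letter is a URI artifact.
path = unquote(urlparse(codebase).path).lstrip("/")
directory = posixpath.dirname(path)
print(directory)
```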
|
It's as simple as this:
```
var dir = AppDomain.CurrentDomain.BaseDirectory;
```
|
How do I get the path of the assembly the code is in?
|
[
"",
"c#",
".net",
"reflection",
"mbunit",
""
] |
Can someone tell me how to get path geometry from a WPF FlowDocument object? Please note that I do **not** want to use `FormattedText`. Thanks.
|
A `FlowDocument` can be viewed in any number of ways, but a `Path` is a fixed shape. I think maybe you really want some simplified, visual-only form of a `FlowDocument`'s contents.
In that case you might try converting the `FlowDocument` to an XPS `FixedDocument` - the `FixedPage`s have `Canvas`es containing a bunch of `Path`s and `Glyph`s.
|
Get the **Text** property of a **TextRange** object *initialized over the entire **FlowDocument***:
```
FlowDocument myFlowDocument = new FlowDocument(); //get your FlowDocument
//put in some (or it already has) text
string inText = "Hello, WPF World!";
TextRange tr = new TextRange(myFlowDocument.ContentStart, myFlowDocument.ContentEnd);
tr.Text = inText;
//get the current text out of the FlowDocument
TextRange trPrime = new TextRange(myFlowDocument.ContentStart, myFlowDocument.ContentEnd);
string outText = trPrime.Text;
//now outText == "Hello, WPF World!";
//to get formatting, looks like you would use myFlowDocument.TextEffects
```
|
Get path geometry from FlowDocument object
|
[
"",
"c#",
"wpf",
""
] |
I'm trying to do a simple rotation in OpenGL but must be missing the point.
I'm not looking for a specific fix so much as a quick explanation or link that explains OpenGL rotation more generally.
At the moment I have code like this:
```
glPushMatrix();
glRotatef(90.0, 0.0, 1.0, 0.0);
glBegin(GL_TRIANGLES);
glVertex3f( 1.0, 1.0, 0.0 );
glVertex3f( 3.0, 2.0, 0.0 );
glVertex3f( 3.0, 1.0, 0.0 );
glEnd();
glPopMatrix();
```
But the result is not a triangle rotated 90 degrees.
**Edit**
Hmm thanks to Mike Haboustak - it appeared my code was calling a SetCamera function that use glOrtho. I'm too new to OpenGL to have any idea of what this meant but disabling this and rotating in the Z-axis produced the desired result.
|
Do you get a 1-unit straight line? It seems that a 90° rotation around Y is going to leave you looking at the side of a triangle with no depth.
You should try rotating around the Z axis instead and see if you get something that makes more sense.
OpenGL has two matrices related to the display of geometry, the ModelView and the Projection. Both are applied to coordinates before the data becomes visible on the screen. First the ModelView matrix is applied, transforming the data from model space into view space. Then the Projection matrix is applied, which transforms the data from view space for "projection" onto your 2D monitor.
ModelView is used to position multiple objects to their locations in the "world", Projection is used to position the objects onto the screen.
Your code seems fine, so I assume from reading the documentation you know what the nature of functions like glPushMatrix() is. If rotating around Z still doesn't make sense, verify that you're editing the ModelView matrix by calling glMatrixMode.
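The underlying math of `glRotatef(90, 0, 0, 1)` is just a matrix multiply; a small Python sketch (no OpenGL required) shows why rotating the triangle's vertices about Z keeps them in the screen plane:

```python
import math

def rotate_z(x, y, z, degrees):
    """Apply a rotation about the Z axis, like glRotatef(degrees, 0, 0, 1)."""
    r = math.radians(degrees)
    return (x * math.cos(r) - y * math.sin(r),
            x * math.sin(r) + y * math.cos(r),
            z)

# One vertex of the triangle from the question, rotated 90 degrees:
print(rotate_z(1.0, 1.0, 0.0, 90.0))  # Z is unchanged, so no depth is lost
```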
|
Ensure that you're modifying the modelview matrix by putting the following before the glRotatef call:
```
glMatrixMode(GL_MODELVIEW);
```
Otherwise, you may be modifying either the projection or a texture matrix instead.
|
OpenGL Rotation
|
[
"",
"c++",
"opengl",
"glut",
""
] |
I've been tasked with redesigning part of a ms-sql database structure which currently involves a lot of views, some of which contain joins to other views.
Anyway, I wonder if anyone here could recommend a utility to automatically generate diagrams to help me visualise the whole structure.
What's the best program you've used for such problems?
|
I am a big fan of Embarcadero's [ER/Studio](http://www.embarcadero.com/products/er-studio). It is very powerful and produces excellent on-screen as well as printed results. They have a free trial as well, so you should be able to get in and give it a shot without too much strife.
Good luck!
|
Toad Data Modeller from Quest does a nice job on this and is reasonably priced. Embarcadero E/R studio is good too, as Bruce mentioned.
|
What program can I use to generate diagrams of SQL view/table structure?
|
[
"",
"sql",
"sql-server",
"database",
"diagram",
""
] |
As part of a larger web-app (using CakePHP), I'm putting together a simple blog system. The relationships are exceedingly simple: each User has a Blog, which has many Entries, which have many Comments.
An element I'd like to incorporate is a list of "Popular Entries." Popular Entries have been defined as those with the most Comments in the last month, and ultimately they need to be ordered by the number of recent Comments.
Ideally, I'd like the solution to stay within Cake's Model data-retrieval apparatus (`Model->find()`, etc.), but I'm not sanguine about this.
Anyone have a clever/elegant solution? I'm steeling myself for some wild SQL hacking to make this work...
|
Heh, I was just about to come back with essentially the same answer (using Cake's Model::find):
```
$this->loadModel('Comment');
$this->Comment->find( 'all', array(
'fields' => array('COUNT(Comment.id) AS popularCount'),
'conditions' => array(
        'Comment.created >' => date('Y-m-d H:i:s', strtotime('-1 month')) // MySQL compares datetime strings, not raw Unix timestamps
),
'group' => 'Comment.blog_post_id',
'order' => 'popularCount DESC',
'contain' => array(
'Entry' => array(
'fields' => array( 'Entry.title' )
)
)
));
```
It's not perfect, but it works and can be improved on.
I made an additional improvement, using the Containable behaviour to extract the Entry data instead of the Comment data.
|
Shouldn't be too bad, you just need a GROUP BY (this is off the top of my head, so forgive syntax errors):
```
SELECT entry_id, COUNT(id) AS c
FROM comment
WHERE comment.createdate >= DATE_SUB(CURDATE(), INTERVAL 1 MONTH)
GROUP BY entry_id
ORDER BY c DESC
```
|
construct a complex SQL query (or queries)
|
[
"",
"php",
"cakephp",
""
] |
I can easily set breakpoints in embedded JS functions, but I don't see any way of accessing external JS scripts via Firebug unless I happen to enter them during a debug session. Is there a way to do this without having to 'explore' my way into the script?
@Jason: This is a good point, but in my case I do not have easy access to the script. I am specifically talking about the client scripts which are invoked by the ASP.Net Validators that I would like to debug. I can access them during a debug session through entering the function calls, but I could not find a way to access them directly.
|
To view and access external JavaScript files (\*.js) from within Firebug:
1. Click on the 'Script' tab.
2. Click on the 'all' drop down in the upper left hand corner above the script code content window.
3. Select 'Show Static Scripts'.
4. Click on the dropdown button just to the right of what now says 'static' (by default, it should show the name of your current web page). You should now see a list of files associated with the current web page, including any external JS files.
5. Select the JavaScript file you are interested in and its code will display in the content window. From there, you should be able to set breakpoints as normal.
|
Place `debugger;` in your external script file on the line you want to break on.
|
How can I set breakpoints in an external JS script in Firebug
|
[
"",
"javascript",
"debugging",
"firebug",
""
] |
I am consuming the Twitter API and want to convert all URLs to hyperlinks.
What is the most effective way you've come up with to do this?
from
```
string myString = "This is my tweet check it out http://tinyurl.com/blah";
```
to
```
This is my tweet check it out <a href="http://tinyurl.com/blah">http://tinyurl.com/blah</a>
```
|
Regular expressions are probably your friend for this kind of task:
```
Regex r = new Regex(@"(https?://[^\s]+)");
myString = r.Replace(myString, "<a href=\"$1\">$1</a>");
```
The regular expression for matching URLs might need a bit of work.
|
I did this exact same thing with [jquery consuming the JSON API](http://ralphwhitbeck.com/2007/11/20/PullingTwitterUpdatesWithJSONAndJQuery.aspx) here is the linkify function:
```
String.prototype.linkify = function() {
    // Note the g flag: without it, only the first URL in the string is linkified.
    return this.replace(/[A-Za-z]+:\/\/[A-Za-z0-9-_]+\.[A-Za-z0-9-_:%&\?\/.=]+/g, function(m) {
        return m.link(m);
    });
};
```
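For reference, here is the same helper exercised end-to-end (redefined so the snippet is self-contained, with the `g` flag included so every URL in the tweet gets replaced, not just the first):

```javascript
// Same linkify helper as above, with the global flag so all URLs get replaced.
String.prototype.linkify = function() {
    return this.replace(/[A-Za-z]+:\/\/[A-Za-z0-9-_]+\.[A-Za-z0-9-_:%&\?\/.=]+/g, function(m) {
        return m.link(m); // String.prototype.link wraps the text in an <a href="..."> tag
    });
};

var tweet = "This is my tweet check it out http://tinyurl.com/blah";
console.log(tweet.linkify());
// prints: This is my tweet check it out <a href="http://tinyurl.com/blah">http://tinyurl.com/blah</a>
```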
|
Easiest way to convert a URL to a hyperlink in a C# string?
|
[
"",
"c#",
"regex",
"string",
"hyperlink",
""
] |
I would like to open a small video file and map every frame into memory (to apply some custom filter). I don't want to handle the video codec; I would rather let the library handle that for me.
I've tried to use DirectShow with the SampleGrabber filter (using this sample <http://msdn.microsoft.com/en-us/library/ms787867(VS.85).aspx>), but I only managed to grab some frames (not every frame!). I'm quite new to video software programming; maybe I'm not using the best library, or I'm doing it wrong.
I've pasted a part of my code (mainly a modified copy/paste from the MSDN example); unfortunately it doesn't grab the first 25 frames as expected...
```
[...]
hr = pGrabber->SetOneShot(TRUE);
hr = pGrabber->SetBufferSamples(TRUE);
pControl->Run(); // Run the graph.
pEvent->WaitForCompletion(INFINITE, &evCode); // Wait till it's done.
// Find the required buffer size.
long cbBuffer = 0;
hr = pGrabber->GetCurrentBuffer(&cbBuffer, NULL);
for( int i = 0 ; i < 25 ; ++i )
{
pControl->Run(); // Run the graph.
pEvent->WaitForCompletion(INFINITE, &evCode); // Wait till it's done.
char *pBuffer = new char[cbBuffer];
hr = pGrabber->GetCurrentBuffer(&cbBuffer, (long*)pBuffer);
AM_MEDIA_TYPE mt;
hr = pGrabber->GetConnectedMediaType(&mt);
VIDEOINFOHEADER *pVih;
pVih = (VIDEOINFOHEADER*)mt.pbFormat;
[...]
}
[...]
```
Is there somebody, with video software experience, who can advise me about code or other simpler library?
Thanks
Edit:
Msdn links seems not to work ([see the bug](http://stackoverflow.uservoice.com/pages/general/suggestions/19963))
|
Currently these are the most popular video frameworks available on Win32 platforms:
1. Video for Windows: old windows framework coming from the age of Win95 but still widely used because it is very simple to use. Unfortunately it supports only AVI files for which the proper VFW codec has been installed.
2. DirectShow: standard WinXP framework, it can basically load all formats you can play with Windows Media Player. Rather difficult to use.
3. [Ffmpeg](http://ffmpeg.mplayerhq.hu/): more precisely libavcodec and libavformat, which come with the Ffmpeg open-source multimedia utility. It is extremely powerful and can read a lot of formats (almost everything you can play with [VLC](http://www.videolan.org/vlc/)) even if you don't have the codec installed on the system. It's quite complicated to use, but you can always get inspired by the code of ffplay that comes shipped with it, or by other implementations in open-source software. Anyway I think it's still much easier to use than DS (and much faster). It needs to be compiled with MinGW on Windows, but all the steps are explained very well [here](http://arrozcru.no-ip.org/ffmpeg/) (at the moment the link is down, hope it's not dead).
4. [QuickTime](http://developer.apple.com/quicktime/download/): the Apple framework is not the best solution for the Windows platform, since it needs the QuickTime app to be installed along with the proper QuickTime codec for every format; it does not support many formats, but it's quite common in the professional field (so some codecs are actually available only for QuickTime). Shouldn't be too difficult to implement.
5. [Gstreamer](http://www.gstreamer.net/): latest open source framework. I don't know much about it, I guess it wraps over some of the other systems (but I'm not sure).
All of these frameworks have been implemented as backends in OpenCV's highgui, except for DirectShow. The default framework for Win32 OpenCV uses VFW (and is thus only able to open some AVI files); if you want to use the others, you must download the CVS version instead of the official release and do some hacking on the code, and even then it's not too complete; for example, the FFMPEG backend doesn't allow seeking in the stream.
If you want to use QuickTime with OpenCV [this](http://path.berkeley.edu/~zuwhan/QTforOpenCVWin/) can help you.
|
I have used [OpenCV](http://sourceforge.net/projects/opencvlibrary/) to load video files and process them. It's also handy for many types of video processing including those useful for computer vision.
|
C++ : What's the easiest library to open video file
|
[
"",
"c++",
"windows",
"video",
""
] |
I would like to put a string into a byte array, but the string may be too big to fit. In the case where it's too large, I would like to put as much of the string as possible into the array. Is there an efficient way to find out how many characters will fit?
|
In order to truncate a string to a UTF8 byte array without splitting in the middle of a character I use this:
```
static string Truncate(string s, int maxLength) {
if (Encoding.UTF8.GetByteCount(s) <= maxLength)
return s;
var cs = s.ToCharArray();
int length = 0;
int i = 0;
while (i < cs.Length){
int charSize = 1;
if (i < (cs.Length - 1) && char.IsSurrogate(cs[i]))
charSize = 2;
int byteSize = Encoding.UTF8.GetByteCount(cs, i, charSize);
if ((byteSize + length) <= maxLength){
i = i + charSize;
length += byteSize;
}
else
break;
}
return s.Substring(0, i);
}
```
The returned string can then be safely transferred to a byte array of length maxLength.
|
You should be using the Encoding class to do your conversion to a byte array, correct? All Encoding objects have an overridden method, GetMaxCharCount, which will give you "the maximum number of characters produced by decoding the specified number of bytes." You should be able to use this value to trim your string and properly encode it.
|
How do I truncate a string while converting to bytes in C#?
|
[
"",
"c#",
".net",
"arrays",
"string",
"truncate",
""
] |
If I open a file in Design View (web form), I get intellisense for my display code, but not my script code.. If I open with source code editor I, occasionally, get intellisense within the script tags.
Anyone know how to get intellisense working all of the time for all of my code?
Been living with this one for a long time.
|
VS2008. So far doing a re-install seems to be the best advice. I am using the `<%@ Page Language="C#" MasterPageFile="~/common/masterpages/MasterPage.master" %>`. When I say design-view I mean that I right+click on the file and choose "view designer" - this gives me access to the toolbox and tabs for designer,split, and code-view (which is the view I primarily work in). In that mode, all of my `<asp:` tags get intellisense, but then I lose all intellisense within my `<script>` tags. I've never been able to have intellisense working both within the `<script>` tags and within my form.
I should say that when we create a website, we don't do it through file>new>website.. I mention this because I wonder if VS might configure a website differently when creating it that way vs. pointing VS to an existing set of directories which contain our website.
|
What version are you using? Design view is for human-readable elements, you wouldn't be editing code there and therefore wouldn't need intellisense. If you are not using code-behind, you should only have one `<script runat="server">` tag on the page, and you would edit this in Source view. To enable intellisense, add the following on the first line:
```
<%@ Page Language="C#" %>
```
If you change it, the tag will be underlined and it will say that you need to close the file and reopen it.
If you are in VS 2008, JavaScript intellisense will be available to you as well. Make sure you specify the language in the `<script>` tag.
|
Visual Studio Intellisense, c#, no code behind
|
[
"",
"c#",
"visual-studio",
"intellisense",
""
] |
When making changes using `SubmitChanges()`, LINQ sometimes dies with a `ChangeConflictException` exception with the error message `Row not found or changed`, without any indication of either the row that has the conflict or the fields with changes that are in conflict, when another user has changed some data in that row.
Is there any way to determine which row has a conflict and which fields they occur in, and also is there a way of getting LINQ to ignore the issue and simply commit the data regardless?
Additionally, does anybody know whether this exception occurs when *any* data in the row has changed, or only when data has been changed in a field that LINQ is attempting to alter?
|
Here's a way to see where the conflicts are (this is an MSDN example, so you'll need to heavily customize):
```
try
{
db.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException e)
{
Console.WriteLine("Optimistic concurrency error.");
Console.WriteLine(e.Message);
Console.ReadLine();
foreach (ObjectChangeConflict occ in db.ChangeConflicts)
{
MetaTable metatable = db.Mapping.GetTable(occ.Object.GetType());
Customer entityInConflict = (Customer)occ.Object;
Console.WriteLine("Table name: {0}", metatable.TableName);
Console.Write("Customer ID: ");
Console.WriteLine(entityInConflict.CustomerID);
foreach (MemberChangeConflict mcc in occ.MemberConflicts)
{
object currVal = mcc.CurrentValue;
object origVal = mcc.OriginalValue;
object databaseVal = mcc.DatabaseValue;
MemberInfo mi = mcc.Member;
Console.WriteLine("Member: {0}", mi.Name);
Console.WriteLine("current value: {0}", currVal);
Console.WriteLine("original value: {0}", origVal);
Console.WriteLine("database value: {0}", databaseVal);
}
}
}
```
To make it ignore the problem and commit anyway:
```
db.SubmitChanges(ConflictMode.ContinueOnConflict);
```
|
These (which you could add in a partial class to your datacontext) might help you understand how this works:
```
public void SubmitKeepChanges()
{
try
{
this.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException e)
{
foreach (ObjectChangeConflict occ in this.ChangeConflicts)
{
//Keep current values that have changed,
//updates other values with database values
occ.Resolve(RefreshMode.KeepChanges);
}
}
}
public void SubmitOverwrite()
{
try
{
this.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException e)
{
foreach (ObjectChangeConflict occ in this.ChangeConflicts)
{
// All database values overwrite current values with
//values from database
occ.Resolve(RefreshMode.OverwriteCurrentValues);
}
}
}
public void SubmitKeepCurrent()
{
try
{
this.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException e)
{
foreach (ObjectChangeConflict occ in this.ChangeConflicts)
{
//Swap the original values with the values retrieved from the database. No current value is modified
occ.Resolve(RefreshMode.KeepCurrentValues);
}
}
}
```
|
Data Conflict in LINQ
|
[
"",
"c#",
"linq",
"linq-to-sql",
""
] |
In a web-app I'm writing, the user is supposed to enter the path in the Perforce repository for the node they're entering. The application is supposed to validate that the entered directory exists in the repo.
I've got the P4Package (p4.jar) lib, and I'm configuring it correctly enough that it works for almost everything, EXCEPT this directory validation. I'm creating a DirEntry (from the p4.jar) using a configured Env and a path, but when I call DirEntry.sync(), it issues an incorrect command. Where I want it to issue the command:
p4 [config info] dirs directory\_argument <-- using the dirs command to validate a dir
Instead, it issues:
p4 [config info] dirs directory\_argument%1 <-- note extraneous %1
Which always fails, since none of the directories have a %1 at the end of them.
Any help? Is there a different way to check that a directory exists using this package?
|
So, the code I was using did have a bug requiring me to make a change and check the code into my repository.
However, since then, Perforce has come up with their own Java wrapper for the P4 client which works much better. I'd give that one a shot.
|
Sounds like the sync command has a bug in its handling of dir entries. My suggestion would be to just roll the command yourself, using the Perforce command line, as that has to be set up anyway in order to use the Java library.
```
Process p = Runtime.getRuntime().exec("p4 dirs " + directory_argument);
BufferedReader stdOut = new BufferedReader(new InputStreamReader(p.getInputStream()));
//Read the output of the command and process appropriately after this
```
|
Using P4Package (Java) from Java app to validate Perforce directory
|
[
"",
"java",
"perforce",
""
] |
Without the use of any external library, what is the simplest way to fetch a website's HTML content into a String?
|
I'm currently using this:
```
String content = null;
URLConnection connection = null;
try {
connection = new URL("http://www.google.com").openConnection();
Scanner scanner = new Scanner(connection.getInputStream());
scanner.useDelimiter("\\Z");
content = scanner.next();
scanner.close();
}catch ( Exception ex ) {
ex.printStackTrace();
}
System.out.println(content);
```
But not sure if there's a better way.
|
This has worked well for me:
```
URL url = new URL(theURL);
InputStream is = url.openStream();
int ptr = 0;
StringBuffer buffer = new StringBuffer();
while ((ptr = is.read()) != -1) {
buffer.append((char)ptr);
}
```
Not sure as to whether the other solution(s) provided are any more efficient or not.
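One thing worth noting about both snippets: reading byte-by-byte and casting each `int` to `char` silently mangles non-ASCII content. Here is a sketch of the same idea with an explicit charset, using a `ByteArrayInputStream` as a stand-in for `url.openStream()` so it runs without network access:

```java
import java.io.*;

public class StreamToString {
    // Read an entire InputStream into a String using an explicit charset.
    static String readAll(InputStream is, String charset) throws IOException {
        BufferedReader reader = new BufferedReader(new InputStreamReader(is, charset));
        StringBuilder sb = new StringBuilder();
        int c;
        while ((c = reader.read()) != -1) {
            sb.append((char) c);
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for url.openStream(); real code would pass that stream instead.
        InputStream in = new ByteArrayInputStream("<html>hello</html>".getBytes("UTF-8"));
        System.out.println(readAll(in, "UTF-8")); // prints: <html>hello</html>
    }
}
```

In real use you would also want to read the charset from the HTTP `Content-Type` header rather than assuming UTF-8.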
|
How to fetch HTML in Java
|
[
"",
"java",
"html",
"screen-scraping",
""
] |
Can anyone tell me if there is a way with generics to limit a generic type argument `T` to only:
* `Int16`
* `Int32`
* `Int64`
* `UInt16`
* `UInt32`
* `UInt64`
I'm aware of the `where` keyword, but can't find an interface for **only** these types,
Something like:
```
static bool IntegerFunction<T>(T value) where T : INumeric
```
|
This constraint exists in .NET 7.
Check out this [.NET Blog post](https://devblogs.microsoft.com/dotnet/dotnet-7-generic-math/) and the [actual documentation](https://learn.microsoft.com/en-us/dotnet/standard/generics/math).
Starting in .NET 7, you can make use of interfaces such as `INumber` and `IFloatingPoint` to create programs such as:
```
using System.Numerics;
Console.WriteLine(Sum(1, 2, 3, 4, 5));
Console.WriteLine(Sum(10.541, 2.645));
Console.WriteLine(Sum(1.55f, 5, 9.41f, 7));
static T Sum<T>(params T[] numbers) where T : INumber<T>
{
T result = T.Zero;
foreach (T item in numbers)
{
result += item;
}
return result;
}
```
`INumber` is in the `System.Numerics` namespace.
There are also interfaces such as `IAdditionOperators` and `IComparisonOperators` so you can make use of specific operators generically.
|
More than a decade later, this feature finally exists in [.NET 7](https://devblogs.microsoft.com/dotnet/dotnet-7-generic-math/). The most generic interface is `INumber<TSelf>` (in the `System.Numerics` namespace), and it encompasses all numbers. To accept just integer types, consider using [`IBinaryInteger<TSelf>`](https://learn.microsoft.com/en-us/dotnet/api/system.numerics.ibinaryinteger-1?view=net-7.0) instead.
Here’s an example `IntegerFunction` implementation:
```
static bool IntegerFunction<T>(T value) where T : IBinaryInteger<T> {
return value > T.Zero;
}
```
```
Console.WriteLine(IntegerFunction(5)); // True
Console.WriteLine(IntegerFunction((sbyte)-5)); // False
Console.WriteLine(IntegerFunction((ulong)5)); // True
```
---
The (now obsolete) original answer below is left as a historical perspective.
C# does not support this. Hejlsberg has described the reasons for not implementing the feature [in an interview with Bruce Eckel](http://www.artima.com/intv/generics.html):
> And it's not clear that the added complexity is worth the small yield that you get. If something you want to do is not directly supported in the constraint system, you can do it with a factory pattern. You could have a `Matrix<T>`, for example, and in that `Matrix` you would like to define a dot product method. That of course that means you ultimately need to understand how to multiply two `T`s, but you can't say that as a constraint, at least not if `T` is `int`, `double`, or `float`. But what you could do is have your `Matrix` take as an argument a `Calculator<T>`, and in `Calculator<T>`, have a method called `multiply`. You go implement that and you pass it to the `Matrix`.
However, this leads to fairly convoluted code, where the user has to supply their own `Calculator<T>` implementation, for each `T` that they want to use. As long as it doesn’t have to be extensible, i.e. if you just want to support a fixed number of types, such as `int` and `double`, you can get away with a relatively simple interface:
```
var mat = new Matrix<int>(w, h);
```
([Minimal implementation in a GitHub Gist.](https://gist.github.com/klmr/314d05b66c72d62bd8a184514568e22f))
However, as soon as you want the user to be able to supply their own, custom types, you need to open up this implementation so that the user can supply their own `Calculator` instances. For instance, to instantiate a matrix that uses a custom decimal floating point implementation, `DFP`, you’d have to write this code:
```
var mat = new Matrix<DFP>(DfpCalculator.Instance, w, h);
```
… and implement all the members for `DfpCalculator : ICalculator<DFP>`.
An alternative, which unfortunately shares the same limitations, is to work with policy classes, [as discussed in Sergey Shandar’s answer](https://stackoverflow.com/a/4834066/1968).
|
Is there a constraint that restricts my generic method to numeric types?
|
[
"",
"c#",
"generics",
"constraints",
""
] |
I always tend to forget these built-in **Symfony** functions for making links.
|
If your goal is to have user-friendly URLs throughout your application, use the following approach:
1) Create a routing rule for your module/action in the application's routing.yml file. The following example is a routing rule for an action that shows the most recent questions in an application, defaulting to page 1 (using a pager):
```
recent_questions:
url: questions/recent/:page
param: { module: questions, action: recent, page: 1 }
```
2) Once the routing rule is set, use the `url_for()` helper in your template to format outgoing URLs.
```
<a href="<?php echo url_for('questions/recent?page=1') ?>">Recent Questions</a>
```
In this example, the following URL will be constructed: `http://myapp/questions/recent/1.html`.
3) Incoming URLs (requests) will be analyzed by the routing system, and if a pattern match is found in the routing rule configuration, the named wildcards (i.e. the `:page` portion of the URL) will become request parameters.
You can also use the `link_to()` helper to output a URL without using the HTML `<a>` tag.
|
This advice is for symfony 1.0. It probably will work for later versions.
**Within your sfAction class:**
```
string genUrl($parameters = array(), $absolute = false)
```
e.g.
```
$this->getController()->genUrl('yourmodule/youraction?key=value&key2=value', true);
```
**In a template:**
This will generate a normal link.
```
string link_to($name, $internal_uri, $options = array());
```
e.g.
```
link_to('My link name', 'yourmodule/youraction?key=value&key2=value');
```
|
How do I generate a friendly URL in Symfony PHP?
|
[
"",
"php",
"url",
"seo",
"symfony1",
""
] |
I am getting a `NoClassDefFoundError` when I run my Java application. What is typically the cause of this?
|
This is caused when there is a class file that your code depends on and it is present at compile time but not found at runtime. Look for differences in your build time and runtime classpaths.
|
While it's possible that this is due to a classpath mismatch between compile-time and run-time, it's not necessarily true.
It is important to keep two or three different exceptions straight in our head in this case:
1. **`java.lang.ClassNotFoundException`** This exception indicates that the class was not found on the classpath. This indicates that we were trying to load the class definition, and the class did not exist on the classpath.
2. **`java.lang.NoClassDefFoundError`** This exception indicates that the JVM looked in its internal class definition data structure for the definition of a class and did not find it. This is different than saying that it could not be loaded from the classpath. Usually this indicates that we previously attempted to load a class from the classpath, but it failed for some reason - now we're trying to use the class again (and thus need to load it, since it failed last time), but we're not even going to try to load it, because we failed loading it earlier (and reasonably suspect that we would fail again). The earlier failure could be a ClassNotFoundException or an ExceptionInInitializerError (indicating a failure in the static initialization block) or any number of other problems. The point is, a NoClassDefFoundError is not necessarily a classpath problem.
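The "failed earlier for some reason" case from point 2 can be reproduced in a single file (the class and message below are invented for the demo): the first use of the class throws `ExceptionInInitializerError`, and every subsequent use throws `NoClassDefFoundError`, even though the `.class` file is sitting right there on the classpath:

```java
public class InitFailureDemo {
    static class Broken {
        // Static initialization fails the first time the class is used.
        static int value = initValue();
        static int initValue() {
            throw new RuntimeException("simulated init failure");
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 2; i++) {
            try {
                System.out.println(Broken.value);
            } catch (Throwable t) {
                // First iteration: ExceptionInInitializerError
                // Second iteration: NoClassDefFoundError
                System.out.println(t.getClass().getSimpleName());
            }
        }
    }
}
```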
|
Why am I getting a NoClassDefFoundError in Java?
|
[
"",
"java",
"noclassdeffounderror",
""
] |
I have an application that writes information to file. This information is used post-execution to determine pass/failure/correctness of the application. I'd like to be able to read the file as it is being written so that I can do these pass/failure/correctness checks in real time.
I assume it is possible to do this, but what are the gotcha's involved when using Java? If the reading catches up to the writing, will it just wait for more writes up until the file is closed, or will the read throw an exception at this point? If the latter, what do I do then?
My intuition is currently pushing me towards BufferedStreams. Is this the way to go?
|
Could not get the example to work using `FileChannel.read(ByteBuffer)` because it isn't a blocking read. Did however get the code below to work:
```
boolean running = true;
BufferedInputStream reader = new BufferedInputStream(new FileInputStream( "out.txt" ) );
public void run() {
while( running ) {
if( reader.available() > 0 ) {
System.out.print( (char)reader.read() );
}
else {
try {
sleep( 500 );
}
catch( InterruptedException ex ) {
running = false;
}
}
}
}
```
Of course the same thing would work as a timer instead of a thread, but I leave that up to the programmer. I'm still looking for a better way, but this works for me for now.
Oh, and I'll caveat this with: I'm using 1.4.2. Yes I know I'm in the stone ages still.
|
If you want to read a file while it is being written and only read the new content then following will help you achieve the same.
To run this program you will launch it from command prompt/terminal window and pass the file name to read. It will read the file unless you kill the program.
```
java FileReader c:\myfile.txt
```
As you type a line of text save it from notepad and you will see the text printed in the console.
```
public class FileReader {
public static void main(String args[]) throws Exception {
if(args.length>0){
File file = new File(args[0]);
System.out.println(file.getAbsolutePath());
if(file.exists() && file.canRead()){
long fileLength = file.length();
readFile(file,0L);
            while(true){
                if(fileLength<file.length()){
                    readFile(file,fileLength);
                    fileLength=file.length();
                }
                Thread.sleep(1000); // poll once per second instead of spinning the CPU
            }
}
}else{
System.out.println("no file to read");
}
}
public static void readFile(File file,Long fileLength) throws IOException {
String line = null;
BufferedReader in = new BufferedReader(new java.io.FileReader(file));
in.skip(fileLength);
while((line = in.readLine()) != null)
{
System.out.println(line);
}
in.close();
}
}
```
|
How do I use Java to read from a file that is actively being written to?
|
[
"",
"java",
"file",
"file-io",
""
] |
I plan to be storing all my config settings in my application's app.config section (using the `ConfigurationManager.AppSettings` class). As the user changes settings using the app's UI (clicking checkboxes, choosing radio buttons, etc.), I plan to be writing those changes out to the `AppSettings`. At the same time, while the program is running I plan to be accessing the `AppSettings` constantly from a process that will be constantly processing data. Changes to settings via the UI need to affect the data processing in real-time, which is why the process will be accessing the `AppSettings` constantly.
Is this a good idea with regard to performance? Using `AppSettings` is supposed to be "the right way" to store and access configuration settings when writing .Net apps, but I worry that this method wasn't intended for a constant load (at least in terms of settings being constantly read).
If anyone has experience with this, I would greatly appreciate the input.
**Update:** I should probably clarify a few points.
This is not a web application, so connecting a database to the application might be overkill simply for storing configuration settings. This is a Windows Forms application.
According to the MSDN documention, the `ConfigurationManager` is for storing not just application level settings, but user settings as well. (Especially important if, for instance, the application is installed as a partial-trust application.)
**Update 2:** I accepted lomaxx's answer because `Properties` does indeed look like a good solution, without having to add any additional layers to my application (such as a database). When using Properties, it already does all the caching that others suggested. This means any changes and subsequent reads are all done in memory, making it extremely fast. Properties only writes the changes to disk when you explicitly tell it to. This means I can make changes to the config settings on-the-fly at run time and then only do a final save out to disk when the program exits.
Just to verify it would actually be able to handle the load I need, I did some testing on my laptop and was able to do 750,000 reads and 7,500 writes per second using Properties. That is so far above and beyond what my application will *ever* even come close to needing that I feel quite safe in using Properties without impacting performance.
|
since you're using a winforms app, if it's in .net 2.0 there's actually a user settings system (called Properties) that is designed for this purpose. [This article on MSDN](https://learn.microsoft.com/en-us/previous-versions/aa730869%28v=vs.80%29) has a pretty good introduction into this
If you're still worried about performance then take a look at [SQL Compact Edition](https://web.archive.org/web/20080918011815/http://www.microsoft.com:80/sql/editions/compact/default.mspx) which is similar to SQLite but is the Microsoft offering which I've found plays very nicely with winforms and there's even the ability to [make it work with Linq](https://web.archive.org/web/20181015183404/http://geekswithblogs.net:80/steveclements/archive/2007/11/13/linq-to-sql.compact.aspx)
|
Check out SQLite, it seems like a good option for this particular scenario.
|
ConfigurationManager.AppSettings Performance Concerns
|
[
"",
"c#",
".net",
"performance",
"configuration",
"properties",
""
] |
I need a quick algorithm to select 5 random elements from a generic list. For example, I'd like to get 5 random elements from a `List<string>`.
|
Iterate through and for each element make the probability of selection = (number needed)/(number left)
So if you had 40 items, the first would have a 5/40 chance of being selected. If it is, the next has a 4/39 chance, otherwise it has a 5/39 chance. By the time you get to the end you will have your 5 items, and often you'll have all of them before that.
This technique is called [selection sampling](https://stackoverflow.com/questions/35065764/select-n-records-at-random-from-a-set-of-n), a special case of [Reservoir Sampling](https://en.wikipedia.org/wiki/Reservoir_sampling). It's similar in performance to shuffling the input, but of course allows the sample to be generated without modifying the original data.
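That selection-sampling loop can be sketched as follows (shown in Java purely for illustration; the same loop ports mechanically to a C# `List<T>` with `Random.Next(int)`):

```java
import java.util.*;

public class SelectionSampling {
    // Pick k elements uniformly at random from items, preserving their original order.
    static <T> List<T> sample(List<T> items, int k, Random rnd) {
        List<T> picked = new ArrayList<>();
        int needed = k;
        int left = items.size();
        for (T item : items) {
            // Probability of selection = (number needed) / (number left).
            if (rnd.nextInt(left) < needed) {
                picked.add(item);
                needed--;
            }
            left--;
        }
        return picked; // exactly k items whenever items.size() >= k
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("a", "b", "c", "d", "e", "f", "g", "h");
        System.out.println(sample(names, 5, new Random()));
    }
}
```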
|
Using linq:
```
var rnd = new Random(); // reuse one Random instance; don't create a new one per call
YourList.OrderBy(x => rnd.Next()).Take(5)
```
|
Select N random elements from a List<T> in C#
|
[
"",
"c#",
"algorithm",
"collections",
"random",
"element",
""
] |
I was surprised recently to find that it's possible to have a return statement in a finally block in Java.
It seems like lots of people think it's a bad thing to do as described in '[Don't return in a finally clause](https://web.archive.org/web/20070922061412/http://weblogs.java.net/blog/staufferjames/archive/2007/06/_dont_return_in.html)'. Scratching a little deeper, I also found '[Java's return doesn't always](https://web.archive.org/web/20190528011825/http://www.cs.arizona.edu/projects/sumatra/hallofshame/)' which shows some pretty horrible examples of other types of flow control in finally blocks.
So, my question is, can anyone give me an example where a return statement (or other flow control) in a finally block produces better / more readable code?
|
The examples you provided are reason enough to *not* use flow-control from finally.
Even if there's a contrived example where it's "better," consider the developer who has to maintain your code later and who might not be aware of the subtleties. That poor developer might even be you....
|
I had a REALLY hard time tracking down a bug years ago that was caused by this. The code was something like:
```
Object problemMethod() {
Object rtn = null;
try {
rtn = somethingThatThrewAnException();
}
finally {
doSomeCleanup();
return rtn;
}
}
```
What happened is that the exception was thrown down in some other code. It was being caught, logged, and rethrown within the `somethingThatThrewAnException()` method. But the exception wasn't being propagated up past `problemMethod()`. After a LONG time of looking at this, we finally tracked it down to the return statement. The return statement in the finally block was basically stopping the exception that happened in the try block from propagating up, even though it wasn't caught.
Like others have said, while it is legal to return from a finally block according to the Java spec, it is a BAD thing and shouldn't be done.
|
Returning from a finally block in Java
|
[
"",
"java",
"exception",
"return",
"try-catch-finally",
""
] |
How do you iterate through every file/directory recursively in standard C++?
|
In standard C++, technically there is no way to do this since standard C++ has no conception of directories. If you want to expand your net a little bit, you might like to look at using [Boost.FileSystem](http://www.boost.org/doc/libs/1_36_0/libs/filesystem/doc/index.htm). This has been accepted for inclusion in TR2, so this gives you the best chance of keeping your implementation as close as possible to the standard.
An example, taken straight from the website:
```
bool find_file( const path & dir_path, // in this directory,
const std::string & file_name, // search for this name,
path & path_found ) // placing path here if found
{
if ( !exists( dir_path ) ) return false;
directory_iterator end_itr; // default construction yields past-the-end
for ( directory_iterator itr( dir_path );
itr != end_itr;
++itr )
{
if ( is_directory(itr->status()) )
{
if ( find_file( itr->path(), file_name, path_found ) ) return true;
}
else if ( itr->leaf() == file_name ) // see below
{
path_found = itr->path();
return true;
}
}
return false;
}
```
|
From C++17 onward, with the [`<filesystem>`](https://en.cppreference.com/w/cpp/filesystem/recursive_directory_iterator) header and range-based `for`, you can simply do this:
```
#include <filesystem>
using recursive_directory_iterator = std::filesystem::recursive_directory_iterator;
...
for (const auto& dirEntry : recursive_directory_iterator(myPath))
std::cout << dirEntry << std::endl;
```
As of C++17, `std::filesystem` is part of the standard library and can be found in the `<filesystem>` header (no longer "experimental").
|
How do you iterate through every file/directory recursively in standard C++?
|
[
"",
"c++",
"filesystems",
""
] |
I am trying to read a single file from a `java.util.zip.ZipInputStream`, and copy it into a `java.io.ByteArrayOutputStream` (so that I can then create a `java.io.ByteArrayInputStream` and hand that to a 3rd party library that will end up closing the stream, and I don't want my `ZipInputStream` getting closed).
I'm probably missing something basic here, but I never enter the while loop here:
```
ByteArrayOutputStream streamBuilder = new ByteArrayOutputStream();
int bytesRead;
byte[] tempBuffer = new byte[8192*2];
try {
while ((bytesRead = zipStream.read(tempBuffer)) != -1) {
streamBuilder.write(tempBuffer, 0, bytesRead);
}
} catch (IOException e) {
// ...
}
```
What am I missing that will allow me to copy the stream?
**Edit:**
I should have mentioned earlier that this `ZipInputStream` is not coming from a file, so I don't think I can use a `ZipFile`. It is coming from a file uploaded through a servlet.
Also, I have already called `getNextEntry()` on the `ZipInputStream` before getting to this snippet of code. If I don't try copying the file into another `InputStream` (via the `OutputStream` mentioned above), and just pass the `ZipInputStream` to my 3rd party library, the library closes the stream, and I can't do anything more, like dealing with the remaining files in the stream.
|
Your loop looks valid - what does the following code (just on its own) return?
```
zipStream.read(tempBuffer)
```
if it's returning -1, then the zipStream is closed before you get it, and all bets are off. It's time to use your debugger and make sure what's being passed to you is actually valid.
When you call getNextEntry(), does it return a value, and is the data in the entry meaningful (i.e. does getCompressedSize() return a valid value)? If you are just reading a Zip file that doesn't have read-ahead zip entries embedded, then ZipInputStream isn't going to work for you.
Some useful tidbits about the Zip format:
Each file embedded in a zip file has a header. This header can contain useful information (such as the compressed length of the stream, its offset in the file, CRC) - or it can contain some magic values that basically say 'The information isn't in the stream header, you have to check the Zip post-amble'.
Each zip file then has a table that is attached to the end of the file that contains all of the zip entries, along with the real data. The table at the end is mandatory, and the values in it must be correct. In contrast, the values embedded in the stream do not have to be provided.
If you use ZipFile, it reads the table at the end of the zip. If you use ZipInputStream, I suspect that getNextEntry() attempts to use the entries embedded in the stream. If those values aren't specified, then ZipInputStream has no idea how long the stream might be. The inflate algorithm is self terminating (you actually don't need to know the uncompressed length of the output stream in order to fully recover the output), but it's possible that the Java version of this reader doesn't handle this situation very well.
I will say that it's fairly unusual to have a servlet returning a ZipInputStream (it's much more common to receive an InflaterInputStream if you are going to be receiving compressed content).
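As an aside, if the underlying goal is only to stop a third-party library from closing the `ZipInputStream`, a small `FilterInputStream` wrapper whose `close()` is a no-op avoids copying the entry at all. This is a sketch of my own (Apache Commons IO ships the same idea as `CloseShieldInputStream`):

```java
import java.io.FilterInputStream;
import java.io.InputStream;

/** Wraps a stream so that close() does nothing, protecting the delegate. */
class NonClosingInputStream extends FilterInputStream {
    NonClosingInputStream(InputStream in) {
        super(in);
    }

    @Override
    public void close() {
        // Deliberately empty: the caller owns and closes the real stream.
    }
}
```

Hand `new NonClosingInputStream(zipStream)` to the library, then continue calling `getNextEntry()` on the original `zipStream` yourself.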
|
You probably tried reading from a `FileInputStream` like this:
```
ZipInputStream in = new ZipInputStream(new FileInputStream(...));
```
This **won’t** work since a zip archive can contain multiple files and you need to specify which file to read.
You could use [java.util.zip.ZipFile](https://docs.oracle.com/javase/8/docs/api/java/util/zip/ZipFile.html) and a library such as [IOUtils from Apache Commons IO](https://commons.apache.org/proper/commons-io/javadocs/api-release/org/apache/commons/io/IOUtils.html) or [ByteStreams from Guava](https://google.github.io/guava/releases/21.0/api/docs/com/google/common/io/ByteStreams.html) that assist you in copying the stream.
Example:
```
ByteArrayOutputStream out = new ByteArrayOutputStream();
try (ZipFile zipFile = new ZipFile("foo.zip")) {
ZipEntry zipEntry = zipFile.getEntry("fileInTheZip.txt");
try (InputStream in = zipFile.getInputStream(zipEntry)) {
IOUtils.copy(in, out);
}
}
```
|
Reading from a ZipInputStream into a ByteArrayOutputStream
|
[
"",
"java",
"zipinputstream",
"zipoutputstream",
""
] |
I've seen a few fixes for allowing PNG images to have transparency in Internet Explorer 6, but I've yet to find one that also allows you to set the background position in CSS. If you use sprites, it's a deal-breaker. I've resorted to using GIF's (which are not as high quality), not using transparent images at all, or serving a completely different stylesheet to IE6. Is there a fix for IE6 that allows for PNG transparencies AND background positioning?
|
Yes. Convert your images to use indexed palettes (png256). You can support transparency (just like gif), but not an alpha channel.
You can do this using [Irfanview](http://www.irfanview.com/) and the pngout plugin, [pngquant](http://www.libpng.org/pub/png/apps/pngquant.html) or [pngnq](http://pngnq.sourceforge.net/).
The YUI performance team also did a [great presentation](http://assets.en.oreilly.com/1/event/7/Image%20Optimization_%20How%20Many%20of%20These%207%20Mistakes%20Are%20You%20Making%20Presentation.ppt) that covers this an many other image optimization concepts.
|
[This](http://www.micahtischler.com/#mtjs_iepnghandler.js) is a new technique that has popped up in the last month or so. From the page:
> In this script image tags are supported, both with and without a blank spacer GIF, and background image PNGs may be positioned, as well as repeated, even if they're smaller than the content element they're in.
|
Is there a JavaScript PNG fix for IE6 that allows CSS background positioning?
|
[
"",
"javascript",
"css",
"internet-explorer-6",
"png",
"transparency",
""
] |
I have heard/read the term but don't quite understand what it means.
When should I use this technique and how would I use it? Can anyone provide a good code sample?
|
The visitor pattern is a way of doing double-dispatch in an object-oriented way.
It's useful for when you want to choose which method to use for a given argument based on its type at runtime rather than compile time.
Double dispatch is a special case of **multiple dispatch**.
When you call a virtual method on an object, that's considered single-dispatch because which actual method is called depends on the type of the single object.
For double dispatch, both the object's type and the method's sole argument's type are taken into account. This is like method overload resolution, except that the argument type is determined at runtime in double-dispatch instead of statically at compile-time.
In multiple-dispatch, a method can have multiple arguments passed to it and which implementation is used depends on each argument's type. The order that the types are evaluated depends on the language. In LISP, it checks each type from first to last.
Languages with multiple dispatch make use of generic functions, which are just function declarations and aren't like generic methods, which use type parameters.
**To do double-dispatch in C#**, you can declare a method with a sole object argument and then specific methods with specific types:
```
using System.Linq;
class DoubleDispatch
{
public T Foo<T>(object arg)
{
var method = from m in GetType().GetMethods()
where m.Name == "Foo"
&& m.GetParameters().Length==1
                          && m.GetParameters()[0].ParameterType
                               .IsAssignableFrom(arg.GetType())
&& m.ReturnType == typeof(T)
select m;
return (T) method.Single().Invoke(this,new object[]{arg});
}
public int Foo(int arg) { /* ... */ }
static void Test()
{
        object x = 5;
        new DoubleDispatch().Foo<int>(x); //should call Foo(int) via Foo<T>(object).
}
}
```
|
The code posted by Mark isn't complete, and what is there isn't working.
So here it is, tweaked and completed.
```
class DoubleDispatch
{
public T Foo<T>(object arg)
{
var method = from m in GetType().GetMethods(System.Reflection.BindingFlags.Instance | System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.NonPublic)
where m.Name == "Foo"
&& m.GetParameters().Length == 1
//&& arg.GetType().IsAssignableFrom
// (m.GetParameters()[0].GetType())
&&Type.GetType(m.GetParameters()[0].ParameterType.FullName).IsAssignableFrom(arg.GetType())
&& m.ReturnType == typeof(T)
select m;
return (T)method.Single().Invoke(this, new object[] { arg });
}
public int Foo(int arg)
{
return 10;
}
public string Foo(string arg)
{
return 5.ToString();
}
public static void Main(string[] args)
{
object x = 5;
DoubleDispatch dispatch = new DoubleDispatch();
Console.WriteLine(dispatch.Foo<int>(x));
Console.WriteLine(dispatch.Foo<string>(x.ToString()));
Console.ReadLine();
}
}
```
Thanks Mark and others for nice explanation on Double Dispatcher pattern.
|
Double dispatch in C#?
|
[
"",
"c#",
"design-patterns",
"language-features",
"double-dispatch",
""
] |
Are there any good books or website that go over creating a `JTable`? I want to make one column editable. I would like to actually put a inherited `JCheckBox` component (that we created here) into one of the table columns instead of just having the table put `JCheckBox` in based on it being an editable `boolean` field.
I have the [JFC Swing Tutorial Second Edition](https://rads.stackoverflow.com/amzn/click/com/0201914670) book but I just would like to know if there are other examples I could look at and learn how to deal with the tables better. The book seems to just take the java 'trail' online and put it in the book.
I am re-reading the stuff though, just curious if anyone has found something that might help out more.
|
To make a column editable you have to override the `isCellEditable` method in the `TableModel`. Creating a `TableModel` is fairly easy if you inherit `AbstractTableModel` and I'd recommend it for all but the most simple `JTable`s.
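For example, a bare-bones model with only its second column editable could look like this (the two-column layout here is invented for illustration):

```java
import javax.swing.table.AbstractTableModel;

/** A two-column model where only column 1 (a checkbox column) is editable. */
class OneEditableColumnModel extends AbstractTableModel {
    private final String[] names = {"alpha", "beta"};
    private final boolean[] flags = {false, true};

    public int getRowCount() { return names.length; }
    public int getColumnCount() { return 2; }

    public Object getValueAt(int row, int col) {
        return col == 0 ? names[row] : flags[row];
    }

    @Override
    public boolean isCellEditable(int row, int col) {
        return col == 1;   // only the checkbox column is editable
    }

    @Override
    public Class<?> getColumnClass(int col) {
        // Needed for the default Boolean renderer/editor to kick in.
        return col == 0 ? String.class : Boolean.class;
    }

    @Override
    public void setValueAt(Object value, int row, int col) {
        if (col == 1) {
            flags[row] = (Boolean) value;
            fireTableCellUpdated(row, col);
        }
    }
}
```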
However, adapting the `TableModel` is only part of what you need to do. To actually get a custom component in the `JTable`, you need to set a custom cell renderer. To use an interactive custom component, you need to set a custom cell editor. In some cases, it's enough to use slightly modificated versions of the default classes for this.
**Editors**
If you have already got a custom component, this is easily done using delegation: create a new class implementing `TableCellEditor`, and return a **new** instance of the component in the `getCellEditorComponent` method. The parameters to this method include the current value as well as the cell coordinates, a link back to the table and whether or not the cell is selected.
The `TableCellEditor` also has a method that is called when the user commits a change to the cell contents (where you can validate user input and adjust the model) or cancels an edit. Be sure to call the `stopCellEditing()` method on your editor if you ever programmatically abort editing, otherwise the editor component will remain on screen -- this once took me like 2 hours to debug.
Note that within a `JTable` editors and **only** editors receive events! Displaying a button can be done using a renderer. But to get a functioning button, you need to implement an editor with the correct `EventListeners` registered. Registering a listener on a renderer does nothing.
**Renderers**
Implementing a renderer is not strictly necessary for what you describe in your question, but you typically end up doing it anyway, if only for minor modifications. Renderers, unlike editors, are speed critical. *The `getTableCellRendererComponent` of a renderer is called once for every cell in the table!* The component returned by a renderer is only used to paint the cell, not for interaction, and thus can be "reused" for the next cell. In other words, you should adjust the component (e.g. using `setText(...)` or `setFont(...)` if it is a `TextComponent`) in the renderer, you should *not* instantiate a new one -- that's an easy way to cripple the performance.
**Caveats**
Note that for renderers and editors to work, you need to tell the `JTable` when to use a certain renderer/editor. There are basically two ways to do this. You can set the default cell renderer/editor **for a certain type** using the respective `JTable` methods. For this way to work, your `TableModel` needs to return exactly this type in the `getColumnClass(...)` method! The default table model will **not** do this for you, it always returns `Object.class`. I'm sure that one has stumped a lot of people.
The other way to set the editor/renderer is by explicitly setting it on the column itself, that is, by getting the `TableColumn` via the `getColumn(...)` method of the `JTable`. This is a lot more elaborate; however, it's also the only way to have two different renderers/editors for a single class. E.g. your model might have two columns of class String which are rendered in entirely different ways, maybe once using a `JLabel/DefaultRenderer` and the other using a `JButton` to access a more elaborate editor.
`JTable` with its custom renderers and editors is *extremely* versatile, but it is also a lot to take in, and there are a lot of things to do wrong. Good luck!
[How to Use Tables](http://java.sun.com/docs/books/tutorial/uiswing/components/table.html) in [The Swing Tutorial](http://java.sun.com/docs/books/tutorial/uiswing/index.html) is mandatory reading for anyone customising JTables. In particular, read and reread [Concepts: Editors and Renderers](http://java.sun.com/docs/books/tutorial/uiswing/components/table.html#editrender) because it typically takes a while for it to "click". The examples on custom renderers and editors are also very worthwhile.
|
The class you want to look into extending is DefaultTableModel; that will allow you to define your own behavior. A decent tutorial can be found [on sun's site](http://java.sun.com/docs/books/tutorial/uiswing/components/table.html#data).
|
Editable JTable Tutorial
|
[
"",
"java",
"swing",
"jtable",
""
] |
I'm trying to retrieve a file from a server using SFTP (as opposed to FTPS) using Java. How can I do this?
|
Another option is to consider looking at the [JSch library](http://www.jcraft.com/jsch/ "JSch library"). JSch seems to be the preferred library for a few large open source projects, including Eclipse, Ant and Apache Commons HttpClient, amongst others.
It supports both user/pass and certificate-based logins nicely, as well as a whole host of other yummy SSH2 features.
Here's a simple remote file retrieve over SFTP. Error handling is left as an exercise for the reader :-)
```
JSch jsch = new JSch();
String knownHostsFilename = "/home/username/.ssh/known_hosts";
jsch.setKnownHosts( knownHostsFilename );
Session session = jsch.getSession( "remote-username", "remote-host" );
{
// "interactive" version
// can selectively update specified known_hosts file
// need to implement UserInfo interface
// MyUserInfo is a swing implementation provided in
// examples/Sftp.java in the JSch dist
UserInfo ui = new MyUserInfo();
session.setUserInfo(ui);
// OR non-interactive version. Relies in host key being in known-hosts file
session.setPassword( "remote-password" );
}
session.connect();
Channel channel = session.openChannel( "sftp" );
channel.connect();
ChannelSftp sftpChannel = (ChannelSftp) channel;
sftpChannel.get("remote-file", "local-file" );
// OR
InputStream in = sftpChannel.get( "remote-file" );
// process inputstream as needed
sftpChannel.exit();
session.disconnect();
```
|
Here is the complete source code of an example using [JSch](http://www.jcraft.com/jsch/) without having to worry about the ssh key checking.
```
import com.jcraft.jsch.*;
public class TestJSch {
public static void main(String args[]) {
JSch jsch = new JSch();
Session session = null;
try {
session = jsch.getSession("username", "127.0.0.1", 22);
session.setConfig("StrictHostKeyChecking", "no");
session.setPassword("password");
session.connect();
Channel channel = session.openChannel("sftp");
channel.connect();
ChannelSftp sftpChannel = (ChannelSftp) channel;
sftpChannel.get("remotefile.txt", "localfile.txt");
sftpChannel.exit();
session.disconnect();
} catch (JSchException e) {
e.printStackTrace();
} catch (SftpException e) {
e.printStackTrace();
}
}
}
```
|
How to retrieve a file from a server via SFTP?
|
[
"",
"java",
"ftp",
"sftp",
"security",
""
] |
So I have a pointer to an array of pointers. If I delete it like this:
```
delete [] PointerToPointers;
```
Will that delete all the pointed to pointers as well? If not, do I have to loop over all of the pointers and delete them as well, or is there an easier way to do it? My google-fu doesn't seem to give me any good answers to this question.
(And yeah, I know I need to use a vector. This is one of those "catch up on C++" type assignments in school.)
|
Yes you have to loop over the pointers, deleting individually.
Reason: What if other code had pointers to the objects in your array? The C++ compiler doesn't know if that's true or not, so you have to be explicit.
For an "easier way," two suggestions: (1) Make a subroutine for this purpose so at least you won't have to write the code more than once. (2) Use the "smart pointer" design paradigm where you hold an array of objects with reference-counters, then the objects are deleted when the objects are no longer referenced by any code.
|
I agree with Jason Cohen, though we can be a bit clearer on the reason for needing to delete your pointers with the loop: for every "new" or dynamic memory allocation there needs to be a "delete" or memory de-allocation. Sometimes the "delete" can be hidden, as with smart pointers, but it is still there.
```
int main()
{
int *pI = new int;
int *pArr = new int[10];
```
So far in the code we have allocated two chunks of dynamic memory. The first is just a general int; the second is an array of ints.
```
delete pI;
delete [] pArr;
```
These delete statements clear the memory that was allocated by the "new"s.
```
int **ppArr = new int *[10];
for( int indx = 0; indx < 10; ++indx )
{
ppArr[indx] = new int;
}
```
This bit of code is doing both of the previous allocations. First we are creating space for our int pointers in a dynamic array. We then need to loop through and allocate an int for each spot in the array.
```
for( int indx = 0; indx < 10; ++indx )
{
delete ppArr[indx];
}
delete [] ppArr;
```
Note the order in which I allocated this memory, and that I de-allocated it in the reverse order. This is because if we were to do the `delete [] ppArr;` first, we would lose the array that tells us what our other pointers are. That chunk of memory would be given back to the system and so can no longer be reliably read.
```
int a=0;
int b=1;
int c=2;
ppArr = new int *[3];
ppArr[0] = &a;
ppArr[1] = &b;
ppArr[2] = &c;
```
This I think should be mentioned as well. Just because you are working with pointers does not mean that the memory those pointers point to was dynamically allocated. That is to say, just because you have a pointer doesn't mean it necessarily needs to be deleted. The array I created here is dynamically allocated, but the pointers point to local instances of ints. When we delete this, we only need to delete the array.
```
delete [] ppArr;
return 0;
}
```
In the end, dynamically allocated memory can be tricky, and any way you can wrap it up safely — like in a smart pointer, or by using STL containers rather than your own — will make your life much more pleasant.
|
C++ deleting a pointer to a pointer
|
[
"",
"c++",
"pointers",
""
] |
I am writing a Java utility that helps me to generate loads of data for performance testing. It would be *really* cool to be able to specify a regex for Strings so that my generator spits out things that match this.
Is something out there already baked that I can use to do this? Or is there a library that gets me most of the way there?
|
Firstly, with a complex enough regexp, I believe this can be impossible. But you should be able to put something together for simple regexps.
If you take a look at the source code of the class java.util.regex.Pattern, you'll see that it uses an internal representation of Node instances. Each of the different pattern components have their own implementation of a Node subclass. These Nodes are organised into a tree.
By producing a visitor that traverses this tree, you should be able to call an overloaded generator method or some kind of Builder that cobbles something together.
|
It's too late to help the original poster, but it could help a newcomer. [Generex](https://github.com/mifmif/Generex) is a useful java library that provides many features for using regexes to generate strings (random generation, generating a string based on its index, generating all strings...).
Example :
```
Generex generex = new Generex("[0-3]([a-c]|[e-g]{1,2})");
// generate the second String in lexicographical order that matches the given Regex.
String secondString = generex.getMatchedString(2);
System.out.println(secondString);// it prints '0b'
// Generate all String that matches the given Regex.
List<String> matchedStrs = generex.getAllMatchedStrings();
// Using Generex iterator
Iterator iterator = generex.iterator();
while (iterator.hasNext()) {
System.out.print(iterator.next() + " ");
}
// it prints 0a 0b 0c 0e 0ee 0e 0e 0f 0fe 0f 0f 0g 0ge 0g 0g 1a 1b 1c 1e
// 1ee 1e 1e 1f 1fe 1f 1f 1g 1ge 1g 1g 2a 2b 2c 2e 2ee 2e 2e 2f 2fe 2f 2f 2g
// 2ge 2g 2g 3a 3b 3c 3e 3ee 3e 3e 3f 3fe 3f 3f 3g 3ge 3g 3g 1ee
// Generate random String
String randomStr = generex.random();
System.out.println(randomStr);// a random value from the previous String list
```
**Disclosure**
The project mentioned in this post belongs to the user answering the question (Mifmif). As per the [rules](https://stackoverflow.com/help/behavior), this needs to be brought up.
|
Using Regex to generate Strings rather than match them
|
[
"",
"java",
"regex",
""
] |
I've got an RMI call defined as:
```
public void remoteGetCustomerNameNumbers(ArrayList<String> customerNumberList, ArrayList<String> customerNameList) throws java.rmi.RemoteException;
```
The function does a database lookup and populates the two ArrayLists. The calling function gets nothing. I believe this works with Vector types.
Do I need to use the Vector, or is there a way to get this to work without making two calls. I've got some other ideas that I'd probably use, like returning a key/value pair, but I'd like to know if I can get this to work.
Update:
I would accept all of the answers given so far if I could. I hadn't known the network cost, so it makes sense to rework the function to return a LinkedHashMap instead of the two ArrayLists.
|
As Tom mentions, you can pass remote objects. You'd have to create a class to hold your list that implements Remote. Anytime you pass something that implements Remote as an argument, whenever the receiving side uses it, it turns around and makes a remote call *back* to the caller to work with that object.
|
Arguments in RMI calls are serialised. Deserialisation on the server creates a copy of the lists, so anything the server adds to its copies is never seen by the client. If the lists genuinely remained on the client side, then the number of network calls would be quite high. You can pass remote objects, but beware of the performance implications.
|
Pass by reference not returning in RMI for ArrayList
|
[
"",
"java",
"arraylist",
"rmi",
""
] |
I have a generic class that should allow any type, primitive or otherwise. The only problem with this is using `default(T)`. When you call default on a value type or a string, it initializes it to a reasonable value (such as empty string). When you call `default(T)` on an object, it returns null. For various reasons we need to ensure that if it is not a primitive type, then we will have a default instance of the type, **not** null. Here is attempt 1:
```
T createDefault()
{
if(typeof(T).IsValueType)
{
return default(T);
}
else
{
return Activator.CreateInstance<T>();
}
}
```
Problem - string is not a value type, and it does not have a parameterless constructor. So, the current solution is:
```
T createDefault()
{
if(typeof(T).IsValueType || typeof(T).FullName == "System.String")
{
return default(T);
}
else
{
return Activator.CreateInstance<T>();
}
}
```
But this feels like a kludge. Is there a nicer way to handle the string case?
|
Keep in mind that default(string) is null, not string.Empty. You may want a special case in your code:
```
if (typeof(T) == typeof(String)) return (T)(object)String.Empty;
```
|
```
if (typeof(T).IsValueType || typeof(T) == typeof(String))
{
return default(T);
}
else
{
return Activator.CreateInstance<T>();
}
```
Untested, but the first thing that came to mind.
|
Best way to test if a generic type is a string? (C#)
|
[
"",
"c#",
"generics",
""
] |
I work a lot with network and serial communications software, so it is often necessary for me to have code to display or log hex dumps of data packets.
Every time I do this, I write yet another hex-dump routine from scratch. I'm about to do so again, but figured I'd ask here: Is there any good free hex dump code for C++ out there somewhere?
Features I'd like:
* N bytes per line (where N is somehow configurable)
* optional ASCII/UTF8 dump alongside the hex
* configurable indentation, per-line prefixes, per-line suffixes, etc.
* minimal dependencies (ideally, I'd like the code to all be in a header file, or be a snippet I can just paste in)
**Edit:** Clarification: I am looking for code that I can easily drop in to my own programs to write to stderr, stdout, log files, or other such output streams. I'm not looking for a command-line hex dump utility.
|
The unix tool `xxd` is distributed as part of [`vim`](http://www.vim.org/download.php), and according to <http://www.vmunix.com/vim/util.html#xxd>, the source for xxd is <ftp://ftp.uni-erlangen.de:21/pub/utilities/etc/xxd-1.10.tar.gz>. It was written in C and is about 721 lines. The only licensing information given for it is this:
```
* Distribute freely and credit me,
* make money and share with me,
* lose money and don't ask me.
```
The unix tool `hexdump` is available from <http://gd.tuwien.ac.at/softeng/Aegis/hexdump.html>. It was written in C and can be compiled from source. It's quite a bit bigger than xxd, and is distributed under the GPL.
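Neither tool drops straight into a program, though. A minimal stream-based routine covering the wish list (configurable width, ASCII column, per-line prefix) might look like the following sketch:

```cpp
#include <cctype>
#include <cstddef>
#include <iomanip>
#include <ostream>
#include <string>

// Write a hex+ASCII dump of [data, data+size) to out.
// width = bytes per line, prefix = printed at the start of each line.
inline void hexDump(std::ostream& out, const void* data, std::size_t size,
                    std::size_t width = 16, const std::string& prefix = "") {
    const unsigned char* bytes = static_cast<const unsigned char*>(data);
    for (std::size_t i = 0; i < size; i += width) {
        // Offset column, zero-padded hex.
        out << prefix << std::hex << std::setw(6) << std::setfill('0') << i << ": ";
        // Hex column, padded so the ASCII column always lines up.
        for (std::size_t j = 0; j < width; ++j) {
            if (i + j < size)
                out << std::setw(2) << static_cast<unsigned>(bytes[i + j]) << ' ';
            else
                out << "   ";
        }
        out << ' ';
        // ASCII column; non-printable bytes become '.'.
        for (std::size_t j = 0; j < width && i + j < size; ++j) {
            unsigned char c = bytes[i + j];
            out << (std::isprint(c) ? static_cast<char>(c) : '.');
        }
        out << '\n' << std::dec;
    }
}
```

Being header-only with no dependencies beyond the standard library, it can write to `std::cerr`, a log `std::ofstream`, or any other `std::ostream`.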
|
I often use this little snippet I wrote a long time ago. It's short and easy to add anywhere when debugging etc...
```
#include <ctype.h>
#include <stdio.h>
void hexdump(void *ptr, int buflen) {
unsigned char *buf = (unsigned char*)ptr;
int i, j;
for (i=0; i<buflen; i+=16) {
printf("%06x: ", i);
for (j=0; j<16; j++)
if (i+j < buflen)
printf("%02x ", buf[i+j]);
else
printf(" ");
printf(" ");
for (j=0; j<16; j++)
if (i+j < buflen)
printf("%c", isprint(buf[i+j]) ? buf[i+j] : '.');
printf("\n");
}
}
```
|
Off-the-Shelf C++ Hex Dump Code
|
[
"",
"c++",
"logging",
"hex",
"hexdump",
""
] |
I'm trying to decode the result of the Python os.wait() function. This returns, according to the Python docs:
> a tuple containing its pid and exit status indication: a 16-bit number, whose low byte is the signal number that killed the process, and whose high byte is the exit status (if the signal number is zero); the high bit of the low byte is set if a core file was produced.
How do I decode the exit status indication (which is an integer) to obtain the high and low byte? To be specific, how do I implement the decode function used in the following code snippet:
```
(pid,status) = os.wait()
(exitstatus, signum) = decode(status)
```
|
This will do what you want:
```
signum = status & 0xff
exitstatus = (status & 0xff00) >> 8
```
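Wrapped up as the `decode` function from the question's snippet:

```python
def decode(status):
    """Split the status from os.wait() into (exitstatus, signum)."""
    signum = status & 0xff               # low byte: killing signal (0 if exited)
    exitstatus = (status & 0xff00) >> 8  # high byte: exit status
    return exitstatus, signum
```

As the quoted docs mention, the high bit of the low byte flags a core dump, so `status & 0x80` tests for that separately.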
|
To answer your general question, you can use [bit manipulation](https://en.wikipedia.org/wiki/Bit_manipulation)
```
pid, status = os.wait()
exitstatus, signum = status & 0xFF, (status & 0xFF00) >> 8
```
However, there are also [built-in functions](https://docs.python.org/3/library/functions.html) for interpreting exit status values:
```
pid, status = os.wait()
exitstatus, signum = os.WEXITSTATUS( status ), os.WTERMSIG( status )
```
See also:
* os.WCOREDUMP()
* os.WIFCONTINUED()
* os.WIFSTOPPED()
* os.WIFSIGNALED()
* os.WIFEXITED()
* os.WSTOPSIG()
|
How do I treat an integer as an array of bytes in Python?
|
[
"",
"python",
""
] |
I want the server to always serve dates in UTC in the HTML, and have JavaScript on the client site convert it to the user's local timezone.
Bonus if I can output in the user's locale date format.
|
Seems the most foolproof way to start with a UTC date is to create a new `Date` object and use the `setUTC…` methods to set it to the date/time you want.
Then the various `toLocale…String` methods will provide localized output.
### Example:
```
// This would come from the server.
// Also, this whole block could probably be made into an mktime function.
// All very bare here for quick grasping.
d = new Date();
d.setUTCFullYear(2004);
d.setUTCMonth(1);
d.setUTCDate(29);
d.setUTCHours(2);
d.setUTCMinutes(45);
d.setUTCSeconds(26);
console.log(d); // -> Sat Feb 28 2004 23:45:26 GMT-0300 (BRT)
console.log(d.toLocaleString()); // -> Sat Feb 28 23:45:26 2004
console.log(d.toLocaleDateString()); // -> 02/28/2004
console.log(d.toLocaleTimeString()); // -> 23:45:26
```
### Some references:
* [toLocaleString](http://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/Date/toLocaleString)
* [toLocaleDateString](http://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/Date/toLocaleDateString)
* [toLocaleTimeString](http://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/Date/toLocaleTimeString)
* [getTimezoneOffset](http://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/Date/getTimezoneOffset)
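As a side note, the same UTC instant can be built in a single call with `Date.UTC`, which avoids the intermediate local-time state of a freshly constructed `Date`:

```javascript
// Date.UTC returns a millisecond timestamp for the given UTC fields;
// months are 0-based, so 1 means February.
const d = new Date(Date.UTC(2004, 1, 29, 2, 45, 26));
console.log(d.toLocaleString()); // rendered in the user's local timezone
```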
|
## You can do it with [moment.js](http://momentjs.com/) (deprecated in 2021)
It's best to [parse](http://momentjs.com/docs/#/parsing/) your date string from UTC as follows (create an [ISO-8601](http://en.wikipedia.org/wiki/ISO_8601) compatible string on the server to get consistent results across all browsers):
```
var m = moment("2013-02-08T09:30:26Z");
```
Now just use `m` in your application; moment.js defaults to the local timezone for display operations. There are [many ways to format the date and time values](http://momentjs.com/docs/#/displaying/) or extract portions of it.
You can even format a moment object in the users locale like this:
```
m.format('LLL') // Returns "February 8 2013 8:30 AM" on en-us
```
To transform a moment.js object into a different timezone (i.e. neither the local one nor UTC), you'll need the [moment.js timezone extension](http://momentjs.com/timezone/). That page also has some examples; it's pretty simple to use.
Note: Moment JS recommends more modern alternatives, so it is probably not a good choice for new projects.
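For new projects, the built-in `Intl.DateTimeFormat` API covers the locale-formatting part without any library. A sketch (the locale is pinned to `'en-US'` here only to make the example predictable; pass `undefined` to use the visitor's own locale, and omit any `timeZone` option to use their local zone):

```javascript
const when = new Date(Date.UTC(2013, 1, 8, 9, 30, 26)); // from the server, in UTC
const fmt = new Intl.DateTimeFormat('en-US', { dateStyle: 'long', timeStyle: 'short' });
console.log(fmt.format(when)); // e.g. "February 8, 2013 at 6:30 AM" in GMT-3
```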
|
Display date/time in user's locale format and time offset
|
[
"",
"javascript",
"datetime",
"timezone",
""
] |
I know you can do redirection based on the domain or path to rewrite the URI to point at a site-specific location and I've also seen some brutish `if` and `elif` statements for every site as shown in the following code, which I would like to avoid.
```
if site == 'site1':
...
elif site == 'site2':
...
```
What are some good and clever ways of running multiple sites from a single, common Python web framework (i.e., Pylons, TurboGears, etc)?
|
Django has this built in. See [the sites framework](http://docs.djangoproject.com/en/dev/ref/contrib/sites/#ref-contrib-sites).
As a general technique, include a 'host' column in your database schema attached to the data you want to be host-specific, then include the `Host` HTTP header in the query when you are retrieving data.
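A framework-agnostic sketch of that dispatch-on-`Host` technique (the `SITES` mapping and function name are illustrative, not part of any framework):

```python
# Map each hostname to a site-specific configuration of some kind.
SITES = {"site1.com": "site1_config", "site2.com": "site2_config"}

def site_for_request(environ, default="site1_config"):
    """Pick the site config based on the WSGI Host header."""
    host = environ.get("HTTP_HOST", "").split(":")[0].lower()
    return SITES.get(host, default)
```

You would then use the returned value to scope every database query (or template lookup, etc.) to the matched site.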
|
Using Django on apache with mod\_python, I host multiple (unrelated) django sites simply with the following apache config:
```
<VirtualHost 1.2.3.4>
DocumentRoot /www/site1
ServerName site1.com
<Location />
SetHandler python-program
SetEnv DJANGO_SETTINGS_MODULE site1.settings
PythonPath "['/www'] + sys.path"
PythonDebug On
PythonInterpreter site1
</Location>
</VirtualHost>
<VirtualHost 1.2.3.4>
DocumentRoot /www/site2
ServerName site2.com
<Location />
SetHandler python-program
SetEnv DJANGO_SETTINGS_MODULE site2.settings
PythonPath "['/www'] + sys.path"
PythonDebug On
PythonInterpreter site2
</Location>
</VirtualHost>
```
No need for multiple apache instances or proxy servers. Using a different PythonInterpreter directive for each site (the name you enter is arbitrary) keeps the namespaces separate.
|
Running multiple sites from a single Python web framework
|
[
"",
"python",
"frameworks",
""
] |
On Windows I have a problem I never encountered on Unix. That is how to get a thread to sleep for less than one millisecond. On Unix you typically have a number of choices (sleep, usleep and nanosleep) to fit your needs. On Windows, however, there is only *Sleep* with millisecond granularity.
On Unix, I can use the `select` system call to create a microsecond sleep which is pretty straightforward:
```
int usleep(long usec)
{
struct timeval tv;
tv.tv_sec = usec/1000000L;
tv.tv_usec = usec%1000000L;
return select(0, 0, 0, 0, &tv);
}
```
How can I achieve the same on Windows?
|
On Windows the use of `select` forces you to include the [Winsock](https://msdn.microsoft.com/en-us/library/windows/desktop/ms740673(v=vs.85).aspx) library which has to be initialized like this in your application:
```
WORD wVersionRequested = MAKEWORD(1,0);
WSADATA wsaData;
WSAStartup(wVersionRequested, &wsaData);
```
And then `select` won't allow itself to be called without any socket, so you have to do a little more to create a microsleep method:
```
int usleep(long usec)
{
struct timeval tv;
fd_set dummy;
SOCKET s = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
FD_ZERO(&dummy);
FD_SET(s, &dummy);
tv.tv_sec = usec/1000000L;
tv.tv_usec = usec%1000000L;
return select(0, 0, 0, &dummy, &tv);
}
```
All these created usleep methods return zero when successful and non-zero for errors.
|
This indicates a misunderstanding of sleep functions. The parameter you pass is a *minimum* time for sleeping. There's no guarantee that the thread will wake up after exactly the time specified. In fact, threads don't "wake up" at all, but are rather chosen for execution by the OS scheduler. The scheduler might choose to wait much longer than the requested sleep duration to activate a thread, especially if another thread is still active at that moment.
|
How to make thread sleep less than a millisecond on Windows
|
[
"",
"c++",
"windows",
"winapi",
"sleep",
""
] |
I'm looking for good/working/simple to use PHP code for parsing raw email into parts.
I've written a couple of brute force solutions, but every time, one small change/header/space/something comes along and my whole parser fails and the project falls apart.
And before I get pointed at PEAR/PECL, I need actual code. My host has some screwy config or something, I can never seem to get the .so's to build right. If I do get the .so made, some difference in path/environment/php.ini doesn't always make it available (apache vs cron vs CLI).
Oh, and one last thing, I'm parsing the raw email text, NOT POP3, and NOT IMAP. It's being piped into the PHP script via a .qmail email redirect.
I'm not expecting SOF to write it for me, I'm looking for some tips/starting points on doing it "right". This is one of those "wheel" problems that I know has already been solved.
|
What are you hoping to end up with at the end? The body, the subject, the sender, an attachment? You should spend some time with [RFC2822](http://www.faqs.org/rfcs/rfc2822.html) to understand the format of the mail, but here's the simplest rules for well formed email:
```
HEADERS\n
\n
BODY
```
That is, the first blank line (double newline) is the separator between the HEADERS and the BODY. A HEADER looks like this:
```
HSTRING:HTEXT
```
HSTRING always starts at the beginning of a line and doesn't contain any white space or colons. HTEXT can contain a wide variety of text, including newlines as long as the newline char is followed by whitespace.
The "BODY" is really just any data that follows the first double newline. (There are different rules if you are transmitting mail via SMTP, but processing it over a pipe you don't have to worry about that).
So, in really simple, circa-1982 [RFC822](http://www.faqs.org/rfcs/rfc822.html) terms, an email looks like this:
```
HEADER: HEADER TEXT
HEADER: MORE HEADER TEXT
INCLUDING A LINE CONTINUATION
HEADER: LAST HEADER
THIS IS ANY
ARBITRARY DATA
(FOR THE MOST PART)
```
Most modern email is more complex than that though. Headers can be encoded for charsets or [RFC2047](http://www.faqs.org/rfcs/rfc2047.html) mime words, or a ton of other stuff I'm not thinking of right now. The bodies are really hard to roll your own code for these days if you want them to be meaningful. Almost all email that's generated by an MUA will be [MIME](http://www.faqs.org/rfcs/rfc2045.html) encoded. That might be uuencoded text, it might be HTML, it might be a uuencoded Excel spreadsheet.
I hope this helps provide a framework for understanding some of the very elemental buckets of email. If you provide more background on what you are trying to do with the data I (or someone else) might be able to provide better direction.
|
Try the Plancake PHP Email parser:
<https://github.com/plancake/official-library-php-email-parser>
I have used it for my projects. It works great, it is just one class and it is open source.
|
parsing raw email in php
|
[
"",
"php",
"email",
""
] |
MonoDevelop 1.0 doesn't appear to have a code-formatter like Eclipse does for Java. Is there a preferred shell script (or MonoDevelop add-in?) that you've found to work well?
|
At the moment, source code formatting in MonoDevelop is marked as a future enhancement:
<https://bugzilla.novell.com/show_bug.cgi?id=321946>
In the meantime, you may want to check out Artistic Style for C# formatting:
<http://astyle.sourceforge.net/>
I'm planning to see how this might be wired up as an external tool within MonoDevelop. If I get to that, I will edit my answer with the information. **UPDATE: I don't have enough reputation to leave a comment, so I'll make one here: Nice job, Dustin, and patch for MonoDevelop too :-) I wonder how recent the version is that is included with Ubuntu... Either way, I'm glad you found something that works for you.**
|
Thanks, Brandon. I submitted a patch to MonoDevelop. The issue with MonoDevelop add-ins is that there is a mono compiler bug that doesn't handle anonymous delegates correctly. (bug report: <https://bugzilla.novell.com/show_bug.cgi?id=394347>)
The patch/workaround is to just cast the anonymous delegate to the proper delegate type. (bug report & patch: <https://bugzilla.novell.com/show_bug.cgi?id=369538>)
I'm running with the patched version now and am able to execute AStyle on the currently edited document by simply creating a new External Tool setting with the following settings:
TITLE: A\_Style (put in an underscore `_` to enable hotkeys)
COMMAND: astyle
ARGUMENTS: ${ItemPath}
Then, just execute it using Tools->AStyle (or `ALT`-`T`, `S`)
9/25/08 Edit -- I just put up a blog posting on how to patch MonoDevelop 1.0 and get it working with AStyle: <http://dustinbreese.blogspot.com/2008/09/auto-formatting-code-in-monodevelop-10.html>
|
C# Code Formatter for Linux and/or MonoDevelop
|
[
"",
"c#",
"formatting",
"mono",
""
] |
I've never used any of the .NET generics in my work, but I understand that they are fairly popular. Does anyone have any good links or book suggestions for learning them? As a bonus; I only vaguely understand what .NET generic collections are and what they do...does anyone have any practical examples of how they might be used to greater advantage than the normal collections in .NET?
|
* <http://www.informit.com/articles/article.aspx?p=605369>
* <http://www.codeproject.com/KB/cs/genericcache.aspx>
|
The obvious choice:
[MSDN C# Generics](http://msdn.microsoft.com/en-us/library/ms379564(VS.80).aspx)
|
Best resource for learning .NET generics?
|
[
"",
"c#",
".net",
"generics",
""
] |
Does any know of a good calendar (not datepicker, but a BIG browsable calendar) plugin for one of the major javascript frameworks. I'd prefer jQuery.
|
Just published a new open source project (jQuery plugin). It sounds exactly like what you want:
[FullCalendar](http://arshaw.com/fullcalendar/)
Hope it works well for you!
|
I prefer [Eyecon Calendar.](http://eyecon.ro/datepicker/) Maybe the best.
|
Javascript framework calendar plugin
|
[
"",
"javascript",
"jquery",
"calendar",
""
] |
This is pretty weird.
I have my Profiler open and it obviously shows that a stored procedure is called. I open the database and the SP list, but the SP doesn't exist. However, there's another SP whose name is the same except it has a prefix 'x'
Is SQL Server 2005 mapping the SP name to a different one for security purposes?
EDIT: I found out it's a Synonym, whatever that is.
|
In general, when you know an object exists because it's been used in a query, and you can't find it in the object tree in Management Studio, you can do this to find it.
```
select *
from sys.objects
where name = 'THE_NAME_YOU_WANT'
```
I just checked, and it works with Synonyms.
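Since the object turned out to be a synonym, note that synonyms also get their own catalog view, which additionally shows the object each one points at (they appear in `sys.objects` with `type = 'SN'`):

```
select name, base_object_name
from sys.synonyms
where name = 'THE_NAME_YOU_WANT'
```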
|
Possibly silly questions, but just in case... have you refreshed the SP list? Have you checked for a stored procedure of that name under a different owner? If you created the stored procedure without specifying the owner then it could be in the list under your ownership (or not at all if the list is filtered to only "dbo" for example).
|
I traced a Stored Procedure as shown in the SQL Server 2005 Profiler. It's not found but works. Why?
|
[
"",
"sql",
"sql-server",
"sql-server-2005",
"stored-procedures",
""
] |
I have a 'foreach' macro I use frequently in C++ that works for most STL containers:
```
#define foreach(var, container) \
for(typeof((container).begin()) var = (container).begin(); \
var != (container).end(); \
++var)
```
(Note that 'typeof' is a gcc extension.) It is used like this:
```
std::vector< Blorgus > blorgi = ...;
foreach(blorgus, blorgi) {
blorgus->draw();
}
```
I would like to make something similar that iterates over a map's values. Call it "foreach\_value", perhaps. So instead of writing
```
foreach(pair, mymap) {
pair->second->foo();
}
```
I would write
```
foreach_value(v, mymap) {
v.foo();
}
```
I can't come up with a macro that will do this, because it requires declaring two variables: the iterator and the value variable ('v', above). I don't know how to do that in the initializer of a for loop, even using gcc extensions. I could declare it just before the foreach\_value call, but then it will conflict with other instances of the foreach\_value macro in the same scope. If I could suffix the current line number to the iterator variable name, it would work, but I don't know how to do that.
|
You can do this using two loops. The first declares the iterator, with a name which is a function of the container variable (and you can make this uglier if you're worried about conflicts with your own code). The second declares the value variable.
```
#define ci(container) container ## iter
#define foreach_value(var, container) \
for (typeof((container).begin()) ci(container) = container.begin(); \
ci(container) != container.end(); ) \
for (typeof(ci(container)->second)* var = &ci(container)->second; \
ci(container) != container.end(); \
(++ci(container) != container.end()) ? \
(var = &ci(container)->second) : var)
```
By using the same loop termination condition, the outer loop only happens once (and if you're lucky, gets optimized away). Also, you avoid calling ->second on the iterator if the map is empty. That's the same reason for the ternary operator in the increment of the inner loop; at the end, we just leave var at the last value, since it won't be referenced again.
You could inline ci(container), but I think it makes the macro more readable.
|
You would be looking for [BOOST\_FOREACH](http://www.boost.org/doc/libs/1_36_0/doc/html/foreach.html) - they have done all the work for you already!
If you do want to roll your own, you can declare a block anywhere in C++, which resolves your scope issue with your intermediate storage of itr->second
...
```
// Valid C++ code (which does nothing useful)
{
int a = 21; // Which could be storage of your value type
}
// a out of scope here
{
int a = 32; // Does not conflict with a above
}
```
|
"foreach values" macro in gcc & cpp
|
[
"",
"c++",
"gcc",
""
] |
Is there any difference to the following code:
```
class Foo
{
inline int SomeFunc() { return 42; }
int AnotherFunc() { return 42; }
};
```
Will both functions gets inlined? Does inline actually make any difference? Are there any rules on when you should or shouldn't inline code? I often use the `AnotherFunc` syntax (accessors for example) but I rarely specify `inline` directly.
|
Both forms should be inlined in the exact same way. Inline is implicit for function bodies defined in a class definition.
|
The `inline` keyword is essentially a hint to the compiler. Using `inline` doesn't guarantee that your function *will* be inlined, nor does omitting it guarantee that it *won't*. You are just letting the compiler know that it might be a good idea to try harder to inline that particular function.
|
Inlining C++ code
|
[
"",
"c++",
"inline-functions",
""
] |
Considering such code:
```
class ToBeTested {
public:
void doForEach() {
for (vector<Contained>::iterator it = m_contained.begin(); it != m_contained.end(); it++) {
doOnce(*it);
doTwice(*it);
doTwice(*it);
}
}
void doOnce(Contained & c) {
// do something
}
void doTwice(Contained & c) {
// do something
}
// other methods
private:
vector<Contained> m_contained;
};
```
I want to test that if I fill vector with 3 values my functions will be called in proper order and quantity. For example my test can look something like this:
```
tobeTested.AddContained(one);
tobeTested.AddContained(two);
tobeTested.AddContained(three);
BEGIN_PROC_TEST()
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
tobeTested.doForEach()
END_PROC_TEST()
```
How do you recommend to test this? Are there any means to do this with CppUnit or GoogleTest frameworks? Maybe some other unit test framework allow to perform such tests?
I understand that probably this is impossible without calling any debug functions from these functions, but at least can it be done automatically in some test framework. I don't like to scan trace logs and check their correctness.
*UPD*: I'm trying to check not only the **state** of an objects, but also the **execution order** to avoid performance issues on the earliest possible stage (and in general I want to know that my code is executed exactly as I expected).
|
If you're interested in performance, I recommend that you write a test that measures performance.
Check the current time, run the method you're concerned about, then check the time again. Assert that the total time taken is less than some value.
The problem with check that methods are called in a certain order is that your code is going to have to change, and you don't want to have to update your tests when that happens. You should focus on testing the *actual requirement* instead of testing the implementation detail that meets that requirement.
That said, if you really want to test that your methods are called in a certain order, you'll need to do the following:
1. Move them to another class, call it Collaborator
2. Add an instance of this other class to the ToBeTested class
3. Use a mocking framework to set the instance variable on ToBeTested to be a mock of the Collborator class
4. Call the method under test
5. Use your mocking framework to assert that the methods were called on your mock in the correct order.
I'm not a native cpp speaker so I can't comment on which mocking framework you should use, but I see some other commenters have added their suggestions on this front.
|
You should be able to use any good mocking framework to verify that calls to a collaborating object are done in a specific order.
However, you don't generally test that one method makes some calls to other methods on the same class... why would you?
Generally, when you're testing a class, you only care about testing its publicly visible state. If you test anything else, your tests will prevent you from refactoring later.
I could provide more help, but I don't think your example is consistent (Where is the implementation for the AddContained method?).
|
How to test function call order
|
[
"",
"c++",
"unit-testing",
"tdd",
""
] |
I need to execute a large set of SQL statements (creating a bunch of tables, views and stored procedures) from within a C# program.
These statements need to be separated by `GO` statements, but `SqlCommand.ExecuteNonQuery()` does not like `GO` statements. My solution, which I suppose I'll post for reference, was to split the SQL string on `GO` lines, and execute each batch separately.
Is there an easier/better way?
|
Use SQL Server Management Objects (SMO), which understands GO separators. See my blog post here: [Handling "GO" Separators in SQL Scripts - the easy way](http://weblogs.asp.net/jgalloway/archive/2006/11/07/Handling-_2200_GO_2200_-Separators-in-SQL-Scripts-_2D00_-the-easy-way.aspx)
Sample code:
```
public static void Main()
{
string scriptDirectory = "c:\\temp\\sqltest\\";
string sqlConnectionString = "Integrated Security=SSPI;" +
"Persist Security Info=True;Initial Catalog=Northwind;Data Source=(local)";
DirectoryInfo di = new DirectoryInfo(scriptDirectory);
FileInfo[] rgFiles = di.GetFiles("*.sql");
foreach (FileInfo fi in rgFiles)
{
FileInfo fileInfo = new FileInfo(fi.FullName);
string script = fileInfo.OpenText().ReadToEnd();
using (SqlConnection connection = new SqlConnection(sqlConnectionString))
{
Server server = new Server(new ServerConnection(connection));
server.ConnectionContext.ExecuteNonQuery(script);
}
}
}
```
If that won't work for you, see Phil Haack's library which handles that: <http://haacked.com/archive/2007/11/04/a-library-for-executing-sql-scripts-with-go-separators-and.aspx>
|
This is what I knocked together to solve my immediate problem.
```
private void ExecuteBatchNonQuery(string sql, SqlConnection conn) {
string sqlBatch = string.Empty;
SqlCommand cmd = new SqlCommand(string.Empty, conn);
conn.Open();
sql += "\nGO"; // make sure last batch is executed.
try {
foreach (string line in sql.Split(new string[2] { "\n", "\r" }, StringSplitOptions.RemoveEmptyEntries)) {
if (line.ToUpperInvariant().Trim() == "GO") {
cmd.CommandText = sqlBatch;
cmd.ExecuteNonQuery();
sqlBatch = string.Empty;
} else {
sqlBatch += line + "\n";
}
}
} finally {
conn.Close();
}
}
```
It requires GO commands to be on their own line, and will not detect block-comments, so this sort of thing will get split, and cause an error:
```
ExecuteBatchNonQuery(@"
/*
GO
*/", conn);
```
|
Execute a large SQL script (with GO commands)
|
[
"",
"c#",
"sql-server",
""
] |
If one Googles for "difference between `notify()` and `notifyAll()`" then a lot of explanations will pop up (leaving apart the javadoc paragraphs). It all boils down to the number of waiting threads being waken up: one in `notify()` and all in `notifyAll()`.
However (if I do understand the difference between these methods right), only one thread is always selected for further monitor acquisition; in the first case the one selected by the VM, in the second case the one selected by the system thread scheduler. The exact selection procedures for both of them (in the general case) are not known to the programmer.
What's the **useful** difference between [notify()](http://download.oracle.com/javase/6/docs/api/java/lang/Object.html#notify%28%29) and [notifyAll()](http://download.oracle.com/javase/6/docs/api/java/lang/Object.html#notifyAll%28%29) then? Am I missing something?
|
> However (if I do understand the difference between these methods right), only one thread is always selected for further monitor acquisition.
That is not correct. `o.notifyAll()` wakes *all* of the threads that are blocked in `o.wait()` calls. The threads are only allowed to return from `o.wait()` one-by-one, but they each *will* get their turn.
---
Simply put, it depends on why your threads are waiting to be notified. Do you want to tell one of the waiting threads that something happened, or do you want to tell all of them at the same time?
In some cases, all waiting threads can take useful action once the wait finishes. An example would be a set of threads waiting for a certain task to finish; once the task has finished, all waiting threads can continue with their business. In such a case you would use **notifyAll()** to wake up all waiting threads at the same time.
Another case, for example mutually exclusive locking, only one of the waiting threads can do something useful after being notified (in this case acquire the lock). In such a case, you would rather use **notify()**. Properly implemented, you *could* use **notifyAll()** in this situation as well, but you would unnecessarily wake threads that can't do anything anyway.
---
In many cases, the code to await a condition will be written as a loop:
```
synchronized(o) {
while (! IsConditionTrue()) {
o.wait();
}
DoSomethingThatOnlyMakesSenseWhenConditionIsTrue_and_MaybeMakeConditionFalseAgain();
}
```
That way, if an `o.notifyAll()` call wakes more than one waiting thread, and the first one to return from the `o.wait()` leaves the condition in the false state, then the other threads that were awakened will go back to waiting.
|
Clearly, `notify` wakes (any) one thread in the wait set, `notifyAll` wakes all threads in the waiting set. The following discussion should clear up any doubts. `notifyAll` should be used most of the time. If you are not sure which to use, then use `notifyAll`. Please see the explanation that follows.
Read very carefully and understand. Please send me an email if you have any questions.
Look at producer/consumer (assumption is a ProducerConsumer class with two methods). IT IS BROKEN (because it uses `notify`) - yes it MAY work - even most of the time, but it may also cause deadlock - we will see why:
```
public synchronized void put(Object o) {
while (buf.size()==MAX_SIZE) {
wait(); // called if the buffer is full (try/catch removed for brevity)
}
buf.add(o);
notify(); // called in case there are any getters or putters waiting
}
public synchronized Object get() {
// Y: this is where C2 tries to acquire the lock (i.e. at the beginning of the method)
while (buf.size()==0) {
wait(); // called if the buffer is empty (try/catch removed for brevity)
// X: this is where C1 tries to re-acquire the lock (see below)
}
Object o = buf.remove(0);
notify(); // called if there are any getters or putters waiting
return o;
}
```
FIRSTLY,
**Why do we need a while loop surrounding the wait?**
We need a `while` loop in case we get this situation:
Consumer 1 (C1) enter the synchronized block and the buffer is empty, so C1 is put in the wait set (via the `wait` call). Consumer 2 (C2) is about to enter the synchronized method (at point Y above), but Producer P1 puts an object in the buffer, and subsequently calls `notify`. The only waiting thread is C1, so it is woken and now attempts to re-acquire the object lock at point X (above).
Now C1 and C2 are attempting to acquire the synchronization lock. One of them (nondeterministically) is chosen and enters the method, the other is blocked (not waiting - but blocked, trying to acquire the lock on the method). Let's say C2 gets the lock first. C1 is still blocking (trying to acquire the lock at X). C2 completes the method and releases the lock. Now, C1 acquires the lock. Guess what, lucky we have a `while` loop, because, C1 performs the loop check (guard) and is prevented from removing a non-existent element from the buffer (C2 already got it!). If we didn't have a `while`, we would get an `IndexOutOfBoundsException` as C1 tries to remove the first element from the buffer!
NOW,
**Ok, now why do we need notifyAll?**
In the producer/consumer example above it looks like we can get away with `notify`. It seems this way, because we can prove that the guards on the *wait* loops for producer and consumer are mutually exclusive. That is, it looks like we cannot have a thread waiting in the `put` method as well as the `get` method, because, for that to be true, then the following would have to be true:
`buf.size() == 0 AND buf.size() == MAX_SIZE` (assume MAX\_SIZE is not 0)
HOWEVER, this is not good enough, we NEED to use `notifyAll`. Let's see why ...
Assume we have a buffer of size 1 (to make the example easy to follow). The following steps lead us to deadlock. Note that ANYTIME a thread is woken with notify, it can be non-deterministically selected by the JVM - that is any waiting thread can be woken. Also note that when multiple threads are blocking on entry to a method (i.e. trying to acquire a lock), the order of acquisition can be non-deterministic. Remember also that a thread can only be in one of the methods at any one time - the synchronized methods allow only one thread to be executing (i.e. holding the lock of) any (synchronized) methods in the class. If the following sequence of events occurs - deadlock results:
**STEP 1:**
- P1 puts 1 char into the buffer
**STEP 2:**
- P2 attempts `put` - checks wait loop - already a char - waits
**STEP 3:**
- P3 attempts `put` - checks wait loop - already a char - waits
**STEP 4:**
- C1 attempts to get 1 char
- C2 attempts to get 1 char - blocks on entry to the `get` method
- C3 attempts to get 1 char - blocks on entry to the `get` method
**STEP 5:**
- C1 is executing the `get` method - gets the char, calls `notify`, exits method
- The `notify` wakes up P2
- BUT, C2 enters method before P2 can (P2 must reacquire the lock), so P2 blocks on entry to the `put` method
- C2 checks wait loop, no more chars in buffer, so waits
- C3 enters method after C2, but before P2, checks wait loop, no more chars in buffer, so waits
**STEP 6:**
- NOW: there is P3, C2, and C3 waiting!
- Finally P2 acquires the lock, puts a char in the buffer, calls notify, exits method
**STEP 7:**
- P2's notification wakes P3 (remember any thread can be woken)
- P3 checks the wait loop condition, there is already a char in the buffer, so waits.
- NO MORE THREADS TO CALL NOTIFY and THREE THREADS PERMANENTLY SUSPENDED!
SOLUTION: Replace `notify` with `notifyAll` in the producer/consumer code (above).
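For reference, here is a minimal corrected buffer with `notifyAll` and `while`-loop guards in place (a sketch of the fix, not the original poster's code):

```java
import java.util.LinkedList;
import java.util.Queue;

// Minimal corrected bounded buffer: notifyAll plus while-loop guards.
class BoundedBuffer {
    private final Queue<Object> buf = new LinkedList<>();
    private final int maxSize;

    BoundedBuffer(int maxSize) { this.maxSize = maxSize; }

    public synchronized void put(Object o) throws InterruptedException {
        while (buf.size() == maxSize) {
            wait();                 // guard is re-checked after every wake-up
        }
        buf.add(o);
        notifyAll();                // wake every waiter; the guards filter them
    }

    public synchronized Object get() throws InterruptedException {
        while (buf.isEmpty()) {
            wait();
        }
        Object o = buf.remove();
        notifyAll();
        return o;
    }
}

public class BufferDemo {
    public static void main(String[] args) throws InterruptedException {
        BoundedBuffer b = new BoundedBuffer(1);
        b.put("x");
        System.out.println(b.get()); // prints x (single thread, never blocks here)
    }
}
```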
|
Java: notify() vs. notifyAll() all over again
|
[
"",
"java",
"multithreading",
""
] |
I am looking for either a FireFox extension, or a similar program, that allows you to craft GET and POST requests. The user would put in a form action, and as many form key/value pairs as desired. It would also send any cookie information (or send the current cookies from any domain the user chooses.) The Web Developer add-on is almost what I'm looking for; It let's you quickly see the form keys, but it doesn't let you change them or add new ones (which leads to a lot of painful JavaScript in the address bar...)
|
Actually I think [Poster](https://addons.mozilla.org/en-US/firefox/addon/2691) is what you're looking for.
[A Screen shot of an older Poster version](https://addons.mozilla.org/img/uploads/previews/full/19/19951.png)
|
If you're a windows user, use [Fiddler](http://www.fiddler2.com/fiddler2/version.asp). It is invaluable for looking at the raw Http requests and responses. It also has the ability to create requests with the request builder and it has an auto responder also, so you can intercept requests. It even lets you inspect HTTPS traffic and it has a built in event scripting engine, where you can create your own rules.
|
Looking for a specific FireFox extension / program for Form posting
|
[
"",
"javascript",
"html",
""
] |
I'm generating some xml files that needs to conform to an xsd file that was given to me. How should I verify they conform?
|
The Java runtime library supports validation. Last time I checked this was the Apache Xerces parser under the covers. You should probably use a [javax.xml.validation.Validator](http://java.sun.com/j2se/1.5.0/docs/api/javax/xml/validation/Validator.html).
```
import javax.xml.XMLConstants;
import javax.xml.transform.Source;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.*;
import java.net.URL;
import org.xml.sax.SAXException;
import java.io.File;
import java.io.IOException;
...
URL schemaFile = new URL("http://host:port/filename.xsd");
// webapp example xsd:
// URL schemaFile = new URL("http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd");
// local file example:
// File schemaFile = new File("/location/to/localfile.xsd"); // etc.
Source xmlFile = new StreamSource(new File("web.xml"));
SchemaFactory schemaFactory = SchemaFactory
.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
try {
Schema schema = schemaFactory.newSchema(schemaFile);
Validator validator = schema.newValidator();
validator.validate(xmlFile);
System.out.println(xmlFile.getSystemId() + " is valid");
} catch (SAXException e) {
System.out.println(xmlFile.getSystemId() + " is NOT valid reason:" + e);
} catch (IOException e) {}
```
The schema factory constant is the string `http://www.w3.org/2001/XMLSchema` which defines XSDs. The above code validates a WAR deployment descriptor against the URL `http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd` but you could just as easily validate against a local file.
You should not use the DOMParser to validate a document (unless your goal is to create a document object model anyway). This will start creating DOM objects as it parses the document - wasteful if you aren't going to use them.
|
Here's how to do it using [Xerces2](http://xerces.apache.org/xerces2-j/). There's a tutorial [here](http://www.ibm.com/developerworks/edu/x-dw-xvalid-i.html) (requires signup).
Original attribution: blatantly copied from [here](http://forums.sun.com/thread.jspa?messageID=3411478):
```
import org.apache.xerces.parsers.DOMParser;
import java.io.File;
import org.w3c.dom.Document;
public class SchemaTest {
public static void main (String args[]) {
File docFile = new File("memory.xml");
try {
DOMParser parser = new DOMParser();
parser.setFeature("http://xml.org/sax/features/validation", true);
parser.setProperty(
"http://apache.org/xml/properties/schema/external-noNamespaceSchemaLocation",
"memory.xsd");
ErrorChecker errors = new ErrorChecker();
parser.setErrorHandler(errors);
parser.parse("memory.xml");
} catch (Exception e) {
System.out.print("Problem parsing the file.");
}
}
}
```
|
How to validate an XML file against an XSD file?
|
[
"",
"java",
"xml",
"validation",
"xsd",
""
] |
I know that there is no official API for Google Analytics but is there a way to access Google Analytics Reports with C#?
|
**Update**: Google launched a Google Analytics API today.
[Google Analytics Blog - API Launched](http://analytics.blogspot.com/2009/04/attention-developers-google-analytics.html)
|
I wrote a small project that lets you generate pretty much any Analytics report. It's listed on Google's Analytics API page - <http://code.google.com/apis/analytics/docs/gdata/gdataArticlesCode.html>
You can read about it here and get the source code - <http://www.reimers.dk/blogs/jacob_reimers_weblog/archive/2009/05/09/added-google-analytics-reader-for-net.aspx>
|
Google Analytics Access with C#
|
[
"",
"c#",
"google-analytics",
"google-analytics-api",
""
] |
I once worked with an architect who banned the use of SQL views. His main reason was that views made it too easy for a thoughtless coder to needlessly involve joined tables which, if that coder tried harder, could be avoided altogether. Implicitly he was encouraging code reuse via copy-and-paste instead of encapsulation in views.
The database had nearly 600 tables and was highly normalised, so most of the useful SQL was necessarily verbose.
Several years later I can see at least one bad outcome from the ban - we have many hundreds of dense, lengthy stored procs that verge on unmaintainable.
In hindsight I would say it was a bad decision, but what are your experiences with SQL views? Have you found them bad for performance? Any other thoughts on when they are or are not appropriate?
|
There are some very good uses for views; I have used them a lot for tuning and for exposing less normalized sets of information, or for UNION-ing results from multiple selects into a single result set.
Obviously any programming tool can be used incorrectly, but I can't think of any times in my experience where a poorly tuned view has caused any kind of drawback from a performance standpoint, and the value they add by providing explicitly tuned selects and avoiding duplication of complex SQL code can be significant.
Incidentally, I have never been a fan of architectural "rules" that are based on keeping developers from hurting themselves. These rules often have unintended side-effects -- the last place I worked didn't allow using NULLs in the database, because developers might forget to check for null. This ended up forcing us to work around "1/1/1900" dates and integers defaulted to "0" in all the software built against the databases, and introducing a litany of bugs caused by devs working around places where NULL was the appropriate value.
|
You've answered your own question:
> he was encouraging code reuse via copy-and-paste
Reuse the code by creating a view. If the view performs poorly, it will be much easier to track down than if you have the same poorly performing code in several places.
|
SQL Server Views, blessing or curse?
|
[
"",
"sql",
"sql-server",
""
] |
I am looking to write some C# code for linux/windows/mac/any other platform, and am looking for best practices for portable code.
Project [mono](http://go-mono.org) has some great [porting](http://www.mono-project.com/Category:Porting) resources.
What are the best practices for portable C#?
|
I've actually used winforms and it was fine. It was BUTT UGLY, but it worked.
Obviously, don't use P/Invoke or any Win32 stuff like the registry. Also be aware of any third-party DLLs. For example, we use a third-party SQLite DLL which actually contains native code, which we have to swap out if we want to run on OS X/Linux.
|
I hate the term "Best practice" because it seems that some practices can be the best in any context, which is a risky thing, but I'll tell what I consider a "Good practice" for multi-platform code (and for most other type of development):
Use a continuous integration engine and **build for all the target platforms all the time**.
Sounds too complex? Well, if you really need to support multiple platforms, better to do it. No matter how careful you are with your code and library usage, if you test too late, you'll find yourself spending looong hours reworking big portions of the app.
|
Best practices for portable C#
|
[
"",
"c#",
"mono",
"cross-platform",
""
] |
Consider:
```
List<String> someList = new ArrayList<>();
// add "monkey", "donkey", "skeleton key" to someList
```
```
for (String item : someList) {
System.out.println(item);
}
```
What would the equivalent `for` loop look like without using the *for each* syntax?
---
People new to Java commonly encounter issues when trying to *modify the original data* using the new style foreach loop. Use [Why doesn't assigning to the iteration variable in a foreach loop change the underlying data?](https://stackoverflow.com/questions/15844443) to close duplicates about that common problem. Note that other languages with analogous constructs generally have the same issue; for example, see [Why doesn't modifying the iteration variable affect subsequent iterations?](https://stackoverflow.com/questions/15363138) for the same issue in Python.
|
```
for (Iterator<String> i = someIterable.iterator(); i.hasNext();) {
String item = i.next();
System.out.println(item);
}
```
Note that if you need to use `i.remove();` in your loop, or access the actual iterator in some way, you cannot use the `for ( : )` idiom, since the actual iterator is merely inferred.
As was noted by Denis Bueno, this code works for any object that implements the [`Iterable` interface](http://docs.oracle.com/javase/8/docs/api/java/lang/Iterable.html).
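For instance, removing elements mid-iteration requires the explicit iterator form. A minimal, self-contained sketch (the list contents are invented for illustration):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class IteratorRemoveDemo {
    public static void main(String[] args) {
        List<String> someList = new ArrayList<>();
        someList.add("monkey");
        someList.add("donkey");
        someList.add("skeleton key");

        // i.remove() deletes the element last returned by i.next() without
        // triggering a ConcurrentModificationException.
        for (Iterator<String> i = someList.iterator(); i.hasNext();) {
            String item = i.next();
            if (item.contains(" ")) {
                i.remove();
            }
        }
        System.out.println(someList); // [monkey, donkey]
    }
}
```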
If the right-hand side of the `for (:)` idiom is an array rather than an `Iterable` object, the internal code uses an int index counter and checks against `array.length` instead. See the [Java Language Specification](http://docs.oracle.com/javase/specs/jls/se8/html/jls-14.html#jls-14.14.2).
```
for (int i = 0; i < someArray.length; i++) {
String item = someArray[i];
System.out.println(item);
}
```
|
The *for each* construct is also valid for arrays, e.g.
```
String[] fruits = new String[] { "Orange", "Apple", "Pear", "Strawberry" };
for (String fruit : fruits) {
// fruit is an element of the `fruits` array.
}
```
which is essentially equivalent to
```
for (int i = 0; i < fruits.length; i++) {
String fruit = fruits[i];
// fruit is an element of the `fruits` array.
}
```
So, overall summary:
[[nsayer]](https://stackoverflow.com/questions/85190/how-does-the-java-for-each-loop-work/85206#85206) The following is the longer form of what is happening:
> ```
> for(Iterator<String> i = someList.iterator(); i.hasNext(); ) {
> String item = i.next();
> System.out.println(item);
> }
> ```
>
> Note that if you need to use
> i.remove(); in your loop, or access
> the actual iterator in some way, you
> cannot use the for( : ) idiom, since
> the actual Iterator is merely
> inferred.
[[Denis Bueno]](https://stackoverflow.com/questions/85190/how-does-the-java-for-each-loop-work/85242#85242)
> It's implied by nsayer's answer, but
> it's worth noting that the OP's for(..)
> syntax will work when "someList" is
> anything that implements
> java.lang.Iterable -- it doesn't have
> to be a list, or some collection from
> java.util. Even your own types,
> therefore, can be used with this
> syntax.
|
In detail, how does the 'for each' loop work in Java?
|
[
"",
"java",
"foreach",
"syntactic-sugar",
""
] |
I'm creating a custom drop down list with AJAX dropdownextender. Inside my drop panel I have linkbuttons for my options.
```
<asp:Label ID="ddl_Remit" runat="server" Text="Select remit address."
Style="display: block; width: 300px; padding:2px; padding-right: 50px; font-family: Tahoma; font-size: 11px;" />
<asp:Panel ID="DropPanel" runat="server" CssClass="ContextMenuPanel" Style="display :none; visibility: hidden;">
<asp:LinkButton runat="server" ID="Option1z" Text="451 Stinky Place Drive <br/>North Nowhere, Nebraska 20503-2343 " OnClick="OnSelect" CssClass="ContextMenuItem" />
<asp:LinkButton runat="server" ID="Option2z" Text="451 Stinky Place Drive <br/>North Nowhere, Nebraska 20503-2343 " OnClick="OnSelect" CssClass="ContextMenuItem" />
<asp:LinkButton runat="server" ID="Option3z" Text="451 Stinky Place Drive <br/>North Nowhere, Nebraska 20503-2343 " OnClick="OnSelect" CssClass="ContextMenuItem" />-->
</asp:Panel>
<ajaxToolkit:DropDownExtender runat="server" ID="DDE"
TargetControlID="ddl_Remit"
DropDownControlID="DropPanel" />
```
And this works well. Now what I have to do is dynamically fill this dropdownlist. Here is my best attempt:
```
private void fillRemitDDL()
{
//LinkButton Text="451 Stinky Place Drive <br/>North Nowhere, Nebraska 20503-2343 " OnClick="OnSelect" CssClass="ContextMenuItem"
DAL_ScanlineTableAdapters.SL_GetRemitByScanlineIDTableAdapter ta = new DAL_ScanlineTableAdapters.SL_GetRemitByScanlineIDTableAdapter();
DataTable dt = (DataTable)ta.GetData(int.Parse(this.SLID));
if (dt.Rows.Count > 0)
{
Panel ddl = this.FindControl("DropPanel") as Panel;
ddl.Controls.Clear();
for (int x = 0; x < dt.Rows.Count; x++)
{
LinkButton lb = new LinkButton();
lb.Text = dt.Rows[x]["Remit3"].ToString().Trim() + "<br />" + dt.Rows[x]["Remit4"].ToString().Trim() + "<br />" + dt.Rows[x]["RemitZip"].ToString().Trim();
lb.CssClass = "ContextMenuItem";
lb.Attributes.Add("onclick", "setDDL(" + lb.Text + ")");
ddl.Controls.Add(lb);
}
}
}
```
My problem is that I cannot get the event to run script! I've tried the above code as well as replacing
```
lb.Attributes.Add("onclick", "setDDL(" + lb.Text + ")");
```
with
```
lb.Click += new EventHandler(OnSelect);
```
and also
```
lb.OnClientClick = "setDDL(" + lb.Text + ")");
```
I'm testing the branches with alerts on the client side and getting nothing.
Edit:
I would like to try adding the generic anchor, but I don't think I can add the element to an ASP.NET control. Nor can I access a client-side div from server code to add it that way. I'm going to have to use some sort of control with an event. My setDDL function goes as follows:
```
function setDDL(var)
{
alert(var);
document.getElementById('ctl00_ContentPlaceHolder1_Scanline1_ddl_Remit').innerText = var;
}
```
Also I just took out the string variable in the function call (i.e. from
```
lb.Attributes.Add("onclick", "setDDL(" + lb.Text + ")");
```
to
```
lb.Attributes.Add("onclick", "setDDL()");
```
|
I'm not sure what your setDDL method does in your script, but it should fire if one of the link buttons is clicked. I think you might be better off just inserting a generic HTML anchor instead of a .NET LinkButton, though, as you will have no reference to the control on the server side. Then you can handle the data exchange with your setDDL method. Furthermore, you might want to quote the string you are placing inside the call to setDDL, because passing literal string data without quotes will cause script issues (like not calling the method, plus page errors).
|
OK, I used Literals to create anchor tags with onclicks on them, and that seems to be working great. Thanks a lot.
|
Linkbutton click event not running handler
|
[
"",
"c#",
"events",
"asp.net-ajax",
"webforms",
""
] |
I am trying to build a function in C/C++ to sort an array and replace each value with its "score" or rank. It takes in a double pointer array to an array of ints, and sorts the double pointers based on the dereferenced value of the integers. I have tried quite a few times to make it work, but can't get it down. Once again, it must sort the double pointers based on the values they point to. This is what I have:
```
void SortArray( int ** pArray, int ArrayLength )
{
int i, j, flag = 1; // set flag to 1 to begin initial pass
int * temp; // holding variable orig with no *
for(i = 1; (i <= ArrayLength) && flag; i++)
{
flag = 0;
for (j = 0; j < (ArrayLength -1); j++)
{
if (*pArray[j+1] > *pArray[j]) // ascending order simply changes to <
{
temp = &pArray[j]; // swap elements
pArray[j] = &pArray[j+1];
pArray[j+1] = &temp;
flag = 1; // indicates that a swap occurred.
}
}
}
}
```
|
You're close. You're referencing the address of the array items when you swap, which isn't necessary. The items in the array are pointers, and that's what needs to be swapped.
See below:
```
void SortArray( int ** pArray, int ArrayLength )
{
int i, j, flag = 1; // set flag to 1 to begin initial pass
int * temp; // holding variable orig with no *
for(i = ArrayLength - 1; i > 0 && flag; i--)
{
flag = 0;
for (j = 0; j < i; j++)
{
if (*pArray[j] > *pArray[j+1]) // ascending order simply changes to <
{
temp = pArray[j]; // swap elements
pArray[j] = pArray[j+1];
pArray[j+1] = temp;
flag = 1; // indicates that a swap occurred.
}
}
}
}
```
Also, check out [this lovely blog post on Bubble Sorting](http://rant.blackapache.net/2008/08/14/sorting-algorithms-the-bubble-sort/) in case you're interested (sorry, shameless plug :)). Hope that helps you with your homework ;)
---
Edit: Note the subtle "optimisation" where you count back from the array length and only increment up until 'i' in the inner loop. This saves you from needlessly reparsing items that have already been sorted.
|
> Heh, this isnt homework.

If that's the case, then consider using the STL to manage arrays and sorting. It's easier to develop and maintain, and the std::sort algorithm is asymptotically faster than bubble sort.
|
How can I sort an array of double pointers based on the values they point to?
|
[
"",
"c++",
"c",
"arrays",
"pointers",
"reference",
""
] |
How would you find the fractional part of a floating point number in PHP?
For example, if I have the value `1.25`, I want to return `0.25`.
|
```
$x = $x - floor($x)
```
|
```
$x = fmod($x, 1);
```
Here's a demo:
```
<?php
$x = 25.3333;
$x = fmod($x, 1);
var_dump($x);
```
Should output
```
double(0.3333)
```
[Credit.](https://stackoverflow.com/questions/50801/whats-the-best-way-to-get-the-fractional-part-of-a-float-in-php/41626638#comment17394709_50806)
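Note that the two approaches disagree for negative inputs: `fmod` keeps the sign of its argument, while the `floor` subtraction always yields a value in `[0, 1)`. The same floating-point arithmetic, sketched in Java purely for illustration:

```java
public class FractionalPartDemo {
    public static void main(String[] args) {
        double x = -1.25;
        // The % remainder (fmod-style) keeps the sign of the dividend:
        System.out.println(x % 1);             // -0.25
        // Subtracting the floor always gives a non-negative fraction:
        System.out.println(x - Math.floor(x)); // 0.75
    }
}
```

Which one is "correct" depends on what you want the fractional part of a negative number to mean.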
|
What's the best way to get the fractional part of a float in PHP?
|
[
"",
"php",
""
] |
Edit:
From another question I provided an answer that has links to a lot of questions/answers about singletons: [More info about singletons here:](https://stackoverflow.com/questions/1008019/c-singleton-design-pattern/1008289#1008289)
So I have read the thread [Singletons: good design or a crutch?](https://stackoverflow.com/questions/11831/singletons-good-design-or-a-crutch)
And the argument still rages.
I see Singletons as a Design Pattern (good and bad).
The problem with Singleton is not the Pattern but rather the users (sorry everybody). Everybody and their father thinks they can implement one correctly (and from the many interviews I have done, most people can't). Also because everybody thinks they can implement a correct Singleton they abuse the Pattern and use it in situations that are not appropriate (replacing global variables with Singletons!).
So the main questions that need to be answered are:
* When should you use a Singleton
* How do you implement a Singleton correctly
My hope for this article is that we can collect together in a single place (rather than having to google and search multiple sites) an authoritative source of when (and then how) to use a Singleton correctly. Also appropriate would be a list of Anti-Usages and common bad implementations explaining why they fail to work and for good implementations their weaknesses.
---
So get the ball rolling:
I will hold my hand up and say this is what I use but probably has problems.
I like "Scott Myers" handling of the subject in his books "Effective C++"
> Good Situations to use Singletons (not many):
>
> * Logging frameworks
> * Thread recycling pools
```
/*
* C++ Singleton
* Limitation: Single Threaded Design
* See: http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf
* For problems associated with locking in multi threaded applications
*
* Limitation:
* If you use this Singleton (A) within a destructor of another Singleton (B)
* This Singleton (A) must be fully constructed before the constructor of (B)
* is called.
*/
class MySingleton
{
private:
// Private Constructor
MySingleton();
// Stop the compiler from generating methods to copy the object
MySingleton(MySingleton const& copy); // Not Implemented
MySingleton& operator=(MySingleton const& copy); // Not Implemented
public:
static MySingleton& getInstance()
{
// The only instance
// Guaranteed to be lazy initialized
// Guaranteed that it will be destroyed correctly
static MySingleton instance;
return instance;
}
};
```
OK. Let's get some criticism and other implementations together.
:-)
|
Answer:
Use a Singleton if:
* You need to have one and only one object of a type in system
Do not use a Singleton if:
* You want to save memory
* You want to try something new
* You want to show off how much you know
* Because everyone else is doing it (See [cargo cult programmer](http://en.wikipedia.org/wiki/Cargo_cult_programming) in wikipedia)
* In user interface widgets
* It is supposed to be a cache
* In strings
* In Sessions
* I can go all day long
How to create the best singleton:
* The smaller, the better. I am a minimalist
* Make sure it is thread safe
* Make sure it is never null
* Make sure it is created only once
* Lazy or system initialization? Up to your requirements
* Sometimes the OS or the JVM creates singletons for you (e.g. in Java every class definition is a singleton)
* Provide a destructor or somehow figure out how to dispose resources
* Use little memory
|
Singletons give you the ability to combine two bad traits in one class. That's wrong in pretty much every way.
A singleton gives you:
1. Global access to an object, and
2. A guarantee that no more than one object of this type *can ever be created*
Number one is straightforward. Globals are generally bad. We should never make objects globally accessible unless we *really* need it.
Number two may sound like it makes sense, but let's think about it. When was the last time you *accidentally* created a new object instead of referencing an existing one? Since this is tagged C++, let's use an example from that language. Do you often accidentally write
```
std::ostream os;
os << "hello world\n";
```
When you intended to write
```
std::cout << "hello world\n";
```
Of course not. We don't need protection against this error, because that kind of error just doesn't happen. If it does, the correct response is to go home and sleep for 12-20 hours and hope you feel better.
If only one object is needed, simply create one instance. If one object should be globally accessible, make it a global. But that doesn't mean it should be impossible to create other instances of it.
The "only one instance is possible" constraint doesn't really protect us against likely bugs. But it *does* make our code very hard to refactor and maintain. Because quite often we find out *later* that we did need more than one instance. We *do* have more than one database, we *do* have more than one configuration object, we do want several loggers. Our unit tests may want to be able to create and recreate these objects every test, to take a common example.
So a singleton should be used if and only if, we need *both* the traits it offers: If we *need* global access (which is rare, because globals are generally discouraged) **and** we *need* to prevent anyone from **ever** creating more than one instance of a class (which sounds to me like a design issue). The only reason I can see for this is if creating two instances would corrupt our application state - probably because the class contains a number of static members or similar silliness. In which case the obvious answer is to fix that class. It shouldn't depend on being the only instance.
If you need global access to an object, make it a global, like `std::cout`. But don't constrain the number of instances that can be created.
If you absolutely, positively need to constrain the number of instances of a class to just one, and there is no way that creating a second instance can ever be handled safely, then enforce that. But don't make it globally accessible as well.
If you do need both traits, then 1) make it a singleton, and 2) let me know what you need that for, because I'm having a hard time imagining such a case.
|
Singleton: How should it be used
|
[
"",
"c++",
"design-patterns",
"singleton",
""
] |
I've seen news of [John Resig's fast new selector engine named Sizzle](http://github.com/jeresig/sizzle/tree/master) pop up in quite a few places, but I don't know what a selector engine is, nor have any of the articles given an explanation of what it is. I know Resig is the creator of jQuery, and that Sizzle is something in Javascript, but beyond that I don't know what it is. So, what is a selector engine?
Thanks!
|
A selector engine is used to query a page's DOM for particular elements, based on some sort of query (usually CSS syntax or similar).
For example, this jQuery:
```
$('div')
```
Would search for and return all of the `<div>` elements on the page. It uses jQuery's selector engine to do that.
Optimizing the selector engine is a big deal because almost every operation you perform with these frameworks is based on some sort of DOM query.
|
A selector engine is a JavaScript library that lets you select elements in the DOM tree using some kind of string for identifying them (think regular expressions for DOM elements). Most selector engines use some variation of the CSS3 selectors syntax so, for example, you can write something like:
```
var paragraphs = selectorengine.select('p.firstParagraph')
```
to select all P elements in the document with class firstParagraph.
Some selector engines also support a partial implementation of XPath, and even some custom syntaxes. For example, jQuery lets you write:
```
var checkedBoxes = jQuery('form#login input:checked')
```
To select all checked check boxes in the login form in the document.
|
What is a selector engine?
|
[
"",
"javascript",
"jquery",
"glossary",
"resig",
""
] |
I stumbled over this passage in the [Django tutorial](http://www.djangoproject.com/documentation/tutorial01/):
> Django models have a default `__str__()` method that calls `__unicode__()` and converts the result to a UTF-8 bytestring. This means that `unicode(p)` will return a Unicode string, and `str(p)` will return a normal string, with characters encoded as UTF-8.
Now, I'm confused because afaik Unicode is not any particular representation, so what is a "Unicode string" in Python? Does that mean UCS-2? Googling turned up [this "Python Unicode Tutorial"](http://www.reportlab.com/i18n/python_unicode_tutorial.html) which boldly states
> Unicode is a two-byte encoding which covers all of the world's common writing systems.
which is plain wrong, or is it? I have been confused many times by character set and encoding issues, but here I'm quite sure that the documentation I'm reading is confused. Does anybody know what's going on in Python when it gives me a "Unicode string"?
|
> what is a "Unicode string" in Python? Does that mean UCS-2?
Unicode strings in Python are stored internally either as UCS-2 (fixed-length 16-bit representation, almost the same as UTF-16) or UCS-4/UTF-32 (fixed-length 32-bit representation). It's a compile-time option; on Windows it's always UTF-16 whilst many Linux distributions set UTF-32 (‘wide mode’) for their versions of Python.
You are generally not supposed to care: you will see Unicode code-points as single elements in your strings and you won't know whether they're stored as two or four bytes. If you're in a UTF-16 build and you need to handle characters outside the Basic Multilingual Plane you'll be Doing It Wrong, but that's still very rare, and users who really need the extra characters should be compiling wide builds.
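Java, which always uses UTF-16 strings, illustrates the same trade-off a narrow (UTF-16) Python build has: a code point outside the Basic Multilingual Plane occupies two 16-bit units (a surrogate pair), so length in code units and length in code points diverge. A small sketch:

```java
public class SurrogatePairDemo {
    public static void main(String[] args) {
        // U+1D306 lies outside the BMP, so UTF-16 encodes it
        // as a surrogate pair of two 16-bit code units.
        String s = "\uD834\uDF06";
        System.out.println(s.length());                      // 2 code units
        System.out.println(s.codePointCount(0, s.length())); // 1 code point
    }
}
```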
> plain wrong, or is it?
Yes, it's quite wrong. To be fair I think that tutorial is rather old; it probably pre-dates wide Unicode strings, if not Unicode 3.1 (the version that introduced characters outside the Basic Multilingual Plane).
There is an additional source of confusion stemming from Windows's habit of using the term “Unicode” to mean, specifically, the UTF-16LE encoding that NT uses internally. People from Microsoftland may often copy this somewhat misleading habit.
|
Meanwhile, I did some more careful research to verify what the internal representation in Python is, and also what its limits are. "[The Truth About Unicode In Python](http://www.cmlenz.net/archives/2008/07/the-truth-about-unicode-in-python "The truth about Unicode in Python")" is a very good article which cites directly from the Python developers. Apparently, the internal representation is either UCS-2 or UCS-4 depending on a compile-time switch. So Jon, it's not UTF-16, but your answer put me on the right track anyway, thanks.
|
Unicode vs UTF-8 confusion in Python / Django?
|
[
"",
"python",
"django",
"unicode",
""
] |
I ran into a little problem today: I have a JS drop-down menu, and when I inserted a Google Map, the menu is rendered behind the map. Any ideas on how to change the z-index of the Google Map?
Thanks!
|
If your problem happens in Internet Explorer, but it renders the way you'd expect in FireFox or Safari, [this link](http://aplus.rs/lab/z-pos/) was extraordinarily helpful for me with a [similar problem](https://stackoverflow.com/questions/88883/yui-autocomplete-renders-under-other-page-elements-in-ie7).
It appears to boil down to the idea that marking an element as "position: relative;" in CSS causes IE6 & 7 to mess with its z-index relative to other elements that come before it in the HTML document, in unintuitive and anti-spec ways. Supposedly IE8 behaves "correctly", but I haven't tested it myself.
Anutron's advice is going to be really helpful if your problem is with a `<SELECT>` form element, but if you're using JavaScript to manipulate divs or uls to act like a drop down I don't think it's going to help.
|
Note that dropdown menus in some browsers (*ahem* IE *ahem*) cannot be z-positioned at all. You'll need to use an "iframe shim" to obscure it, or hide the dropdown entirely if you want to position something above it. See: <http://clientside.cnet.com/wiki/cnet-libraries/02-browser/02-iframeshim>
|
Google Maps, Z Index and Drop Down Javascript menus
|
[
"",
"javascript",
"html",
"css",
"google-maps",
"z-index",
""
] |
I have two classes, and want to include a static instance of one class inside the other and access the static fields from the second class via the first.
This is so I can have non-identical instances with the same name.
```
Class A
{
public static package1.Foo foo;
}
Class B
{
public static package2.Foo foo;
}
//package1
Foo
{
public final static int bar = 1;
}
// package2
Foo
{
public final static int bar = 2;
}
// usage
assertEquals(A.foo.bar, 1);
assertEquals(B.foo.bar, 2);
```
This works, but I get a warning: "The static field Foo.bar should be accessed in a static way".
Can someone explain why this is and offer a "correct" implementation?
I realize I could access the static instances directly, but if you have a long package hierarchy, that gets ugly:
```
assertEquals(net.FooCorp.divisions.A.package.Foo.bar, 1);
assertEquals(net.FooCorp.divisions.B.package.Foo.bar, 2);
```
|
I agree with others that you're probably thinking about this the wrong way. With that out of the way, this may work for you if you are only accessing static members:
```
public class A {
public static class Foo extends package1.Foo {}
}
public class B {
public static class Foo extends package2.Foo {}
}
```
|
You should use:
```
Foo.bar
```
And not:
```
A.foo.bar
```
That's what the warning means.
The reason is that `bar` isn't a member of an *instance* of `Foo`. Rather, `bar` is global, on the class `Foo`. The compiler wants you to reference it globally rather than pretending it's a member of the instance.
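A minimal sketch of what the warning is about (class and field names invented for illustration). The instance-qualified access compiles, but the compiler resolves it through the declared type of the reference, not through the object:

```java
public class StaticAccessDemo {
    static class Foo {
        static final int BAR = 2;
    }

    public static void main(String[] args) {
        Foo instance = new Foo();
        // Compiles, but draws the "should be accessed in a static way"
        // warning: the access is resolved via the declared type Foo.
        int viaInstance = instance.BAR;
        // The preferred form names the class directly:
        int viaClass = Foo.BAR;
        System.out.println(viaInstance == viaClass); // true
    }
}
```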
|
How do I correctly access static member classes?
|
[
"",
"java",
"static",
""
] |
I've recently inherited an application that makes very heavy use of session, including storing a lot of custom data objects in session. One of my first orders of business with this application was to at least move the session data away from InProc, and offload it to either a state server or SQL Server.
After I made all of the appropriate data objects serializable, and changed the web.config to use a state service, everything appeared to work fine.
However, I found that this application does a lot of object comparisons using GetHashCode(). Methods that worked fine when the session was InProc no longer work, because the hash codes no longer match when they are supposed to. This appears to be the case when trying to find a specific child object from a parent when you know the child object's original hash code.
If I simply change the web.config back to using inproc, it works again.
Anyone have any thoughts on where to begin with this?
---
EDIT:
qbeuek: thanks for the quick reply. In regards to:
> The default implementation of GetHashCode in Object class return a hash value based on objects address in memory or something similar. If some other identity comparison is required, you have to override both Equals and GetHashCode.
I should have given more information on how they are using this. Basically, they have one parent data object, and there are several arrays of child objects. They happen to know the hash code for a particular object they need, so they are looping through a specific array of child objects looking for a hash code that matches. Once a match is found, they then use that object for other work.
|
When you write
> does a lot of object comparisons using GetHashCode()
I sense there is something horribly wrong with this code. The GetHashCode method does not guarantee that the returned hash values are in any way unique for two different objects. As far as GetHashCode is concerned, it can return 0 for all objects and still be considered correct.
When two object are the same (the Equals method returns true), they **MUST** have the same value returned from GetHashCode. When two objects have the same hash value, they **can** be the same object (Equals returns true) or be different objects (Equals returns false).
There are no other guarantees on the result of GetHashCode.
The default implementation of GetHashCode in the Object class returns a hash value based on the object's address in memory or something similar. If some other identity comparison is required, you **have to** override both Equals and GetHashCode.
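The same contract exists in Java's `equals`/`hashCode` pair, so a sketch there illustrates the rule (the `Person` class and its fields are invented for illustration): the hash must be derived from exactly the fields that equality compares.

```java
import java.util.Objects;

public class Person {
    private final int id;

    public Person(int id) { this.id = id; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Person)) return false;
        return id == ((Person) o).id;
    }

    @Override
    public int hashCode() {
        // Derived from exactly the fields equals() compares, so equal
        // objects are guaranteed to have equal hash codes.
        return Objects.hash(id);
    }

    public static void main(String[] args) {
        Person a = new Person(42);
        Person b = new Person(42);
        System.out.println(a.equals(b));                  // true
        System.out.println(a.hashCode() == b.hashCode()); // true (required)
    }
}
```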
|
Override the GetHashCode method in the classes this method gets called on, and calculate the hash code based on unique object properties (like an ID, or all of the object's fields).
|
Different values of GetHashCode for inproc and stateserver session variables
|
[
"",
"c#",
"asp.net",
"session",
""
] |
If I have a query like:
```
Select EmployeeId
From Employee
Where EmployeeTypeId IN (1,2,3)
```
and I have an index on the `EmployeeTypeId` field, does SQL server still use that index?
|
Yeah, that's right. If your `Employee` table has 10,000 records, and only 5 records have `EmployeeTypeId` in (1,2,3), then it will most likely use the index to fetch the records. However, if it finds that 9,000 records have the `EmployeeTypeId` in (1,2,3), then it would most likely just do a table scan to get the corresponding `EmployeeId`s, as it's faster just to run through the whole table than to go to each branch of the index tree and look at the records individually.
SQL Server does a lot of stuff to try and optimize how the queries run. However, sometimes it doesn't get the right answer. If you know that SQL Server isn't using the index, by looking at the execution plan in query analyzer, you can tell the query engine to use a specific index with the following change to your query.
```
SELECT EmployeeId FROM Employee WITH (Index(Index_EmployeeTypeId )) WHERE EmployeeTypeId IN (1,2,3)
```
Assuming the index you have on the `EmployeeTypeId` field is named `Index_EmployeeTypeId`.
|
Usually it would, unless the IN clause covers too much of the table, and then it will do a table scan. Best way to find out in your specific case would be to run it in the query analyzer, and check out the execution plan.
|
Do indexes work with "IN" clause
|
[
"",
"sql",
"indexing",
""
] |
How can I get the `IDENTITY` of an inserted row?
I know about `@@IDENTITY` and `IDENT_CURRENT` and `SCOPE_IDENTITY`, but don't understand the implications or impacts attached to each. How do these differ, and when would each be used?
|
* [`@@IDENTITY`](http://msdn.microsoft.com/en-us/library/ms187342.aspx) returns the last identity value generated for any table in the current session, across all scopes. **You need to be careful here**, since it's across scopes. You could get a value from a trigger, instead of your current statement.
* [`SCOPE_IDENTITY()`](http://msdn.microsoft.com/en-us/library/ms190315.aspx) returns the last identity value generated for any table in the current session and the current scope. **Generally what you want to use**.
* [`IDENT_CURRENT('tableName')`](http://msdn.microsoft.com/en-us/library/ms175098.aspx) returns the last identity value generated for a specific table in any session and any scope. This lets you specify which table you want the value from, in case the two above aren't quite what you need (**very rare**). Also, as @[Guy Starbuck](https://stackoverflow.com/questions/42648/best-way-to-get-identity-of-inserted-row#42665) mentioned, "You could use this if you want to get the current IDENTITY value for a table that you have not inserted a record into."
* The [`OUTPUT` clause](http://msdn.microsoft.com/en-us/library/ms177564.aspx) of the `INSERT` statement will let you access every row that was inserted via that statement. Since it's scoped to the specific statement, it's **more straightforward** than the other functions above. However, it's a little **more verbose** (you'll need to insert into a table variable/temp table and then query that) and it gives results even in an error scenario where the statement is rolled back. That said, if your query uses a parallel execution plan, this is the **only guaranteed method** for getting the identity (short of turning off parallelism). However, it is executed *before* triggers and cannot be used to return trigger-generated values.
|
I believe the safest and most accurate method of retrieving the inserted id would be using the output clause.
for example (taken from the following [MSDN](http://msdn.microsoft.com/en-us/library/ms177564.aspx) article)
> ```
> USE AdventureWorks2008R2;
> GO
> DECLARE @MyTableVar table( NewScrapReasonID smallint,
> Name varchar(50),
> ModifiedDate datetime);
> INSERT Production.ScrapReason
> OUTPUT INSERTED.ScrapReasonID, INSERTED.Name, INSERTED.ModifiedDate
> INTO @MyTableVar
> VALUES (N'Operator error', GETDATE());
>
> --Display the result set of the table variable.
> SELECT NewScrapReasonID, Name, ModifiedDate FROM @MyTableVar;
> --Display the result set of the table.
> SELECT ScrapReasonID, Name, ModifiedDate
> FROM Production.ScrapReason;
> GO
> ```
|
How to get the identity of an inserted row?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
It is generally accepted that the use of cursors in stored procedures should be avoided where possible (replaced with set based logic etc). If you take the cases where you need to iterate over some data, and can do in a read only manner, are fast forward (read only forward) cursor more or less inefficient than say while loops? From my investigations it looks as though the cursor option is generally faster and uses less reads and cpu time. I haven't done any extensive testing, but is this what others find? Do cursors of this type (fast forward) carry additional overhead or resource that could be expensive that I don't know about.
Is all the talk about not using cursors really about avoiding the use of cursors when set-based approaches are available, and the use of updatable cursors etc.
|
The 'Best Practice' of avoiding cursors in SQL Server dates back to SQL Server 2000 and earlier versions. The rewrite of the engine in SQL 2005 addressed most of the issues related to the problems of cursors, particularly with the introduction of the fast forward option. Cursors are not necessarily worse than set-based approaches and are used extensively and successfully in Oracle PL/SQL (LOOP).
The 'generally accepted' advice that you refer to **was** valid, but is now outdated and incorrect - go on the assumption that fast forward cursors behave as advertised and perform. Do some tests and research, basing your findings on SQL 2005 and later.
|
While a fast forward cursor does have some optimizations in Sql Server 2005, it is *not* true that they are anywhere close to a set based query in terms of performance. There are very few situations where cursor logic cannot be replaced by a set-based query. Cursors will always be inherently slower, due in part to the fact that you have to keep interrupting the execution in order to fill your local variables.
Here are few references, which would only be the tip of the iceberg if you research this issue:
<http://www.code-magazine.com/Article.aspx?quickid=060113>
<http://dataeducation.com/re-inventing-the-recursive-cte/>
|
SQL Server Fast Forward Cursors
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have a custom installer action that updates the PATH environment, and creates an additional environment variable. Appending a directory to the existing path variable is working fine, but for some reason my attempts to create a new environment variable have been unsuccessful. The code I am using is:
```
using (RegistryKey reg = Registry.LocalMachine.OpenSubKey(@"SYSTEM\CurrentControlSet\Control\Session Manager\Environment", true))
{
reg.SetValue("MYVAR", "SomeVal", RegistryValueKind.ExpandString);
}
```
Edit: The OS is 32-bit XP, and as far as I can tell it is failing silently.
|
Is there any reason that you have to do it through the registry?
If not, since .NET 2.0 you can use Environment.SetEnvironmentVariable(). It allows you to set the variable on a machine, process or user basis.
|
What OS is this? Is it on a 64-bit system? What is the nature of the failure: silent or is an exception thrown?
You could try running [ProcessMonitor](http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx) and seeing if it sees the attempt to set the value.
|
Cannot create an environment variable in the registry
|
[
"",
"c#",
"registry",
"environment",
""
] |
Is there another way to render graphics in C# beyond [GDI+](https://en.wikipedia.org/wiki/Graphics_Device_Interface#Windows_XP) and [XNA](http://en.wikipedia.org/wiki/Microsoft_XNA)?
(For the development of a tile map editor.)
|
[SDL.NET](http://cs-sdl.sourceforge.net/) is the solution I've come to love. If you need 3D on top of it, you can use Tao.OpenGL to render inside it. It's fast, industry standard ([SDL](http://en.wikipedia.org/wiki/Simple_DirectMedia_Layer), that is), and cross-platform.
|
Yes, I have written a Windows Forms control that wraps DirectX 9.0 and provides direct pixel level manipulation of the video surface.
I actually wrote another post on Stack Overflow asking if there are other better approaches: *[Unsafe C# and pointers for 2D rendering, good or bad?](https://stackoverflow.com/questions/55963/unsafe-c-and-pointers-for-2d-rendering-good-or-bad)*
While it is relatively high performance, it requires the unsafe compiler option as it uses pointers to access the memory efficiently. Hence the reason for this earlier post.
This is a high level of the required steps:
1. Download the DirectX SDK.
2. Create a new C# [Windows Forms](http://en.wikipedia.org/wiki/Windows_Forms) project and reference the installed
Microsoft DirectX assembly.
3. Initialize a new DirectX Device object with Presentation Parameters
(windowed, back buffering, etc.) you require.
4. Create the Device, taking care to record the surface "Pitch" and
current display mode (bits per pixel).
5. When you need to display something, `Lock` the backbuffer
surface and store the returned pointer to the start of surface
memory.
6. Use pointer arithmetic, calculate the actual pixel position in the
data based on the surface pitch,
bits per pixel and the actual x/y pixel coordinate.
7. In my case for simplicity I am sticking to 32 bpp, meaning setting a pixel is as simple as: \*(surfacePointer + (y \* pitch + x))=Color.FromARGB(255,0,0);
8. When finished drawing, `Unlock` the back buffer surface. Present the surface.
9. Repeat from step 5 as required.
Be aware that taking this approach you need to be very careful about checking the current display mode (pitch and bits per pixel) of the target surface. Also you will need to have a strategy in place to deal with window resizing or changes of screen format while your program is running.
|
Rendering graphics in C#
|
[
"",
"c#",
"gdi+",
"xna",
"rendering",
""
] |
In C# you can make a block inside of a method that is not attached to any other statement.
```
public void TestMethod()
{
{
string x = "test";
string y = x;
{
int z = 42;
int zz = z;
}
}
}
```
This code compiles and runs just as if the braces inside the main method weren't there. Also notice the block inside of a block.
Is there a scenario where this would be valuable? I haven't found any yet, but am curious to hear of other people's findings.
|
Scope and garbage collection: When you leave the unattached block, any variables declared in it go out of scope. That lets the garbage collector clean up those objects.
[Ray Hayes](https://stackoverflow.com/users/7093/ray-hayes) points out that the .NET garbage collector will not immediately collect the out-of-scope objects, so scoping is the main benefit.
|
An example would be if you wanted to reuse a variable name; normally you can't reuse variable names.
This is not valid
```
int a = 10;
Console.WriteLine(a);
int a = 20;
Console.WriteLine(a);
```
but this is:
```
{
int a = 10;
Console.WriteLine(a);
}
{
int a = 20;
Console.WriteLine(a);
}
```
The only thing I can think of right now is, for example, if you were processing some large object and extracted some information out of it, and afterwards were going to perform a bunch of operations, you could put the large-object processing in a block so that it goes out of scope, then continue with the other operations.
```
{
//Process a large object and extract some data
}
//large object is out of scope here and will be garbage collected,
//you can now perform other operations with the extracted data that can take a long time,
//without holding the large object in memory
//do processing with extracted data
```
|
What is the value of an anonymous unattached block in C#?
|
[
"",
"c#",
""
] |
I'm writing a financial C# application which receives messages from the network, translates them into different objects according to the message type, and finally applies the application's business logic to them.
The point is that after the business logic is applied, I'm very sure I will never need these instances again. Rather than wait for the garbage collector to free them, I'd like to explicitly "delete" them.
Is there a better way to do this in C#? Should I use a pool of objects to always reuse the same set of instances, or is there a better strategy?
The goal is to prevent garbage collection from using any CPU during a time-critical process.
|
Don't delete them right away. Calling the garbage collector for each object is a bad idea. Normally you *really* don't want to mess with the garbage collector at all, and even time critical processes are just race conditions waiting to happen if they're that sensitive.
But if you know you'll have busy vs light load periods for your app, you might try a more general GC.Collect() when you reach a light period to encourage cleanup before the next busy period.
|
Look here: <http://msdn.microsoft.com/en-us/library/bb384202.aspx>
You can tell the garbage collector that you're doing something critical at the moment, and it will try to be nice to you.
|
How to avoid garbage collection in real time .NET application?
|
[
"",
"c#",
".net",
"garbage-collection",
"real-time",
"finance",
""
] |
Basically, I've written an API to www.thetvdb.com in Python. The current code can be found [here](http://github.com/dbr/tvdb_api/tree/master/tvdb_api.py).
It grabs data from the API as requested, and has to store the data somehow, and make it available by doing:
```
print tvdbinstance[1][23]['episodename'] # get the name of episode 23 of season 1
```
What is the "best" way to abstract this data within the `Tvdb()` class?
I originally used an extended `Dict()` that automatically created sub-dicts (so you could do `x[1][2][3][4] = "something"` without having to do `if x[1].has_key(2): x[1][2] = []` and so on)
Then I just stored the data by doing `self.data[show_id][season_number][episode_number][attribute_name] = "something"`
This worked okay, but there was no easy way of checking if `x[3][24]` was supposed to exist or not (so I couldn't raise the season\_not\_found exception).
Currently it's using four classes: `ShowContainer`, `Show`, `Season` and `Episode`. Each one is a very basic dict, which I can easily add extra functionality to (the `search()` function on `Show()` for example). Each has a `__setitem__`, `__getitem__` and `has_key`.
This works mostly fine. I can check in `Show` if it has that season in its `self.data` dict and, if not, `raise season_not_found`. I can also check in `Season()` if it has that episode, and so on.
The problem now is it's presenting itself as a dict, but doesn't have all the functionality, and because I'm overriding the `__getitem__` and `__setitem__` functions, it's easy to accidentally recursively call `__getitem__` (so I'm not sure if extending the `Dict` class will cause problems).
The other slight problem is adding data into the dict is a lot more work than the old `Dict` method (which was `self.data[seas_no][ep_no]['attribute'] = 'something'`). See `_setItem` and `_setData`. It's not too bad, since it's currently only a read-only API interface (so the users of the API should only ever retrieve data, not add more), but it's hardly... Elegant.
I think the series-of-classes system is probably the best way, but does anyone have a better idea for storing the data? And would extending the `ShowContainer`/etc classes with `Dict` cause problems?
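For reference, the auto-creating sub-dict behaviour described above can be written very compactly with `collections.defaultdict` (a sketch of the idea, not the project's actual code):

```python
from collections import defaultdict

def autodict():
    """Return a dict that creates nested dicts on demand (autovivification)."""
    return defaultdict(autodict)

x = autodict()
x[1][2][3][4] = "something"  # intermediate levels are created automatically
print(x[1][2][3][4])
```

The trade-off is the one the question notes: every lookup silently succeeds, so a plain `defaultdict` cannot by itself raise a season-not-found style error for missing keys.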
|
OK, what you need is `classobj` from the `new` module. That will allow you to construct exception classes dynamically (`classobj` takes a string as an argument for the class name).
```
import new
myexc=new.classobj("ExcName",(Exception,),{})
i=myexc("This is the exc msg!")
raise i
```
this gives you:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
__main__.ExcName: This is the exc msg!
```
remember that you can always get the class name through:
```
self.__class__.__name__
```
So, after some string mangling and concatenation, you should be able to obtain appropriate exception class name and construct a class object using that name and then raise that exception.
P.S. - you can also raise strings, but this is deprecated.
```
raise(self.__class__.__name__+"Exception")
```
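On modern Python, the deprecated `new` module can be replaced with the built-in three-argument `type()`; a minimal sketch of the same idea (the `SeasonNotFound` name is just an example):

```python
# type(name, bases, dict) builds a class at runtime, much like new.classobj did
def make_exception(name):
    return type(name, (Exception,), {})

SeasonNotFound = make_exception("SeasonNotFound")
try:
    raise SeasonNotFound("season 3 does not exist")
except SeasonNotFound as e:
    print(type(e).__name__, e)
```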
|
Why not use SQLite? There is good support in Python and you can write SQL queries to get the data out. Here is the Python docs for [sqlite3](http://docs.python.org/lib/module-sqlite3.html)
---
If you don't want to use SQLite you could do an array of dicts.
```
episodes = []
episodes.append({'season':1, 'episode': 2, 'name':'Something'})
episodes.append({'season':1, 'episode': 2, 'name':'Something', 'actors':['Billy Bob', 'Sean Penn']})
```
That way you can add metadata to any record and search it very easily
```
season_1 = [e for e in episodes if e['season'] == 1]
billy_bob = [e for e in episodes if 'actors' in e and 'Billy Bob' in e['actors']]
for episode in billy_bob:
print "Billy bob was in Season %s Episode %s" % (episode['season'], episode['episode'])
```
|
Best way to abstract season/show/episode data
|
[
"",
"python",
"data-structures",
""
] |
Often I find myself interacting with files in some way but after writing the code I'm always uncertain how robust it actually is. The problem is that I'm not entirely sure how file related operations can fail and, therefore, the best way to handle exceptions.
The simple solution would seem to be just to catch any `IOExceptions` thrown by the code and give the user an "Inaccessible file" error message, but is it possible to get a bit more fine-grained error messages? Is there a way to determine the difference between such errors as a file being locked by another program and the data being unreadable due to a hardware error?
Given the following C# code, how would you handle errors in a user friendly (as informative as possible) way?
```
public class IO
{
public List<string> ReadFile(string path)
{
FileInfo file = new FileInfo(path);
if (!file.Exists)
{
throw new FileNotFoundException();
}
StreamReader reader = file.OpenText();
List<string> text = new List<string>();
while (!reader.EndOfStream)
{
text.Add(reader.ReadLine());
}
reader.Close();
reader.Dispose();
return text;
}
public void WriteFile(List<string> text, string path)
{
FileInfo file = new FileInfo(path);
if (!file.Exists)
{
throw new FileNotFoundException();
}
StreamWriter writer = file.CreateText();
foreach(string line in text)
{
writer.WriteLine(line);
}
writer.Flush();
writer.Close();
writer.Dispose();
}
}
```
|
> ...but is it possible to get a bit more fine-grained error messages.
Yes. Go ahead and catch `IOException`, and use the `Exception.ToString()` method to get a relatively relevant error message to display. Note that the exceptions generated by the .NET Framework will supply these useful strings, but if you are going to throw your own exception, you must remember to plug in that string into the `Exception`'s constructor, like:
`throw new FileNotFoundException("File not found");`
Also, absolutely, as per [Scott Dorman](https://stackoverflow.com/users/1559/scott-dorman), use that `using` statement. The thing to notice, though, is that the `using` statement doesn't actually `catch` anything, which is the way it ought to be. Your test to see if the file exists, for instance, will introduce a race condition that may be rather [vexing](http://blogs.msdn.com/ericlippert/archive/2008/09/10/vexing-exceptions.aspx). It doesn't really do you any good to have it in there. So, now, for the reader we have:
```
try {
using (StreamReader reader = file.OpenText()) {
// Your processing code here
}
} catch (IOException e) {
UI.AlertUserSomehow(e.ToString());
}
```
In short, for basic file operations:
1. Use `using`
2. Wrap the using statement or function in a `try`/`catch` that `catch`es `IOException`
3. Use `Exception.ToString()` in your `catch` to get a useful error message
4. Don't try to detect exceptional file issues yourself. Let .NET do the throwing for you.
|
The first thing you should change are your calls to StreamWriter and StreamReader to wrap them in a using statement, like this:
```
using (StreamReader reader = file.OpenText())
{
List<string> text = new List<string>();
while (!reader.EndOfStream)
{
text.Add(reader.ReadLine());
}
}
```
This will take care of calling Close and Dispose for you and will actually wrap it in a try/finally block so the actual compiled code looks like this:
```
StreamReader reader = file.OpenText();
try
{
List<string> text = new List<string>();
while (!reader.EndOfStream)
{
text.Add(reader.ReadLine());
}
}
finally
{
if (reader != null)
((IDisposable)reader).Dispose();
}
```
The benefit here is that you ensure the stream gets closed even if an exception occurs.
As far as any more explicit exception handling, it really depends on what you want to happen. In your example you explicitly test if the file exists and throw a FileNotFoundException which may be enough for your users but it may not.
|
How to properly handle exceptions when performing file io
|
[
"",
"c#",
".net",
"exception",
"file-io",
""
] |
Is there a way to fall through multiple case statements without stating `case value:` repeatedly?
I know this works:
```
switch (value)
{
case 1:
case 2:
case 3:
// Do some stuff
break;
case 4:
case 5:
case 6:
// Do some different stuff
break;
default:
// Default stuff
break;
}
```
but I'd like to do something like this:
```
switch (value)
{
case 1,2,3:
// Do something
break;
case 4,5,6:
// Do something
break;
default:
// Do the Default
break;
}
```
Is this syntax I'm thinking of from a different language, or am I missing something?
|
There is no syntax in either C++ or C# for the second method you mentioned.
There's nothing wrong with your first method. If however you have very big ranges, just use a series of if statements.
|
I guess this has been already answered. However, I think that you can still mix both options in a syntactically better way by doing:
```
switch (value)
{
case 1: case 2: case 3:
// Do Something
break;
case 4: case 5: case 6:
// Do Something
break;
default:
// Do Something
break;
}
```
|
Multiple cases in switch statement
|
[
"",
"c#",
"switch-statement",
""
] |
I have the following code snippet.
```
$items['A'] = "Test";
$items['B'] = "Test";
$items['C'] = "Test";
$items['D'] = "Test";
$index = 0;
foreach($items as $key => $value)
{
echo "$index is a $key containing $value\n";
$index++;
}
```
Expected output:
```
0 is a A containing Test
1 is a B containing Test
2 is a C containing Test
3 is a D containing Test
```
Is there a way to leave out the `$index` variable?
|
Your $index variable there is kind of misleading. That number isn't the index; your "A", "B", "C", "D" keys are. You can still access the data through a numbered index, e.g. $items[1], but that's really not the point. If you really want to keep the numbered index, I'd almost restructure the data:
```
$items[] = array("A", "Test");
$items[] = array("B", "Test");
$items[] = array("C", "Test");
$items[] = array("D", "Test");
foreach($items as $key => $value) {
echo $key.' is a '.$value[0].' containing '.$value[1];
}
```
|
You can do this:
```
$items[A] = "Test";
$items[B] = "Test";
$items[C] = "Test";
$items[D] = "Test";
for($i=0;$i<count($items);$i++)
{
list($key,$value) = each($items[$i]);
echo "$i $key contains $value";
}
```
I haven't done that before, but in theory it should work.
|
foreach access the index or an associative array
|
[
"",
"php",
"foreach",
""
] |
I'm having problems deciding on the best way to handle and store time measurements.
I have an app that has a textbox that allows the users to input time in either hh:mm:ss or mm:ss format.
So I was planning on parsing this string, tokenizing it on the colons, and creating a TimeSpan (or using TimeSpan.Parse() and just prepending "00:" in the mm:ss case) for my business logic. Is that OK?
How do I store this as in a database though? What would the field type be? DateTime seems wrong. I don't want a time of 00:54:12 to be stored as 1901-01-01 00:54:12 that seems a bit poor?
|
TimeSpan has an Int64 [Ticks](https://learn.microsoft.com/en-us/dotnet/api/system.timespan.ticks) property that you can store instead, and a [constructor](https://learn.microsoft.com/en-us/dotnet/api/system.timespan.-ctor#System_TimeSpan__ctor_System_Int64_) that takes a Ticks value.
|
I think the simplest is to just convert user input into a integer number of seconds. So 54:12 == 3252 seconds, so store the 3252 in your database or wherever. Then when you need to display it to the user, you can convert it back again.
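The conversion this answer describes is language-agnostic; here is a sketch of the round trip in Python (illustrative only, not the asker's C# code):

```python
def to_seconds(text):
    """Parse 'hh:mm:ss' or 'mm:ss' into a total number of seconds."""
    parts = [int(p) for p in text.split(":")]
    if len(parts) == 2:      # mm:ss -> treat as zero hours
        parts = [0] + parts
    h, m, s = parts
    return h * 3600 + m * 60 + s

def to_display(total):
    """Format a number of seconds back into hh:mm:ss for the user."""
    h, rem = divmod(total, 3600)
    m, s = divmod(rem, 60)
    return "%02d:%02d:%02d" % (h, m, s)

assert to_seconds("54:12") == 3252
assert to_display(3252) == "00:54:12"
```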
|
Handling and storing elapsed time
|
[
"",
"c#",
"database",
"datetime",
"timespan",
""
] |
One of the problems I have come across having complex tasks on the browser is with automatic timeouts.
Currently our site has a sliding expiration of 30 minutes. Normally, this isn't a problem because we use asp.net and most of the time the users update one or two fields and then submit the form. This obviously keeps the session active. I have struggled with how to handle this with significantly more complex forms/user interaction where there are few if any round trips to the server.
Now users can view reports, type emails etc. on our site, and I am struggling with how to keep their session active while they're actually using browser but they are not actively posting information back. I would dispense with the sliding expiration, but it's a requirement, so I am stuck with it. Most of our users are outside of the organization, so I can't use any form of *SSO*.
Does anyone have an elegant solution (I'll even go for an ugly one if it works), or am I out of luck?
|
We recently went through this in my organization. Although it is not the best solution, and hitting the right session across multiple browser windows is rough, we put a countdown timer on our page, included a button that just went back and hit the server to restart the session, and also provided the user with a JavaScript popup (the user's favorite part of the solution) with a message saying that the session was, say, five minutes from timing out and to hit the "OK" button to restart. The button would then hit the server to restart the session, restart the timer on the base page, and close the popup; the base page didn't need to be refreshed at all.
|
Have the browser ping the server via Ajax periodically to keep the session alive. Most sites do something useful in this ping, like save a draft of the user's submission.
|
Automatic Timeout Web Client Use
|
[
"",
"c#",
"asp.net",
""
] |
I've been learning python for a while now with some success. I even managed to create one or two (simple) programs using PyGtk + Glade.
The thing is: I am not sure if the best way to use GTK with python is by building the interfaces using Glade.
I was wondering if the more experienced ones among us (remember, I'm just a beginner) could point out the benefits and caveats of using Glade as opposed to creating everything in the code itself (assuming that learning the correct gtk bindings wouldn't exactly be a problem).
|
I would say that it depends: if you find that using Glade you can build the apps you want or need to make, then that's absolutely fine. If however you actually want to learn how GTK works or you have some non-standard UI requirements, you will **have** to dig into GTK internals (which are not that complicated).
Personally I'm usually about 5 minutes into a rich client when I need some feature or customization that is simply impossible through a designer such as Glade or [Stetic](http://www.mono-project.com/Stetic). Perhaps it's just me. Nevertheless it is still useful for me to bootstrap window design using a graphical tool.
My recommendation: if making rich clients using GTK is going to be a significant part of your job/hobby then learn GTK as well since you **will** need to write that code someday.
P.S. I personally find [Stetic](http://www.mono-project.com/Stetic) to be superior to Glade for design work, if a little bit more unstable.
|
Use GtkBuilder instead of Glade, it's integrated into Gtk itself instead of a separate library.
The main benefit of Glade is that it's much, much easier to create the interface. It's a bit more work to connect signal handlers, but I've never felt that matters much.
|
Glade or no glade: What is the best way to use PyGtk?
|
[
"",
"python",
"gtk",
"pygtk",
"glade",
"gtk2",
""
] |
How can I generate a (pseudo)random alpha-numeric string, something like: 'd79jd8c' in PHP?
|
First make a string with all your possible characters:
```
$characters = 'abcdefghijklmnopqrstuvwxyz0123456789';
```
You could also use [range()](http://php.net/range) to do this more quickly.
Then, in a loop, choose a random number and use it as the index to the `$characters` string to get a random character, and append it to your string:
```
$string = '';
$max = strlen($characters) - 1;
for ($i = 0; $i < $random_string_length; $i++) {
$string .= $characters[mt_rand(0, $max)];
}
```
`$random_string_length` is the length of the random string.
|
I like this function for the job
```
function randomKey($length) {
    $key = '';
    $pool = array_merge(range(0,9), range('a', 'z'), range('A', 'Z'));
    for ($i = 0; $i < $length; $i++) {
        $key .= $pool[mt_rand(0, count($pool) - 1)];
    }
    return $key;
}
echo randomKey(20);
```
|
Generating (pseudo)random alpha-numeric strings
|
[
"",
"php",
"random",
""
] |
I have a question on the best way of exposing an asynchronous remote interface.
The conditions are as follows:
* The protocol is asynchronous
* A third party can modify the data at any time
* The command round-trip can be significant
* The model should be well suited for UI interaction
* The protocol supports queries over certain objects, and so must the model
As a means of improving my lacking skills in this area (and brush up my Java in general), I have started a [project](http://Telharmonium.devjavu.com/) to create an Eclipse-based front-end for [xmms2](http://xmms2.xmms.se) (described below).
So, the question is; how should I expose the remote interface as a neat data model (In this case, track management and event handling)?
I welcome anything from generic discussions to pattern name-dropping or concrete examples and patches :)
---
My primary goal here is learning about this class of problems in general. If my project can gain from it, fine, but I present it strictly to have something to start a discussion around.
I've implemented a protocol abstraction which I call ['client'](http://telharmonium.devjavu.com/browser/trunk/xmms2-client) (for legacy reasons) which allows me to access most exposed features using method calls which I am happy with even if it's far from perfect.
The features provided by the xmms2 daemon are things like track searching, meta-data retrieval and manipulation, change playback state, load playlists and so on and so forth.
I'm in the middle of updating to the latest stable release of xmms2, and I figured I might as well fix some of the glaring weaknesses of my current implementation.
My plan is to build a better abstraction on top of the protocol interface, one that allows a more natural interaction with the daemon. The current ['model'](http://telharmonium.devjavu.com/browser/trunk/xmms2-model) implementation is hard to use and is frankly quite ugly (not to mention the UI-code which is truly horrible atm).
Today I have the [Tracks](http://telharmonium.devjavu.com/browser/trunk/xmms2-model/src/se/fnord/xmms2/model/Tracks.java) interface which I can use to get instances of [Track](http://telharmonium.devjavu.com/browser/trunk/xmms2-model/src/se/fnord/xmms2/model/Track.java) classes based on their id. Searching is performed through the [Collections](http://telharmonium.devjavu.com/browser/trunk/xmms2-model/src/se/fnord/xmms2/model/Collections.java) interface (unfortunate namespace clash) which I'd rather move to Tracks, I think.
Any data can be modified by a third party at any time, and this should be properly reflected in the model and change-notifications distributed
These interfaces are exposed when connecting, by returning an object hierarchy that looks like this:
* Connection
+ Playback getPlayback()
- Play, pause, jump, current track etc
- Expose playback state changes
+ Tracks getTracks()
- Track getTrack(id) etc
- Expose track updates
+ Collections getCollection()
- Load and manipulate playlists or named collections
- Query media library
- Expose collection updates
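For concreteness, the hierarchy above could be sketched as Java interfaces like the following. The method and listener names here are illustrative guesses, not the real client API:

```java
// Sketch of the exposed object hierarchy as Java interfaces.
// Names are illustrative only.
interface Connection {
    Playback getPlayback();
    Tracks getTracks();
    Collections getCollection();   // note the java.util namespace clash
}

interface Playback {
    void play();
    void pause();
    void jump(int position);
    Track currentTrack();
    void addPlaybackListener(PlaybackListener l);   // expose state changes
}

interface Tracks {
    Track getTrack(long id);
    void addTrackListener(TrackListener l);         // expose track updates
}

interface Collections {
    Iterable<Track> query(String pattern);          // query media library
    void addCollectionListener(CollectionListener l); // expose collection updates
}

interface Track {
    long id();
    String title();
}

interface PlaybackListener { void stateChanged(); }
interface TrackListener { void trackUpdated(Track t); }
interface CollectionListener { void collectionUpdated(); }
```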
|
For the asynchronous bit, I would suggest looking into `java.util.concurrent`, and especially the `Future<T>` interface. A `Future` represents an object which is not ready yet but is being created in a separate thread. You say that objects can be modified at any time by a third party, but I would still suggest you use immutable return objects here, and instead have a separate thread/event log you can subscribe to in order to be notified when objects expire. I have done little UI programming, but I believe using Futures for asynchronous calls would give you a responsive GUI, rather than one that waits for a server reply.
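A minimal sketch of the idea: the blocking daemon call runs on a worker thread while the caller keeps a handle to the pending result. `fetchTrackTitle` is a hypothetical stand-in for a blocking client call, not part of the real xmms2 client:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// The slow call is submitted to a worker pool; the caller gets a
// Future<String> back immediately and can pick up the value later.
public class AsyncClient {
    private final ExecutorService pool = Executors.newSingleThreadExecutor();

    public Future<String> fetchTrackTitle(long trackId) {
        return pool.submit(() -> {
            Thread.sleep(100);   // pretend the disks are waking up
            return "Title of track " + trackId;
        });
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```

The GUI thread can submit the request, return to its event loop, and either poll `isDone()` or call `get()` only when the value is actually needed.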
For the queries I would suggest using method chaining to build the query object, and each object returned by the chain should be `Iterable`, similar to how Django's model works. Say you have a `QuerySet` which implements `Iterable<Song>`. You could then call `allSongs()`, which would return a result iterating over all songs; or `allSongs().artist("Beatles")`, which would iterate over all Beatles songs; or even `allSongs().artist("Beatles").years(1965, 1967)`, and so on.
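A sketch of the chaining idea, filtering in memory for simplicity. `Song` and the filter methods are illustrative; a real implementation would translate the chain into an xmms2 collection query instead, and `allSongs()` is modeled here as just constructing a `QuerySet` over the full list:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// An immutable, chainable query object: each filter returns a new
// QuerySet, and any QuerySet can be iterated directly.
class Song {
    final String artist;
    final int year;
    Song(String artist, int year) { this.artist = artist; this.year = year; }
}

class QuerySet implements Iterable<Song> {
    private final List<Song> songs;
    QuerySet(List<Song> songs) { this.songs = songs; }

    QuerySet artist(String name) {
        List<Song> out = new ArrayList<>();
        for (Song s : songs) if (s.artist.equals(name)) out.add(s);
        return new QuerySet(out);
    }

    QuerySet years(int from, int to) {
        List<Song> out = new ArrayList<>();
        for (Song s : songs) if (s.year >= from && s.year <= to) out.add(s);
        return new QuerySet(out);
    }

    @Override public Iterator<Song> iterator() { return songs.iterator(); }
}
```

Because every step returns a fresh `QuerySet`, partial chains can be reused and handed around freely.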
Hope this helps as a starting place.
|
@[Staale](https://stackoverflow.com/questions/37041/exposing-a-remote-interface-or-object-model#37175)
It is certainly possible, but as you note, that would make it blocking (at home, something like 10 seconds due to sleeping disks), meaning I can't use it to update the UI directly.
I could use the iterator to create a copy of the result in a separate thread and then send that to the UI, but while the iterator solution by itself is rather elegant, it won't fit in very well. In the end, something implementing [IStructuredContentProvider](http://help.eclipse.org/stable/nftopic/org.eclipse.platform.doc.isv/reference/api/org/eclipse/jface/viewers/IStructuredContentProvider.html) needs to return an array of all the objects in order to display it in a [TableViewer](http://help.eclipse.org/stable/nftopic/org.eclipse.platform.doc.isv/reference/api/org/eclipse/jface/viewers/TableViewer.html), so if I can get away with getting something like that out of a callback... :)
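The "copy the result in a separate thread" variant could look something like this sketch: drain the (possibly slow) iterable on a worker thread, then hand the finished array to a callback. In the Eclipse case the callback body would be wrapped in `Display.asyncExec(...)` before touching the `TableViewer`; the plain-Java version here is just the threading skeleton:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Drain an Iterable off the UI thread, then deliver the complete
// Object[] (the shape IStructuredContentProvider wants) via callback.
class ResultLoader {
    static void loadInBackground(Iterable<?> slowResult,
                                 Consumer<Object[]> callback) {
        Thread worker = new Thread(() -> {
            List<Object> copy = new ArrayList<>();
            for (Object o : slowResult) copy.add(o);   // may block on I/O
            callback.accept(copy.toArray());
        });
        worker.setDaemon(true);
        worker.start();
    }
}
```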
I'll give it some more thought. I might just be able to work out something. It does give the code a nice look.
|
Exposing a remote interface or object model
|
[
"",
"java",
"eclipse",
"osgi",
"oop",
""
] |