| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I would like to monitor the following system information in Java:
* Current CPU usage (percent)
* Available memory\* (free/total)
* Available disk space (free/total)
\*Note that I mean overall memory available to the whole system, not just the JVM.
I'm looking for a cross-platform solution (Linux, Mac, and Windows) that doesn't rely on my own code calling external programs or using JNI. Although these are viable options, I would prefer not to maintain OS-specific code myself if someone already has a better solution.
If there's a free library out there that does this in a reliable, cross-platform manner, that would be great (even if it makes external calls or uses native code itself).
Any suggestions are much appreciated.
To clarify, I would like to get the current CPU usage for the whole system, not just the Java process(es).
The SIGAR API provides all the functionality I'm looking for in one package, so it's the best answer to my question so far. However, due to it being licensed under the GPL, I cannot use it for my original purpose (a closed-source, commercial product). It's possible that Hyperic may license SIGAR for commercial use, but I haven't looked into it. For my GPL projects, I will definitely consider SIGAR in the future.
For my current needs, I'm leaning towards the following:
* For CPU usage, `OperatingSystemMXBean.getSystemLoadAverage() / OperatingSystemMXBean.getAvailableProcessors()` (load average per cpu)
* For memory, `OperatingSystemMXBean.getTotalPhysicalMemorySize()` and `OperatingSystemMXBean.getFreePhysicalMemorySize()`
* For disk space, `File.getTotalSpace()` and `File.getUsableSpace()`
Limitations:
The `getSystemLoadAverage()` and disk space querying methods are only available under Java 6. Also, some JMX functionality may not be available on all platforms (e.g. it's been reported that `getSystemLoadAverage()` returns -1 on Windows).
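A minimal sketch of the plan above (illustrative only: the class name is mine, the memory calls require a cast to the Sun/Oracle-specific `com.sun.management` subinterface, which may fail on other JVMs, and `getSystemLoadAverage()` may return -1 on some platforms):

```java
import java.io.File;
import java.lang.management.ManagementFactory;

public class SystemStats {
    public static void main(String[] args) {
        // CPU: system load average divided by processor count (may be -1 on Windows).
        java.lang.management.OperatingSystemMXBean os =
                ManagementFactory.getOperatingSystemMXBean();
        double loadPerCpu = os.getSystemLoadAverage() / os.getAvailableProcessors();
        System.out.println("Load per CPU: " + loadPerCpu);

        // Memory: needs the com.sun.management subinterface (Sun/Oracle JVMs only).
        com.sun.management.OperatingSystemMXBean sunOs =
                (com.sun.management.OperatingSystemMXBean) os;
        System.out.println("Free/total physical memory: "
                + sunOs.getFreePhysicalMemorySize() + "/"
                + sunOs.getTotalPhysicalMemorySize());

        // Disk: free/total space of the partition holding the working directory.
        File root = new File(".");
        System.out.println("Usable/total disk space: "
                + root.getUsableSpace() + "/" + root.getTotalSpace());
    }
}
```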
Although originally licensed under GPL, it [has been changed](http://forums.hyperic.com/jiveforums/thread.jspa?threadID=11426&tstart=0) to [Apache 2.0](http://support.hyperic.com/display/SIGAR/Home#Home-license), which can generally be used for closed source, commercial products.
|
Along the lines of what I mentioned [in this post](https://stackoverflow.com/questions/25552/using-java-to-get-os-level-system-information#25583), I recommend you use the [SIGAR API](https://github.com/hyperic/sigar). I use the SIGAR API in one of my own applications and it is great. You'll find it is stable, well supported, and full of useful examples. It is open source with an Apache 2.0 license (originally GPL 2). Check it out. I have a feeling it will meet your needs.
Using Java and the Sigar API you can get Memory, CPU, Disk, Load-Average, Network Interface info and metrics, Process Table information, Route info, etc.
|
The following supposedly gets you CPU and RAM. See [ManagementFactory](http://java.sun.com/j2se/1.5.0/docs/api/java/lang/management/ManagementFactory.html) for more details.
```
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
private static void printUsage() {
    OperatingSystemMXBean operatingSystemMXBean = ManagementFactory.getOperatingSystemMXBean();
    for (Method method : operatingSystemMXBean.getClass().getDeclaredMethods()) {
        method.setAccessible(true);
        if (method.getName().startsWith("get")
                && Modifier.isPublic(method.getModifiers())) {
            Object value;
            try {
                value = method.invoke(operatingSystemMXBean);
            } catch (Exception e) {
                value = e;
            }
            System.out.println(method.getName() + " = " + value);
        }
    }
}
```
|
How do I monitor the computer's CPU, memory, and disk usage in Java?
|
[
"",
"java",
"memory",
"cross-platform",
"cpu",
"diskspace",
""
] |
If I was going to start an open source project using Python what version should I use to ensure that the vast majority of users can use it on their system?
I'm the kind of person who quickly jumps to the next version (which I'll do when Python 3 comes out) but many people may be more conservative if their current version seems to be working fine. What version would hit the sweet spot but still allow me to enjoy the newest and coolest language enhancements?
|
As Python is in a transition phase towards Python 3, which breaks backward compatibility, I don't think it is a good idea to go Python 3 only. Based on the [time line](http://www.python.org/dev/peps/pep-3000/#timeline) there will be at least one or two further releases of the 2.x series after 2.6/3.0 in October.
Besides Python 3 not yet being available on your target platforms, it will take some time until the important external Python libraries are ported to and usable on Python 3.
So, as Matthew suggests, staying at 2.4/2.5 and keeping the [transition](http://www.python.org/dev/peps/pep-3000/#compatibility-and-transition) plan to Python 3 in mind is a solid choice.
|
I've not seen a system with less than 2.3 installed for some time. Mostly, 2.4+ is installed by default on the OSes I use now; 2.3 is just on an older Solaris machine. Linux distros tend to have 2.4+, as does OS X.
IIRC, 2.4 has a lot of the features 2.5 does, but some must be enabled explicitly with a named `__future__` import (`from __future__ import *` is a syntax error; each feature has to be named), for example:
```
from __future__ import division
```
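For a project targeting 2.4+, a common companion pattern is to fail fast with a clear message on older interpreters. A sketch (the minimum version tuple is whatever you settle on, not something from the answer above):

```python
import sys

# Refuse to run on interpreters older than the chosen minimum version.
MINIMUM_VERSION = (2, 4)

if sys.version_info < MINIMUM_VERSION:
    sys.exit("This project requires Python %d.%d or newer" % MINIMUM_VERSION)
```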
|
Which Version of Python to Use for Maximum Compatibility
|
[
"",
"python",
"compatibility",
""
] |
Can I print out a URL, `/admin/manage/products/add`, of a certain view in a template?
Here is the rule I want to create a link for:
```
(r'^manage/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}),
```
I would like to have `/manage/products/add` in a template without hardcoding it. How can I do this?
**Edit:** I am not using the default admin (well, I am, but it is at another URL); this is my own.
|
You can use `get_absolute_url`, but that will only work for a particular object. Since your object hasn't been created yet, it won't work in this case.
You want to use [named URL patterns](https://docs.djangoproject.com/en/1.2/topics/http/urls/#naming-url-patterns). Here's a quick intro:
Change the line in your urls.py to:
```
(r'^manage/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}, "create-product"),
```
Then, in your template you use this to display the URL:
```
{% url create-product %}
```
If you're using Django 1.5 or higher you need this:
```
{% url 'create-product' %}
```
You can do some more powerful things with named URL patterns; they're very handy. Note that, at the time of writing, they were only in the development version (and also 1.0).
|
If you use [named url patterns](https://docs.djangoproject.com/en/1.2/topics/http/urls/#naming-url-patterns) you can do the following in your template:
```
{% url create_object %}
```
|
Django: Print url of view without hardcoding the url
|
[
"",
"python",
"django",
""
] |
I'm creating a .NET custom control and it should be able to load multiple text files. I have a public property named `ListFiles` with these attributes set:
```
[Browsable(true), Category("Configuration"), Description("List of Files to Load")]
public string ListFiles
{
    get { return m_oList; }
    set { m_oList = value; }
}
```
Depending upon the type of the property (string, string[], List, ...), the property grid will allow the user to enter some data. My goal would be to have a filtered OpenFileDialog in the property grid of my component that would let the user choose multiple files and return the selection as an array of strings (or something else...).
Sooo... Here's my question : **How can I get an OpenFileDialog in a custom control's property grid?**
Thanks a lot!
|
You can do this by adding a [UITypeEditor](http://msdn.microsoft.com/en-us/library/ms171839.aspx).
[Here is an example](http://web.archive.org/web/20090218231316/http://www.winterdom.com/weblog/2006/08/23/ACustomUITypeEditorForActivityProperties.aspx) of a UITypeEditor that gives you the OpenFileDialog for choosing a filename.
|
You can use a built-in `UITypeEditor` called [FileNameEditor](http://msdn.microsoft.com/en-us/library/system.windows.forms.design.filenameeditor.aspx):
```
[EditorAttribute(typeof(System.Windows.Forms.Design.FileNameEditor), typeof(System.Drawing.Design.UITypeEditor))]
public string SomeFilePath
{
    get;
    set;
}
```
|
How can I get an OpenFileDialog in a custom control's property grid?
|
[
"",
"c#",
".net",
"controls",
"propertygrid",
""
] |
Does anyone know a way to auto-generate database tables for a given class? I'm not looking for an entire persistence layer - I already have a data access solution I'm using, but I suddenly have to store a lot of information from a large number of classes and I really don't want to have to create all these tables by hand. For example, given the following class:
```
class Foo
{
    private string property1;
    public string Property1
    {
        get { return property1; }
        set { property1 = value; }
    }
    private int property2;
    public int Property2
    {
        get { return property2; }
        set { property2 = value; }
    }
}
```
I'd expect the following SQL:
```
CREATE TABLE Foo
(
    Property1 VARCHAR(500),
    Property2 INT
)
```
I'm also wondering how you could handle complex types. For example, in the previously cited class, if we changed that to be :
```
class Foo
{
    private string property1;
    public string Property1
    {
        get { return property1; }
        set { property1 = value; }
    }
    private System.Management.ManagementObject property2;
    public System.Management.ManagementObject Property2
    {
        get { return property2; }
        set { property2 = value; }
    }
}
```
How could I handle this?
I've looked at trying to auto-generate the database scripts by myself using reflection to enumerate through each class' properties, but it's clunky and the complex data types have me stumped.
|
It's really late, and I only spent about 10 minutes on this, so it's extremely sloppy; however, it does work and will give you a good jumping-off point:
```
using System;
using System.Collections.Generic;
using System.Text;
using System.Reflection;

namespace TableGenerator
{
    class Program
    {
        static void Main(string[] args)
        {
            List<TableClass> tables = new List<TableClass>();

            // Pass assembly name via argument
            Assembly a = Assembly.LoadFile(args[0]);
            Type[] types = a.GetTypes();

            // Get Types in the assembly.
            foreach (Type t in types)
            {
                TableClass tc = new TableClass(t);
                tables.Add(tc);
            }

            // Create SQL for each table
            foreach (TableClass table in tables)
            {
                Console.WriteLine(table.CreateTableScript());
                Console.WriteLine();
            }

            // Total hacked way to find FK relationships! Too lazy to fix right now
            foreach (TableClass table in tables)
            {
                foreach (KeyValuePair<String, Type> field in table.Fields)
                {
                    foreach (TableClass t2 in tables)
                    {
                        if (field.Value.Name == t2.ClassName)
                        {
                            // We have a FK relationship!
                            Console.WriteLine("GO");
                            Console.WriteLine("ALTER TABLE " + table.ClassName + " WITH NOCHECK");
                            Console.WriteLine("ADD CONSTRAINT FK_" + field.Key + " FOREIGN KEY (" + field.Key + ") REFERENCES " + t2.ClassName + "(ID)");
                            Console.WriteLine("GO");
                        }
                    }
                }
            }
        }
    }

    public class TableClass
    {
        private List<KeyValuePair<String, Type>> _fieldInfo = new List<KeyValuePair<String, Type>>();
        private string _className = String.Empty;

        private Dictionary<Type, String> dataMapper
        {
            get
            {
                // Add the rest of your CLR Types to SQL Types mapping here
                Dictionary<Type, String> dataMapper = new Dictionary<Type, string>();
                dataMapper.Add(typeof(int), "BIGINT");
                dataMapper.Add(typeof(string), "NVARCHAR(500)");
                dataMapper.Add(typeof(bool), "BIT");
                dataMapper.Add(typeof(DateTime), "DATETIME");
                dataMapper.Add(typeof(float), "FLOAT");
                dataMapper.Add(typeof(decimal), "DECIMAL(18,0)");
                dataMapper.Add(typeof(Guid), "UNIQUEIDENTIFIER");
                return dataMapper;
            }
        }

        public List<KeyValuePair<String, Type>> Fields
        {
            get { return this._fieldInfo; }
            set { this._fieldInfo = value; }
        }

        public string ClassName
        {
            get { return this._className; }
            set { this._className = value; }
        }

        public TableClass(Type t)
        {
            this._className = t.Name;
            foreach (PropertyInfo p in t.GetProperties())
            {
                KeyValuePair<String, Type> field = new KeyValuePair<String, Type>(p.Name, p.PropertyType);
                this.Fields.Add(field);
            }
        }

        public string CreateTableScript()
        {
            System.Text.StringBuilder script = new StringBuilder();
            script.AppendLine("CREATE TABLE " + this.ClassName);
            script.AppendLine("(");
            script.AppendLine("\t ID BIGINT,");
            for (int i = 0; i < this.Fields.Count; i++)
            {
                KeyValuePair<String, Type> field = this.Fields[i];
                if (dataMapper.ContainsKey(field.Value))
                {
                    script.Append("\t " + field.Key + " " + dataMapper[field.Value]);
                }
                else
                {
                    // Complex type? Store a BIGINT key referencing the other table.
                    script.Append("\t " + field.Key + " BIGINT");
                }
                if (i != this.Fields.Count - 1)
                {
                    script.Append(",");
                }
                script.Append(Environment.NewLine);
            }
            script.AppendLine(")");
            return script.ToString();
        }
    }
}
```
I put these classes in an assembly to test it:
```
public class FakeDataClass
{
    public int AnInt { get; set; }
    public string AString { get; set; }
    public float AFloat { get; set; }
    public FKClass AFKReference { get; set; }
}

public class FKClass
{
    public int AFKInt { get; set; }
}
```
And it generated the following SQL:
```
CREATE TABLE FakeDataClass
(
    ID BIGINT,
    AnInt BIGINT,
    AString NVARCHAR(500),
    AFloat FLOAT,
    AFKReference BIGINT
)

CREATE TABLE FKClass
(
    ID BIGINT,
    AFKInt BIGINT
)

GO
ALTER TABLE FakeDataClass WITH NOCHECK
ADD CONSTRAINT FK_AFKReference FOREIGN KEY (AFKReference) REFERENCES FKClass(ID)
GO
```
Some further thoughts... I'd consider adding an attribute such as `[SqlTable]` to your classes, so that it only generates tables for the classes you want. Also, this can be cleaned up a ton: bugs fixed, optimizations made (the FK checker is a joke), etc. It's just to get you started.
|
@Jonathan Holland
Wow, I think that's the most raw work I've ever seen put into a StackOverflow post. Well done. **However**, instead of constructing DDL statements as strings, you should definitely use the [SQL Server Management Objects](http://msdn.microsoft.com/en-us/library/ms162169.aspx) classes introduced with SQL 2005.
David Hayden has a post entitled [Create Table in SQL Server 2005 Using C# and SQL Server Management Objects (SMO) - Code Generation](http://www.davidhayden.com/blog/dave/archive/2006/01/27/2775.aspx) that walks through how to create a table using SMO. The strongly-typed objects make it a breeze with methods like:
```
// Create new table, called TestTable
Table newTable = new Table(db, "TestTable");
```
and
```
// Create a PK Index for the table
Index index = new Index(newTable, "PK_TestTable");
index.IndexKeyType = IndexKeyType.DriPrimaryKey;
```
VanOrman, if you're using SQL 2005, definitely make SMO part of your solution.
|
How can I generate database tables from C# classes?
|
[
"",
"c#",
"code-generation",
"data-access",
""
] |
I work in a Windows environment and would prefer to deploy code to IIS. At the same time I would like to code in Python.
Having read that IIS can run fastCGI application, I went to the IIS site where it describes in detail how to get PHP up and running but not much about anything else.
Does anyone have experience getting a **Python** framework **running under IIS** using something other than plain old CGI?
If so can you explain to direct me to some instructions on setting this up?
|
There shouldn't be any need to use FastCGI. There exists a [ISAPI extension for WSGI](https://github.com/hexdump42/isapi-wsgi).
|
Microsoft itself develops [wfastcgi](https://pypi.python.org/pypi/wfastcgi) ([source code](https://github.com/Microsoft/PTVS/tree/master/Python/Product/WFastCgi)) to host Python code on IIS.
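Whichever gateway you pick (the ISAPI-WSGI extension or wfastcgi), the code you hand it is just a standard WSGI callable. A minimal sketch (the greeting text and module layout are mine, not from either project):

```python
def application(environ, start_response):
    """Minimal WSGI app; any WSGI gateway hosting Python on IIS can serve this."""
    body = b"Hello from Python behind IIS"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

`application` is the conventional WSGI entry-point name; the gateway's configuration is pointed at the module that exposes it.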
|
How do you set up a Python WSGI server under IIS?
|
[
"",
"python",
"iis",
"deployment",
"windows-server",
"iis-modules",
""
] |
I have a class to parse a matrix that keeps the result in an array member:
```
class Parser
{
    ...
    double matrix_[4][4];
};
```
The user of this class needs to call an API function (as in, a function I have no control over, so I can't just change its interface to make things work more easily) that looks like this:
```
void api_func(const double matrix[4][4]);
```
The only way I have come up with for the caller to pass the array result to the function is by making the member public:
```
void myfunc()
{
    Parser parser;
    ...
    api_func(parser.matrix_);
}
```
Is this the only way to do things? I'm astounded by how inflexible multidimensional arrays declared like this are. I thought `matrix_` would essentially be the same as a `double**` and I could cast (safely) between the two. As it turns out, I can't even find an *unsafe* way to cast between the things. Say I add an accessor to the `Parser` class:
```
void* Parser::getMatrix()
{
    return (void*)matrix_;
}
```
This will compile, but I can't use it, because there doesn't seem to be a way to cast back to the weirdo array type:
```
// A smorgasbord of syntax errors...
api_func((double[][])parser.getMatrix());
api_func((double[4][4])parser.getMatrix());
api_func((double**)parser.getMatrix()); // cast works but it's to the wrong type
```
The error is:
> error C2440: 'type cast' : cannot convert from 'void \*' to 'const double [4][4]'
...with an intriguing addendum:
> There are no conversions to array types, although there are conversions to references or pointers to arrays
I can't determine how to cast to a reference or pointer to an array either, though it probably won't help me here.
To be sure, at this point the matter is purely academic, as the `void*` casts are hardly cleaner than a single class member left public!
|
Here's a nice, clean way:
```
class Parser
{
public:
    typedef double matrix[4][4];
    // ...
    const matrix& getMatrix() const
    {
        return matrix_;
    }
    // ...
private:
    matrix matrix_;
};
```
Now you're working with a descriptive type name rather than an array, but since it's a `typedef` the compiler will still allow passing it to the unchangeable API function that takes the base type.
|
Try this. It compiles cleanly on gcc 4.1.3:
```
typedef double FourSquare[4][4];

class Parser
{
private:
    double matrix_[4][4];

public:
    Parser()
    {
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                matrix_[i][j] = i * j;
    }

    const FourSquare& GetMatrix()
    {
        return matrix_;
    }
};

void api_func(const double matrix[4][4])
{
}

int main(int argc, char** argv)
{
    Parser parser;
    api_func(parser.GetMatrix());
    return 0;
}
```
|
How to deal with arrays (declared on the stack) in C++?
|
[
"",
"c++",
"arrays",
""
] |
Assume I'm trying to automate the installation of something on Windows, and I want to test whether another installation is in progress before attempting the install. I don't have control over the installer and have to do this in the automation framework. Is there a better way to do this (some Win32 API?) than just testing whether msiexec is running?
[Update 2]
Improved the previous code I had been using to just access the mutex directly, this is a lot more reliable:
```
using System.Threading;

[...]

/// <summary>
/// Wait (up to a timeout) for the MSI installer service to become free.
/// </summary>
/// <returns>
/// Returns true for a successful wait, when the installer service has become free.
/// Returns false when waiting for the installer service has exceeded the timeout.
/// </returns>
public static bool WaitForInstallerServiceToBeFree(TimeSpan maxWaitTime)
{
    // The _MSIExecute mutex is used by the MSI installer service to serialize installations
    // and prevent multiple MSI based installations happening at the same time.
    // For more info: http://msdn.microsoft.com/en-us/library/aa372909(VS.85).aspx
    const string installerServiceMutexName = "Global\\_MSIExecute";

    try
    {
        Mutex MSIExecuteMutex = Mutex.OpenExisting(installerServiceMutexName,
            System.Security.AccessControl.MutexRights.Synchronize | System.Security.AccessControl.MutexRights.Modify);
        bool waitSuccess = MSIExecuteMutex.WaitOne(maxWaitTime, false);
        MSIExecuteMutex.ReleaseMutex();
        return waitSuccess;
    }
    catch (WaitHandleCannotBeOpenedException)
    {
        // Mutex doesn't exist, do nothing
    }
    catch (ObjectDisposedException)
    {
        // Mutex was disposed between opening it and attempting to wait on it, do nothing
    }
    return true;
}
```
|
See the description of the [\_MSIExecute Mutex](http://msdn.microsoft.com/en-us/library/aa372909(VS.85).aspx) on MSDN.
|
I was getting an unhandled exception using the code above, so I cross-referenced this article with this [one](https://stackoverflow.com/questions/17070583/releasing-a-mutex).
Here's my updated code:
```
/// <summary>
/// Wait (up to a timeout) for the MSI installer service to become free.
/// </summary>
/// <returns>
/// Returns true for a successful wait, when the installer service has become free.
/// Returns false when waiting for the installer service has exceeded the timeout.
/// </returns>
public static bool IsMsiExecFree(TimeSpan maxWaitTime)
{
    // The _MSIExecute mutex is used by the MSI installer service to serialize installations
    // and prevent multiple MSI based installations happening at the same time.
    // For more info: http://msdn.microsoft.com/en-us/library/aa372909(VS.85).aspx
    const string installerServiceMutexName = "Global\\_MSIExecute";
    Mutex MSIExecuteMutex = null;
    var isMsiExecFree = false;

    try
    {
        MSIExecuteMutex = Mutex.OpenExisting(installerServiceMutexName,
            System.Security.AccessControl.MutexRights.Synchronize);
        isMsiExecFree = MSIExecuteMutex.WaitOne(maxWaitTime, false);
    }
    catch (WaitHandleCannotBeOpenedException)
    {
        // Mutex doesn't exist, do nothing
        isMsiExecFree = true;
    }
    catch (ObjectDisposedException)
    {
        // Mutex was disposed between opening it and attempting to wait on it, do nothing
        isMsiExecFree = true;
    }
    finally
    {
        if (MSIExecuteMutex != null && isMsiExecFree)
            MSIExecuteMutex.ReleaseMutex();
    }

    return isMsiExecFree;
}
```
|
How do I test if another installation is already in progress?
|
[
"",
"c#",
"windows",
"windows-installer",
""
] |
What is the command to list all triggers in a MySQL database?
|
The command for listing all triggers is:
```
show triggers;
```
or you can access the `INFORMATION_SCHEMA` table directly by:
```
select trigger_schema, trigger_name, action_statement
from information_schema.triggers
```
* You can do this from version 5.0.10 onwards.
* More information about the [`TRIGGERS` table is here](http://dev.mysql.com/doc/refman/5.0/en/triggers-table.html).
|
I hope the following code will give you more information.
```
select * from information_schema.triggers where
information_schema.triggers.trigger_schema like '%your_db_name%'
```
This will give you a total of 22 columns in **MySQL version 5.5.27** and above:
```
TRIGGER_CATALOG
TRIGGER_SCHEMA
TRIGGER_NAME
EVENT_MANIPULATION
EVENT_OBJECT_CATALOG
EVENT_OBJECT_SCHEMA
EVENT_OBJECT_TABLE
ACTION_ORDER
ACTION_CONDITION
ACTION_STATEMENT
ACTION_ORIENTATION
ACTION_TIMING
ACTION_REFERENCE_OLD_TABLE
ACTION_REFERENCE_NEW_TABLE
ACTION_REFERENCE_OLD_ROW
ACTION_REFERENCE_NEW_ROW
CREATED
SQL_MODE
DEFINER
CHARACTER_SET_CLIENT
COLLATION_CONNECTION
DATABASE_COLLATION
```
|
Show all triggers in a MySQL database
|
[
"",
"sql",
"mysql",
"database",
"triggers",
"sqlcommand",
""
] |
I have a Linq query that I want to call from multiple places:
```
var myData = from a in db.MyTable
             where a.MyValue == "A"
             select new
             {
                 a.Key,
                 a.MyValue
             };
```
How can I create a method, put this code in it, and then call it?
```
public ??? GetSomeData()
{
// my Linq query
}
```
|
`IQueryable` and `IEnumerable` both work. But you want to use a type-specific version, `IQueryable<T>` or `IEnumerable<T>`.
So you'll want to create a type to keep the data.
```
var myData = from a in db.MyTable
             where a.MyValue == "A"
             select new MyType
             {
                 Key = a.Key,
                 Value = a.MyValue
             };
```
|
[`IQueryable`](http://msdn.microsoft.com/en-us/library/system.linq.iqueryable.aspx)
So your method declaration would look like
```
public IQueryable GetSomeData()
```
|
How can I return an anonymous type from a method?
|
[
"",
"c#",
"linq",
"data-structures",
"parameter-passing",
""
] |
Conceptually, I would like to accomplish the following but have had trouble understand how to code it properly in C#:
```
SomeMethod { // Member of AClass{}
    DoSomething;
    Start WorkerMethod() from BClass in another thread;
    DoSomethingElse;
}
```
Then, when WorkerMethod() is complete, run this:
```
void SomeOtherMethod() // Also member of AClass{}
{ ... }
```
Can anyone please give an example of that?
|
The [BackgroundWorker](http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.aspx) class was added to .NET 2.0 for this exact purpose.
In a nutshell you do:
```
BackgroundWorker worker = new BackgroundWorker();
worker.DoWork += delegate { myBClass.DoHardWork(); };
worker.RunWorkerCompleted += new RunWorkerCompletedEventHandler(SomeOtherMethod);
worker.RunWorkerAsync();
```
You can also add fancy stuff like cancellation and progress reporting if you want :)
|
In .Net 2 the BackgroundWorker was introduced, this makes running async operations really easy:
```
BackgroundWorker bw = new BackgroundWorker { WorkerReportsProgress = true };

bw.DoWork += (sender, e) =>
{
    // what happens here must not touch the form,
    // as it's in a different thread
};

bw.ProgressChanged += (sender, e) =>
{
    // update progress bars here
};

bw.RunWorkerCompleted += (sender, e) =>
{
    // now you're back in the UI thread you can update the form
    // remember to dispose of bw now
};

bw.RunWorkerAsync();
```
In .Net 1 you have to use threads.
|
Delegating a task in and getting notified when it completes (in C#)
|
[
"",
"c#",
"delegates",
"notifications",
""
] |
I'm reading text from a flat file in C# and need to test whether certain values are dates. They could be in either YYYYMMDD or MM/DD/YY format. What is the simplest way to do this in .NET?
|
```
// requires: using System.Globalization;
string[] formats = { "yyyyMMdd", "MM/dd/yy" };
var result = DateTime.ParseExact(input, formats, CultureInfo.CurrentCulture, DateTimeStyles.None);
```
or
```
DateTime result;
string[] formats = { "yyyyMMdd", "MM/dd/yy" };
bool isDate = DateTime.TryParseExact(input, formats, CultureInfo.CurrentCulture, DateTimeStyles.None, out result);
```
More info in the MSDN documentation on [ParseExact](https://learn.microsoft.com/en-us/dotnet/api/system.datetime.parseexact) and [TryParseExact](https://learn.microsoft.com/en-us/dotnet/api/system.datetime.tryparseexact).
|
[DateTime.TryParse](http://msdn.microsoft.com/en-us/library/ch92fbc1.aspx) method
|
Convert a string to a date in .net
|
[
"",
"c#",
".net",
""
] |
**Edit**: Solved, there was a trigger with a loop on the table (read my own answer further below).
---
We have a simple delete statement that looks like this:
```
DELETE FROM tablename WHERE pk = 12345
```
This just hangs, no timeout, no nothing.
We've looked at the execution plan, and it consists of many lookups on related tables to ensure no foreign keys would trip up the delete, but we've verified that none of those other tables have any rows referring to that particular row.
There is no other user connected to the database at this time.
We've run DBCC CHECKDB against it, and it reports 0 errors.
Looking at the results of *`sp_who`* and *`sp_lock`* while the query is hanging, I notice that my spid has plenty of PAG and KEY locks, as well as the occasional TAB lock.
The table has 1,777,621 rows, and yes, `pk` is the primary key, so it's a single-row delete based on an index. There is no table scan in the execution plan, though I notice that it contains something that says *Table Spool (Eager Spool)*, but says estimated number of rows 1. Can this actually be a table scan in disguise? It only says it looks at the primary key column.
Tried DBCC DBREINDEX and UPDATE STATISTICS on the table. Both completed within reasonable time.
There is unfortunately a high number of indexes on this particular table. It is the core table in our system, with plenty of columns, and references, both outgoing and incoming. The exact number is 48 indexes + the primary key clustered index.
What else should we look at?
Note also that this table did not have this problem before; the problem occurred suddenly today. We also have many databases with the same table setup (copies of customer databases), and they behave as expected; it's just this one that is problematic.
|
One piece of information missing is the number of indexes on the table you are deleting the data from. As SQL Server uses the primary key as a pointer in every index, any change to the primary index requires updating every index. Though, unless we are talking about a high number, this shouldn't be an issue.
I am guessing, from your description, that this is a primary table in the database, referenced by many other tables in FK relationships. This would account for the large number of locks, as it checks the rest of the tables for references. And if you have cascading deletes turned on, this could lead to a delete in table A requiring checks several tables deep.
|
Try recreating the index on that table, and try regenerating the statistics.
[DBCC DBREINDEX](http://msdn.microsoft.com/en-us/library/aa258828(SQL.80).aspx)
[UPDATE STATISTICS](http://msdn.microsoft.com/en-us/library/aa260645(SQL.80).aspx)
|
DELETE Statement hangs on SQL Server for no apparent reason
|
[
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
I am attempting to devise a system for packing integer values greater than 65535 into a ushort. Let me explain.
We have a system which generates Int32 values using an IDENTITY column from SQL Server and are limited by an in-production client API that overflows our Int32 IDs to ushorts. Fortunately the client only has about 20 or so instances of things with these IDs - let's call them packages - at any given time and it only needs to have them unique amongst local siblings. The generally accepted solution is to translate our Int32 IDs to ushorts (and no I don't mean casting, I mean translating) before transmitting them to the client, however there are barbs with this approach:
1. Some IDs less than 65535 may still be in play on a given client at any time due to non-expiration.
2. We cannot have any ID collisions - that is if package ID 1 goes to the client, an algorithm that tracks how many times 65535 is removed from an Int32 to make a ushort when applied to 65536 would also result in 1 thus causing a collision.
3. We must be able to reconstruct the ushort into the Int32 upon return.
What we have available to solve this problem is a single signed byte field that is echoed to us and gives us 127 values to play with (really 117 because we're using 0-9 for something else). I'll refer to this as the "byte field" from here on out.
We have discussed three different translation routines:
1. Multiplicative: store in the byte field how many times we remove 65535 from our Int32 to make a ushort. This has collision problems as detailed above.
2. Serialized Session State: for each client, generate a session ID based on facts about that client. Then store a 1:1 translation table starting from 1 up to the number of packages delivered so when the client accesses our server again the inventory of packages can be translated back to their known database IDs. This has overhead problems since we'd be backing serialized session state to a database and we want to support hundreds to thousands of transactions a second.
3. Varied algorithmic approach, where the byte field is an ID of a transformative algorithm that takes an Int32 and transforms it into a ushort. Obviously many of these are going to be simple multiplicative (to increase the ceiling of IDs we can transform), but some will have to be multiplicative with a smaller boundary (like 32768), with a number added to or subtracted from the result to get a number that can be guaranteed unique amongst siblings. This approach is processor intensive but should allow us to avoid collisions while remaining scalable (though with this approach we have a limited ceiling that won't be reached before the ushort problem goes away on its own due to upgrades).
So my question is: is there a better way than my approaches above, and if not, what should I be looking for in terms of algorithms (for approach #3) to generate a number between 1-65535 when a given number is greater than 0 and must not be a one way hash?
Clarification: it's not that the ushort ceiling is the greatest problem, it's that the client API uses a ushort, so I cannot combine the byte field on the client to get bigger values (the client API is non-upgradeable but will eventually phase out of existence).
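For concreteness, the multiplicative scheme and its collision problem can be sketched in a few lines (an illustrative Python sketch, not the production C# code; it uses 2^16 as the divisor so the remainder always fits a ushort):

```python
CEILING = 117  # usable byte-field values, per the question (0-9 reserved)

def encode(n: int) -> tuple[int, int]:
    """Store in the byte field how many times 2**16 was removed."""
    byte_field, ushort = divmod(n, 2 ** 16)
    if byte_field >= CEILING:
        raise OverflowError("Int32 exceeds what the byte field can carry")
    return byte_field, ushort

def decode(byte_field: int, ushort: int) -> int:
    """Reconstruct the Int32 upon return (requirement 3)."""
    return byte_field * 2 ** 16 + ushort

# The server can always round-trip, because the byte field is echoed back:
assert decode(*encode(70000)) == 70000

# ...but the ushort alone is all the client sees, and it collides
# (requirement 2): 1 and 65537 both present ushort 1 to the client.
assert encode(1)[1] == encode(65537)[1] == 1
```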
|
Regarding approach 2:
Your second approach is pretty much how NAT works. Every TCP/UDP client on the local network has up to 65535 ports in use (port 0 is reserved) and a private IP. The router knows only a single public IP. Since two clients may both use source port 300, it cannot simply replace the private IP with a public one; that would cause collisions. So it replaces the IP and "translates" the port (NAT: Network Address **Translation**). On return, it translates the port back and swaps the public IP for the private one again before forwarding the packet. You'd be doing nothing other than that.

Routers keep that information in memory, and they are not too slow when doing NAT (companies with hundreds of computers are sometimes NATed to the Internet and the slowdown is hardly noticeable in most cases). You say you want up to a thousand transactions a second, but how many clients will there be? That mainly determines the amount of memory needed to back the mappings. If there are not too many clients, you could keep the mapping in a sorted in-memory table; in that case speed will be the smallest problem (the table growing too big and the server running out of memory is the bigger one).
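A minimal per-client translation table in the spirit of this NAT analogy might look like the following. This is an illustrative Python sketch; the class and method names are mine, and a real deployment would back the table with shared storage and expire stale entries.

```python
class IdTranslator:
    """Per-client NAT-style table: dense ushort slots <-> sparse Int32 IDs."""

    USHORT_MAX = 0xFFFF

    def __init__(self):
        self._to_short = {}   # Int32 database ID -> ushort slot
        self._to_long = {}    # ushort slot -> Int32 database ID
        self._next_slot = 1   # slot 0 left unused, like port 0 in NAT

    def outbound(self, db_id: int) -> int:
        """Translate a database ID to the ushort sent to the client."""
        if db_id in self._to_short:
            return self._to_short[db_id]
        if self._next_slot > self.USHORT_MAX:
            raise RuntimeError("client table full; expire old entries")
        slot = self._next_slot
        self._next_slot += 1
        self._to_short[db_id] = slot
        self._to_long[slot] = db_id
        return slot

    def inbound(self, slot: int) -> int:
        """Translate a ushort echoed by the client back to its Int32."""
        return self._to_long[slot]
```

With only ~20 live packages per client, each table stays tiny; the open question is, as the answer notes, where the tables live and how fast that storage is.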
What is a bit unclear to me is that you once say
> Fortunately the client only has about
> 20 or so instances of things with
> these IDs - let's call them packages -
> at any given time and it only needs to
> have them unique amongst local
> siblings.
but then you say
> Some IDs less than 65535 may still be
> in play on a given client at any time
> due to non-expiration.
I guess what you probably meant by the second statement is that if a client requests ID 65536, it might still hold IDs below 65535, and these can be as low as (let's say) 20. It's not that the client processes IDs in a straight order, right? So you cannot say, just because it now requested 65536, it may have some smaller values, but certainly not in the range 1-1000, correct? It might actually keep a reference to 20, 90, 2005 and 41238 and still go over 65535 - that's what you meant?
I personally like your second approach more than the third one, as it is easier to avoid a collision in any case and translating the number back is a plain, simple operation. I also doubt that your third approach can work in the long run. Okay, you might have a byte to store how often you subtracted 2^16 from the number, but you can only subtract it 117 times at most, which caps the largest representable number at 117 \* 2^16. What will you do when numbers go above that? Use a different algorithm that does not subtract, but does what? Divide? Shift bits? In that case you lose granularity, which means the algorithm can no longer **hit** every possible number (it will make large jumps). If it were so easy to apply a magic translation function to 32 bits to make 16 bits of it (+ one extra byte) and then just transform it back, every compression method in this world would use it, as it could always compress any 32-bit number down to 24 bits (16 bit + one byte). That would be magic. It is not possible to pack 32 bits into 24 bits and also pack in all the logic for how to transform it back. You will need some external storage, which brings us back to your 2nd approach. This is the only approach that will work, and it will work for every number in the 32-bit range.
|
I can think of a few other options:
Are there globally fewer than 65536 entries in the database? If so, then you could maintain a mapping table that's not associated with session state, but is a persisted part of the application.
Are the majority of entries at indexes less than, say 50,000? If that's the case you could map such values directly, and use a map associated with the session for the remaining ones.
If persisting such session data is an issue and the number of clients is reasonably small, you could enable client/session affinity and maintain the map local to the server.
If it's not a web application, you could maintain the map on the client itself.
I don't see any algorithmic way that would avoid collisions - I suspect you could always come up with examples that would collide.
|
Translating Int32 into ushort and back again
|
[
"",
"c#",
"overflow",
"int32",
"ushort",
""
] |
We are currently using Apache 2.2.3 and Tomcat 5 (Embedded in JBoss 4.2.2) using `mod_proxy_jk` as the connector.
Can someone shed some light on the the correct way to calculate / configure the values below (as well as anything else that may be relevant). Both Apache and Tomcat are running on separate machines and have copious amounts of ram (4gb each).
Relevant server.xml portions:
```
<Connector port="8009"
address="${jboss.bind.address}"
protocol="AJP/1.3"
emptySessionPath="true"
enableLookups="false"
redirectPort="8443"
maxThreads="320"
connectionTimeout="45000"
/>
```
Relevant httpd.conf portions:
```
<IfModule prefork.c>
StartServers 8
MinSpareServers 5
MaxSpareServers 20
ServerLimit 256
MaxClients 256
MaxRequestsPerChild 0
</IfModule>
```
|
You should consider the workload the servers might get.
The most important factor might be the number of simultaneously connected clients at peak times. Try to determine it and tune your settings in a way where:
* there are enough processing threads in both Apache and Tomcat that they don't need to spawn new threads when the server is heavily loaded
* there are not way more processing threads in the servers than needed as they would waste resources.
With this kind of setup you can minimize the internal maintenance overhead of the servers, that could help a lot, especially when your load is sporadic.
For example consider an application where you have ~300 new requests/second. Each request requires on average 2.5 seconds to serve. It means that at any given time you have ~750 requests that need to be handled simultaneously. In this situation you probably want to tune your servers so that they have ~750 processing threads at startup and you might want to add something like ~1000 processing threads at maximum to handle extremely high loads.
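The arithmetic in that example is Little's Law (concurrent requests = arrival rate × average service time); a quick sanity check in Python, using the figures above:

```python
import math

def concurrent_requests(arrivals_per_sec: float, avg_service_sec: float) -> int:
    """Little's Law, L = lambda * W, rounded up to whole threads."""
    return math.ceil(arrivals_per_sec * avg_service_sec)

# ~300 new requests/second, each taking ~2.5 s to serve:
assert concurrent_requests(300, 2.5) == 750
```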
Also consider exactly what you require a thread for. In the previous example each request was independent from the others; there was no session tracking used. In a more "web-ish" scenario you might have users logged in to your website, and depending on the software used, Apache and/or Tomcat might need to use the same thread to serve the requests that come in one session. In this case, you might need more threads. However, as far as I know Tomcat, at least, works with thread pools internally anyway, so you won't really need to consider this there.
|
### MaxClients
This is the fundamental cap of parallel client connections your apache should handle at once.
With prefork, only one request can be handled per process. Therefore the whole apache can process *at most* $MaxClients requests in the time it takes to handle a *single* request. Of course, this ideal maximum can only be reached if the application needs less than 1/$MaxClients resources per request.
If, for example, the application takes a second of cpu-time to answer a single request, setting MaxClients to four will limit your throughput to four requests per second: Each request uses up an apache connection and apache will only handle four at a time. But if the server has only two CPUs, not even this can be reached, because every wall-clock second only has two cpu seconds, but the requests would need four cpu seconds.
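The two ceilings described here (worker slots and CPU seconds) can be put into a small formula. This is a sketch of the reasoning only, not anything Apache computes itself:

```python
def max_throughput(max_clients: int, wall_sec_per_req: float,
                   cpus: int, cpu_sec_per_req: float) -> float:
    """Requests/second are capped by whichever runs out first:
    prefork worker slots, or CPU seconds per wall-clock second."""
    slot_limit = max_clients / wall_sec_per_req
    cpu_limit = cpus / cpu_sec_per_req
    return min(slot_limit, cpu_limit)

# The paragraph's example: MaxClients=4, 1 s of CPU per request, 2 CPUs.
assert max_throughput(4, 1.0, 2, 1.0) == 2.0  # CPU-bound, not slot-bound
```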
### MinSpareServers
This tells apache how many idle processes should hang around. The bigger this number the more burst load apache can swallow before needing to spawn extra processes, which is expensive and thus slows down the current request.
The correct setting of this depends on your workload. If you have pages with many sub-requests (pictures, iframes, javascript, css) then hitting a single page might use up many more processes for a short time.
### MaxSpareServers
Having too many unused apache processes hanging around just wastes memory, thus apache uses the MaxSpareServers number to limit the amount of spare processes it is holding in reserve for bursts of requests.
### MaxRequestsPerChild
This limits the number of requests a single process will handle throughout its lifetime. If you are very concerned about stability, you should put an actual limit here to continually recycle the apache processes to prevent resource leaks from affecting the system.
### StartServers
This is just the amount of processes apache starts by default. Set this to the usual amount of running apache processes to reduce warm-up time of your system. Even if you ignore this setting, apache will use the Min-/MaxSpareServers values to spawn new processes as required.
### More information
See also [the documentation for apache's multi-processing modules](http://httpd.apache.org/docs/2.0/mod/mpm_common.html).
|
Best practices for configuring Apache / Tomcat
|
[
"",
"java",
"apache",
"tomcat",
"jboss",
"mod-proxy",
""
] |
NOTE: I am not set on using VI, it is just the first thing that came to mind that might be able to do what I need. Feel free to suggest any other program.
I have a form with nearly 100 fields that I would like to auto-fill with PHP. I know how to do the autofill, but I would like to avoid manually adding the needed text to 100 fields.
Is there an automated way I can take the text:
```
<input name="riskRating" id="riskRating" type="text" />
```
and change it to:
```
<input name="riskRating" id="riskRating" type="text" value="<?php echo $data['riskRating']; ?>" />
```
Remember that I am wanting to do this to almost 100 fields. I am trying to avoid going to each field, pasting in the PHP code and changing the variable name manually.
I'm hoping some VI guru out there knows off the top of his/her head.
|
Taking some ideas from Zsolt Botykai and Mark Biek:
```
:%s:<input\(.* id="\([^"]*\)".*\) />:<input\1 value="<?php echo $data['\2']; ?>" />:g
```
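The same capture-and-reinsert idea can be sanity-checked outside vim. Here is an equivalent sketch using Python's `re` module (PCRE-style syntax rather than vim's escaped groups):

```python
import re

line = '<input name="riskRating" id="riskRating" type="text" />'

# Capture everything up to " />", grab the id attribute's value,
# and splice a PHP echo in as a value attribute.
result = re.sub(
    r'<input(.* id="([^"]*)".*) />',
    r'''<input\1 value="<?php echo $data['\2']; ?>" />''',
    line,
)

assert result == ('<input name="riskRating" id="riskRating" type="text"'
                  ' value="<?php echo $data[\'riskRating\']; ?>" />')
```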
|
`:%s:\(<input name="\([^"]\+\)" id="[^"]\+" type="text" \)/>:\1value="<?php echo $data ['\2']; ?>" />:gci`
That's one line.
HTH.
|
Can I use RegEx and VI (or something similar) to autofill some text for me?
|
[
"",
"php",
"html",
"vi",
""
] |
What configuration needs to be tweaked, and where does it live, in order to increase the maximum allowed POST size?
|
Apache Tomcat by default sets a limit on the maximum size of HTTP POST requests it accepts. In Tomcat 5, this limit is set to 2 MB. When you try to upload files larger than 2 MB, this error can occur.
The solution is to reconfigure Tomcat to accept larger POST requests, either by increasing the limit or by disabling it. This can be done by editing [TOMCAT\_DIR]/conf/server.xml. Set the Tomcat configuration parameter maxPostSize on the HTTP Connector to a larger value (in bytes) to increase the limit. Setting it to 0 will disable the size check. See the [Tomcat Configuration Reference](http://tomcat.apache.org/tomcat-5.5-doc/config/http.html) for more information.
|
For others who are coupling Apache HTTP Server with Tomcat (via mod\_jk): in that case, edit the Coyote/JK2 AJP 1.3 Connector the same way you would the standard connector (Coyote HTTP/1.1), because the AJP 1.3 Connector is where Tomcat receives the data.
```
<!-- Define a Coyote/JK2 AJP 1.3 Connector on port 8009 -->
<Connector port="8009"
enableLookups="false" redirectPort="8443" debug="0"
protocol="AJP/1.3" maxPostSize="0"/>
```
|
What causes java.lang.IllegalStateException: Post too large in tomcat / mod_jk
|
[
"",
"java",
"tomcat",
"mod-jk",
""
] |
How can you strip non-ASCII characters from a string? (in C#)
|
```
string s = "søme string";
s = Regex.Replace(s, @"[^\u0000-\u007F]+", string.Empty);
```
---
The `^` is the not operator. It tells the regex to find everything that doesn't match, instead of everything that does match. The `\u####-\u####` says which characters match. `\u0000-\u007F` is the equivalent of the first 128 characters in UTF-8 or Unicode, which are always the ASCII characters. So you match every non-ASCII character (because of the not) and do a replace on everything that matches.
(as explained in a comment by Gordon Tucker Dec 11, 2009 at 21:11)
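The same character class translates directly to other regex engines; for quick experimentation, an equivalent sketch in Python:

```python
import re

def strip_non_ascii(s: str) -> str:
    # Remove every run of code points outside U+0000..U+007F (ASCII).
    return re.sub(r'[^\u0000-\u007F]+', '', s)

assert strip_non_ascii('søme string') == 'sme string'
```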
|
Here is a pure .NET solution that doesn't use regular expressions:
```
string inputString = "Räksmörgås";
string asAscii = Encoding.ASCII.GetString(
Encoding.Convert(
Encoding.UTF8,
Encoding.GetEncoding(
Encoding.ASCII.EncodingName,
new EncoderReplacementFallback(string.Empty),
new DecoderExceptionFallback()
),
Encoding.UTF8.GetBytes(inputString)
)
);
```
It may look cumbersome, but it should be intuitive. It uses the .NET ASCII encoding to convert a string. UTF8 is used during the conversion because it can represent any of the original characters. It uses an EncoderReplacementFallback to convert any non-ASCII character to an empty string.
|
How can you strip non-ASCII characters from a string? (in C#)
|
[
"",
"c#",
"ascii",
""
] |
Consider the following 2 queries:
```
select tblA.a,tblA.b,tblA.c,tblA.d
from tblA
where tblA.a not in (select tblB.a from tblB)
select tblA.a,tblA.b,tblA.c,tblA.d
from tblA left outer join tblB
on tblA.a = tblB.a where tblB.a is null
```
Which will perform better? My assumption is that in general the join will be better except in cases where the subselect returns a very small result set.
|
RDBMSs "rewrite" queries to optimize them, so it depends on the system you're using, and I would guess they end up giving the same performance on most "good" databases.
I suggest picking the one that is clearer and easier to maintain, for my money, that's the first one. It's much easier to debug the subquery as it can be run independently to check for sanity.
|
Non-correlated subqueries are fine. You should go with whatever describes the data you want. As has been noted, this likely gets rewritten into the same plan, but that isn't guaranteed! What's more, if tables A and B are not 1:1 you will get duplicate tuples from the join query (whereas the IN clause behaves like an implicit DISTINCT), so it's always best to code what you want and actually think about the outcome.
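The duplicate-tuples point can be demonstrated with an in-memory SQLite database (an illustrative sketch; SQL Server behaves the same way for this positive-match case):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tblA (a INTEGER);
    CREATE TABLE tblB (a INTEGER);
    INSERT INTO tblA VALUES (1), (2);
    INSERT INTO tblB VALUES (1), (1);   -- two matching rows: not 1:1
""")

# Semi-join via IN: at most one output row per tblA row.
in_rows = conn.execute(
    "SELECT a FROM tblA WHERE a IN (SELECT a FROM tblB)").fetchall()

# Plain inner join: one output row per matching *pair*, so duplicates.
join_rows = conn.execute(
    "SELECT tblA.a FROM tblA JOIN tblB ON tblA.a = tblB.a").fetchall()

assert len(in_rows) == 1    # just (1,)
assert len(join_rows) == 2  # (1,) twice
```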
|
subselect vs outer join
|
[
"",
"sql",
"sql-server",
"database",
"performance",
""
] |
I've got a c# assembly which I'm invoking via COM from a Delphi (win32 native) application.
This works on all the machines I've tested it on, except one.
The problem is that the Delphi application gets "Class not registered" when trying to create the COM object.
Now, when I look in the registry under `HKEY_CLASSES_ROOT\DelphiToCSharp\CLSID`, the GUID listed there is not the same as the assembly Guid in AssemblyInfo.cs. It should be the same - it IS the same on all the other computers where it's installed.
I have tried `regasm /unregister delphitocsharp.dll`, and that removes the registry key. Then if I do `regasm delphitocsharp.dll`, the registry key returns, but the GUID is the same as before (ie. wrong), and Delphi still gets "Class not registered".
DelphiToCSharp.dll on the working machine is identical (verified with md5) to the version on the non-working machine.
All I can think of is that an old version of the dll was registered before, and there still exists some remnant of that file which is making regasm confused.
How can I fix or at least further diagnose this issue?
|
The GUID in AssemblyInfo becomes the "Type-Library" GUID and usually is not what you'd be looking for. I'm going to assume you're trying to access a class, and you need to define a Guid attribute and ComVisible for the class. For example:
```
[Guid("00001111-2222-3333-4444-555566667777"), ComVisible(true)]
public class MyCOMRegisteredClass
```
If you don't, then the class either a) won't be registered, or b) if you've defined COMVisible(true) at the assembly level, will be assigned a guid that .NET bakes up for you.
|
Maybe you have an old version of the assembly somewhere? Maybe in the GAC? Regasm is probably picking that up and using it.
|
Why does regasm.exe register my c# assembly with the wrong GUID?
|
[
"",
"c#",
".net",
"delphi",
"com",
"regasm",
""
] |
How can I extract the list of available SQL servers in an SQL server group? I'm planning to put that list in a combo box in VB.NET.
|
The only way I knew to do it was using the command line:
```
osql -L
```
But I found the article below, which seems to solve your specific goal of filling a combo box:
<http://www.sqldbatips.com/showarticle.asp?ID=45>
|
If you didn't want to be tied to SQL SMO, which is what [Ben's article](https://stackoverflow.com/a/147680/7444103) uses, you can do something like this to discover all SQL servers on your network:
```
Private Sub cmbServer_DropDown(ByVal sender As Object, ByVal e As System.EventArgs) Handles cmbServer.DropDown
    Dim oTable As Data.DataTable
    Dim lstServers As List(Of String)
    Try
        If cmbServer.Items.Count = 0 Then
            System.Windows.Forms.Cursor.Current = System.Windows.Forms.Cursors.WaitCursor
            oTable = System.Data.Sql.SqlDataSourceEnumerator.Instance.GetDataSources
            For Each oRow As DataRow In oTable.Rows
                If oRow("InstanceName").ToString = "" Then
                    cmbServer.Items.Add(oRow("ServerName"))
                Else
                    cmbServer.Items.Add(oRow("ServerName").ToString & "\" & oRow("InstanceName").ToString)
                End If
            Next oRow
        End If
    Catch ex As Exception
        ErrHandler("frmLogin", "cmbServer_DropDown", ex.Source, ex.Message, ex.InnerException)
    Finally
        System.Windows.Forms.Cursor.Current = System.Windows.Forms.Cursors.Default
        If oTable IsNot Nothing Then
            oTable.Dispose()
        End If
    End Try
End Sub
```
The [SqlDataSourceEnumerator](https://learn.microsoft.com/en-us/dotnet/api/system.data.sql.sqldatasourceenumerator) class is nice because it gives you SQL server discovery right out of the 2.0 framework.
|
Get list of available servers in SQL server group
|
[
"",
"sql",
"vb.net",
""
] |
I've been trying to implement unit testing and currently have some code that does the following:
1. query external database, loading
into a feed table
2. query a view,
which is a delta of my feed and data
tables, updating data table to match
feed table
my unit testing strategy is this:
I have a testing database that I am free to manipulate.
1. in setUP(), load some data into my testing db
2. run my code, using my testing db as the source
3. inspect the data table, checking for counts and the existence/non existence of certain records
4. clear testing db, loading in a different set of data
5. run code again
6. inspect data table again
Obviously I have the data sets that I load into the source db set up such that I know certain records should be added,deleted,updated, etc.
It seems like this is a bit cumbersome and there should be an easier way? any suggestions?
|
Is it your intent to test the view which generates the deltas, or to test that your code correctly adds, deletes and updates in response to the view?
If you want to test the view, you could use a tool like [DBUnit](http://dbunit.sourceforge.net/) to populate your feed and data tables with various data whose delta you've manually calculated. Then, for each test you would verify that the view returns a matching set.
If you want to test how your code responds to diffs detected by the view, I would try to abstract away database access. I imagine a Java method to which you can pass a result set (or a list of POJOs/DTOs) and which returns a list of parameter Object arrays (again, or POJOs) to be added. Other methods would parse the diff list for items to be removed and updated. You could then create a mock result set or POJOs, pass them to your code and verify the correct parameters are returned. All without touching a database.
I think the key is to break your process into parts and test each of those as independently as possible.
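The abstraction described above (pass rows in, get add/remove/update lists out, no database touched) might look like this sketch. The function and the `action` key are hypothetical stand-ins for the poster's actual schema:

```python
def classify_deltas(delta_rows):
    """Split delta-view rows into adds, removes and updates.

    Each row is a dict with an 'action' key -- a stand-in for the
    result set / DTO list the answer describes.
    """
    adds = [r for r in delta_rows if r["action"] == "A"]
    removes = [r for r in delta_rows if r["action"] == "D"]
    updates = [r for r in delta_rows if r["action"] == "U"]
    return adds, removes, updates

# Pure function: easy to unit-test with hand-built rows, no DB needed.
rows = [{"id": 1, "action": "A"}, {"id": 2, "action": "D"}]
adds, removes, updates = classify_deltas(rows)
assert [r["id"] for r in adds] == [1]
assert [r["id"] for r in removes] == [2]
assert updates == []
```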
|
[DbUnit](http://dbunit.sourceforge.net) will meet your needs. One thing to watch out for is that they have switched to using SLF4J as their logging facade instead of JCL. You can configure SLF4J to forward the logging to JCL but be warned if you are using Maven DbUnit sucks in their Nop log provider by default so you will have to use an exclusion, I [blogged](http://btmatthews.com/blog/?p=5) about this conflict recently.
|
JUnit for database code
|
[
"",
"java",
"database",
"unit-testing",
"junit",
""
] |
I'm writing a game in Java, and I want the user to be able to provide input from both the command line and my GUI. Currently, I use this method to get input:
```
static String getInput(){
System.out.println("Your move:");
Scanner sc = new Scanner(System.in);
return sc.nextLine();
}
```
I want to keep using this, but let a mousePressed event emulate the user actually typing in their input as well. It's not that efficient of a solution, but it makes sense in my application. So the question is: how do I simulate a user typing to System.in from the code side?
|
This is possible - the easiest substitution for *System.in* would be a [PipedInputStream](http://java.sun.com/javase/6/docs/api/java/io/PipedInputStream.html). This must be hooked up to a [PipedOutputStream](http://java.sun.com/javase/6/docs/api/java/io/PipedOutputStream.html) that writes from another thread (in this case, the Swing thread).
```
public class GameInput {
    private Scanner scanner;

    /** CLI constructor */
    public GameInput() {
        scanner = new Scanner(System.in);
    }

    /** GUI constructor */
    public GameInput(PipedOutputStream out) throws IOException {
        InputStream in = new PipedInputStream(out);
        scanner = new Scanner(in);
    }

    public String getInput() {
        return scanner.nextLine();
    }

    public static void main(String[] args) throws IOException {
        GameInput gameInput;
        PipedOutputStream output = new PipedOutputStream();
        final PrintWriter writer = new PrintWriter(output);
        gameInput = new GameInput(output);

        final JTextField textField = new JTextField(30);
        final JButton button = new JButton("OK");
        button.addActionListener(new ActionListener() {
            @Override
            public void actionPerformed(ActionEvent e) {
                String data = textField.getText();
                writer.println(data);
                writer.flush();
            }
        });

        JFrame frame = new JFrame();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.getContentPane().setLayout(new FlowLayout());
        frame.getContentPane().add(textField);
        frame.getContentPane().add(button);
        frame.pack();
        frame.setVisible(true);

        String data = gameInput.getInput();
        System.out.println("Input=" + data);
        System.exit(0);
    }
}
```
However, it might be better to rethink your game logic to cut out the streams altogether in GUI mode.
|
I made an application once that could run via the command line or using a GUI.
The way I did this was to define an interface (named IODevice) which defined the following methods:
- `public String getInput();`
- `public void showOutput(String output);`

I then had two classes which implemented this interface: one used the host computer's terminal (as you are doing now) and one used a JTextArea (output) / JOptionPane (input).
Perhaps you could do something similar - To change the input device used, simply change the instance of the IODevice.
Hope this is of some use.
|
Emulating user input for java.util.Scanner
|
[
"",
"java",
"input",
"java.util.scanner",
""
] |
I'm writing a C# program that acts as a PowerPoint 2007 plugin. On some machines, some calls to the PowerPoint object model throw a `COMException` with the message `RPC_E_SYS_CALL_FAILED`. I couldn't find any specific advice on what to do regarding this error, or how to avoid it. From Googling it looks like something to do with the message queue or Single-Threaded Apartments. Or am I way off?
Example of the error message is:
> System call failed. (Exception from HRESULT: 0x80010100 (RPC\_E\_SYS\_CALL\_FAILED))
> at Microsoft.Office.Interop.PowerPoint.\_Presentation.get\_FullName()
Unfortunately, the problem is occurring on a client's machine, so I have no easy way to debug it! Should I just retry the calls whenever I get this error?
Any advice to help me resolve this problem would be greatly appreciated!
|
I don't know if it is related to your problem, but all your COM calls must come from within the same thread your add-in was created on. If you create new threads you must take special care. Details are described in these two articles:
* [Implementing IMessageFilter in an Office add-in](http://blogs.msdn.com/andreww/archive/2008/11/19/implementing-imessagefilter-in-an-office-add-in.aspx) and
* [Why your COMAddIn.Object should derive from StandardOleMarshalObject](http://blogs.msdn.com/andreww/archive/2008/08/11/why-your-comaddin-object-should-derive-from-standardolemarshalobject.aspx)
|
Are you making the call from a thread with its ApartmentState set? If not, that might be the culprit - COM interop is pretty finicky about that sort of thing.
|
C# COM Office Automation - RPC_E_SYS_CALL_FAILED
|
[
"",
"c#",
"com",
"ms-office",
""
] |
I have 3 tables
1. **Links**
Link ID
Link Name
GroupID (FK into Groups)
SubGroupID (FK into Subgroups)
2. **Groups**
GroupID
GroupName
3. **SubGroup**
SubGroupID
SubGroupName
GroupID (FK into Groups)
Every link needs to have a GroupID but the SubGroupID is optional. How do I write a SQL query to show:
**Links.LinkName, Groups.GroupName, SubGroup.SubGroupName**
For the records with no subgroup, just put a blank entry in that field. If I have 250 link rows, I should get back 250 records from this query.
Is there a way to do this in one query or do i need to do multiple queries?
|
This assumes that there is at most one subgroup per group; if there are more, then you have the potential to get additional records.
```
select links.linkname, groups.groupname, subgroup.subgroupname
from links
inner join groups on (links.groupid = groups.groupid)
left outer join subgroup on (links.subgroupid = subgroup.subgroupid)
```
|
```
SELECT
links.linkname
, groups.groupname
, subgroup.subgroupname
FROM
links
JOIN groups ON links.groupid = groups.groupid
LEFT OUTER JOIN subgroup ON links.subgroupid = subgroup.subgroupid
```
(re-added to address OP's question)
Incidentally, why not keep groups and subgroups in the same table, and use a self-referential join?
Akantro:
You'd have something like this:
```
create table groups(
    groupid integer primary key,
    parentgroupid integer references groups (groupid),
    groupname varchar(50))
```
your query would then be
```
SELECT
links.linkname
, groups.groupname
, SUBGROUPS.groupname
FROM
links
JOIN groups ON links.groupid = groups.groupid
LEFT OUTER JOIN groups SUBGROUPS ON links.subgroupid = SUBGROUPS.groupid
```
there's no functional difference to keeping the tables like this, but the benefit is you only have to go to one place to edit the groups/subgroups
|
SQL Join question
|
[
"",
"sql",
""
] |
I want to ask how other programmers are producing Dynamic SQL strings for execution as the CommandText of a SQLCommand object.
I am producing parameterized queries containing user-generated WHERE clauses and SELECT fields. Sometimes the queries are complex and I need a lot of control over how the different parts are built.
Currently, I am using many loops and switch statements to produce the necessary SQL code fragments and to create the SQL parameters objects needed. This method is difficult to follow and it makes maintenance a real chore.
Is there a cleaner, more stable way of doing this?
Any Suggestions?
EDIT:
To add detail to my previous post:
1. I cannot really template my query due to the requirements. It just changes too much.
2. I have to allow for aggregate functions, like Count(). This has consequences for the Group By/Having clause. It also causes nested SELECT statements. This, in turn, affects the column name used by
3. Some Contact data is stored in an XML column. Users can query this data AS WELL AS the other relational columns together. The consequence is that XML columns cannot appear in Group By clauses [SQL syntax].
4. I am using an efficient paging technique that uses Row\_Number() SQL Function. Consequences are that I have to use a Temp table and then get the @@rowcount, before selecting my subset, to avoid a second query.
I will show some code (the horror!) so that you guys have an idea of what I'm dealing with.
```
sqlCmd.CommandText = "DECLARE @t Table(ContactId int, ROWRANK int" + declare
+ ")INSERT INTO @t(ContactId, ROWRANK" + insertFields + ")"//Insert as few cols a possible
+ "Select ContactID, ROW_NUMBER() OVER (ORDER BY " + sortExpression + " "
+ sortDirection + ") as ROWRANK" // generates a rowrank for each row
+ outerFields
+ " FROM ( SELECT c.id AS ContactID"
+ coreFields
+ from // sometimes different tables are required
+ where + ") T " // user input goes here.
+ groupBy+ " "
+ havingClause //can be empty
+ ";"
+ "select @@rowcount as rCount;" // return 2 recordsets, avoids second query
+ " SELECT " + fields + ",field1,field2" // join onto the other cols n the table
+" FROM @t t INNER JOIN contacts c on t.ContactID = c.id"
+" WHERE ROWRANK BETWEEN " + ((pageIndex * pageSize) + 1) + " AND "
+ ( (pageIndex + 1) * pageSize); // here I select the pages I want
```
In this example, I am querying XML data. For purely relational data, the query is much more simple. Each of the section variables are StringBuilders. Where clauses are built like so:
```
// Add Parameter to SQL Command
AddParamToSQLCmd(sqlCmd, "@p" + z.ToString(), SqlDbType.VarChar, 50, ParameterDirection.Input, qc.FieldValue);
// Create SQL code Fragment
where.AppendFormat(" {0} {1} {2} @p{3}", qc.BooleanOperator, qc.FieldName, qc.ComparisonOperator, z);
```
|
I had the need to do this on one of my recent projects. Here is the scheme that I am using for generating the SQL:
* Each component of the query is represented by an Object (which in my case is a Linq-to-Sql entity that maps to a table in the DB). So I have the following classes: Query, SelectColumn, Join, WhereCondition, Sort, GroupBy. Each of these classes contains all details relating to that component of the query.
* The last five classes are all related to a Query object. So the Query object itself has collections of each class.
* Each class has a method that can generate the SQL for the part of the query that it represents. So creating the overall query ends up calling Query.GenerateQuery() which in turn enumerates through all of the sub-collections and calls their respective GenerateQuery() methods
It is still a bit complicated, but in the end you know where the SQL generation for each individual part of the query originates (and I don't think that there are any big switch statements). And don't forget to use StringBuilder.
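The component-per-class scheme can be sketched like so. This is a toy Python version (the answer's real implementation uses Linq-to-SQL entities); class names follow the answer, everything else is illustrative:

```python
class WhereCondition:
    def __init__(self, column, op, param_name):
        self.column, self.op, self.param_name = column, op, param_name

    def generate_sql(self):
        # Emit a parameter placeholder, never the raw value:
        # the query stays parameterized and injection-safe.
        return f"{self.column} {self.op} @{self.param_name}"

class Query:
    def __init__(self, table, columns):
        self.table, self.columns = table, columns
        self.where = []  # collection of WhereCondition objects

    def generate_query(self):
        sql = f"SELECT {', '.join(self.columns)} FROM {self.table}"
        if self.where:
            sql += " WHERE " + " AND ".join(
                c.generate_sql() for c in self.where)
        return sql

q = Query("contacts", ["id", "name"])
q.where.append(WhereCondition("name", "LIKE", "p0"))
assert q.generate_query() == "SELECT id, name FROM contacts WHERE name LIKE @p0"
```

Each clause type owns its own small piece of SQL generation, which is what keeps the big switch statements out of the top-level builder.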
|
We created our own FilterCriteria object that is kind of a *black-box* dynamic query builder. It has collection properties for SelectClause, WhereClause, GroupByClause and OrderByClause. It also contains a properties for CommandText, CommandType, and MaximumRecords.
We then just pass our FilterCriteria object to our data logic, which executes it against the database server and passes parameter values to a stored procedure that executes the dynamic code.
Works well for us ... and keeps the SQL generation nicely contained in an object.
|
Is there a standard approach to generating sql dynamically?
|
[
"",
".net",
"sql",
""
] |
I usually use C++ stdlib map whenever I need to store some data associated with a specific type of value (a key value - e.g. a string or other object). The stdlib map implementation is based on trees, which provide better lookup performance (O(log n)) than the standard array or stdlib vector.
My questions is, do you know of any C++ "standard" hashtable implementation that provides even better performance (O(1))? Something similar to what is available in the Hashtable class from the Java API.
|
If you're using C++11, you have access to the `<unordered_map>` and `<unordered_set>` headers. These provide classes [`std::unordered_map`](http://en.cppreference.com/w/cpp/container/unordered_map) and [`std::unordered_set`](http://en.cppreference.com/w/cpp/container/unordered_set).
If you're using C++03 with TR1, you have access to the classes `std::tr1::unordered_map` and `std::tr1::unordered_set`, using the same headers (unless you're using GCC, in which case the headers are `<tr1/unordered_map>` and `<tr1/unordered_set>` instead).
In all cases, there are corresponding `unordered_multimap` and `unordered_multiset` types too.
|
If you don't already have unordered\_map or unordered\_set, they are part of [boost](http://www.boost.org/).
[Here's the documentation for both](http://www.boost.org/doc/libs/1_36_0/doc/html/unordered.html).
|
Hashtable in C++?
|
[
"",
"c++",
"performance",
"dictionary",
"hashtable",
"complexity-theory",
""
] |
Wanted to convert
```
<br/>
<br/>
<br/>
<br/>
<br/>
```
into
```
<br/>
```
|
You can do this with a regular expression:
```
preg_replace("/(<br\s*\/?>\s*)+/", "<br/>", $input);
```
If you pass in your source HTML, this will return a string with a single `<br/>` replacing every run of them.
|
Mine is almost exactly the same as [levik](https://stackoverflow.com/questions/133571/how-to-convert-multiple-br-tag-to-a-single-br-tag-in-php#133600)'s (+1), just accounting for some different br formatting:
```
preg_replace('/(<br[^>]*>\s*){2,}/', '<br/>', $sInput);
```
|
How to convert multiple <br/> tag to a single <br/> tag in php
|
[
"",
"php",
"html",
"regex",
""
] |
> **Edit**: Now I need to solve this problem for real, I did a little more
> investigation and came up with a
> number of things to reduce duplicate
> content. I posted detailed code
> samples on my blog: [Reducing
> Duplicate Content with ASP.NET MVC](http://blog.dantup.me.uk/2009/04/reducing-duplicate-content-with-aspnet.html)
First post - go easy if I've marked this up wrong or tagged it badly :P
In Microsoft's new ASP.NET MVC framework it seems there are two things that could cause your content to be served up at multiple URLs (something Google penalizes, and which will cause your PageRank to be split across them):
* Case-insensitive URLs
* Default URL
You can set the default controller/action to serve up for requests to the root of your domain. Let's say we choose HomeController/Index. We end up with the following URLs serving up the same content:
* `example.com/`
* `example.com/Home/Index`
Now if people start linking to both of these then PageRank would be split. Google would also consider it duplicate content and penalize one of them to avoid duplicates in their results.
On top of this, the URLs are not case sensitive, so we actually get the same content for these URLs too:
* `example.com/Home/Index`
* `example.com/home/index`
* `example.com/Home/index`
* `example.com/home/Index`
* (the list goes on)
So, the question... How do I avoid these penalties? I would like:
* All requests for the default action to be redirected (301 status) to the same URL
* All URLs to be case sensitive
Possible?
|
Bump!
**MVC 5** now supports producing only lowercase URLs and a common trailing-slash policy.
```
public static void RegisterRoutes(RouteCollection routes)
{
routes.LowercaseUrls = true;
routes.AppendTrailingSlash = false;
}
```
Also, in my application, to avoid duplicate content on different domains/IPs/letter casings etc...
> `http://yourdomain.example/en`
>
> `https://yourClientIdAt.YourHostingPacket.example/`
I tend to produce Canonical URLs based on a **PrimaryDomain** - **Protocol** - **Controller** - **Language** - **Action**
```
public static String GetCanonicalUrl(RouteData route,String host,string protocol)
{
//These rely on the convention that all your links will be lowercase!
string actionName = route.Values["action"].ToString().ToLower();
string controllerName = route.Values["controller"].ToString().ToLower();
    //If your app is multilanguage and your route contains a language parameter,
    //lowercase it as well to prevent /EN/ vs /en/ duplicates
    string language = route.Values["language"].ToString().ToLower();
    return String.Format("{0}://{1}/{2}/{3}/{4}", protocol, host, language, controllerName, actionName);
}
```
Then you can use **@Gabe Sumner's** answer to redirect to your action's canonical URL if the current request URL doesn't match it.
|
I was working on this as well. I will obviously defer to ScottGu on this. I humbly offer my solution to this problem as well though.
Add the following code to **global.asax**:
```
protected void Application_BeginRequest(Object sender, EventArgs e)
{
// If upper case letters are found in the URL, redirect to lower case URL.
if (Regex.IsMatch(HttpContext.Current.Request.Url.ToString(), @"[A-Z]") == true)
{
string LowercaseURL = HttpContext.Current.Request.Url.ToString().ToLower();
Response.Clear();
Response.Status = "301 Moved Permanently";
Response.AddHeader("Location",LowercaseURL);
Response.End();
}
}
```
A great question!
|
How can I avoid duplicate content in ASP.NET MVC due to case-insensitive URLs and defaults?
|
[
"",
"c#",
"asp.net",
"asp.net-mvc",
"razor",
""
] |
Is there a way to make a Radio Button enabled/disabled (not checked/unchecked) via CSS?
I need to toggle some radio buttons on the client so that the values can be read on the server, but setting the 'enabled' property to 'false' and then changing this on the client via JavaScript seems to prevent me from posting back any changes to the radio button after it's been enabled.
See: [ASP.NET not seeing Radio Button value change](https://stackoverflow.com/questions/130165/aspnet-not-seeing-radio-button-value-change)
It was recommended that I use `control.style.add("disabled", "true")` instead, but this does not seem to disable the radio button for me.
Thanks!
|
To the best of my knowledge, CSS cannot affect the functionality of the application. It can only affect the display. So while you can hide it with CSS (`display:none`), you can't disable it.
What you could do would be to disable it on page load with javascript. There are a couple ways to do this but an easy way would be to do something like
```
<script>document.getElementById('<%=CONTROLID%>').disabled=true;</script>
```
and put that in your .aspx file at the top below the body tag.
|
Disabled is an HTML attribute, not a CSS attribute.
Why can't you just use some jQuery
```
$('#radiobuttonname').attr('disabled', 'true');
```
or plain old javascript
```
document.getElementById(id).disabled = true;
```
|
Setting Radio Button enabled/disabled via CSS
|
[
"",
"asp.net",
"javascript",
"css",
""
] |
I have seen a few mentions of this idiom (including [on SO](https://stackoverflow.com/questions/9033/hidden-features-of-c#9282)):
```
// Deliberately empty subscriber
public event EventHandler AskQuestion = delegate {};
```
The upside is clear - it avoids the need to check for null before raising the event.
**However, I am keen to understand if there are any downsides.** For example, is it something that is in widespread use and is transparent enough that it won't cause a maintenance headache? Is there any appreciable performance hit of the empty event subscriber call?
|
The only downside is a very slight performance penalty, as you are calling an extra empty delegate. Other than that there is no maintenance penalty or other drawback.
|
Instead of inducing performance overhead, why not [use an extension method](https://stackoverflow.com/questions/248072/evil-use-of-extension-methods) to alleviate both problems:
```
public static void Raise(this EventHandler handler, object sender, EventArgs e)
{
if(handler != null)
{
handler(sender, e);
}
}
```
Once defined, you never have to do another null event check again:
```
// Works, even for null events.
MyButtonClick.Raise(this, EventArgs.Empty);
```
|
Is there a downside to adding an anonymous empty delegate on event declaration?
|
[
"",
"c#",
"coding-style",
"delegates",
"events",
"idioms",
""
] |
I need to get UTF-8 working in my Java webapp (servlets + JSP, no framework used) to support `äöå` etc. for regular Finnish text and Cyrillic alphabets like `ЦжФ` for special cases.
My setup is the following:
* Development environment: Windows XP
* Production environment: Debian
Database used: MySQL 5.x
Users mainly use Firefox2 but also Opera 9.x, FF3, IE7 and Google Chrome are used to access the site.
How to achieve this?
|
*Answering myself as the FAQ of this site encourages it. This works for me:*
Mostly, the characters äåö are not problematic, as the default character set used by browsers and Tomcat/Java for webapps is Latin-1, i.e. ISO-8859-1, which "understands" those characters.
To get UTF-8 working under Java+Tomcat+Linux/Windows+Mysql requires the following:
## Configuring Tomcat's server.xml
It's necessary to configure the connector to use UTF-8 for encoding URL (GET request) parameters:
```
<Connector port="8080" maxHttpHeaderSize="8192"
maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
enableLookups="false" redirectPort="8443" acceptCount="100"
connectionTimeout="20000" disableUploadTimeout="true"
compression="on"
compressionMinSize="128"
noCompressionUserAgents="gozilla, traviata"
           compressableMimeType="text/html,text/xml,text/plain,text/css,text/javascript,application/x-javascript,application/javascript"
URIEncoding="UTF-8"
/>
```
The key part being **URIEncoding="UTF-8"** in the above example. This guarantees that Tomcat handles all incoming GET parameters as UTF-8 encoded.
As a result, when the user writes the following to the address bar of the browser:
```
https://localhost:8443/ID/Users?action=search&name=*ж*
```
the character ж is handled as UTF-8 and is encoded (usually by the browser before the request even reaches the server) as **%D0%B6**.
*POST requests are not affected by this.*
## CharsetFilter
Then it's time to force the java webapp to handle all requests and responses as UTF-8 encoded. This requires that we define a character set filter like the following:
```
package fi.foo.filters;
import javax.servlet.*;
import java.io.IOException;
public class CharsetFilter implements Filter {
private String encoding;
public void init(FilterConfig config) throws ServletException {
encoding = config.getInitParameter("requestEncoding");
if (encoding == null) encoding = "UTF-8";
}
public void doFilter(ServletRequest request, ServletResponse response, FilterChain next)
throws IOException, ServletException {
// Respect the client-specified character encoding
// (see HTTP specification section 3.4.1)
if (null == request.getCharacterEncoding()) {
request.setCharacterEncoding(encoding);
}
// Set the default response content type and encoding
response.setContentType("text/html; charset=UTF-8");
response.setCharacterEncoding("UTF-8");
next.doFilter(request, response);
}
public void destroy() {
}
}
```
This filter makes sure that if the browser hasn't set the encoding used in the request, it's set to UTF-8.
The other thing done by this filter is to set the default response encoding, i.e. the encoding in which the returned HTML/whatever is. The alternative is to set the response encoding etc. in each controller of the application.
This filter has to be added to the **web.xml** or the deployment descriptor of the webapp:
```
<!--CharsetFilter start-->
<filter>
<filter-name>CharsetFilter</filter-name>
<filter-class>fi.foo.filters.CharsetFilter</filter-class>
<init-param>
<param-name>requestEncoding</param-name>
<param-value>UTF-8</param-value>
</init-param>
</filter>
<filter-mapping>
<filter-name>CharsetFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
```
The instructions for making this filter are found at the [tomcat wiki (<http://wiki.apache.org/tomcat/Tomcat/UTF-8>)](http://wiki.apache.org/tomcat/Tomcat/UTF-8)
## JSP page encoding
In your **web.xml**, add the following:
```
<jsp-config>
<jsp-property-group>
<url-pattern>*.jsp</url-pattern>
<page-encoding>UTF-8</page-encoding>
</jsp-property-group>
</jsp-config>
```
Alternatively, all JSP-pages of the webapp would need to have the following at the top of them:
```
<%@page pageEncoding="UTF-8" contentType="text/html; charset=UTF-8"%>
```
If some kind of a layout with different JSP-fragments is used, then this is needed in **all** of them.
## HTML-meta tags
JSP page encoding tells the JVM to handle the characters in the JSP page in the correct encoding.
Then it's time to tell the browser in which encoding the html page is:
This is done with the following at the top of each xhtml page produced by the webapp:
```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="fi">
<head>
<meta http-equiv='Content-Type' content='text/html; charset=UTF-8' />
...
```
## JDBC-connection
When using a database, it has to be defined that the connection uses UTF-8 encoding. This is done in **context.xml** or wherever the JDBC connection is defined, as follows:
```
<Resource name="jdbc/AppDB"
auth="Container"
type="javax.sql.DataSource"
maxActive="20" maxIdle="10" maxWait="10000"
username="foo"
password="bar"
          driverClassName="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/ID_development?useUnicode=true&amp;characterEncoding=UTF-8"
/>
```
## MySQL database and tables
The used database must use UTF-8 encoding. This is achieved by creating the database with the following:
```
CREATE DATABASE `ID_development`
/*!40100 DEFAULT CHARACTER SET utf8 COLLATE utf8_swedish_ci */;
```
Then, all of the tables need to be in UTF-8 also:
```
CREATE TABLE `Users` (
`id` int(10) unsigned NOT NULL auto_increment,
  `name` varchar(30) collate utf8_swedish_ci default NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_swedish_ci ROW_FORMAT=DYNAMIC;
```
The key part being **CHARSET=utf8**.
## MySQL server configuration
The MySQL server has to be configured as well. Typically this is done on Windows by modifying the **my.ini** file and on Linux by editing the **my.cnf** file.
In those files it should be defined that all clients connected to the server use utf8 as the default character set and that the default charset used by the server is also utf8.
```
[client]
port=3306
default-character-set=utf8
[mysql]
default-character-set=utf8
```
## Mysql procedures and functions
These also need to have the character set defined. For example:
```
DELIMITER $$
DROP FUNCTION IF EXISTS `pathToNode` $$
CREATE FUNCTION `pathToNode` (ryhma_id INT) RETURNS TEXT CHARACTER SET utf8
READS SQL DATA
BEGIN
DECLARE path VARCHAR(255) CHARACTER SET utf8;
SET path = NULL;
...
RETURN path;
END $$
DELIMITER ;
```
## GET requests: latin1 and UTF-8
If and when it's defined in tomcat's server.xml that GET request parameters are encoded in UTF-8, the following GET requests are handled properly:
```
https://localhost:8443/ID/Users?action=search&name=Petteri
https://localhost:8443/ID/Users?action=search&name=ж
```
Because ASCII-characters are encoded in the same way both with latin1 and UTF-8, the string "Petteri" is handled correctly.
The Cyrillic character ж is not understood at all in latin1. Because Tomcat is instructed to handle request parameters as UTF-8 it encodes that character correctly as **%D0%B6**.
If and when browsers are instructed to read the pages in UTF-8 encoding (with request headers and html meta-tag), at least Firefox 2/3 and other browsers from this period all encode the character themselves as **%D0%B6**.
The end result is that all users with name "Petteri" are found and also all users with the name "ж" are found.
### But what about äåö?
The HTTP specification defines that by default URLs are encoded as Latin-1. This results in Firefox 2, Firefox 3 etc. encoding the following
```
https://localhost:8443/ID/Users?action=search&name=*Päivi*
```
in to the encoded version
```
https://localhost:8443/ID/Users?action=search&name=*P%E4ivi*
```
In Latin-1 the character **ä** is encoded as **%E4**, *even though the page/request/everything is defined to use UTF-8*. The UTF-8 encoded version of ä is **%C3%A4**.
The result of this is that it's quite impossible for the webapp to correctly handle the request parameters from GET requests, as some characters are encoded in Latin-1 and others in UTF-8.
**Notice: POST requests do work as browsers encode all request parameters from forms completely in UTF-8 if the page is defined as being UTF-8**
## Stuff to read
A very big thank you to the writers of the following for providing the answers to my problem:
* http://tagunov.tripod.com/i18n/i18n.html
* http://wiki.apache.org/tomcat/Tomcat/UTF-8
* http://java.sun.com/developer/technicalArticles/Intl/HTTPCharset/
* http://dev.mysql.com/doc/refman/5.0/en/charset-syntax.html
* http://cagan327.blogspot.com/2006/05/utf-8-encoding-fix-tomcat-jsp-etc.html
* http://cagan327.blogspot.com/2006/05/utf-8-encoding-fix-for-mysql-tomcat.html
* http://jeppesn.dk/utf-8.html
* http://www.nabble.com/request-parameters-mishandle-utf-8-encoding-td18720039.html
* http://www.utoronto.ca/webdocs/HTMLdocs/NewHTML/iso\_table.html
* http://www.utf8-chartable.de/
## Important Note
[mysql](/questions/tagged/mysql "show questions tagged 'mysql'") supports the [Basic Multilingual Plane](http://en.wikipedia.org/wiki/Plane_%28Unicode%29#Basic_Multilingual_Plane) using 3-byte UTF-8 characters. If you need to go outside of that (certain alphabets require more than 3-bytes of UTF-8), then you either need to use a flavor of `VARBINARY` column type or use the [`utf8mb4` character set](http://dev.mysql.com/doc/refman/5.5/en/charset-unicode-utf8mb4.html) (which requires MySQL 5.5.3 or later). Just be aware that using the `utf8` character set in MySQL won't work 100% of the time.
## Tomcat with Apache
One more thing: if you are using Apache + Tomcat + the mod\_jk connector, then you also need to make the following changes:
1. Add `URIEncoding="UTF-8"` to Tomcat's server.xml file for the 8009 connector, which is used by the mod\_jk connector: `<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" URIEncoding="UTF-8"/>`
2. Go to your Apache configuration folder, e.g. `/etc/httpd/conf`, and add `AddDefaultCharset utf-8` to the `httpd.conf` file. **Note:** First check whether the directive already exists; if it does, update it to this value. You can also add this line at the bottom.
|
I think you summed it up quite well in your own answer.
In the process of UTF-8-ing(?) from end to end, you might also want to make sure Java itself is using UTF-8. Use `-Dfile.encoding=UTF-8` as a parameter to the JVM (can be configured in catalina.bat).
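For example, one conventional place to set this (exact file names depend on your Tomcat version and installation; the `setenv` scripts are an assumption about your setup):

```shell
# Windows (catalina.bat or setenv.bat):
set CATALINA_OPTS=%CATALINA_OPTS% -Dfile.encoding=UTF-8

# Linux/Mac (catalina.sh or setenv.sh):
CATALINA_OPTS="$CATALINA_OPTS -Dfile.encoding=UTF-8"
```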
|
How to get UTF-8 working in Java webapps?
|
[
"",
"java",
"mysql",
"tomcat",
"encoding",
"utf-8",
""
] |
Updating an old ASP/Access site for a client - I need SQL to add a column to an existing table and set a default value. Doesn't work - any ideas?
This works fine
```
ALTER TABLE documents ADD COLUMN membersOnly NUMBER
```
I want this to work:
```
ALTER TABLE documents ADD COLUMN membersOnly NUMBER DEFAULT 0
```
I've googled and seen instructions for default values that work for other field types, but I want to add a number. Thanks!
|
Tools -> Options -> Tables/Queries -> (At the bottom right:) Sql Server Compatible Syntax - turn option on for this database.
then you can execute your query:
```
ALTER TABLE documents ADD COLUMN membersOnly NUMBER DEFAULT 0
```
|
With ADO, you can execute a DDL statement to create a field and set its default value.
```
CurrentProject.Connection.Execute _
"ALTER TABLE discardme ADD COLUMN membersOnly SHORT DEFAULT 0"
```
|
SQL to add column with default value - Access 2003
|
[
"",
"sql",
"ms-access",
""
] |
Does anyone know if there is a c# Console app, similar to the Python or Ruby console? I know the whole "Compiled versus Interpreted" difference, but with C#'s reflection power I think it could be done.
**UPDATE**
Well, it only took about 200 lines, but I wrote a simple one...It works a lot like osql. You enter commands and then run them with go.
[SharpConsole http://www.gfilter.net/junk/sharpconsole.jpg](http://www.gfilter.net/junk/sharpconsole.jpg)
If anyone wants it, let me know.
|
Given your mention of "C#'s reflection power", I am unsure whether you're looking for an interactive C# console for small code snippets of your own (à la Ruby's `irb`), or a means of interacting with an existing, compiled application currently running as a process.
In the former case:
* **Windows PowerShell** might be your friend
* Another candidate would be the [**C# shell**](http://michael.susens-schurter.com/blog/2006/12/20/cssh-c-shell/)
* Finally, [**CSI**](http://www.codeproject.com/KB/cs/csi.aspx), a Simple C# Interpreter
|
Found this on reddit: <http://tirania.org/blog/archive/2008/Sep-08.html>
Quote from the site:
> The idea was simple: create an interactive C# shell by altering the compiler to generate and execute code dynamically as opposed to merely generating static code.
|
C# Console?
|
[
"",
"c#",
".net",
"console",
"interactive",
""
] |
I've got a function that returns a `Collection<string>`, and that calls itself recursively to eventually return one big `Collection<string>`.
Now, I just wonder what the best approach is to merge the lists. `Collection.CopyTo()` only copies to `string[]`, and using a `foreach()` loop feels inefficient. However, since I also want to filter out duplicates, I feel like I'll end up with a `foreach` that calls `Contains()` on the `Collection`.
I wonder, is there a more efficient way to have a recursive function that returns a list of strings without duplicates? I don't have to use a `Collection`, it can be pretty much any suitable data type.
Only exclusion, I'm bound to Visual Studio 2005 and .net 3.0, so no LINQ.
**Edit:** To clarify: The function takes a user out of Active Directory, looks at the direct reports of the user, and then recursively looks at the direct reports of every user. So the end result is a list of all users that are in the "command chain" of a given user. Since this is executed quite often and at the moment takes 20 seconds for some users, I'm looking for ways to improve it. Caching the result for 24 hours is also on my list btw., but I want to see how to improve it before applying caching.
|
If you're using `List<T>` you can use `.AddRange` to add one list to the other list.
Or you can use yield return to combine lists on the fly like this:
```
public IEnumerable<string> Combine(IEnumerable<string> col1, IEnumerable<string> col2)
{
foreach(string item in col1)
yield return item;
foreach(string item in col2)
yield return item;
}
```
|
I think [`HashSet<T>`](http://msdn.microsoft.com/en-us/library/bb359438.aspx) is a great help.
> The `HashSet<T>` class provides
> high performance set operations. A set
> is a collection that contains no
> duplicate elements, and whose elements
> are in no particular order.
Just add items to it and then use CopyTo.
---
**Update**: `HashSet<T>` is in .Net 3.5
Maybe you can use [`Dictionary<TKey, TValue>`](http://msdn.microsoft.com/en-us/library/xfhwa508.aspx). Setting a duplicate key via the dictionary's indexer will not raise an exception (only `Add()` throws for an existing key).
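A small sketch of the merge-and-dedupe (the `HashSet<T>` half requires .NET 3.5; the `Dictionary` fallback fits the .NET 3.0 constraint from the question; collection names are made up):

```csharp
// Merging two collections while dropping duplicates.
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        string[] reportsA = { "alice", "bob" };
        string[] reportsB = { "bob", "carol" };

        // .NET 3.5: HashSet<T>.Add simply returns false for duplicates.
        HashSet<string> merged = new HashSet<string>();
        foreach (string name in reportsA) merged.Add(name);
        foreach (string name in reportsB) merged.Add(name);
        Console.WriteLine(merged.Count); // 3 distinct names

        // .NET 3.0 fallback: dictionary keys give the same effect,
        // because assigning through the indexer never throws.
        Dictionary<string, bool> seen = new Dictionary<string, bool>();
        foreach (string name in reportsA) seen[name] = true;
        foreach (string name in reportsB) seen[name] = true;
        Console.WriteLine(seen.Count); // 3
    }
}
```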
|
Merging two Collection<T>
|
[
"",
"c#",
"collections",
""
] |
I am coming from an Enterprise Java background which involves a fairly heavyweight software stack, and have recently discovered the
[Stripes framework](http://www.stripesframework.org/); my initial impression is that this seems to do a good job of minimising the unpleasant parts of building a web application in Java.
Has anyone used Stripes for a project that has gone live? And can you share your experiences from the project? Also, did you consider any other technologies and (if so) why did you chose Stripes?
|
We've been using Stripes for about 4 years now. Our stack is Stripes/EJB3/JPA.
Many use Stripes plus Stripernate as a single, full stack solution. We don't because we want our business logic within the EJB tier, so we simply rely on JPA Entities as combined Model and DTO.
Stripes does the binding to our Entities/DTOs and we shove them back into the EJB tier for work. For most of our CRUD stuff this is very thin and straightforward, making our 80% use case trivial to work with. Yet we have the flexibility to do whatever we want for the edge cases that always come up with complicated applications.
We have a very large base Action Bean which encapsulates the bulk of our CRUD operations that makes call backs in to the individual subclasses specific to the entities and forms.
We also have a large internal tag file library to manage our pages, security, navigation, tasks, etc. A simple CRUD edit form is little more than a list of field names, and we get all of the chrome and menus and access controls "for free".
The beauty of this is that we get to keep the HTTP request based metaphor that we like and we get to choose the individual parts of the system rather than use one fat stack. The Stripes layer is lean and mean, and never gets in our way.
We have a bunch of Ajax integrating YUI and JQuery, all working against our Stripes and EJB stack painlessly.
I also ported a lighter version of the stack to GAE for a sample project, basically having to do minor work on our EJB tier. So, the entire stack is very nimble and amenable to change. Stripes is a big factor in that, since we let it do the few things that it does, and does very well, then delegate the rest to other parts of the stack.
As always there are parts folks would rather have different at times, but Stripes would be the last part to go in our stack, frankly. It could be better at supporting the full HTTP verb set, but I'd rather fix Stripes to do that better than switch over to something else.
|
We use stripes now on all our production sites, and have been for about a year now. It is an awesome product compared to struts, which we used to use before that. Just the fact that there are literally no XML config files and that you can set it all up with a minimal amount of classes and annotations is awesome.
In terms of scaling and speed it actually seems to be better than Struts, and my guess would be that this is because there are fewer layers involved. The code you end up with is a lot cleaner as well, because you don't have to go off to separate XML files to find out where redirects are going.
We use it with an EJB3 backend, and the two seem to work really well together, because you can use your EJB POJO inside your actionBean object, without needing a form object like in struts.
In our evaluation we considered an alpha version of Struts (that supported annotations) and a lot of other frameworks, but Stripes won because of its superior documentation, stability and clean-ness.
Couldn't figure out how to leave a comment: so to answer your second question we haven't encountered a single bug in Stripes that I know of. This is quite impressive for an open source framework. I haven't tried the latest version (1.5) yet, but 1.4.x is very stable.
|
Practical Experience using Stripes?
|
[
"",
"java",
"stripes",
""
] |
I am thinking about making a website with some fairly intense JavaScript/canvas usage and I have been looking at [Processing.js](http://ejohn.org/blog/processingjs/) and it seems to me that it would make manipulating the canvas significantly easier. Does anyone know any reasons why I **shouldn't** use Processing.js? I understand that older browsers won't be able to use it, but for now that's ok.
|
As mentioned, IE is not supported by Processing.js (including IE8 beta). I've also found processing.js to be a bit slow in terms of performance, compared to just using canvas (especially if you're parsing a string with Processing language, instead of using the javascript API).
I personally prefer the canvas API over the processing wrapper, because it gives more me control. For example:
The processing line() function is implemented like this (roughly):
```
function line (x1, y1, x2, y2) {
context.beginPath();
context.moveTo(x1, y1);
context.lineTo(x2, y2);
context.closePath();
context.stroke();
};
```
And you'd use it like this (assuming you're using the javascript-exposed API):
```
var p = Processing("canvas")
p.stroke(255)
////Draw lines...///
p.line(0,0,10,10)
p.line(10,10,20,10)
//...and so on
p.line(100,100,200,200)
////End lines////
```
Notice that every line() call has to open and close a new path, whereas with the canvas API you can draw all the lines within a single beginPath/endPath block, improving performance significantly:
```
context.strokeStyle = "#fff";
context.beginPath();
////Draw lines...///
context.moveTo(0, 0);
context.lineTo(10, 10);
context.lineTo(20, 10);
//...so on
context.lineTo(200, 200);
////End lines...///
context.closePath();
context.stroke();
```
|
If you're OK with it not working in IE7, then go for it. I've had it working in Firefox 3. It's a slick way to bring Silverlight/Flash effects to your page.
My hunch is that libraries like Processing.js will change or be upgraded on a fast track path, so get ready to run when they do and keep up with the new features.
|
Is it reasonable to use John Resig's Processing.js?
|
[
"",
"javascript",
"graphics",
"canvas",
""
] |
Consider:
```
int testfunc1 (const int a)
{
return a;
}
int testfunc2 (int const a)
{
return a;
}
```
Are these two functions the same in every aspect or is there a difference?
I'm interested in an answer for the C language, but if there is something interesting in the C++ language, I'd like to know as well.
|
`const T` and `T const` are identical. With pointer types it becomes more complicated:
1. `const char*` is a pointer to a constant `char`
2. `char const*` is a pointer to a constant `char`
3. `char* const` is a constant pointer to a (mutable) `char`
In other words, (1) and (2) are identical. The only way of making the pointer (rather than the pointee) `const` is to use a suffix-`const`.
This is why many people prefer to always put `const` to the right side of the type (“East const” style): it makes its location relative to the type consistent and easy to remember (it also anecdotally seems to make it easier to teach to beginners).
|
The trick is to read the declaration backwards (right-to-left):
```
const int a = 1; // read as "a is an integer which is constant"
int const a = 1; // read as "a is a constant integer"
```
Both are the same thing. Therefore:
```
a = 2; // Can't do because a is constant
```
The reading backwards trick especially comes in handy when you're dealing with more complex declarations such as:
```
const char *s; // read as "s is a pointer to a char that is constant"
char c;
char *const t = &c; // read as "t is a constant pointer to a char"
*s = 'A'; // Can't do because the char is constant
s++; // Can do because the pointer isn't constant
*t = 'A'; // Can do because the char isn't constant
t++; // Can't do because the pointer is constant
```
|
'const int' vs. 'int const' as function parameters in C++ and C
|
[
"",
"c++",
"c",
"constants",
""
] |
I'm dealing with a MySQL table that defines the JobName column as UNIQUE. If somebody tries to save a new Job to the database using a JobName that is already in the database, MySQL throws a warning.
I would like to be able to detect this warning, just like an error, in my PHP script and deal with it appropriately. Ideally I would like to know what kind of warning MySQL has thrown so that I can branch the code to handle it.
Is this possible? If not, is it because MySQL doesn't have this ability, PHP doesn't have this ability, or both?
|
For warnings to be "flagged" to PHP natively would require changes to the mysql/mysqli driver, which is obviously beyond the scope of this question. Instead you're going to have to basically check every query you make on the database for warnings:
```
$warningCountResult = mysql_query("SELECT @@warning_count");
if ($warningCountResult) {
    $warningCount = mysql_fetch_row($warningCountResult);
    if ($warningCount[0] > 0) {
        // Have warnings
        $warningDetailResult = mysql_query("SHOW WARNINGS");
        if ($warningDetailResult) {
            while ($warning = mysql_fetch_assoc($warningDetailResult)) {
                // Process it
            }
        }
    } // Else no warnings
}
```
Obviously this is going to be hideously expensive to apply en masse, so you might need to think carefully about when and how warnings may arise (which may lead you to refactor to eliminate them).
For reference, [MySQL SHOW WARNINGS](http://dev.mysql.com/doc/refman/5.0/en/show-warnings.html)
Of course, you could dispense with the initial query for the `SELECT @@warning_count`, which would save you a query per execution, but I included it for pedantic completeness.
|
First, you should [turn warnings off](http://us.php.net/error_reporting) so that your visitors don't see your **MySQL** errors. Second, when you call [`mysql_query()`](http://us.php.net/mysql_query), you should check to see if it returned false. If it did, call [`mysql_errno()`](http://us.php.net/manual/en/function.mysql-errno.php) to find out what went wrong. Match the number returned to the error codes on [this page](http://dev.mysql.com/doc/refman/4.1/en/error-messages-server.html).
It looks like this is the error number you're looking for:
> Error: 1169 SQLSTATE: 23000 (ER\_DUP\_UNIQUE)
>
> Message: Can't write, because of unique constraint, to table '%s'
|
Can I detect and handle MySQL Warnings with PHP?
|
[
"",
"php",
"mysql",
"error-handling",
"warnings",
""
] |
I want to add a comma in the thousands place for a number.
Would `String.Format()` be the correct path to take? What format would I use?
|
```
$"{1234:n}"; // Output: 1,234.00
$"{9876:n0}"; // No digits after the decimal point. Output: 9,876
```
|
I found this to be the simplest way:
```
myInteger.ToString("N0")
```
|
.NET String.Format() to add commas in thousands place for a number
|
[
"",
"c#",
".net",
"string",
"format",
""
] |
I was given a MySQL database file that I need to restore as a database on my Windows Server 2008 machine.
I tried using MySQL Administrator, but I got the following error:
> The selected file was generated by
> mysqldump and cannot be restored by
> this application.
How do I get this working?
|
It should be as simple as running this:
```
mysql -u <user> -p < db_backup.dump
```
If the dump is of a single database you may have to add a line at the top of the file:
```
USE <database-name-here>;
```
If it was a dump of many databases, the use statements are already in there.
To run these commands, open up a command prompt (in Windows) and `cd` to the directory where the `mysql.exe` executable is (you may have to look around a bit for it, it'll depend on how you installed mysql, i.e. standalone or as part of a package like WAMP). Once you're in that directory, you should be able to just type the command as I have it above.
|
If the database you want to restore doesn't already exist, you need to create it first.
On the command-line, if you're in the same directory that contains the dumped file, use these commands (with appropriate substitutions):
```
C:\> mysql -u root -p
mysql> create database mydb;
mysql> use mydb;
mysql> source db_backup.dump;
```
|
How do I restore a dump file from mysqldump?
|
[
"",
"mysql",
"sql",
"database",
""
] |
I want to insert, say, 50,000 records into a SQL Server database, 2000 at a time. How do I accomplish this?
|
You can use the SELECT TOP clause: in MSSQL 2005 it was extended to allow a variable to specify the number of records (older versions allowed only a numeric constant).
You can try something like this:
(untested, because I have no access to a MSSQL2005 at the moment)
```
begin
declare @n int, @rows int
select @rows = count(*) from sourcetable
select @n=0
while @n < @rows
begin
insert into desttable
select top 2000 *
from sourcetable
where id_sourcetable not in (select top (@n) id_sourcetable
from sourcetable
order by id_sourcetable)
order by id_sourcetable
select @n=@n+2000
end
end
```
|
Do you mean for a test of some kind?
```
declare @index integer
set @index = 0
while @index < 50000
begin
insert into table
values (x,y,z)
set @index = @index + 1
end
```
But I expect this is not what you mean.
If you mean the best way to do a bulk insert, use `BULK INSERT` or something like `bcp`
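For reference, a `BULK INSERT` sketch that loads from a file in batches of 2000 rows per transaction (the file path, table name, and delimiters here are hypothetical — adjust them to your data):

```sql
-- Hypothetical file/table names; BATCHSIZE commits every 2000 rows.
BULK INSERT dbo.desttable
FROM 'C:\data\rows.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    BATCHSIZE = 2000
);
```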
|
Insert a fixed number of rows 2000 at a time in sql server
|
[
"",
"sql",
"sql-server",
""
] |
In my free time I started writing a small multiplayer game with a database backend. I was looking to separate player login information from other in game information (inventory, stats, and status) and a friend brought up this might not be the best idea.
Would it be better to lump everything together in one table?
|
Start by ignoring performance, and just create the most logical and easy-to-understand database design that you can. If players and game objects (swords? chess pieces?) are separate things conceptually, then put them in separate tables. If a player can carry things, you put a foreign key in the "things" table that references the "players" table. And so on.
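A sketch of that design in SQL (the table and column names are made up for illustration):

```sql
-- Players and the things they carry; the foreign key lives on "things".
CREATE TABLE players (
    id   INT PRIMARY KEY,
    name VARCHAR(64) NOT NULL
);

CREATE TABLE things (
    id       INT PRIMARY KEY,
    name     VARCHAR(64) NOT NULL,
    owner_id INT NULL,  -- NULL = not carried by anyone
    FOREIGN KEY (owner_id) REFERENCES players(id)
);
```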
Then, when you have hundreds of players and thousands of things, and the players run around in the game and do things that require database searches, well, your design will *still* be fast enough, if you just add the appropriate indexes.
Of course, if you plan for thousands of simultaneous players, each of them inventorying his things every second, or perhaps some other enormous load of database searches (a search for each frame rendered by the graphics engine?) then you will need to think about performance. But in that case, a typical disk-based relational database will not be fast enough anyway, no matter how you design your tables.
|
Relational databases work on sets; a query returns zero ("not found") or more results. A relational database is not designed around queries that always deliver exactly one result.
That said, there is real life, too ;-) . A one-to-one relation *might* make sense, e.g. if the column count of a table reaches a critical level it can be faster to split that table. But those conditions really are rare.
If you do not really need to split the table, just don't do it. At least in your case I assume you would have to select over two tables to get all the data. That is probably slower than storing everything in one table, and your programming logic will get at least a little less readable by doing so.
**Edith says**: Commenting on normalization: Well, I never saw a database normalized to its max, and no-one really recommends that as an option. Anyway, if you do not yet know exactly which columns will be normalized (i.e. put in other tables) later on, then it is easier to start with one table instead of a "table"-one-to-one-"other table" design.
|
One to One database relation?
|
[
"",
"sql",
"mysql",
"database",
"one-to-one",
""
] |
What is the best method for adding options to a `<select>` from a JavaScript object using jQuery?
I'm looking for something that I don't need a plugin to do, but I would also be interested in the plugins that are out there.
This is what I did:
```
selectValues = { "1": "test 1", "2": "test 2" };
for (var key in selectValues) {
  if (typeof selectValues[key] == 'string') {
    $('#mySelect').append('<option value="' + key + '">' + selectValues[key] + '</option>');
  }
}
```
**A clean/simple solution:**
This is a cleaned up and simplified [version of matdumsa's](https://stackoverflow.com/questions/170986/what-is-the-best-way-to-add-options-to-a-select-from-an-array-with-jquery/171007#171007):
```
$.each(selectValues, function(key, value) {
$('#mySelect')
.append($('<option>', { value : key })
.text(value));
});
```
Changes from matdumsa's: (1) removed the close tag for the option inside append() and (2) moved the properties/attributes into a map as the second parameter of append().
|
The same as other answers, in a jQuery fashion:
```
$.each(selectValues, function(key, value) {
$('#mySelect')
.append($("<option></option>")
.attr("value", key)
.text(value));
});
```
|
```
var output = [];
$.each(selectValues, function(key, value)
{
output.push('<option value="'+ key +'">'+ value +'</option>');
});
$('#mySelect').html(output.join(''));
```
In this way you "touch the DOM" only one time.
I'm not sure whether the last line could equally use `append(output.join(''))` instead of `html()`, because I don't know jQuery internals (maybe it does some parsing in the `html()` method).
|
What is the best way to add options to a select from a JavaScript object with jQuery?
|
[
"",
"javascript",
"jquery",
"html-select",
""
] |
We all know what virtual functions are in C++, but how are they implemented at a deep level?
Can the vtable be modified or even directly accessed at runtime?
Does the vtable exist for all classes, or only those that have at least one virtual function?
Do abstract classes simply have a NULL for the function pointer of at least one entry?
Does having a single virtual function slow down the whole class? Or only the call to the function that is virtual? And is the speed affected by whether the virtual function is actually overridden, or does this have no effect as long as it is virtual?
|
## How are virtual functions implemented at a deep level?
From ["Virtual Functions in C++"](http://wayback.archive.org/web/20100209040010/http://www.codersource.net/published/view/325/virtual_functions_in.aspx):
> Whenever a program has a virtual function declared, a v - table is constructed for the class. The v-table consists of addresses to the virtual functions for classes that contain one or more virtual functions. The object of the class containing the virtual function contains a virtual pointer that points to the base address of the virtual table in memory. Whenever there is a virtual function call, the v-table is used to resolve to the function address. An object of the class that contains one or more virtual functions contains a virtual pointer called the vptr at the very beginning of the object in the memory. Hence the size of the object in this case increases by the size of the pointer. This vptr contains the base address of the virtual table in memory. Note that virtual tables are class specific, i.e., there is only one virtual table for a class irrespective of the number of virtual functions it contains. This virtual table in turn contains the base addresses of one or more virtual functions of the class. At the time when a virtual function is called on an object, the vptr of that object provides the base address of the virtual table for that class in memory. This table is used to resolve the function call as it contains the addresses of all the virtual functions of that class. This is how dynamic binding is resolved during a virtual function call.
## Can the vtable be modified or even directly accessed at runtime?
Universally, I believe the answer is "no". You could do some memory mangling to find the vtable but you still wouldn't know what the function signature looks like to call it. Anything that you would want to achieve with this ability (that the language supports) should be possible without access to the vtable directly or modifying it at runtime. Also note, the C++ language spec **does not** specify that vtables are required - however that is how most compilers implement virtual functions.
## Does the vtable exist for all objects, or only those that have at least one virtual function?
I *believe* the answer here is "it depends on the implementation" since the spec doesn't require vtables in the first place. However, in practice, I believe all modern compilers only create a vtable if a class has at least 1 virtual function. There is a space overhead associated with the vtable and a time overhead associated with calling a virtual function vs a non-virtual function.
## Do abstract classes simply have a NULL for the function pointer of at least one entry?
The answer is it is unspecified by the language spec so it depends on the implementation. Calling the pure virtual function results in undefined behavior if it is not defined (which it usually isn't) (ISO/IEC 14882:2003 10.4-2). In practice it does allocate a slot in the vtable for the function but does not assign an address to it. This leaves the vtable incomplete which requires the derived classes to implement the function and complete the vtable. Some implementations do simply place a NULL pointer in the vtable entry; other implementations place a pointer to a dummy method that does something similar to an assertion.
Note that an abstract class can define an implementation for a pure virtual function, but that function can only be called with a qualified-id syntax (ie., fully specifying the class in the method name, similar to calling a base class method from a derived class). This is done to provide an easy to use default implementation, while still requiring that a derived class provide an override.
## Does having a single virtual function slow down the whole class or only the call to the function that is virtual?
This is getting to the edge of my knowledge, so someone please help me out here if I'm wrong!
I *believe* that only the functions that are virtual in the class experience the time performance hit related to calling a virtual function vs. a non-virtual function. The space overhead for the class is there either way. Note that if there is a vtable, there is only 1 per *class*, not one per *object*.
## Does the speed get affected if the virtual function is actually overridden or not, or does this have no effect so long as it is virtual?
I don't believe the execution time of a virtual function that is overridden decreases compared to calling the base virtual function. However, there is an additional space overhead for the class associated with defining another vtable for the derived class vs the base class.
## Additional Resources:
[http://www.codersource.net/published/view/325/virtual\_functions\_in.aspx](http://wayback.archive.org/web/20100209040010/http://www.codersource.net/published/view/325/virtual_functions_in.aspx) (via way back machine)
<http://en.wikipedia.org/wiki/Virtual_table>
<http://www.codesourcery.com/public/cxx-abi/abi.html#vtable>
|
* Can the vtable be modified or even directly accessed at runtime?
Not portably, but if you don't mind dirty tricks, sure!
> **WARNING**: This technique is not recommended for use by children, adults under the age of [969](https://en.wikipedia.org/wiki/Methuselah), or small furry creatures from Alpha Centauri. Side effects may include [demons which fly out of your nose](http://www.catb.org/jargon/html/N/nasal-demons.html), the abrupt appearance of [Yog-Sothoth](https://en.wikipedia.org/wiki/Yog-Sothoth) as a required approver on all subsequent code reviews, or the retroactive addition of [`IHuman::PlayPiano()`](http://unsongbook.com/interlude-%D7%90-the-cracks-in-the-sky/) to all existing instances.
In most compilers I've seen, the vtbl \* is the first 4 bytes of the object, and the vtbl contents are simply an array of member pointers there (generally in the order they were declared, with the base class's first). There are of course other possible layouts, but that's what I've generally observed.
```
class A {
public:
virtual int f1() = 0;
};
class B : public A {
public:
virtual int f1() { return 1; }
virtual int f2() { return 2; }
};
class C : public A {
public:
virtual int f1() { return -1; }
virtual int f2() { return -2; }
};
A *x = new B;
A *y = new C;
A *z = new C;
```
Now to pull some shenanigans...
Changing class at runtime:
```
std::swap(*(void **)x, *(void **)y);
// Now x is a C, and y is a B! Hope they used the same layout of members!
```
Replacing a method for all instances (monkeypatching a class)
This one's a little trickier, since the vtbl itself is probably in read-only memory.
```
int f3(A*) { return 0; }
mprotect(*(void **)x,8,PROT_READ|PROT_WRITE|PROT_EXEC);
// Or VirtualProtect on win32; this part's very OS-specific
(*(int (***)(A *))x)[0] = f3;
// Now C::f1() returns 0 (remember we made x into a C above)
// so x->f1() and z->f1() both return 0
```
The latter is rather likely to make virus-checkers and the like wake up and take notice, due to the mprotect manipulations. In a process using the NX bit it may well fail.
|
How are virtual functions and vtable implemented?
|
[
"",
"c++",
"polymorphism",
"virtual-functions",
"vtable",
""
] |
In C I could, for example, zero out bit #10 in a 32 bit unsigned value like so:
```
unsigned long value = 0xdeadbeef;
value &= ~(1<<10);
```
How do I do that in Python ?
|
Bitwise operations on Python ints work much like in C. The `&`, `|` and `^` operators in Python work just like in C. The `~` operator works as for a signed integer in C; that is, `~x` computes `-x-1`.
You have to be somewhat careful with left shifts, since Python integers aren't fixed-width. Use bit masks to obtain the low-order bits. For example, to get the equivalent of a left shift on a 32-bit integer, do `(x << 5) & 0xffffffff`.
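A quick sketch of the original C example translated, including the 32-bit mask:

```python
value = 0xdeadbeef

# Clear bit 10, exactly as in C:
value &= ~(1 << 10)
print(hex(value))  # 0xdeadbaef

# ~ on a Python int gives -x-1 (arbitrary precision, sign-extended),
# so mask with 0xffffffff wherever a 32-bit unsigned result is expected:
shifted = (value << 5) & 0xffffffff
print(hex(shifted))
```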
|
```
value = 0xdeadbeef
value &= ~(1<<10)
```
|
How do I manipulate bits in Python?
|
[
"",
"python",
""
] |
What is the best method for removing a table row with jQuery?
|
You're right:
```
$('#myTableRow').remove();
```
This works fine if your row has an `id`, such as:
```
<tr id="myTableRow"><td>blah</td></tr>
```
If you don't have an `id`, you can use any of jQuery's [plethora of selectors](http://docs.jquery.com/Selectors).
|
```
$('#myTable tr').click(function(){
$(this).remove();
return false;
});
```
Even a better one
```
$("#MyTable").on("click", "#DeleteButton", function() {
$(this).closest("tr").remove();
});
```
|
What is the best way to remove a table row with jQuery?
|
[
"",
"javascript",
"jquery",
"html-table",
""
] |
Assuming String a and b:
```
a += b
a = a.concat(b)
```
Under the hood, are they the same thing?
Here is concat decompiled as reference. I'd like to be able to decompile the `+` operator as well to see what that does.
```
public String concat(String s) {
int i = s.length();
if (i == 0) {
return this;
}
else {
char ac[] = new char[count + i];
getChars(0, count, ac, 0);
s.getChars(0, i, ac, count);
return new String(0, count + i, ac);
}
}
```
|
No, not quite.
Firstly, there's a slight difference in semantics. If `a` is `null`, then `a.concat(b)` throws a `NullPointerException` but `a+=b` will treat the original value of `a` as if it were `null`. Furthermore, the `concat()` method only accepts `String` values while the `+` operator will silently convert the argument to a String (using the `toString()` method for objects). So the `concat()` method is more strict in what it accepts.
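A small sketch of that semantic difference (the class name is mine):

```java
public class NullConcat {
    public static void main(String[] args) {
        String a = null;
        String b = "!";

        // '+' silently converts the null reference to the string "null".
        String viaPlus = a + b;
        System.out.println(viaPlus); // prints: null!

        // concat() dereferences the receiver, so a null 'a' throws.
        try {
            a.concat(b);
        } catch (NullPointerException e) {
            System.out.println("concat threw NPE");
        }
    }
}
```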
To look under the hood, write a simple class with `a += b;`
```
public class Concat {
String cat(String a, String b) {
a += b;
return a;
}
}
```
Now disassemble with `javap -c` (included in the Sun JDK). You should see a listing including:
```
java.lang.String cat(java.lang.String, java.lang.String);
Code:
0: new #2; //class java/lang/StringBuilder
3: dup
4: invokespecial #3; //Method java/lang/StringBuilder."<init>":()V
7: aload_1
8: invokevirtual #4; //Method java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
11: aload_2
12: invokevirtual #4; //Method java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
15: invokevirtual #5; //Method java/lang/StringBuilder.toString:()Ljava/lang/ String;
18: astore_1
19: aload_1
20: areturn
```
So, `a += b` is the equivalent of
```
a = new StringBuilder()
.append(a)
.append(b)
.toString();
```
For concatenating just two strings, the `concat` method should be faster. However, with more strings the `StringBuilder` approach wins, at least in terms of performance.
The source code of `String` and `StringBuilder` (and its package-private base class) is available in src.zip of the Sun JDK. You can see that you are building up a char array (resizing as necessary) and then throwing it away when you create the final `String`. In practice memory allocation is surprisingly fast.
**Update:** As Pawel Adamski notes, performance has changed in more recent HotSpot. `javac` still produces exactly the same code, but the JIT compiler cheats. Simple testing entirely fails because the entire body of code is thrown away. Summing `System.identityHashCode` (not `String.hashCode`) shows the `StringBuffer` code has a slight advantage. Subject to change when the next update is released, or if you use a different JVM. From [@lukaseder](https://twitter.com/lukaseder), [a list of HotSpot JVM intrinsics](https://gist.github.com/apangin/7a9b7062a4bd0cd41fcc).
|
[Niyaz](https://stackoverflow.com/a/47615/662581) is correct, but it's also worth noting that the special + operator can be converted into something more efficient by the Java compiler. Java has a StringBuilder class which represents a non-thread-safe, mutable String. When performing a bunch of String concatenations, the Java compiler silently converts
```
String a = b + c + d;
```
into
```
String a = new StringBuilder(b).append(c).append(d).toString();
```
which for large strings is significantly more efficient. As far as I know, this does not happen when you use the concat method.
However, the concat method is more efficient when concatenating an empty String onto an existing String. In this case, the JVM does not need to create a new String object and can simply return the existing one. See [the concat documentation](http://java.sun.com/javase/6/docs/api/java/lang/String.html#concat(java.lang.String)) to confirm this.
So if you're super-concerned about efficiency then you should use the concat method when concatenating possibly-empty Strings, and use + otherwise. However, the performance difference should be negligible and you probably shouldn't ever worry about this.
|
String concatenation: concat() vs "+" operator
|
[
"",
"java",
"string",
"concatenation",
""
] |
In a text file, there is a string "I don't like this".
However, when I read it into a string, it becomes "I don\xe2\x80\x98t like this". I understand that \u2018 is the unicode representation of "'". I use
```
f1 = open (file1, "r")
text = f1.read()
```
command to do the reading.
Now, is it possible to read the string in such a way that, when it is read into the string, it is "I don't like this" instead of "I don\xe2\x80\x98t like this"?
Second edit: I have seen some people use mapping to solve this problem, but really, is there no built-in conversion that does this kind of ANSI to unicode ( and vice versa) conversion?
|
Ref: <http://docs.python.org/howto/unicode>
*Reading Unicode from a file is therefore simple:*
```
import codecs
with codecs.open('unicode.rst', encoding='utf-8') as f:
for line in f:
print repr(line)
```
*It's also possible to open files in update mode, allowing both reading and writing:*
```
with codecs.open('test', encoding='utf-8', mode='w+') as f:
f.write(u'\u4500 blah blah blah\n')
f.seek(0)
print repr(f.readline()[:1])
```
**EDIT**: I'm assuming that your intended goal is just to be able to read the file properly into a string in Python. If you're trying to convert to an ASCII string from Unicode, then there's really no direct way to do so, since the Unicode characters won't necessarily exist in ASCII.
If you're trying to convert to an ASCII string, try one of the following:
1. Replace the specific unicode chars with ASCII equivalents, if you are only looking to handle a few special cases such as this particular example
2. Use the `unicodedata` module's `normalize()` and the `string.encode()` method to convert as best you can to the next closest ASCII equivalent (Ref <https://web.archive.org/web/20090228203858/http://techxplorer.com/2006/07/18/converting-unicode-to-ascii-using-python>):
```
>>> teststr
u'I don\xe2\x80\x98t like this'
>>> unicodedata.normalize('NFKD', teststr).encode('ascii', 'ignore')
'I donat like this'
```
|
It is also possible to read an encoded text file using the python 3 read method:
```
f = open('file.txt', 'r', encoding='utf-8')
text = f.read()
f.close()
```
With this variation, there is no need to import any additional libraries
|
Character reading from file in Python
|
[
"",
"python",
"unicode",
"encoding",
"ascii",
""
] |
Take the following two lines of code:
```
for (int i = 0; i < some_vector.size(); i++)
{
//do stuff
}
```
And this:
```
for (some_iterator = some_vector.begin(); some_iterator != some_vector.end();
some_iterator++)
{
//do stuff
}
```
I'm told that the second way is preferred. Why exactly is this?
|
The first form is efficient only if vector.size() is a fast operation. This is true for vectors, but not for lists, for example. Also, what are you planning to do within the body of the loop? If you plan on accessing the elements as in
```
T elem = some_vector[i];
```
then you're making the assumption that the container has `operator[](std::size_t)` defined. Again, this is true for vector but not for other containers.
The use of iterators bring you closer to **container independence**. You're not making assumptions about random-access ability or fast `size()` operation, only that the container has iterator capabilities.
You could enhance your code further by using standard algorithms. Depending on what it is you're trying to achieve, you may elect to use `std::for_each()`, `std::transform()` and so on. By using a standard algorithm rather than an explicit loop you're avoiding re-inventing the wheel. Your code is likely to be more efficient (given the right algorithm is chosen), correct and reusable.
|
It's part of the modern C++ indoctrination process. Iterators are the only way to iterate most containers, so you use it even with vectors just to get yourself into the proper mindset. Seriously, that's the only reason I do it - I don't think I've ever replaced a vector with a different kind of container.
---
Wow, this is still getting downvoted after three weeks. I guess it doesn't pay to be a little tongue-in-cheek.
I think the array index is more readable. It matches the syntax used in other languages, and the syntax used for old-fashioned C arrays. It's also less verbose. Efficiency should be a wash if your compiler is any good, and there are hardly any cases where it matters anyway.
Even so, I still find myself using iterators frequently with vectors. I believe the iterator is an important concept, so I promote it whenever I can.
|
Why use iterators instead of array indices?
|
[
"",
"c++",
"for-loop",
"stl",
"iterator",
"containers",
""
] |
I've got a C# class with a `Dispose` function via `IDisposable`. It's intended to be used inside a `using` block so the expensive resource it handles can be released right away.
The problem is that a bug occurred when an exception was thrown before `Dispose` was called, and the programmer neglected to use `using` or `finally`.
In C++, I never had to worry about this. The call to a class's destructor is automatically inserted at the end of the object's scope. The only way to avoid that happening would be to use the new operator and hold the object behind a pointer, but that requires extra work from the programmer and isn't something they would do by accident, like forgetting to use `using`.
Is there any way for a `using` block to be automatically used in C#?
Many thanks.
UPDATE:
I'd like to explain why I'm not accepting the finalizer answers. Those answers are technically correct in themselves, but they are not C++ style destructors.
Here's the bug I found, reduced to the essentials...
```
try
{
PleaseDisposeMe a = new PleaseDisposeMe();
throw new Exception();
a.Dispose();
}
catch (Exception ex)
{
Log(ex);
}
// This next call will throw a time-out exception unless the GC
// runs a.Dispose in time.
PleaseDisposeMe b = new PleaseDisposeMe();
```
Using `FXCop` is an excellent suggestion, but if that's my only answer, my question would have to become a plea to the C# people, or use C++. Twenty nested using statements anyone?
|
Unfortunately there isn't any way to do this directly in the code. If this is an issue in house, there are various code analysis solutions that could catch these sort of problems. Have you looked into FxCop? I think that this will catch these situations and in all cases where IDisposable objects might be left hanging. If it is a component that people are using outside of your organization and you can't require FxCop, then documentation is really your only recourse :).
Edit: In the case of finalizers, this doesn't really guarantee when the finalization will happen. So this may be a solution for you but it depends on the situation.
|
Where I work we use the following guidelines:
* Each IDisposable class **must** have a finalizer
* Whenever using an IDisposable object, it must be used inside a "using" block. The only exception is if the object is a member of another class, in which case the containing class must be IDisposable and must call the member's 'Dispose' method in its own implementation of 'Dispose'. This means 'Dispose' should never be called by the developer except for inside another 'Dispose' method, eliminating the bug described in the question.
* The code in each Finalizer must begin with a warning/error log notifying us that the finalizer has been called. This way you have an extremely good chance of spotting such bugs as described above before releasing the code, plus it might be a hint for bugs occurring in your system.
To make our lives easier, we also have a SafeDispose method in our infrastructure, which calls the Dispose method of its argument within a try-catch block (with error logging), just in case (although Dispose methods are not supposed to throw exceptions).
See also: [Chris Lyon](http://blogs.msdn.com/clyon/archive/2004/09/23/233464.aspx)'s suggestions regarding IDisposable
Edit:
@Quarrelsome: One thing you ought to do is call GC.SuppressFinalize inside 'Dispose', so that if the object was disposed, it wouldn't be "re-disposed".
It is also usually advisable to hold a flag indicating whether the object has already been disposed or not. The following pattern is usually pretty good:
```
class MyDisposable: IDisposable {
public void Dispose() {
lock(this) {
if (disposed) {
return;
}
disposed = true;
}
GC.SuppressFinalize(this);
// Do actual disposing here ...
}
private bool disposed = false;
}
```
Of course, locking is not always necessary, but if you're not sure if your class would be used in a multi-threaded environment or not, it is advisable to keep it.
|
How to do C++ style destructors in C#?
|
[
"",
"c#",
"dispose",
"idisposable",
"using",
""
] |
I'm using jQuery to add an additional row to a table as the last row.
I have done it this way:
```
$('#myTable').append('<tr><td>my data</td><td>more data</td></tr>');
```
Are there limitations to what you can add to a table like this (such as inputs, selects, number of rows)? Is there a different way to do it?
|
The approach you suggest is not guaranteed to give you the result you're looking for - what if you had a `tbody` for example:
```
<table id="myTable">
<tbody>
<tr>...</tr>
<tr>...</tr>
</tbody>
</table>
```
You would end up with the following:
```
<table id="myTable">
<tbody>
<tr>...</tr>
<tr>...</tr>
</tbody>
<tr>...</tr>
</table>
```
I would therefore recommend this approach instead:
```
$('#myTable tr:last').after('<tr>...</tr><tr>...</tr>');
```
You can include anything within the `after()` method as long as it's valid HTML, including multiple rows as per the example above.
**Update:** Revisiting this answer following recent activity with this question. eyelidlessness makes a good comment that there will always be a `tbody` in the DOM; this is true, but only if there is at least one row. If you have no rows, there will be no `tbody` unless you have specified one yourself.
DaRKoN\_ [suggests](https://stackoverflow.com/questions/171027/jquery-add-table-row/468240#468240) appending to the `tbody` rather than adding content after the last `tr`. This gets around the issue of having no rows, but still isn't bulletproof as you could theoretically have multiple `tbody` elements and the row would get added to each of them.
Weighing everything up, I'm not sure there is a single one-line solution that accounts for every single possible scenario. You will need to make sure the jQuery code tallies with your markup.
I think the safest solution is probably to ensure your `table` always includes at least one `tbody` in your markup, even if it has no rows. On this basis, you can use the following which will work however many rows you have (and also account for multiple `tbody` elements):
```
$('#myTable > tbody:last-child').append('<tr>...</tr><tr>...</tr>');
```
|
jQuery has a built-in facility to manipulate DOM elements on the fly.
**You can add anything to your table like this:**
```
$("#tableID").find('tbody')
    .append($('<tr>')
        .append($('<td>').text('Image cell')
            .append($('<img>')
                .attr('src', 'img.png')
            )
        )
    );
```
The `$('<some-tag>')` syntax in jQuery creates a new element object; you can get and set its attributes with `attr`, and its text content with `text`, i.e. the text between the tags: `<tag>text</tag>`.
*This is some pretty weird indenting, but it's easier for you to see what's going on in this example.*
|
Adding a table row in jQuery
|
[
"",
"javascript",
"jquery",
"html-table",
""
] |
## Question
Can I build an image database/library that has an e-commerce style checkout system and a powerful search in Oracle/Java? Are there existing elements out there I should be aware of? Or is this better in another dev environment like PHP/MySQL?
## Overview
I am working on an image database/library dev team on the UI side of things, but I can't help but feel the team is tracking a bit off course on this one.
Ordinarily, I would look to see this entirely within an open-source, license-free environment like a PHP/MySQL/Apache server environment, however strict IT approvals processes have limited the areas of exploration to Oracle as the DB component with Java then making the likely programming environment.
With that noted, I have never worked with either, and don't know how complicated it would be to put the following features together:
* keyword and metadata based search
* e-commerce style checkout system (no payment methods or infrastructure required)
* gallery groupings of images (edited by a central editor)
* tag cloud generation and search
* component to parse metadata from embedded metadata (in XMP) to the database
* web-based back end for database management, maintenance of controlled keyword lists and administration of content
I've uploaded a pdf of screenshots of the UI I've developed for the web side of things.
[UI Screenshots](http://www.chlorophylldesign.com/mambo/MMLPreviews%5bscreen%5d.pdf)
Sorry for the complicated Q, having a hard time with it myself.
|
Seems your question is really a struggle between Oracle/Java and PHP/MySQL. The details you state are none too difficult to implement using either of these tools sets or using a dozen others that I could think of.
If I am correct (only you could know), then this is a fabulous opportunity for you. You seem to have experience with the free/open tools, but none with the commercial/closed ones. Then take the opportunity to learn on your employer's dime.
|
Thanks **David** and **dacracot** for your insights.
Upon further review and consultation with (plus some leveraging of) our IT department, and considering that this app and DB would run on their own server, and that other areas of the corporation have apparently approved the use of open server technologies like PHP, MySQL and Apache, this will likely move to a PHP/MySQL build for a few reasons, not the least of them being the licensing costs of putting Oracle on a new server and the overall larger availability of PHP/MySQL developers in the local dev community.
I did adapt some of **dacracot's** advice however, and convinced admin to send me to some .NET training so I can better assist in managing our intranet. I will also keep the **Oracle Application Express** site in mind for further dev needs across our intranet.
Thanks for your help.
|
Image database build in Oracle - Is an e-commerce style interface workable?
|
[
"",
"java",
"oracle",
""
] |
I'd like to find a way to determine how long each function in PHP, and each file in PHP is taking to run. I've got an old legacy PHP application that I'm trying to find the "rough spots" in and so I'd like to locate which routines and pages are taking a very long time to load, objectively.
Are there any pre-made tools that allow for this, or am I stuck using microtime, and building my own profiling framework?
|
I actually did some optimisation work last week, and XDebug is indeed the way to go.
Just enable it as an extension (for some reason it wouldn't load via `zend_extension` on my Windows machine), set up your php.ini with `xdebug.profiler_enable_trigger=On`, and call your normal URLs with `XDEBUG_PROFILE=1` as either a GET or a POST variable to profile that very request. There's nothing easier!
Also, I can really recommend [webgrind](http://code.google.com/p/webgrind/), a web-based (PHP) Google Summer of Code project that can read and parse your profiler output files!
|
**Here is a nice tip.**
When you use XDebug to profile your PHP, set up the profiler\_trigger and use this in a bookmarklet to trigger the XDebug profiler ;)
```
javascript:if(document.URL.indexOf('XDEBUG_PROFILE')<1){var%20sep=document.URL.indexOf('?');sep%20=%20(sep<1)?'?':'&';window.location.href=document.URL+sep+'XDEBUG_PROFILE';}
```
|
Profiling PHP code
|
[
"",
"php",
"performance",
"profiling",
""
] |
When I use Ant's `<available>` task, the property is only set to true if the resource (say, a file) is available. If not, the property is undefined.
When I print the value of the property, it gives true if the resource was available, but otherwise just prints the property name.
Is there a way to set the property to some value if the resource is *not* available? I have tried setting the property explicitly before the `available` check, but then Ant complains:
```
[available] DEPRECATED - used to override an existing property.
[available] Build file should not reuse the same property name for different values.
```
|
You can use a condition in combination with not:
<http://ant.apache.org/manual/Tasks/condition.html>
```
<condition property="fooDoesNotExist">
<not>
        <available file="path/to/foo"/>
</not>
</condition>
```
|
```
<available file="/path/to/foo" property="foosThere" value="true"/>
<property name="foosThere" value="false"/>
```
The assignment of foosThere will only be successful if it has not already been set to true by your availability check.
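The reason this two-line pattern works is that Ant properties are write-once: the second `<property>` assignment is silently ignored when `<available>` has already set the name. The same set-if-unset semantics can be sketched in Python (as an analogy only, not Ant itself):

```python
# Analogy only: Ant properties are write-once, like dict.setdefault.
properties = {}

def ant_property(name, value):
    # Ant's <property> task silently skips the assignment
    # when the property has already been set.
    properties.setdefault(name, value)

# Resource exists: <available> fires first, so the fallback is ignored.
ant_property("foosThere", "true")   # simulates the <available> task
ant_property("foosThere", "false")  # the fallback <property>
assert properties["foosThere"] == "true"

# Resource missing: <available> sets nothing, so the fallback wins.
properties.clear()
ant_property("foosThere", "false")
assert properties["foosThere"] == "false"
```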
|
Ant and the available task - what if something is not available?
|
[
"",
"java",
"ant",
"build",
""
] |
I'm running a SQL query on SQL Server 2005, and in addition to 2 columns being queried from the database, I'd also like to return 1 column of random numbers along with them. I tried this:
```
select column1, column2, floor(rand() * 10000) as column3
from table1
```
Which kinda works, but the problem is that this query returns the same random number on every row. It's a different number each time you run the query, but it doesn't vary from row to row. How can I do this and get a new random number for each row?
|
I realize this is an older post... but you don't need a view.
```
select column1, column2,
ABS(CAST(CAST(NEWID() AS VARBINARY) AS int)) % 10000 as column3
from table1
```
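The reason this works where `RAND()` fails is that `NEWID()` is evaluated once per row, producing a fresh GUID each time, which the cast-and-modulo reduces to a number in a fixed range. A rough Python sketch of the same per-row idea, using hypothetical row data:

```python
import uuid

# Hypothetical table rows (column1, column2).
rows = [("a", 1), ("b", 2), ("c", 3)]

def with_random_column(rows, modulus=10000):
    # One fresh GUID per row, reduced to an int in [0, modulus) --
    # the same shape as ABS(CAST(CAST(NEWID() AS VARBINARY) AS int)) % 10000.
    return [(c1, c2, uuid.uuid4().int % modulus) for c1, c2 in rows]

result = with_random_column(rows)
assert all(0 <= r[2] < 10000 for r in result)
```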
|
**WARNING**
[Adam's answer](https://stackoverflow.com/questions/94906/how-do-i-return-random-numbers-as-a-column-in-sql-server-2005/94951#94951) involving the view is very inefficient and for very large sets can take out your database for quite a while, I would strongly recommend against using it on a regular basis or in situations where you need to populate large tables in production.
Instead you could use [this answer](https://stackoverflow.com/questions/94906/how-do-i-return-random-numbers-as-a-column-in-sql-server-2005/491502#491502).
Proof:
```
CREATE VIEW vRandNumber
AS
SELECT RAND() as RandNumber
go
CREATE FUNCTION RandNumber()
RETURNS float
AS
BEGIN
RETURN (SELECT RandNumber FROM vRandNumber)
END
go
create table bigtable(i int)
go
insert into bigtable
select top 100000 1 from sysobjects a
join sysobjects b on 1=1
go
select cast(dbo.RandNumber() * 10000 as integer) as r into #t from bigtable
-- CPU (1607) READS (204639) DURATION (1551)
go
select ABS(CAST(CAST(NEWID() AS VARBINARY) AS int)) % 10000 as r into #t1
from bigtable
-- Runs 15 times faster - CPU (78) READS (809) DURATION (99)
```
Profiler trace:
[alt text http://img519.imageshack.us/img519/8425/destroydbxu9.png](http://img519.imageshack.us/img519/8425/destroydbxu9.png)
This is proof that stuff is random enough for numbers between 0 and 9999
```
-- proof that stuff is random enough
select avg(r) from #t
-- 5004
select STDEV(r) from #t
-- 2895.1999
select avg(r) from #t1
-- 4992
select STDEV(r) from #t1
-- 2881.44
select r,count(r) from #t
group by r
-- 10000 rows returned
select r,count(r) from #t1
group by r
-- 10000 row returned
```
|
How do I return random numbers as a column in SQL Server 2005?
|
[
"",
"sql",
"sql-server",
"sql-server-2005",
"random",
""
] |
What tools do you use to find unused/dead code in large java projects? Our product has been in development for some years, and it is getting very hard to manually detect code that is no longer in use. We do however try to delete as much unused code as possible.
Suggestions for general strategies/techniques (other than specific tools) are also appreciated.
**Edit:** Note that we already use code coverage tools (Clover, IntelliJ), but these are of little help. Dead code still has unit tests, and shows up as covered. I guess an ideal tool would identify clusters of code which have very little other code depending on them, allowing for focused manual inspection.
|
I would instrument the running system to keep logs of code usage, and then start inspecting code that is not used for months or years.
For example if you are interested in unused classes, all classes could be instrumented to log when instances are created. And then a small script could compare these logs against the complete list of classes to find unused classes.
Of course, if you go at the method level you should keep performance in mind. For example, the methods could log only their first use. I don't know how this is best done in Java. We have done this in Smalltalk, which is a dynamic language and thus allows for code modification at runtime. We instrument all methods with a logging call and uninstall the logging code after a method has been logged for the first time, so after some time no more performance penalties occur. Maybe a similar thing can be done in Java with static boolean flags...
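As a rough illustration of the "log only the first use" idea in Python terms (the class, method, and registry names here are hypothetical):

```python
import functools

used = set()  # registry of methods that have been called at least once

def log_first_use(fn):
    """Record the first call to fn, then stay out of the way."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        if fn.__qualname__ not in used:  # cheap flag check after first call
            used.add(fn.__qualname__)    # "log" the first use
        return fn(*args, **kwargs)
    return wrapper

class Service:
    @log_first_use
    def ping(self):
        return "pong"

s = Service()
s.ping()
s.ping()
assert used == {"Service.ping"}  # logged exactly once
```

Diffing `used` against the full method list after a few months of runtime would surface never-called candidates for inspection.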
|
An Eclipse plugin that works reasonably well is [Unused Code Detector](http://www.ucdetector.org/).
It processes an entire project, or a specific file and shows various unused/dead code methods, as well as suggesting visibility changes (i.e. a public method that could be protected or private).
|
How to find unused/dead code in java projects
|
[
"",
"java",
"refactoring",
"dead-code",
""
] |
I have a table with game scores, allowing multiple rows per account id: `scores (id, score, accountid)`. I want a list of the top 10 scorer ids and their scores.
Can you provide an sql statement to select the top 10 scores, but only one score per account id?
Thanks!
|
First limit the selection to the highest score for each account id.
Then take the top ten scores.
```
SELECT TOP 10 AccountId, Score
FROM Scores s1
WHERE AccountId NOT IN
    (SELECT s2.AccountId FROM Scores s2
     WHERE s1.AccountId = s2.AccountId AND s1.Score < s2.Score)
ORDER BY Score DESC
```
|
```
select username, max(score) from usertable group by username order by max(score) desc limit 10;
```
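Both answers aim at the same two steps: collapse to the highest score per account, then take the top ten. A plain-Python sketch of that logic, with made-up rows:

```python
# Hypothetical (accountid, score) rows.
scores = [
    ("alice", 50), ("alice", 90),
    ("bob", 70), ("carol", 80), ("bob", 60),
]

def top_scorers(rows, n=10):
    best = {}
    for account, score in rows:  # GROUP BY accountid, MAX(score)
        if score > best.get(account, float("-inf")):
            best[account] = score
    # ORDER BY score DESC LIMIT n
    return sorted(best.items(), key=lambda kv: kv[1], reverse=True)[:n]

assert top_scorers(scores) == [("alice", 90), ("carol", 80), ("bob", 70)]
```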
|
Fetch one row per account id from list
|
[
"",
"sql",
"mysql",
"database",
""
] |
The Weblogic servers we are using have been configured to allow JNDI datasource names like "appds".
For development (localhost), we might be running Tomcat, and when declared in the `<Context>` element of server.xml, Tomcat will hang JNDI datasources on "java:comp/env/jdbc/\*" in the JNDI tree.
**Problem:** in Weblogic, the JNDI lookup is "appds" whilst in Tomcat, it seems that that I must provide the formal "java:comp/env/jdbc/appds". I'm afraid the Tomcat version is an implicit standard but unfortunately, I can't change Weblogic's config ... so that means we end up with two different spring config files (we're using spring 2.5) to facilitate the different environments.
Is there an elegant way to address this. Can I look JNDI names up directly in Tomcat? Can Spring take a name and look in both places? Google searches or suggestions would be great.
|
`JndiLocatorSupport` has a property `resourceRef`. When setting this true, "java:comp/env/" prefix will be prepended automatically. So I believe it would be correct to differentiate this parameter when moving from Tomcat to Weblogic.
|
**How to use a single JNDI name in your web app**
I've struggled with this for a few months myself. The best solution is to make your application portable so you have the same JNDI name in both Tomcat and Weblogic.
In order to do that, you change your `web.xml` and `spring-beans.xml` to point to a single jndi name, and provide a mapping to each vendor specific jndi name.
I've placed each file below.
You need:
* A `<resource-ref />` entry in web.xml for your app to use a single name
* A file `WEB-INF/weblogic.xml` to map your jndi name to the resource managed by WebLogic
* A file `META-INF/context.xml` to map your jndi name to the resource managed by Tomcat
+ This can be either in the Tomcat installation or in your app.
As a general rule, prefer to have your jndi names in your app like `jdbc/MyDataSource` and `jms/ConnFactory` and avoid prefixing them with `java:comp/env/`.
Also, data sources and connection factories are best managed by the container and used with JNDI. It's a [common mistake to instantiate database connection pools in your application](https://stackoverflow.com/questions/3111992/difference-between-configuring-data-source-in-persistence-xml-and-in-spring-confi/4138327#4138327).
*spring*
```
<?xml version="1.0" encoding="UTF-8" ?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:jee="http://www.springframework.org/schema/jee"
xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-3.0.xsd">
<jee:jndi-lookup jndi-name="jdbc/appds"
id="dataSource" />
</beans>
```
*web.xml*
```
<resource-ref>
<description>My data source</description>
<res-ref-name>jdbc/appds</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
<res-auth>Container</res-auth>
</resource-ref>
```
*weblogic.xml*
```
<?xml version="1.0" encoding="UTF-8" ?>
<weblogic-web-app
xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
    http://xmlns.oracle.com/weblogic/weblogic-web-app http://www.oracle.com/technology/weblogic/weblogic-web-app/1.1/weblogic-web-app.xsd">
<resource-description>
<jndi-name>appds</jndi-name>
<res-ref-name>jdbc/appds</res-ref-name>
</resource-description>
</weblogic-web-app>
```
*META-INF/context.xml* (for Tomcat)
```
<Context>
<ResourceLink global="jdbc/appds" name="jdbc/appds" type="javax.sql.DataSource"/>
</Context>
```
|
Tomcat vs Weblogic JNDI Lookup
|
[
"",
"java",
"tomcat",
"jakarta-ee",
"weblogic",
"jndi",
""
] |
I'm looking for a standard way to connect to databases in PHP. We've all been there - first start with some rudimentary code to connect/query/iterate/insert/disconnect, then the code grew as the program grew, and it ended up with a mess that's hardly reusable.
I know there are many PEAR, PECL, and other PHP libraries/classes out there that can fit my description - but which ones are maintained, used, and have proven to be bug-free and efficient?
|
I'm surprised *Zend\_Db* hasn't been mentioned yet...
* PEAR's **MDB2**: very stable, also provides a layer that implements all *MDB2*-supported features in all databases where it can at least be simulated. I've used this one for years with much success.
* Zend Framework's **Zend\_Db**: I've just started using the higher levels of Zend's entire DB infrastructure, but it seems to be quite stable and extremely well thought out.
* PHP5's native **PDO**: I've not used it at all, but I believe it is the simplest of all of these. In fact, both *MDB2* and *Zend\_Db* can use PDO as an underlying layer.
All of the above implement prepare and execute. Of the above, *MDB2* is the most mature, as it's been around for a long time and is based on *DB* and *MDB*. *Zend\_Db* appears to be the most well thought out. I know there are others, but I don't have experience or any knowledge about any of them.
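Since the deciding feature mentioned above is prepare/execute support, here is that pattern in miniature, sketched with Python's DB-API and an in-memory SQLite database standing in for MySQL/MSSQL/Oracle (not PHP, but the shape is identical in PDO/MDB2/Zend_Db):

```python
import sqlite3

# In-memory SQLite stands in for MySQL/MSSQL/Oracle here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")

# Parameterized (prepared) statements: the driver binds the values,
# so there is no string concatenation and no SQL injection.
conn.execute("INSERT INTO users (id, name) VALUES (?, ?)", (1, "alice"))
row = conn.execute("SELECT name FROM users WHERE id = ?", (1,)).fetchone()
assert row == ("alice",)
conn.close()
```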
|
if you're using PHP 5 [try out PDO](http://php.net/pdo)
|
Standard connection libraries for MySQL, MSSQL, and Oracle in PHP
|
[
"",
"php",
"database",
""
] |
I've been writing PHP web applications for some time, and have come across very nice Javascript frameworks, such as JQuery, ExtJS, Scriptaculous, etc. I can't say the same about the PHP side - I always coded that part of the client-server dialog from scratch.
I've used CodeIgniter (<http://codeigniter.com/>) and it is nice, but doesn't deal with AJAX as a whole - rather providing input checking, image manipulation, and some output helpers.
Is there a standard PHP library/class/framework out there that deals/integrates with Javascript frameworks? Something that can catch users' responses/requests, validate identity and input, provide progress status, keep track of sessions, be aware of asynchronous events, etc.
|
The [Zend Framework](http://framework.zend.com/) is integrated with [Dojo Toolkit](http://dojotoolkit.org/). I haven't used the latest Zend Framework yet, but I do know that in the past, it has proven to be reliable.
|
There might be one but I can't imagine why. An AJAX request looks and acts just like an HTTP request from the perspective of the server. You can get and set cookies. All the environment variables that you would expect from an HTTP request are there. All of the HTTP verbs work as do any of the header fields.
|
Integrating AJAX and PHP
|
[
"",
"php",
"ajax",
"frameworks",
"integration",
""
] |
I've got a class that I'm using as a settings class that is serialized into an XML file that administrators can then edit to change settings in the application. (The settings are a little more complex than the `App.config` allows for.)
I'm using the `XmlSerializer` class to deserialize the XML file, and I want it to be able to set the property class but I don't want other developers using the class/assembly to be able to set/change the property through code. Can I make this happen with the XmlSerializer class?
To add a few more details: This particular class is a Collection and according to FxCop the `XmlSerializer` class has special support for deserializing read-only collections, but I haven't been able to find any more information on it. The exact details on the rule this violates is:
> Properties that return collections should be read-only so that users cannot entirely replace the backing store. Users can still modify the contents of the collection by calling relevant methods on the collection. Note that the XmlSerializer class has special support for deserializing read-only collections. See the XmlSerializer overview for more information.
This is exactly what I want, but how do it do it?
**Edit:** OK, I think I'm going a little crazy here. In my case, all I had to do was initialize the Collection object in the constructor and then remove the property setter. Then the XmlSerializable object actually knows to use the Add/AddRange and indexer properties in the Collection object. The following actually works!
```
public class MySettings
{
private Collection<MySubSettings> _subSettings;
public MySettings()
{
_subSettings = new Collection<MySubSettings>();
}
public Collection<MySubSettings> SubSettings
{
get { return _subSettings; }
}
}
```
|
You have to use a mutable list type, like ArrayList (or IList IIRC).
|
I don't think you can use automatic serialization, since the property is read-only.
My course of action would be to implement the [ISerializable](http://msdn.microsoft.com/en-us/library/system.runtime.serialization.iserializable.aspx) interface and do it manually. You will be able to set the internal values from here.
However, if your sub-objects (that are exposed as read-only) can take care of serializing themselves, it should all just work.
I think the rule FxCop is moaning about is that you have something like:
```
public List<MyObject> Collection
{
get { return _collection; }
set { _collection = value; }
}
```
Is it not? If not, can you paste some code so I can see what exactly it is you are doing? There are several ways to do all of the above :)
|
How do I deserialize an XML file into a class with a read only property?
|
[
"",
"c#",
".net",
"xml",
"xml-serialization",
""
] |
I'm trying to perform a bitwise NOT in SQL Server. I'd like to do something like this:
```
update foo
set Sync = NOT @IsNew
```
Note: I started writing this and found out the answer to my own question before I finished. I still wanted to share with the community, since this piece of documentation was lacking on MSDN (until I added it to the Community Content there, too).
|
Yes, the ~ operator will work.
```
update foo
set Sync = ~@IsNew
```
|
**Bitwise NOT: ~**
Bitwise AND: &
Bitwise OR: |
Bitwise XOR: ^
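One subtlety worth noting: on SQL Server's one-bit `bit` type, `~` simply flips 0 and 1, but on a full-width integer `~` flips every bit (two's complement), so you typically mask back to the width you care about, or XOR a 0/1 flag with 1. A quick Python check of the arithmetic:

```python
is_new = 1

# On a full-width integer, ~ flips every bit (two's complement),
# so mask back down to the width you care about:
assert (~is_new) & 1 == 0
assert (~0) & 1 == 1

# For a 0/1 flag, XOR with 1 says the same thing more directly:
assert is_new ^ 1 == 0
assert 0 ^ 1 == 1

# Wider example, masked to 8 bits:
assert (~0b10110010) & 0xFF == 0b01001101
```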
|
How do I flip a bit in SQL Server?
|
[
"",
"sql",
"sql-server",
"bit-manipulation",
""
] |
Whilst refactoring some legacy C++ code I found that I could potentially remove some code duplication by somehow defining a variable that could point to any class method that shared the same signature. After a little digging, I found that I could do something like the following:
```
class MyClass
{
protected:
bool CaseMethod1( int abc, const std::string& str )
{
cout << "case 1:" << str;
return true;
}
bool CaseMethod2( int abc, const std::string& str )
{
cout << "case 2:" << str;
return true;
}
bool CaseMethod3( int abc, const std::string& str )
{
cout << "case 3:" << str;
return true;
}
public:
bool TestSwitch( int num )
{
bool ( MyClass::*CaseMethod )( int, const std::string& );
switch ( num )
{
case 1: CaseMethod = &MyClass::CaseMethod1;
break;
case 2: CaseMethod = &MyClass::CaseMethod2;
break;
case 3: CaseMethod = &MyClass::CaseMethod3;
break;
}
...
bool res = CaseMethod( 999, "hello world" );
...
    return res;
}
};
```
My question is - **is this the correct way to go about this?** Should I consider anything that *Boost* has to offer?
Edit...
Ok, my mistake - I should be calling the method like so:
```
bool res = ( (*this).*CaseMethod )( 999, "Hello World" );
```
|
What you have there is a pointer-to-member-function. It will solve your problem. I am surprised that your "TestSwitch" function compiles, as the calling syntax is slightly different to what you might expect. It should be:
```
bool res = (this->*CaseMethod)( 999, "hello world" );
```
However, you might find a combination of boost::function and boost::bind makes things a little easier, as you can avoid the bizarre calling syntax.
```
boost::function<bool(int,std::string)> f=
boost::bind(&MyClass::CaseMethod1,this,_1,_2);
```
Of course, this will bind it to the current `this` pointer: you can make the `this` pointer of the member function an explicit third parameter if you like:
```
boost::function<bool(MyClass*,int,std::string)> f=
boost::bind(&MyClass::CaseMethod1,_1,_2,_3);
```
Another alternative might be to use virtual functions and derived classes, but that might require major changes to your code.
|
You could also build a lookup (if your key range is reasonable) so that you end up writing:
```
(this->*Methods[num])( 999, "hello world" );
```
This removes the switch as well, and makes the cleanup a bit more worthwhile.
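For comparison, here is the lookup-table dispatch sketched in Python, where the member-pointer machinery reduces to plain method objects (an analogy, not a translation of the C++):

```python
class MyClass:
    def case1(self, abc, s): return f"case 1: {s}"
    def case2(self, abc, s): return f"case 2: {s}"
    def case3(self, abc, s): return f"case 3: {s}"

    def test_switch(self, num, abc, s):
        # Table of methods keyed by case number; self is supplied at
        # call time, much like (this->*Methods[num])(...) in the C++.
        methods = {1: MyClass.case1, 2: MyClass.case2, 3: MyClass.case3}
        return methods[num](self, abc, s)

m = MyClass()
assert m.test_switch(2, 999, "hello world") == "case 2: hello world"
```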
|
Pointers to C++ class methods
|
[
"",
"c++",
""
] |
I am using RRDTool (<http://oss.oetiker.ch/rrdtool/>) as a graphing back-end for storing performance metrics. This is done via the RRDTool CLI from a Python script.
My problem is that the script is multithreaded and each thread updates the RRD at a pretty rapid pace. Sometimes an update fails because one thread is accessing the RRD file while another one tries to access it also.
I was under the impression that this is OK to try since RRDTool uses its own locking mechanism, but I guess that isn't true.
Does anyone have a good approach for concurrent access to an RRD?
I can think of a few ways to go:
1. have 1 thread create a queue and only feed the RRD from a single thread.
2. create my own locking mechanism inside the Python script. (how would I do this?)
got anything better or have you run into this issue before?
|
An exclusive lock ought to be enough for this problem:
* [Python doc page](http://docs.python.org/lib/lock-objects.html)
* [Use example](http://effbot.org/zone/thread-synchronization.htm)
Define your lock object at the main level, not at the thread level, and you're done.
Edit in response to a comment:
if you define your lock (`lock = threading.Lock()`) at the thread level, you will have one lock object per running thread, and you really want a single lock for the file rrdtool updates, so this definition must be at the main level.
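A minimal sketch of that single shared lock; the actual `rrdtool update` invocation is replaced with a list append here, since the question's command line isn't shown:

```python
import threading

rrd_lock = threading.Lock()  # ONE lock, defined at the main/module level
updates = []                 # stand-in for the RRD file

def update_rrd(value):
    # Every worker funnels its update through the same lock, so only
    # one "rrdtool update" touches the file at a time.
    with rrd_lock:
        updates.append(value)  # replace with the real rrdtool call

threads = [threading.Thread(target=update_rrd, args=(i,)) for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert len(updates) == 20  # no update was lost to a concurrent write
```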
|
You could also try using rrdcached to do the updates. Then all write updates will be serialised through rrdcached. When you want to read the RRD to generate graphs you tell the daemon to flush it and the on-disk RRD will then represent the latest state.
All the RRD tools will do this transparently if pointed at the rrdcached daemon via an environment variable.
|
Concurrent Access to RRD (RRDTool)
|
[
"",
"python",
"rrdtool",
""
] |
I've implemented a basic search for a research project. I'm trying to make the search more efficient by building a [suffix tree](http://en.wikipedia.org/wiki/Suffix_tree). I'm interested in a C# implementation of the [Ukkonen](http://en.wikipedia.org/wiki/Ukkonen%27s_algorithm) algorith. I don't want to waste time rolling my own if such implementation exists.
|
Hi, I just finished implementing a .NET (C#) library containing different trie implementations. Among them:
* Classical trie
* Patricia trie
* Suffix trie
* A trie using **Ukkonen's** algorithm
I tried to make the source code easy to read. Usage is also very straightforward:
```
using Gma.DataStructures.StringSearch;
...
var trie = new UkkonenTrie<int>(3);
//var trie = new SuffixTrie<int>(3);
trie.Add("hello", 1);
trie.Add("world", 2);
trie.Add("hell", 3);
var result = trie.Retrieve("hel");
```
The library is well tested and also published as [TrieNet](https://www.nuget.org/packages/TrieNet/) NuGet package.
See [github.com/gmamaladze/trienet](https://github.com/gmamaladze/trienet)
|
Hard question. Here's the closest to match I could find: <http://www.codeproject.com/KB/recipes/ahocorasick.aspx>, which is an implementation of the Aho-Corasick string matching algorithm. Now, the algorithm uses a suffix-tree-like structure per: <http://en.wikipedia.org/wiki/Aho-Corasick_algorithm>
Now, if you want a prefix tree, this article claims to have an implementation for you: <http://www.codeproject.com/KB/recipes/prefixtree.aspx>
<*HUMOR*> Now that I did your homework, how about you mow my lawn. (Reference: <http://flyingmoose.org/tolksarc/homework.htm>) <*/HUMOR*>
*Edit*: I found a C# suffix tree implementation that was a port of a C++ one posted on a blog: <http://code.google.com/p/csharsuffixtree/source/browse/#svn/trunk/suffixtree>
*Edit*: There is a new project at Codeplex that is focused on suffix trees: <http://suffixtree.codeplex.com/>
|
Looking for the suffix tree implementation in C#?
|
[
"",
"c#",
"data-structures",
"suffix-tree",
""
] |
I am trying to implement position-sensitive zooming inside a `JScrollPane`. The `JScrollPane` contains a component with a customized `paint` that will draw itself inside whatever space it is allocated - so zooming is as easy as using a `MouseWheelListener` that resizes the inner component as required.
But I also want zooming into (or out of) a point to keep that point as central as possible within the resulting zoomed-in (or -out) view (this is what I refer to as 'position-sensitive' zooming), similar to how zooming works in Google Maps. I am sure this has been done many times before - does anybody know the "right" way to do it under Java Swing? Would it be better to play with `Graphics2D`'s transformations instead of using `JScrollPane`s?
Sample code follows:
```
package test;
import java.awt.*;
import java.awt.event.*;
import java.awt.geom.*;
import javax.swing.*;
public class FPanel extends javax.swing.JPanel {
private Dimension preferredSize = new Dimension(400, 400);
private Rectangle2D[] rects = new Rectangle2D[50];
public static void main(String[] args) {
JFrame jf = new JFrame("test");
jf.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
jf.setSize(400, 400);
jf.add(new JScrollPane(new FPanel()));
jf.setVisible(true);
}
public FPanel() {
// generate rectangles with pseudo-random coords
for (int i=0; i<rects.length; i++) {
rects[i] = new Rectangle2D.Double(
Math.random()*.8, Math.random()*.8,
Math.random()*.2, Math.random()*.2);
}
// mouse listener to detect scrollwheel events
addMouseWheelListener(new MouseWheelListener() {
public void mouseWheelMoved(MouseWheelEvent e) {
updatePreferredSize(e.getWheelRotation(), e.getPoint());
}
});
}
private void updatePreferredSize(int n, Point p) {
double d = (double) n * 1.08;
d = (n > 0) ? 1 / d : -d;
int w = (int) (getWidth() * d);
int h = (int) (getHeight() * d);
preferredSize.setSize(w, h);
getParent().doLayout();
// Question: how do I keep 'p' centered in the resulting view?
}
public Dimension getPreferredSize() {
return preferredSize;
}
private Rectangle2D r = new Rectangle2D.Float();
public void paint(Graphics g) {
super.paint(g);
g.setColor(Color.red);
int w = getWidth();
int h = getHeight();
for (Rectangle2D rect : rects) {
r.setRect(rect.getX() * w, rect.getY() * h,
rect.getWidth() * w, rect.getHeight() * h);
((Graphics2D)g).draw(r);
}
}
}
```
|
Tested this, seems to work...
```
private void updatePreferredSize(int n, Point p) {
double d = (double) n * 1.08;
d = (n > 0) ? 1 / d : -d;
int w = (int) (getWidth() * d);
int h = (int) (getHeight() * d);
preferredSize.setSize(w, h);
int offX = (int)(p.x * d) - p.x;
int offY = (int)(p.y * d) - p.y;
setLocation(getLocation().x-offX,getLocation().y-offY);
getParent().doLayout();
}
```
**Update**
Here is an explanation: the point `p` is the location of the mouse relative to the `FPanel`. Since you are scaling the size of the panel, the location of `p` (relative to the size of the panel) will scale by the same factor. By subtracting the current location from the scaled location, you get how much the point 'shifts' when the panel is resized. Then it is simply a matter of shifting the panel location in the scroll pane by the same amount in the opposite direction to put `p` back under the mouse cursor.
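The offset arithmetic can be verified numerically: after scaling by `d` and shifting the panel back by `p*d - p`, the point that was under the cursor lands at the same screen position. A quick Python check (pure arithmetic, no Swing; the values are arbitrary):

```python
def zoom_offset(p, d):
    # The panel scales by d, so a point at p moves to p * d;
    # shifting the panel back by (p * d - p) keeps p under the cursor.
    return p * d - p

p = 150.0        # cursor position relative to the panel
d = 1.08         # zoom factor
location = 0.0   # panel's current x inside the scroll pane

new_location = location - zoom_offset(p, d)
screen = new_location + p * d  # where the scaled point ends up on screen
assert abs(screen - (location + p)) < 1e-9  # same spot as before the zoom
```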
|
Here's a minor refactoring of @Kevin K's solution:
```
private void updatePreferredSize(int wheelRotation, Point stablePoint) {
double scaleFactor = findScaleFactor(wheelRotation);
scaleBy(scaleFactor);
Point offset = findOffset(stablePoint, scaleFactor);
offsetBy(offset);
getParent().doLayout();
}
private double findScaleFactor(int wheelRotation) {
double d = wheelRotation * 1.08;
return (d > 0) ? 1 / d : -d;
}
private void scaleBy(double scaleFactor) {
int w = (int) (getWidth() * scaleFactor);
int h = (int) (getHeight() * scaleFactor);
preferredSize.setSize(w, h);
}
private Point findOffset(Point stablePoint, double scaleFactor) {
int x = (int) (stablePoint.x * scaleFactor) - stablePoint.x;
int y = (int) (stablePoint.y * scaleFactor) - stablePoint.y;
return new Point(x, y);
}
private void offsetBy(Point offset) {
Point location = getLocation();
setLocation(location.x - offset.x, location.y - offset.y);
}
```
|
How do you implement position-sensitive zooming inside a JScrollPane?
|
[
"",
"java",
"swing",
"user-interface",
"zooming",
""
] |
I've always been able to allocate 1400 megabytes for Java SE running on 32-bit Windows XP (Java 1.4, 1.5 and 1.6).
```
java -Xmx1400m ...
```
Today I tried the same option on a new Windows XP machine using Java 1.5\_16 and 1.6.0\_07 and got the error:
```
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
```
Through trial and error it seems 1200 megabytes is the most I can allocate on this machine.
Any ideas why one machine would allow 1400 and another only 1200?
Edit: The machine has 4GB of RAM with about 3.5GB that Windows can recognize.
|
Keep in mind that Windows has virtual memory management and the JVM only needs memory that is contiguous *in its address space*. So, other programs running on the system shouldn't necessarily impact your heap size. What will get in your way are DLLs that get loaded into your address space. Unfortunately, optimizations in Windows that minimize the relocation of DLLs during linking make it more likely you'll have a fragmented address space. Things that are likely to cut into your address space aside from the usual stuff include security software, CBT software, spyware and other forms of malware. Likely causes of the variance are different security patches, C runtime versions, etc. Device drivers and other kernel bits have their own address space (the other 2GB of the 4GB 32-bit space).
You *could* go through the DLL bindings in your JVM process and try rebasing your DLLs into a more compact address space. Not fun, but if you are desperate...
Alternatively, you can just switch to 64-bit Windows and a 64-bit JVM. Despite what others have suggested, while it will chew up more RAM, you will have *much* more contiguous virtual address space, and allocating 2GB contiguously would be trivial.
|
This has to do with contiguous memory.
[Here's some info I found online](http://www.unixville.com/~moazam/) for somebody asking that before, supposedly from a "VM god":
> The reason we need a contiguous memory
> region for the heap is that we have a
> bunch of side data structures that are
> indexed by (scaled) offsets from the
> start of the heap. For example, we
> track object reference updates with a
> "card mark array" that has one byte
> for each 512 bytes of heap. When we
> store a reference in the heap we have
> to mark the corresponding byte in the
> card mark array. We right shift the
> destination address of the store and
> use that to index the card mark array.
> Fun addressing arithmetic games you
> can't do in Java that you get to (have
> to :-) play in C++.
>
> Usually we don't have trouble getting
> modest contiguous regions (up to about
> 1.5GB on Windohs, up to about 3.8GB on Solaris. YMMV.). On Windohs, the
> problem is mostly that there are some
> libraries that get loaded before the
> JVM starts up that break up the
> address space. Using the /3GB switch
> won't rebase those libraries, so they
> are still a problem for us.
>
> We know how to make chunked heaps, but
> there would be some overhead to using
> them. We have more requests for faster
> storage management than we do for
> larger heaps in the 32-bit JVM. If you
> really want large heaps, switch to the
> 64-bit JVM. We still need contiguous
> memory, but it's much easier to get in
> a 64-bit address space.
|
Java maximum memory on Windows XP
|
[
"",
"java",
"windows",
"memory",
""
] |
I need to make sure that user can run only one instance of my program at a time.
That means I have to check programmatically whether the same program is already running, and quit in that case.
The first thing that came to my mind was to create a file somewhere, when the program starts. Then, each other instance of the program would check for this file and exit if it found it.
The trouble is, that the program must always exit gracefully and be able to delete the file it created, for this to work.
In case of, say, power outage, the lock file remains in place and the program can't be started again.
To solve this, I decided to store the first program's process ID into the lock file and when another instance starts, it checks if the PID from the file is attached to some running process.
If the file doesn't exist, is empty, or the PID doesn't correspond to any existing process, the program continues to run and writes its own PID to the file.
This seems to work quite fine: even after an unexpected shutdown, the chance that the (now obsolete) process ID will be associated with some other program seems to be quite low.
But it still doesn't feel right (there *is* a chance of getting locked by some unrelated process) and working with process IDs seems to go beyond the standard C++ and probably isn't very portable either.
So, is there another (cleaner and more secure) way of doing this? Ideally one that would work with standard C++ (ISO C++98) on Windows and \*nix alike.
If it cannot be done platform-independently, Linux/Unix is a priority for me.
|
There are several methods you can use to accomplish only allowing one instance of your application:
**Method 1: Global synchronization object or memory**
It's usually done by creating a named global mutex or event. If it is already created, then you know the program is already running.
For example in windows you could do:
```
#define APPLICATION_INSTANCE_MUTEX_NAME "{BA49C45E-B29A-4359-A07C-51B65B5571AD}"
//Make sure at most one instance of the tool is running
HANDLE hMutexOneInstance(::CreateMutex( NULL, TRUE, APPLICATION_INSTANCE_MUTEX_NAME));
bool bAlreadyRunning((::GetLastError() == ERROR_ALREADY_EXISTS));
if (hMutexOneInstance == NULL || bAlreadyRunning)
{
if(hMutexOneInstance)
{
::ReleaseMutex(hMutexOneInstance);
::CloseHandle(hMutexOneInstance);
}
    throw std::runtime_error("The application is already running"); // std::exception has no string constructor in standard C++
}
```
**Method 2: Locking a file; if a second instance can't acquire the lock, the program is already running**
You could also exclusively open a file by locking it on application start. If the file is already exclusively opened and your application cannot obtain a file handle, that means the program is already running. On Windows you'd simply not specify the sharing flag `FILE_SHARE_WRITE` on the file you're opening with the `CreateFile` API. On Linux you'd use `flock`.
**Method 3: Search for process name:**
You could enumerate the active processes and search for one with your process name.
|
Your method of writing the process pid to a file is a common one that is used in many different established applications. In fact, if you look in your `/var/run` directory right now I bet you'll find several `*.pid` files already.
As you say, it's not 100% robust because there is chance of the pids getting confused. I have heard of programs using `flock()` to lock an application-specific file that will automatically be unlocked by the OS when the process exits, but this method is more platform-specific and less transparent.
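A minimal sketch of that `flock()` approach on Linux (the lock-file path is illustrative). The kernel releases the lock automatically when the process exits, even after a crash or power outage, so there are no stale lock files to clean up:

```cpp
#include <sys/file.h>   // flock
#include <fcntl.h>      // open, O_CREAT, O_RDWR
#include <unistd.h>

// Returns true if this process obtained the lock, i.e. no other
// instance is running. The descriptor is deliberately kept open for
// the lifetime of the process; closing it would release the lock.
bool acquire_single_instance_lock(const char* path)
{
    int fd = open(path, O_CREAT | O_RDWR, 0666);
    if (fd < 0)
        return false;                     // couldn't even open the file
    if (flock(fd, LOCK_EX | LOCK_NB) != 0)
        return false;                     // another instance holds the lock
    return true;
}
```

Unlike the PID-file scheme, nothing in the file's contents matters; the lock itself is the signal.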
|
How to block running two instances of the same program?
|
[
"",
"c++",
""
] |
I'm writing a log file viewer for a web application and I want to paginate through the lines of the log file. The items in the file are line based with the newest item at the bottom.
So I need a `tail()` method that can read `n` lines from the bottom and support an offset. This is what I came up with:
```
def tail(f, n, offset=0):
    """Reads n lines from f with an offset of offset lines."""
avg_line_length = 74
to_read = n + offset
while 1:
try:
f.seek(-(avg_line_length * to_read), 2)
except IOError:
# woops. apparently file is smaller than what we want
# to step back, go to the beginning instead
f.seek(0)
pos = f.tell()
lines = f.read().splitlines()
if len(lines) >= to_read or pos == 0:
return lines[-to_read:offset and -offset or None]
avg_line_length *= 1.3
```
Is this a reasonable approach? What is the recommended way to tail log files with offsets?
|
The code I ended up using. I think this is the best so far:
```
def tail(f, n, offset=None):
    """Reads n lines from f with an offset of offset lines. The return
value is a tuple in the form ``(lines, has_more)`` where `has_more` is
an indicator that is `True` if there are more lines in the file.
"""
avg_line_length = 74
to_read = n + (offset or 0)
while 1:
try:
f.seek(-(avg_line_length * to_read), 2)
except IOError:
# woops. apparently file is smaller than what we want
# to step back, go to the beginning instead
f.seek(0)
pos = f.tell()
lines = f.read().splitlines()
if len(lines) >= to_read or pos == 0:
return lines[-to_read:offset and -offset or None], \
len(lines) > to_read or pos > 0
avg_line_length *= 1.3
```
|
This may be quicker than yours. Makes no assumptions about line length. Backs through the file one block at a time till it's found the right number of '\n' characters.
```
def tail( f, lines=20 ):
total_lines_wanted = lines
BLOCK_SIZE = 1024
f.seek(0, 2)
block_end_byte = f.tell()
lines_to_go = total_lines_wanted
block_number = -1
blocks = [] # blocks of size BLOCK_SIZE, in reverse order starting
# from the end of the file
while lines_to_go > 0 and block_end_byte > 0:
if (block_end_byte - BLOCK_SIZE > 0):
# read the last block we haven't yet read
f.seek(block_number*BLOCK_SIZE, 2)
blocks.append(f.read(BLOCK_SIZE))
else:
            # file too small, start from beginning
f.seek(0,0)
# only read what was not read
blocks.append(f.read(block_end_byte))
lines_found = blocks[-1].count('\n')
lines_to_go -= lines_found
block_end_byte -= BLOCK_SIZE
block_number -= 1
all_read_text = ''.join(reversed(blocks))
return '\n'.join(all_read_text.splitlines()[-total_lines_wanted:])
```
I don't like tricky assumptions about line length when -- as a practical matter -- you can never know things like that.
Generally, this will locate the last 20 lines on the first or second pass through the loop. If your 74 character thing is actually accurate, you make the block size 2048 and you'll tail 20 lines almost immediately.
Also, I don't burn a lot of brain calories trying to finesse alignment with physical OS blocks. Using these high-level I/O packages, I doubt you'll see any performance consequence of trying to align on OS block boundaries. If you use lower-level I/O, then you might see a speedup.
---
**UPDATE**
*For Python 3.2 and up, open the file in binary mode and run the same process on bytes, because in text files (those opened without a **"b"** in the mode string) only seeks relative to the beginning of the file are allowed, the exception being a seek to the very file end with `seek(0, 2)`:*
eg: `f = open('C:/.../../apache_logs.txt', 'rb')`
```
def tail(f, lines=20):
total_lines_wanted = lines
BLOCK_SIZE = 1024
f.seek(0, 2)
block_end_byte = f.tell()
lines_to_go = total_lines_wanted
block_number = -1
blocks = []
while lines_to_go > 0 and block_end_byte > 0:
if (block_end_byte - BLOCK_SIZE > 0):
f.seek(block_number*BLOCK_SIZE, 2)
blocks.append(f.read(BLOCK_SIZE))
else:
f.seek(0,0)
blocks.append(f.read(block_end_byte))
lines_found = blocks[-1].count(b'\n')
lines_to_go -= lines_found
block_end_byte -= BLOCK_SIZE
block_number -= 1
all_read_text = b''.join(reversed(blocks))
return b'\n'.join(all_read_text.splitlines()[-total_lines_wanted:])
```
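A quick sanity check of the bytes-based version (the function is repeated here so the example is self-contained; the log path is illustrative):

```python
# Same function as above, repeated so this snippet runs on its own.
def tail(f, lines=20):
    total_lines_wanted = lines
    BLOCK_SIZE = 1024
    f.seek(0, 2)
    block_end_byte = f.tell()
    lines_to_go = total_lines_wanted
    block_number = -1
    blocks = []
    while lines_to_go > 0 and block_end_byte > 0:
        if block_end_byte - BLOCK_SIZE > 0:
            f.seek(block_number * BLOCK_SIZE, 2)
            blocks.append(f.read(BLOCK_SIZE))
        else:
            f.seek(0, 0)
            blocks.append(f.read(block_end_byte))
        lines_found = blocks[-1].count(b'\n')
        lines_to_go -= lines_found
        block_end_byte -= BLOCK_SIZE
        block_number -= 1
    all_read_text = b''.join(reversed(blocks))
    return b'\n'.join(all_read_text.splitlines()[-total_lines_wanted:])

# Write a small log, then read back its last three lines.
with open('/tmp/example.log', 'wb') as f:
    f.write(b''.join(b'line %d\n' % i for i in range(100)))

with open('/tmp/example.log', 'rb') as f:   # binary mode is required
    last_three = tail(f, lines=3)
```

Here `last_three` is `b'line 97\nline 98\nline 99'`.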
|
Get last n lines of a file, similar to tail
|
[
"",
"python",
"file",
"file-io",
"tail",
"logfile",
""
] |
What might be the best way to start programming using boost lambda libraries.
|
Remaining within the boundaries of the C++ language and libraries, I would suggest first getting used to programming with the STL algorithm function templates, as one of the most common uses you will have for boost::lambda is to replace functor classes with inline expressions.
The library documentation itself gives you an up-front example of what it is there for:
```
for_each(a.begin(), a.end(), std::cout << _1 << ' ');
```
where `std::cout << _1 << ' '` produces a function object that, when called, writes its first argument to the `cout` stream. This is something you could do with a custom functor class, `std::ostream_iterator` or an explicit loop, but boost::lambda wins in conciseness and probably clarity -- at least if you are used to the functional programming concepts.
When you (over-)use the STL, you find yourself gravitating towards boost::bind and boost::lambda. It comes in really handy for things like:
```
std::sort( c.begin(), c.end(), bind(&Foo::x, _1) < bind(&Foo::x, _2) );
```
Before you get to that point, not so much. So use STL algorithms, write your own functors and then translate them into inline expressions using boost::lambda.
From a professional standpoint, I believe the best way to get started with boost::lambda is to get usage of boost::bind understood and accepted. Use of placeholders in a boost::bind expression looks much less magical than "naked" boost::lambda placeholders and finds easier acceptance during code reviews. Going beyond basic boost::lambda use is quite likely to get you grief from your coworkers unless you are in a bleeding-edge C++ shop.
Try not to go overboard - there *are* times when and places where a `for`-loop *really* is the right solution.
|
Depends. Are you already well versed in functional programming concepts? If not, I'd suggest that you start in a language that is designed for functional programming instead of a procedural language with functional features stapled on. If you aren't used to coding in a functional style (it's not *harder* necessarily, but it's definitely different), then you'll spend more time fighting the syntax and not enough time learning to get stuff done.
As for where to start, I cut my functional teeth on Scheme, but there are plenty of good options.
|
c++ boost lambda libraries
|
[
"",
"c++",
"functional-programming",
"boost-lambda",
""
] |
I've executed a JDBC query to obtain a resultset. Before iterating over it, I'd like to quickly find out how many rows were returned. How can I do this with high performance?
I'm using Java 6, Oracle 11g, and the latest Oracle JDBC drivers.
|
You're going to have to do this as a separate query, for example:
```
SELECT COUNT(1) FROM table_name
```
Some JDBC drivers might tell you, but this is optional behaviour and, more to the point, the driver may not know yet. This can be due to how the query is optimised: e.g. two execution strategies in Oracle are to get all rows as quickly as possible or to get the first row as quickly as possible.
If you do two separate queries (one a count and the other the query) then you'll need to do them within the same transaction. This will work well on Oracle but can be problematic on other databases (e.g. SQL Server will either show you uncommitted data or block on an external uncommitted update depending on your isolation level, whereas Oracle supports an isolation level that gives you a transactionally consistent view of the data without blocking on external updates).
Normally though it doesn't really matter how many rows there are. Typically this sort of query is either batch processed or paged and either way you have progress information in the form of rows loaded/processed and you can detect the end of the result set (obviously).
|
```
ResultSet rs = stmt.executeQuery(sql);
int rowCount = rs.last() ? rs.getRow() : 0; // number of rows in the result set
rs.beforeFirst(); // don't forget to reset the cursor before iterating! :)
```
Note that `rs.last()` only works on a scrollable result set, i.e. the `Statement` must have been created with `ResultSet.TYPE_SCROLL_INSENSITIVE` (or `TYPE_SCROLL_SENSITIVE`).
|
How do I get the row count in JDBC?
|
[
"",
"java",
"jdbc",
"resultset",
""
] |
I've been battling PHP's email reading functions for the better part of two days. I'm writing a script to read emails from a mailbox and save any attachments onto the server. If you've ever done something similar, you might understand my pain: **PHP doesn't play well with email!**
I've connected to the POP3 server and I can iterate the files. Here's a rough outline of the code:
```
if (!$mbox = imap_open ("{myserver.com:110/pop3/notls}INBOX", "u", "p"))
die ('Cannot connect/check mail! Exiting');
if ($hdr = imap_check($mbox))
$msgCount = $hdr->Nmsgs;
else
die ("Failed to get mail");
$overview = imap_fetch_overview($mbox, "1:$msgCount", 0);
foreach ($overview as $message) {
$msgStruct = imap_fetchstructure($mbox, $message->msgno);
// if it has parts, there are attachments that need reading
if ($msgStruct->parts) {
foreach ($msgStruct->parts as $key => $part) {
switch (strtoupper($part->subtype)) {
case 'GIF': case 'JPEG':case 'PNG':
//do something - but what?!
break;
}
}
}
}
```
I've marked where I'm stuck. I can use `imap_fetchbody($mbox, $message->msgno, $key+1)` but that gets me a bunch of data like this:
```
/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAUDBAQEAwUEBAQFBQUGBwwIBwcHBw8LCwkMEQ8S
EhEPERETFhwXExQaFRERGCEYGh0dHx8fExciJCIeJBweHx7/2wBDAQUFBQcGBw4ICA4eFBEU
Hh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh7/wAAR
CAHiAi0DASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAA
AgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkK
FhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWG
h4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl
5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREA
...
```
I'm led to believe that this is MIME data. I need it as an image! I've seen several classes bombing around the internet that claim to do the required wizardry, but I can't get any of them to work. I don't understand why I'm finding this so hard!
**In short**, I'm looking for something that can turn a raw MIME string into real data.
|
I found a quick guide on handling email with PHP: [here](http://www.linuxscope.net/articles/mailAttachmentsPHP.html).
At the bottom of that page there's an attachment-body `echo`:
```
if (substr($ContentType,0,4) == "text") {
echo imap_qprint($fileContent);
} else {
echo imap_base64($fileContent);
}
```
I guess this is what you might need...
(edit: in your case if it's image always you can skip the `if` part. And of course, save the file instead of echoing it:)
|
That data is base64-encoded (the usual Content-Transfer-Encoding for MIME attachments), so I think you should be able to decode it using `base64_decode`.
|
How can I save email attachments to the server in PHP?
|
[
"",
"php",
"email",
"mime",
""
] |
Yesterday, I asked [this](https://stackoverflow.com/questions/119107/how-do-i-generate-a-list-of-n-unique-random-numbers-in-ruby) question and never really got an answer I was really happy with. I really would like to know how to generate a list of N unique random numbers using a functional language such as Ruby without having to be extremely imperative in style.
Since I didn't see anything I really liked, I've written the solution I was looking for in LINQ:
```
static void Main(string[] args)
{
var temp = from q in GetRandomNumbers(100).Distinct().Take(5) select q;
}
private static IEnumerable<int> GetRandomNumbers(int max)
{
Random r = new Random();
while (true)
{
yield return r.Next(max);
}
}
```
Can you translate my LINQ to Ruby? Python? Any other functional programming language?
**Note:** Please try not to use too many loops and conditionals - otherwise the solution is trivial. Also, I'd rather see a solution where you don't have to generate an array much bigger than N so you can then just remove the duplicates and trim it down to N.
I know I'm being picky, but I'd really like to see some elegant solutions to this problem.
Thanks!
**Edit:**
Why all the downvotes?
Originally my code sample had the Distinct() after the Take() which, as many pointed out, could leave me with an empty list. I've changed the order in which those methods are called to reflect what I meant in the first place.
**Apology:**
I've been told this post came across as rather snobbish. I wasn't trying to imply that LINQ is better than Ruby/Python; or that my solution is much better than everyone else's. My intent is just to learn how to do this (with certain constraints) in Ruby. I'm sorry if I came across as a jerk.
|
In Ruby:
```
a = (0..100).entries.sort_by {rand}.slice! 0, 5
```
**Update**: Here is a slightly different way:
```
a = (0...100).entries.sort_by{rand}[0...5]
```
**EDIT:**
and In Ruby 1.9 you can do this:
```
Array(0..100).sample(5)
```
|
```
>>> import random
>>> print random.sample(xrange(100), 5)
[61, 54, 91, 72, 85]
```
This should yield 5 unique values in the range 0 to 99. The `xrange` object generates values as requested, so no memory is used for values that aren't sampled.
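The LINQ pipeline in the question also translates almost directly into Python generators, if you want to keep the lazy infinite-stream shape rather than use `random.sample` (the helper names are illustrative):

```python
import random

def random_numbers(max_value):
    """Infinite stream of random integers in [0, max_value) -- the LINQ generator."""
    while True:
        yield random.randrange(max_value)

def take_distinct(iterable, n):
    """Lazily pull values until n unique ones have been yielded -- Distinct().Take(n)."""
    seen = set()
    for value in iterable:
        if value not in seen:
            seen.add(value)
            yield value
            if len(seen) == n:
                return

unique_five = list(take_distinct(random_numbers(100), 5))
```

Like the LINQ version, this never builds an array larger than the five results plus the `seen` set.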
|
How do I write this in Ruby/Python? Or, can you translate my LINQ to Ruby/Python?
|
[
"",
"python",
"ruby",
"linq",
"functional-programming",
""
] |
I need to be able to merge two (very simple) JavaScript objects at runtime. For example I'd like to:
```
var obj1 = { food: 'pizza', car: 'ford' }
var obj2 = { animal: 'dog' }
obj1.merge(obj2);
//obj1 now has three properties: food, car, and animal
```
Is there a built in way to do this? I do not need recursion, and I do not need to merge functions, just methods on flat objects.
|
**ECMAScript 2018 Standard Method**
You would use [object spread](https://github.com/tc39/proposal-object-rest-spread):
```
let merged = {...obj1, ...obj2};
```
`merged` is now the union of `obj1` and `obj2`. Properties in `obj2` will overwrite those in `obj1`.
```
/** There's no limit to the number of objects you can merge.
* Later properties overwrite earlier properties with the same name. */
const allRules = {...obj1, ...obj2, ...obj3};
```
Here is also the [MDN documentation](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax) for this syntax. If you're using babel you'll need the [@babel/plugin-proposal-object-rest-spread](https://babeljs.io/docs/en/babel-plugin-proposal-object-rest-spread) plugin for it to work (This plugin is included in `@babel/preset-env`, in [ES2018](https://github.com/tc39/proposals/blob/main/finished-proposals.md)).
**ECMAScript 2015 (ES6) Standard Method**
```
/* For the case in question, you would do: */
Object.assign(obj1, obj2);
/** There's no limit to the number of objects you can merge.
* All objects get merged into the first object.
* Only the object in the first argument is mutated and returned.
* Later properties overwrite earlier properties with the same name. */
const allRules = Object.assign({}, obj1, obj2, obj3, etc);
```
(see [MDN JavaScript Reference](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/assign#Browser_compatibility))
---
**Method for ES5 and Earlier**
```
for (var attrname in obj2) { obj1[attrname] = obj2[attrname]; }
```
Note that this will simply add all attributes of `obj2` to `obj1` which might not be what you want if you still want to use the unmodified `obj1`.
If you're using a framework that craps all over your prototypes then you have to get fancier with checks like `hasOwnProperty`, but that code will work for 99% of cases.
Example function:
```
/**
* Overwrites obj1's values with obj2's and adds obj2's if non existent in obj1
* @param obj1
* @param obj2
* @returns obj3 a new object based on obj1 and obj2
*/
function merge_options(obj1,obj2){
var obj3 = {};
for (var attrname in obj1) { obj3[attrname] = obj1[attrname]; }
for (var attrname in obj2) { obj3[attrname] = obj2[attrname]; }
return obj3;
}
```
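And a sketch of the `hasOwnProperty`-guarded variant mentioned above, for environments where `Object.prototype` has been extended (the function name is illustrative):

```javascript
function mergeOwn(obj1, obj2) {
    for (var attrname in obj2) {
        // Copy only obj2's own properties, skipping anything
        // inherited through the prototype chain.
        if (Object.prototype.hasOwnProperty.call(obj2, attrname)) {
            obj1[attrname] = obj2[attrname];
        }
    }
    return obj1;
}

var obj1 = { food: 'pizza', car: 'ford' };
mergeOwn(obj1, { animal: 'dog' });
// obj1 now has three properties: food, car, and animal
```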
---
Additionally, [check this program](https://jsfiddle.net/fmcajhsv/) to see the difference between `Object.assign` and spread syntax in object literals.
|
jQuery also has a utility for this: <http://api.jquery.com/jQuery.extend/>.
Taken from the jQuery documentation:
```
// Merge options object into settings object
var settings = { validate: false, limit: 5, name: "foo" };
var options = { validate: true, name: "bar" };
jQuery.extend(settings, options);
// Now the content of settings object is the following:
// { validate: true, limit: 5, name: "bar" }
```
The above code will mutate the **existing object** named *`settings`*.
---
If you want to create a **new object** without modifying either argument, use this:
```
var defaults = { validate: false, limit: 5, name: "foo" };
var options = { validate: true, name: "bar" };
/* Merge defaults and options, without modifying defaults */
var settings = $.extend({}, defaults, options);
// The content of settings variable is now the following:
// {validate: true, limit: 5, name: "bar"}
// The 'defaults' and 'options' variables remained the same.
```
|
How can I merge properties of two JavaScript objects?
|
[
"",
"javascript",
"javascript-objects",
""
] |
Does anyone have experience using makefiles for Visual Studio C++ builds (under VS 2005) as opposed to using the project/solution setup? For us, the way that projects/solutions work is not intuitive and leads to configuration explosion when you are trying to tweak builds with specific compile-time flags.
Under Unix, it's pretty easy to set up a makefile that has its default options overridden by user settings (or other configuration setting). But doing these types of things seems difficult in Visual Studio.
By way of example, we have a project that needs to be built for 3 different platforms. Each platform might have several configurations (for example debug, release, and several others). One of my goals on a newly formed project is to have a solution in which all platform builds can live together, which makes building and testing code changes easier since you aren't having to open 3 different solutions just to test your code. But Visual Studio will require 3 \* (number of base configurations) configurations, i.e. PC Debug, X360 Debug, PS3 Debug, etc.
It seems like a makefile solution is much better here. Wrapped with some basic batch files or scripts, it would be easy to keep the configuration explosion to a minimum and only maintain a small set of files for all of the different builds that we have to do.
However, I have no experience with makefiles under Visual Studio and would like to know if others have experiences or issues they can share.
Thanks.
(post edited to mention that these are C++ builds)
|
I've found some benefits to makefiles with large projects, mainly related to unifying the location of the project settings. It's somewhat easier to manage the list of source files, include paths, preprocessor defines and so on, if they're all in a makefile or other build config file. With multiple configurations, adding an include path means you need to make sure you update every config manually through Visual Studio's fiddly project properties, which can get pretty tedious as a project grows in size.
Projects which use a lot of custom build tools can be easier to manage too, such as if you need to compile pixel / vertex shaders, or code in other languages without native VS support.
You'll still need to have various different project configurations however, since you'll need to differentiate the invocation of the build tool for each config (e.g. passing in different command line options to make).
Immediate downsides that spring to mind:
* Slower builds: VS isn't particularly quick at invoking external tools, or even working out whether it needs to build a project in the first place.
* Awkward inter-project dependencies: It's fiddly to set up so that a dependee causes the base project to build, and fiddlier to make sure that they get built in the right order. I've had some success getting SCons to do this, but it's always a challenge to get working well.
* Loss of some useful IDE features: Edit & Continue being the main one!
In short, you'll spend less time managing your project configurations, but more time coaxing Visual Studio to work properly with it.
|
Visual Studio is built on top of [MSBuild](http://msdn.microsoft.com/en-us/library/0k6kkbsd.aspx) configuration files. You can consider \*proj and \*sln files as makefiles. They allow you to fully customize the build process.
|
Using Makefile instead of Solution/Project files under Visual Studio (2005)
|
[
"",
"c++",
"visual-studio",
"makefile",
""
] |
Has anyone ever heard of a UNIX shell written in a reasonable language, like Python?
|
* [Eshell](http://www.gnu.org/software/emacs/manual/html_node/eshell/index.html) is a Bash-like shell in Emacs Lisp.
* IPython can be [used as a system shell](http://ipython.org/ipython-doc/stable/interactive/shell.html), though the syntax is a bit weird (supporting all of Python plus basic sh constructs).
* [fish](http://fishshell.com/) has a core written in C, but much of its functionality is implemented in itself. Unlike many rare shells, it can be used as your login shell.
* [Hotwire](https://code.google.com/p/hotwire-shell/) deserves another mention. Its basic design appears to be "PowerShell in Python," but it also does some clever things with UI. The last release was in 2008.
* [Zoidberg](http://www.pardus.nl/projects/zoidberg/) is written in Perl and uses Perl syntax. A nice-looking project, shame it seems to have stalled.
* [Scsh](http://www.scsh.net/) would be a pain to use as a login shell (an example command from the docs: `(run/strings (find "." -name *.c -print))`), but it looks like a good "Perl in Scheme."
|
[iPython](http://ipython.scipy.org/moin/) (Python) and [Rush](http://rush.heroku.com/) (Ruby) are shells that are designed for more advanced languages. There's also Hotwire, which is sort of a weird integrated shell/terminal emulator.
|
UNIX shell written in a reasonable language?
|
[
"",
"python",
"unix",
"shell",
""
] |
Is there an easy way to marshal a PHP associative array to and from XML? For example, I have the following array:
```
$items = array("1", "2",
array(
        "item3.1" => "3.1",
        "item3.2" => "3.2",
        "isawesome" => true
)
);
```
How would I turn it into something similar to the following XML in as few lines as possible, then back again?
```
<items>
<item>1</item>
<item>2</item>
<item>
<item3_1>3.1</item3_1>
<item3_2>3.2</item3_2>
<isawesome>true</isawesome>
</item>
</items>
```
I don't really care if I have to change the array structure a bit or if the XML that comes out is different to the above example. I've been trying to work with PHP's [XMLReader](http://php.net/manual/en/book.xmlreader.php) and [XMLWriter](http://php.net/manual/en/book.xmlwriter.php), but the documentation is so poor and the code I've produced as a consequence looks nothing like what I feel it should look like:
```
$xml = SomeXMLWriter::writeArrayToXml($items);
$array = SomeXMLWriter::writeXmlToArray($xml);
```
Does it really have to be any harder than that to get a basic, raw XML dump of a PHP array without writing my own custom class?
I try to avoid PEAR. In addition to the configuration headaches I've had with it, I've never stuck with any of the packages I've ever used from it.
|
For those of you not using the PEAR packages but who have PHP5 installed, this worked for me:
```
/**
* Build A XML Data Set
*
* @param array $data Associative Array containing values to be parsed into an XML Data Set(s)
* @param string $startElement Root Opening Tag, default fx_request
* @param string $xml_version XML Version, default 1.0
* @param string $xml_encoding XML Encoding, default UTF-8
 * @return mixed Boolean false on failure, string XML result on success
*/
public function buildXMLData($data, $startElement = 'fx_request', $xml_version = '1.0', $xml_encoding = 'UTF-8') {
if(!is_array($data)) {
$err = 'Invalid variable type supplied, expected array not found on line '.__LINE__." in Class: ".__CLASS__." Method: ".__METHOD__;
trigger_error($err);
if($this->_debug) echo $err;
return false; //return false error occurred
}
$xml = new XmlWriter();
$xml->openMemory();
$xml->startDocument($xml_version, $xml_encoding);
$xml->startElement($startElement);
    // NB: this named inner function is declared globally the first time
    // buildXMLData() runs; calling the method twice triggers a fatal
    // "cannot redeclare" error. A closure or private method avoids that.
    /**
* Write XML as per Associative Array
* @param object $xml XMLWriter Object
* @param array $data Associative Data Array
*/
function write(XMLWriter $xml, $data) {
foreach($data as $key => $value) {
if(is_array($value)) {
$xml->startElement($key);
write($xml, $value);
$xml->endElement();
continue;
}
$xml->writeElement($key, $value);
}
}
write($xml, $data);
$xml->endElement();//write end element
//Return the XML results
return $xml->outputMemory(true);
}
```
|
[SimpleXML](http://php.net/simplexml) works great for your use.
|
Passing PHP associative arrays to and from XML
|
[
"",
"php",
"xml",
"xml-serialization",
""
] |
My app keeps track of the state of about 1000 objects. Those objects are read from and written to a persistent store (serialized) in no particular order.
Right now the app uses the registry to store each object's state. This is nice because:
* It is simple
* It is very fast
* Individual object's state can be read/written without needing to read some larger entity (like pulling out a snippet from a large XML file)
* There is a decent editor (RegEdit) which allow easily manipulating individual items
Having said that, I'm wondering if there is a better way. SQLite seems like a possibility, but you don't have the same level of multiple-reader/multiple-writer that you get with the registry, and no simple way to edit existing entries.
Any better suggestions? A bunch of flat files?
|
If you do begin to experiment with SQLite, you should know that "out of the box" it might not seem as fast as you would like, but it can quickly be made to be much faster by applying some established optimization tips:
[SQLite optimization](http://web.utk.edu/~jplyon/sqlite/SQLite_optimization_FAQ.html "sqlite speed tuning")
Depending on the size of the data and the amount of RAM available, one of the best performance gains will occur by setting sqlite to use an all-in-memory database rather than writing to disk.
For an in-memory database, pass `":memory:"` as the filename argument to `sqlite3_open` (a NULL or empty filename gives a private temporary on-disk database instead) and [make sure that TEMP\_STORE is defined appropriately](http://www.sqlite.org/pragma.html#pragma_temp_store)
**On the other hand**, if you tell sqlite to use the harddisk, then you will get a similar benefit to your current usage of RegEdit to manipulate the program's data "on the fly."
The way you could simulate your current RegEdit technique with sqlite would be to use the sqlite command-line tool to connect to the on-disk database. You can run UPDATE statements on the sql data from the command-line while your main program is running (and/or while it is paused in break mode).
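To make the in-memory point concrete, here is a minimal sketch using Python's built-in `sqlite3` bindings (purely for brevity; the same `":memory:"` filename works with the C API's `sqlite3_open`, and the table/column names are illustrative):

```python
import sqlite3

# ":memory:" gives a private, temporary database that lives entirely in RAM;
# it disappears when the connection is closed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE state (key TEXT PRIMARY KEY, value TEXT)")
conn.execute("INSERT INTO state VALUES (?, ?)", ("obj1", "active"))
conn.commit()

row = conn.execute("SELECT value FROM state WHERE key = ?", ("obj1",)).fetchone()
```

Because nothing touches the disk, reads and writes of individual object states stay close to registry speed while keeping full SQL query-ability.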
|
If what you mean by 'multiple-reader/multiple-writer' is that you keep a lot of threads writing to the store concurrently, SQLite is threadsafe (you can have concurrent SELECTs, and writes are serialized for you transparently). See the [FAQ][1] and grep for 'threadsafe'
[1]: <http://www.sqlite.org/faq.html> "FAQ"
|
Fastest small datastore on Windows
|
[
"",
"c++",
"windows",
"data-structures",
"caching",
"registry",
""
] |
We have a Java listener that reads text messages off of a queue in JBossMQ. If we have to reboot JBoss, the listener will not reconnect and start reading messages again. We just get messages in the listener's log file every 2 minutes saying it can't connect. Is there something we're not setting in our code or in JBossMQ? I'm new to JMS so any help will be greatly appreciated. Thanks.
|
You should implement `javax.jms.ExceptionListener` in your client code. You will need a method called `onException`. When the client's connection is lost, you should get a JMSException, and this method will be called automatically. The only thing you have to look out for is intentionally disconnecting from JBossMQ, since that will also throw an exception.
Some code might look like this:
```
public void onException (JMSException jsme)
{
if (!closeRequested)
{
this.disconnect();
this.establishConnection(connectionProps, queueName, uname, pword, clientID, messageSelector);
}
else
{
//Client requested close so do not try to reconnect
}
}
```
In your "establishConnection" code, you would then implement a `while(!initialized)` construct that contains a try/catch inside of it. Until you are sure you have connected and subscribed properly, stay inside the while loop catching all JMS/Naming/etc. exceptions.
We've used this method for years with JBossMQ and it works great. We have never had a problem with our JMS clients not reconnecting after bouncing JBossMQ or losing our network connection.
|
I'd highly recommend you use the [Spring abstractions for JMS such as the MessageListenerContainer](http://static.springframework.org/spring/docs/2.5.x/reference/jms.html#jms-mdp) to deal with reconnection, transactions and pooling for you. You just need to supply a MessageListener and configure the MessageListenerContainer with the ConnectionFactory and the container does the rest.
|
Reconnecting JMS listener to JBossMQ
|
[
"",
"java",
"jboss",
"jms",
"jbossmq",
""
] |
I'm generating a ton of XML that is to be passed to an API as a post variable when a user clicks on a form button. I also want to be able to show the user the XML beforehand.
The code is sorta like the following in structure:
```
<?php
$lots of = "php";
?>
<xml>
<morexml>
<?php
while(){
?>
<somegeneratedxml>
<?php } ?>
<lastofthexml>
<?php ?>
<html>
<pre>
The XML for the user to preview
</pre>
<form>
<input id="xml" value="theXMLagain" />
</form>
</html>
```
My XML is being generated with a few while loops and stuff. It then needs to be shown in the two places (the preview and the form value).
My question is: how do I capture the generated XML in a variable or whatever, so I only have to generate it once and then just print it out, as opposed to generating it inside the preview and then again inside the form value?
|
```
<?php ob_start(); ?>
<xml/>
<?php $xml = ob_get_clean(); ?>
<input value="<?php echo htmlspecialchars($xml); ?>" />
```
|
Put this at your start:
```
ob_start();
```
And to get the buffer back:
```
$value = ob_get_contents();
ob_end_clean();
```
See <https://www.php.net/manual/en/ref.outcontrol.php> and the individual functions for more information.
|
How do I capture PHP output into a variable?
|
[
"",
"php",
"xml",
""
] |
I'm trying to debug a deadlock in a multi-threaded Python application after it has locked up. Is there a way to attach a debugger to inspect the state of the process?
Edit: I'm attempting this on Linux, but it would be great if there were a cross-platform solution. It's Python after all :)
|
Yeah, gdb is good for lower level debugging.
You can change threads with the *thread* command.
e.g.:
```
(gdb) thr 2
[Switching to thread 2 (process 6159 thread 0x3f1b)]
(gdb) backtrace
....
```
You could also check out Python specific debuggers like [Winpdb](http://winpdb.org/about/), or [pydb](http://bashdb.sourceforge.net/pydb/). Both platform independent.
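As a pure-Python, cross-platform complement to gdb, you can dump the stack of every live thread from inside the process itself with `sys._current_frames()` — handy for spotting which locks a deadlocked thread is waiting on (the helper name below is just a sketch):

```python
import sys
import threading
import traceback

def dump_all_thread_stacks():
    """Return a formatted stack trace for every live thread in this process."""
    names = {t.ident: t.name for t in threading.enumerate()}
    parts = []
    for ident, frame in sys._current_frames().items():
        parts.append("Thread %s (%s):" % (ident, names.get(ident, "unknown")))
        parts.append("".join(traceback.format_stack(frame)))
    return "\n".join(parts)

report = dump_all_thread_stacks()
```

One common trick is to register this in a signal handler (e.g. SIGUSR1 on Linux) so you can ask a hung process for a stack report without attaching a debugger at all.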
|
Use [Winpdb](http://winpdb.org/). It is a **platform independent** graphical GPL Python debugger with support for remote debugging over a network, multiple threads, namespace modification, embedded debugging, encrypted communication and is up to 20 times faster than pdb.
Features:
* GPL license. Winpdb is Free Software.
* Compatible with CPython 2.3 through 2.6 and Python 3000
* Compatible with wxPython 2.6 through 2.8
* Platform independent, and tested on Ubuntu Gutsy and Windows XP.
* User Interfaces: rpdb2 is console based, while winpdb requires wxPython 2.6 or later.
[](https://i.stack.imgur.com/HKYjO.jpg)
(source: [winpdb.org](http://winpdb.org/images/screenshot_winpdb_small.jpg))
|
Is there a way to attach a debugger to a multi-threaded Python process?
|
[
"",
"python",
"debugging",
""
] |
Apparently `xrange` is faster, but I have no idea why (and no proof beyond the anecdotal so far that it is), or what else is different about
```
for i in range(0, 20):
for i in xrange(0, 20):
```
|
**In Python 2.x:**
* `range` creates a list, so if you do `range(1, 10000000)` it creates a list in memory with `9999999` elements.
* `xrange` is a sequence object that evaluates lazily.
**In Python 3:**
* `range` does the equivalent of Python 2's `xrange`. To get the list, you have to explicitly use `list(range(...))`.
* `xrange` no longer exists.
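In Python 3 the lazy behavior is easy to see directly: `range` supports membership tests, `len`, and indexing without ever materializing its elements.

```python
r = range(1, 10_000_000)

# No list of ~10 million ints is built; range computes these on demand.
assert 4_999_999 in r
assert len(r) == 9_999_999
assert r[0] == 1 and r[-1] == 9_999_999

# Materializing the elements is an explicit step:
first_five = list(range(1, 6))
```

This is why `xrange` (and Python 3's `range`) wins on memory for large ranges, while for small loops the difference is negligible.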
|
> range creates a list, so if you do `range(1, 10000000)` it creates a list in memory with `9999999` elements.
>
> `xrange` ~~is a generator, so it~~ is a sequence object ~~is a~~ that evaluates lazily.
This is true, but in Python 3, `range()` is implemented like the Python 2 `xrange()`. If you need to actually generate the list, you will need to do:
```
list(range(1,100))
```
|
What is the difference between range and xrange functions in Python 2.X?
|
[
"",
"python",
"loops",
"range",
"python-2.x",
"xrange",
""
] |
I'm trying to unit test a class that has many internal functions. These obviously need testing too, but my Tests project is separate, mainly because it covers many small, related projects. What I have so far is:
```
FieldInfo[] _fields =
typeof(ButtonedForm.TitleButton).GetFields(
BindingFlags.NonPublic | BindingFlags.Instance |
BindingFlags.DeclaredOnly);
Console.WriteLine("{0} fields:", _fields.Length);
foreach (FieldInfo fi in _fields)
{
Console.WriteLine(fi.Name);
}
```
This spits out all the private members nicely, but still doesn't display internals. I know this is possible, because when I was messing around with the autogenerated tests that Visual Studio can produce, it asked about something to do with displaying internals to the Test project. Well, now I'm using NUnit and really liking it, but how can I achieve the same thing with it?
|
It would be more appropriate to use the [`InternalsVisibleTo`](http://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.internalsvisibletoattribute.aspx) attribute to grant access to the internal members of the assembly to your unit test assembly.
Here is a link with some helpful additional info and a walk through:
* [The Wonders Of InternalsVisibleTo](https://jason.whitehorn.us/blog/2007/11/08/the-wonders-of-internalsvisibleto/)
To actually answer your question... Internal and protected are not recognized in the .NET Reflection API. Here is a quotation from [MSDN](http://msdn.microsoft.com/en-us/library/ms173183.aspx):
> The C# keywords protected and internal have no meaning in IL and are not used in the Reflection APIs. The corresponding terms in IL are Family and Assembly. To identify an internal method using Reflection, use the [IsAssembly](http://msdn.microsoft.com/en-us/library/system.reflection.methodbase.isassembly.aspx) property. To identify a protected internal method, use the [IsFamilyOrAssembly](http://msdn.microsoft.com/en-us/library/system.reflection.methodbase.isfamilyorassembly.aspx) property.
|
Adding the [InternalsVisibleTo](http://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.internalsvisibletoattribute.aspx) assembly-level attribute to your main project, with the assembly name of the test project, should make internal members visible.
For example add the following to your assembly outside any class:
```
[assembly: InternalsVisibleTo("AssemblyB")]
```
Or for more specific targeting:
```
[assembly:InternalsVisibleTo("AssemblyB, PublicKey=32ab4ba45e0a69a1")]
```
Note, if your application assembly has a strong name, your test assembly will also need to be strongly named.
|
Accessing internal members via System.Reflection?
|
[
"",
"c#",
".net",
"reflection",
"internal",
""
] |
Is there an easy method to store a person's user settings in a SQL Server 2000 database? Ideally all settings in one field, so I don't keep having to edit the table every time I add a setting. I am thinking along the lines of serializing a settings class, if anyone has an example.
The reason I don't want to use the built in .NET user settings stored in persistent storage is work uses super mandatory profiles so upon a users log off the settings are cleared which is a pain. I posted asking for any solutions to this previously but didn't get much of a response.
|
The VS designer keeps property settings in the [ApplicationSettingsBase](http://msdn.microsoft.com/en-us/library/system.configuration.applicationsettingsbase.aspx) class. By default, these properties are serialized/deserialized into a per user XML file. You can override this behavior by using a custom [SettingsProvider](http://msdn.microsoft.com/en-us/library/system.configuration.settingsprovider.aspx) which is where you can add your database functionality. Just add the `SettingsProvider` attribute to the VS generated `Settings` class:
```
[SettingsProvider(typeof(CustomSettingsProvider))]
internal sealed partial class Settings {
...
}
```
A good example of this is the [RegistrySettingsProvider](http://msdn.microsoft.com/en-us/library/ms181001.aspx).
I answered another similar question the same way [here](https://stackoverflow.com/questions/170825/how-to-serialize-systemconfigurationsettingsproperty#170932).
|
You can easily serialize classes in C#: <http://www.google.com/search?q=c%23+serializer>. You can either store the XML in a varchar field, or if you want to use the binary serializer, you can store it in an "image" datatype, which is really just binary.
|
c# store user settings in database
|
[
"",
"c#",
".net",
"database",
"repost",
""
] |
Anyone have a quick method for de-duplicating a generic List in C#?
|
Perhaps you should consider using a [HashSet](http://msdn.microsoft.com/en-us/library/bb359438.aspx).
From the MSDN link:
```
using System;
using System.Collections.Generic;
class Program
{
static void Main()
{
HashSet<int> evenNumbers = new HashSet<int>();
HashSet<int> oddNumbers = new HashSet<int>();
for (int i = 0; i < 5; i++)
{
// Populate numbers with just even numbers.
evenNumbers.Add(i * 2);
// Populate oddNumbers with just odd numbers.
oddNumbers.Add((i * 2) + 1);
}
Console.Write("evenNumbers contains {0} elements: ", evenNumbers.Count);
DisplaySet(evenNumbers);
Console.Write("oddNumbers contains {0} elements: ", oddNumbers.Count);
DisplaySet(oddNumbers);
// Create a new HashSet populated with even numbers.
HashSet<int> numbers = new HashSet<int>(evenNumbers);
Console.WriteLine("numbers UnionWith oddNumbers...");
numbers.UnionWith(oddNumbers);
Console.Write("numbers contains {0} elements: ", numbers.Count);
DisplaySet(numbers);
}
private static void DisplaySet(HashSet<int> set)
{
Console.Write("{");
foreach (int i in set)
{
Console.Write(" {0}", i);
}
Console.WriteLine(" }");
}
}
/* This example produces output similar to the following:
* evenNumbers contains 5 elements: { 0 2 4 6 8 }
* oddNumbers contains 5 elements: { 1 3 5 7 9 }
* numbers UnionWith oddNumbers...
* numbers contains 10 elements: { 0 2 4 6 8 1 3 5 7 9 }
*/
```
|
If you're using .NET 3.5+, you can use LINQ.
```
List<T> withDupes = LoadSomeData();
List<T> noDupes = withDupes.Distinct().ToList();
```
|
Remove duplicates from a List<T> in C#
|
[
"",
"c#",
"list",
"generics",
"duplicates",
""
] |
Using the Python Documentation I found the [HTML parser](http://docs.python.org/lib/module-HTMLParser.html) but I have no idea which library to import to use it, how do I find this out (bearing in mind it doesn't say on the page).
|
Try:
```
import HTMLParser
```
In Python 3.0, the `HTMLParser` module has been renamed to `html.parser`. You can read more about this [here](http://docs.python.org/library/htmlparser.html).
Python 3.0
```
import html.parser
```
Python 2.2 and above
```
import HTMLParser
```
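A minimal subclass sketch to show the parser actually doing something (using the Python 3 module name; the handler method names are the same in Python 2):

```python
from html.parser import HTMLParser  # "import HTMLParser" in Python 2

class TagCollector(HTMLParser):
    """Collect the names of all opening tags seen in the input."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

p = TagCollector()
p.feed("<html><body><p>hi</p></body></html>")
```

Override `handle_endtag`, `handle_data`, etc. in the same way to pull out whatever parts of the document you need.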
|
You probably really want [BeautifulSoup](https://stackoverflow.com/questions/55391/python-regular-expression-for-html-parsing-beautifulsoup#55424), check the link for an example.
But in any case
```
>>> import HTMLParser
>>> h = HTMLParser.HTMLParser()
>>> h.feed('<html></html>')
>>> h.get_starttag_text()
'<html>'
>>> h.close()
```
|
HTML parser in Python
|
[
"",
"python",
"import",
""
] |
I have a webapp development problem that I've developed one solution for, but am trying to find other ideas that might get around some performance issues I'm seeing.
Problem statement:
* a user enters several keywords/tokens
* the application searches for matches to the tokens
* need one result for each token
+ i.e., if an entry has 3 tokens, I need the entry id 3 times
* rank the results
+ assign X points for token match
+ sort the entry ids based on points
+ if point values are the same, use date to sort results
What I want to be able to do, but have not figured out, is to send one query that returns something akin to the results of an `in()`, but returns a duplicate entry id for each token matched, for each entry id checked.
Is there a better way to do this than what I'm doing, of using multiple, individual queries running one query per token? If so, what's the easiest way to implement those?
**edit**
I've already tokenized the entries, so, for example, "see spot run" has an entry id of 1, and three tokens, 'see', 'spot', 'run', and those are in a separate token table, with entry ids relevant to them so the table might look like this:
```
'see', 1
'spot', 1
'run', 1
'run', 2
'spot', 3
```
|
You could achieve this in one query using `UNION ALL` in MySQL.
Just loop through the tokens in PHP, creating a `UNION ALL` for each token.
For example, if the tokens are 'x', 'y' and 'z', your query may look something like this:
```
SELECT * FROM `entries`
WHERE token like "%x%" union all
SELECT * FROM `entries`
WHERE token like "%y%" union all
SELECT * FROM `entries`
WHERE token like "%z%" ORDER BY score etc...
```
The order clause should operate on the entire result set as one, which is what you need.
In terms of performance it won't be all that fast (I'm guessing); however, with databases the main overhead in terms of speed is often sending the query to the database engine from PHP and receiving the results. With this technique that happens only once instead of once per token, so performance will increase; I just don't know if it'll be enough.
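The loop described above amounts to string-building one `UNION ALL` branch per token. A sketch of that assembly step with bound parameters instead of interpolated values (the table and column names here are illustrative):

```python
tokens = ["x", "y", "z"]

# One identical branch per token; the token itself is passed as a
# bound parameter, which avoids SQL-injection issues with user input.
branch = "SELECT entry_id, 1 AS score FROM entries WHERE token LIKE ?"
sql = " UNION ALL ".join([branch] * len(tokens)) + " ORDER BY score DESC"
params = ["%" + t + "%" for t in tokens]
```

The `sql` string and `params` list would then be handed to the driver in a single round trip, which is where the performance gain over one-query-per-token comes from.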
|
I know this isn't strictly an answer to the question you're asking **but if your table is thousands rather than millions of rows**, then a FULLTEXT solution might be the best way to go here.
In MySQL when you use MATCH on your indexed column, each keyword you supply will be given a relevance score (calculated roughly by the number of times each keyword was mentioned) that will be more accurate than your method and certainly more efficient for multiple keywords.
See here:
<http://dev.mysql.com/doc/refman/5.0/en/fulltext-search.html>
|
How-to: Ranking Search Results
|
[
"",
"php",
"mysql",
"search",
""
] |
When should you use generator expressions and when should you use list comprehensions in Python?
```
# Generator expression
(x*2 for x in range(256))
# List comprehension
[x*2 for x in range(256)]
```
|
[John's answer](https://stackoverflow.com/a/47792/4518341) is good (that list comprehensions are better when you want to iterate over something multiple times). However, it's also worth noting that you should use a list if you want to use any of the list methods. For example, the following code won't work:
```
def gen():
return (something for something in get_some_stuff())
print gen()[:2] # generators don't support indexing or slicing
print [5,6] + gen() # generators can't be added to lists
```
Basically, use a generator expression if all you're doing is iterating once. If you want to store and use the generated results, then you're probably better off with a list comprehension.
Since performance is the most common reason to choose one over the other, my advice is to not worry about it and just pick one; if you find that your program is running too slowly, then and only then should you go back and worry about tuning your code.
|
Iterating over the *generator expression* or the *list comprehension* will do the same thing. However, the *list comprehension* will create the entire list in memory first while the *generator expression* will create the items on the fly, so you are able to use it for very large (and also infinite!) sequences.
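The single-pass nature of a generator expression, versus the reusability of a list comprehension, is easy to demonstrate:

```python
squares_list = [x * x for x in range(5)]   # built eagerly, all in memory, reusable
squares_gen = (x * x for x in range(5))    # built lazily, one pass only

assert sum(squares_list) == 30
assert sum(squares_list) == 30  # a list can be consumed again

assert sum(squares_gen) == 30
assert sum(squares_gen) == 0    # the generator is now exhausted
```

So: generator expression for one pass over a (possibly huge) sequence, list comprehension when you need the results more than once.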
|
Generator expressions vs. list comprehensions
|
[
"",
"python",
"list-comprehension",
"generator-expression",
""
] |
Using C#
How do you remove a specific node from an XMLDocument using XPATH?
|
XPath can only select nodes from a document, not modify the document.
|
If you want to delete nodes, that are not direct children of the documents root, you can do this:
```
XmlDocument doc = new XmlDocument();
// ... fill or load the XML Document
XmlNode childNode = doc.SelectSingleNode("/rootnode/childnode/etc"); // apply your xpath here
childNode.ParentNode.RemoveChild(childNode);
```
|
.NET : How do you remove a specific node from an XMLDocument using XPATH?
|
[
"",
"c#",
".net",
"xpath",
""
] |
I have an immutable class with some private fields that are set during the constructor execution. I want to unit test this constructor but I'm not sure the "best practice" in this case.
**Simple Example**
This class is defined in Assembly1:
```
public class Class2Test
{
private readonly string _StringProperty;
public Class2Test()
{
_StringProperty = ConfigurationManager.AppSettings["stringProperty"];
}
}
```
This class is defined in Assembly2:
```
[TestClass]
public class TestClass
{
[TestMethod]
public void Class2Test_Default_Constructor()
{
Class2Test x = new Class2Test();
//what do I assert to validate that the field was set properly?
}
}
```
**EDIT 1**: I have answered this question with a potential solution but I'm not sure if it's the "right way to go". So if you think you have a better idea please post it.
This example isn't really worth testing, but assume the constructor has some more complex logic. Is the best approach to avoid testing the constructor and to just assume it works if all the tests for the methods on the class work?
**EDIT 2**: Looks like I made the sample a little too simple. I have updated it with a more reasonable situation.
|
Nothing, unless you are using that field. You don't want over-specification via tests. In other words, there is no need to test that the assignment operator works.
If you are using that field in a method or something, call that method and assert on that.
Edit:
> assume the constructor has some more complex logic
You shouldn't be performing any logic in constructors.
Edit 2:
> ```
> public Class2Test()
> {
> _StringProperty = ConfigurationManager.AppSettings["stringProperty"];
> }
> ```
Don't do that! =) Your simple unit test has now become an integration test because it depends on the successful operation of more than one class. Write a class that handles configuration values. WebConfigSettingsReader could be the name, and it should encapsulate the `ConfigurationManager.AppSettings` call. Pass an instance of that SettingsReader class into the constructor of `Class2Test`. Then, in your *unit test*, you can mock your `WebConfigSettingsReader` and stub out a response to any calls you might make to it.
|
I have properly enabled `[InternalsVisibleTo]` on Assembly1 (code) so that there is a trust relationship with Assembly2 (tests).
```
public class Class2Test
{
private readonly string _StringProperty;
internal string StringProperty { get { return _StringProperty; } }
public Class2Test(string stringProperty)
{
_StringProperty = stringProperty;
}
}
```
Which allows me to assert this:
```
Assert.AreEqual(x.StringProperty, "something");
```
The only thing I don't really like about this is that it's not clear (without a comment) when you are just looking at `Class2Test` what the purpose of the internal property is.
Additional thoughts would be greatly appreciated.
|
How to unit test immutable class constructors?
|
[
"",
"c#",
"unit-testing",
""
] |
I basically have an xml column, and I need to find and replace one tag value in each record.
|
For anything real, I'd go with xpaths, but sometimes you just need a quick and dirty solution:
You can use CAST to turn that xml column into a regular varchar, and then do your normal replace.
`UPDATE xmlTable SET xmlCol = REPLACE( CAST( xmlCol as varchar(max) ), '[search]', '[replace]')`
That same technique also makes searching XML a snap when you need to just run a quick query to find something, and don't want to deal with xpaths.
`SELECT * FROM xmlTable WHERE CAST( xmlCol as varchar(max) ) LIKE '%found it!%'`
Edit: Just want to update this a bit: if you get a message along the lines of *Conversion of one or more characters from XML to target collation impossible*, then you only need to use nvarchar, which supports Unicode.
`CAST( xmlCol as nvarchar(max) )`
|
To find a content in an XML column, look into the exist() method, as described in MSDN [here](http://msdn.microsoft.com/en-us/library/ms189869(SQL.90).aspx).
```
SELECT * FROM Table
WHERE XMLColumn.exist('/Root/MyElement') = 1
```
...to replace, use the modify() method, as described [here](http://msdn.microsoft.com/en-us/library/ms190675(SQL.90).aspx).
```
SET XMLColumn.modify('
replace value of (/Root/MyElement/text())[1]
with "new value"
')
```
..all assuming SqlServer 2005 or 2008. This is based on XPath, which you'll need to know.
|
Can I do a find/replace in t-sql?
|
[
"",
"sql",
"xml",
"t-sql",
""
] |
What tools and techniques do you use to find dead code in .NET?
In the past, I've decorated methods with the Obsolete attribute (passing true so the compiler will issue an error, as described in [MSDN](http://msdn.microsoft.com/en-us/library/aa664623.aspx)).
I'd be interested in seeing the suggestions of others (beyond tools like FxCop or ReSharper). I want to make sure I'm not missing out on other tools that would be helpful.
|
TDD + NCover
|
Why do you need other answers? FxCop and Resharper do the trick, especially seeing as FxCop is now integrated into VS through "Code Analysis".
|
What tools and techniques do you use to find dead code?
|
[
"",
"c#",
".net",
"refactoring",
"code-analysis",
""
] |
Why is it so hard to do this in Java? If you want to have any kind of module system you need to be able to load JAR files dynamically. I'm told there's a way of doing it by writing your own `ClassLoader`, but that's a lot of work for something that should (in my mind at least) be as easy as calling a method with a JAR file as its argument.
Any suggestions for simple code that does this?
|
The reason it's hard is security. Classloaders are meant to be immutable; you shouldn't be able to willy-nilly add classes to it at runtime. I'm actually very surprised that works with the system classloader. Here's how you do it making your own child classloader:
```
URLClassLoader child = new URLClassLoader(
new URL[] {myJar.toURI().toURL()},
this.getClass().getClassLoader()
);
Class classToLoad = Class.forName("com.MyClass", true, child);
Method method = classToLoad.getDeclaredMethod("myMethod");
Object instance = classToLoad.newInstance();
Object result = method.invoke(instance);
```
Painful, but there it is.
|
The following solution is hackish, as it uses reflection to bypass encapsulation, but it works flawlessly:
```
File file = ...
URL url = file.toURI().toURL();
URLClassLoader classLoader = (URLClassLoader)ClassLoader.getSystemClassLoader();
Method method = URLClassLoader.class.getDeclaredMethod("addURL", URL.class);
method.setAccessible(true);
method.invoke(classLoader, url);
```
|
How to load JAR files dynamically at Runtime?
|
[
"",
"java",
"jar",
"classloader",
""
] |
I know that the #warning directive is not standard **C**/C++, but several compilers support it, including gcc/g++. But for those that don't support it, will they silently ignore it or will it result in a compile failure? In other words, can I safely use it in my project without breaking the build for compilers that don't support it?
|
It is likely that if a compiler doesn't support #warning, then it will issue an error. Unlike #pragma, there is no recommendation that the preprocessor ignore directives it doesn't understand.
Having said that, I've used compilers on various different (reasonably common) platforms and they have all supported #warning.
|
It should be noted that MSVC uses the syntax:
```
#pragma message ( "your warning text here" )
```
The usual #warning syntax generates a fatal error
```
C1021: invalid preprocessor command 'warning'
```
so it is not portable to those compilers.
|
Portability of #warning preprocessor directive
|
[
"",
"c++",
"compiler-construction",
"warnings",
"c-preprocessor",
"portability",
""
] |
Lexical analyzers are quite easy to write when you have regexes. Today I wanted to write a simple general analyzer in Python, and came up with:
```
import re
import sys
class Token(object):
""" A simple Token structure.
Contains the token type, value and position.
"""
def __init__(self, type, val, pos):
self.type = type
self.val = val
self.pos = pos
def __str__(self):
return '%s(%s) at %s' % (self.type, self.val, self.pos)
class LexerError(Exception):
""" Lexer error exception.
pos:
Position in the input line where the error occurred.
"""
def __init__(self, pos):
self.pos = pos
class Lexer(object):
""" A simple regex-based lexer/tokenizer.
See below for an example of usage.
"""
def __init__(self, rules, skip_whitespace=True):
""" Create a lexer.
rules:
A list of rules. Each rule is a `regex, type`
pair, where `regex` is the regular expression used
to recognize the token and `type` is the type
of the token to return when it's recognized.
skip_whitespace:
If True, whitespace (\s+) will be skipped and not
reported by the lexer. Otherwise, you have to
specify your rules for whitespace, or it will be
flagged as an error.
"""
self.rules = []
for regex, type in rules:
self.rules.append((re.compile(regex), type))
self.skip_whitespace = skip_whitespace
self.re_ws_skip = re.compile('\S')
def input(self, buf):
""" Initialize the lexer with a buffer as input.
"""
self.buf = buf
self.pos = 0
def token(self):
""" Return the next token (a Token object) found in the
input buffer. None is returned if the end of the
buffer was reached.
In case of a lexing error (the current chunk of the
buffer matches no rule), a LexerError is raised with
the position of the error.
"""
if self.pos >= len(self.buf):
return None
else:
if self.skip_whitespace:
m = self.re_ws_skip.search(self.buf[self.pos:])
if m:
self.pos += m.start()
else:
return None
for token_regex, token_type in self.rules:
m = token_regex.match(self.buf[self.pos:])
if m:
value = self.buf[self.pos + m.start():self.pos + m.end()]
tok = Token(token_type, value, self.pos)
self.pos += m.end()
return tok
# if we're here, no rule matched
raise LexerError(self.pos)
def tokens(self):
""" Returns an iterator to the tokens found in the buffer.
"""
while 1:
tok = self.token()
if tok is None: break
yield tok
if __name__ == '__main__':
rules = [
('\d+', 'NUMBER'),
('[a-zA-Z_]\w+', 'IDENTIFIER'),
('\+', 'PLUS'),
('\-', 'MINUS'),
('\*', 'MULTIPLY'),
('\/', 'DIVIDE'),
('\(', 'LP'),
('\)', 'RP'),
('=', 'EQUALS'),
]
lx = Lexer(rules, skip_whitespace=True)
lx.input('erw = _abc + 12*(R4-623902) ')
try:
for tok in lx.tokens():
print tok
except LexerError, err:
print 'LexerError at position', err.pos
```
It works just fine, but I'm a bit worried that it's too inefficient. Are there any regex tricks that will allow me to write it in a more efficient/elegant way?
Specifically, is there a way to avoid looping over all the regex rules linearly to find one that fits?
|
You can merge all your regexes into one using the "|" operator and let the regex library do the work of discerning between tokens. Some care should be taken to ensure the preference of tokens (for example to avoid matching a keyword as an identifier).
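A sketch of the merged-regex approach using named groups, so `Match.lastgroup` tells you which rule fired (the rule set here is a trimmed-down, illustrative version of the one in the question):

```python
import re

# One alternative per rule; the group name carries the token type.
# Order matters: for equal-length matches, the earlier alternative wins,
# so put keywords before the general identifier rule if you have any.
token_spec = [
    ("NUMBER", r"\d+"),
    ("ID",     r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=()]"),
    ("SKIP",   r"\s+"),
]
master = re.compile("|".join("(?P<%s>%s)" % (name, rx) for name, rx in token_spec))

def tokenize(text):
    pos = 0
    tokens = []
    while pos < len(text):
        m = master.match(text, pos)
        if not m:
            raise ValueError("no rule matches at position %d" % pos)
        if m.lastgroup != "SKIP":
            tokens.append((m.lastgroup, m.group()))
        pos = m.end()
    return tokens

toks = tokenize("x = 12*(y-3)")
```

Each input position is now tested against the compiled alternation once, in C, instead of looping over the rule list in Python.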
|
I suggest using the `re.Scanner` class; it's not documented in the standard library, but it's well worth using. Here's an example:
```
import re
scanner = re.Scanner([
(r"-?[0-9]+\.[0-9]+([eE]-?[0-9]+)?", lambda scanner, token: float(token)),
(r"-?[0-9]+", lambda scanner, token: int(token)),
(r" +", lambda scanner, token: None),
])
>>> scanner.scan("0 -1 4.5 7.8e3")[0]
[0, -1, 4.5, 7800.0]
```
|
Efficiently match multiple regexes in Python
|
[
"",
"python",
"regex",
"lexical-analysis",
""
] |
I am using REPLACE in an SQL view to remove the spaces from a property number. The function is set up like this: `REPLACE(pin, ' ', '')`. On the green screen the query looked fine. In anything else we get the hex values of the characters in the field. I am sure it is an encoding thing, but how do I fix it?
Here is the statement I used to create the view:
```
CREATE VIEW RLIC2GIS AS SELECT REPLACE(RCAPIN, ' ', '') AS
RCAPIN13 , RLICNO, RONAME, ROADR1, ROADR2, ROCITY, ROSTAT, ROZIP1,
ROZIP2, RGRID, RRADR1, RRADR2, RANAME, RAADR1, RAADR2, RACITY,
RASTAT, RAZIP1, RAZIP2, REGRES, RPENDI, RBLDGT, ROWNOC, RRCODE,
RROOMS, RUNITS, RTUNIT, RPAID, RAMTPD, RMDYPD, RRFUSE, RNUMCP,
RDATCP, RINSP, RCAUKY, RCAPIN, RAMTYR, RYREXP, RDELET, RVARIA,
RMDYIN, RDTLKI, ROPHN1, ROPHN2, ROCOM1, ROCOM2, RAPHN1, RAPHN2,
RACOM1, RACOM2, RNOTES FROM RLIC2
```
UPDATE: I posted the answer below.
|
We ended up using concat and substring to get the results we wanted.
```
CREATE VIEW RLIC2GIS AS
SELECT CONCAT(SUBSTR(RCAPIN,1,3),CONCAT(SUBSTR(RCAPIN,5,2),
CONCAT(SUBSTR(RCAPIN,8,2), CONCAT(SUBSTR(RCAPIN,11,3),
SUBSTR(RCAPIN, 15,3))))) AS CAPIN13, RLICNO, RONAME, ROADR1,
ROADR2, ROCITY, ROSTAT, ROZIP1, ROZIP2, RGRID, RRADR1, RRADR2,
RANAME, RAADR1, RAADR2, RACITY, RASTAT, RAZIP1, RAZIP2, REGRES,
RPENDI, RBLDGT, ROWNOC, RRCODE, RROOMS, RUNITS, RTUNIT, RPAID,
RAMTPD, RMDYPD, RRFUSE, RNUMCP, RDATCP, RINSP, RCAUKY, RCAPIN,
RAMTYR, RYREXP, RDELET, RVARIA, RMDYIN, RDTLKI, ROPHN1, ROPHN2,
ROCOM1, ROCOM2, RAPHN1, RAPHN2, RACOM1, RACOM2, RNOTES FROM RLIC2
```
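For reference, the `SUBSTR`/`CONCAT` chain above just extracts five fixed-position groups and glues them back together. Here is the same transformation sketched in Python (the function name and the 17-character PIN layout are assumptions read off the positions used in the view):

```python
def strip_separators(pin):
    # SQL SUBSTR positions are 1-based; Python slices are 0-based.
    # Keep the five data groups and drop the separator characters
    # at positions 4, 7, 10 and 14.
    return pin[0:3] + pin[4:6] + pin[7:9] + pin[10:13] + pin[14:17]
```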
|
The problem here might be that what you think is the blank character in that field is actually some other unprintable character.
You can use the following SQL to see what ASCII character is at the 4th position:
```
SELECT ascii(substr(RCAPIN,4,1))
FROM YOUR-TABLE
```
Then you would be able to use REPLACE for that character instead of the blank space (here assuming the character turned out to be code 9, a tab; note that REPLACE takes three arguments):
```
SELECT replace(RCAPIN, chr(9), '')
FROM YOUR-TABLE
```
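If you can pull a sample value out of the column, the same diagnosis works outside of SQL too. A minimal sketch in Python (`show_codes` is a hypothetical helper, not part of any library) that lists every character's code point and flags anything that is not printable ASCII:

```python
def show_codes(value):
    """Return (1-based position, character, code point, suspicious) tuples.
    Anything outside printable ASCII 33-126 is flagged -- including a real
    space, since a "blank" that is not a space is exactly what we suspect."""
    rows = []
    for i, ch in enumerate(value, start=1):
        code = ord(ch)
        rows.append((i, ch, code, not 32 < code < 127))
    return rows
```

Any row flagged `True` is a candidate for the character the original `REPLACE(pin, ' ', '')` was missing.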
|
Weird Output on SQL REPLACE
|
[
"",
"sql",
"ibm-midrange",
""
] |
In the Java collections framework, the Collection interface declares the following method:
> [`<T> T[] toArray(T[] a)`](http://java.sun.com/javase/6/docs/api/java/util/Collection.html#toArray(T%5B%5D))
>
> Returns an array containing all of the elements in this collection; the runtime type of the returned array is that of the specified array. If the collection fits in the specified array, it is returned therein. Otherwise, a new array is allocated with the runtime type of the specified array and the size of this collection.
If you wanted to implement this method, how would you create an array of the type of **a**, known only at runtime?
|
Use the static method
```
java.lang.reflect.Array.newInstance(Class<?> componentType, int length)
```
A tutorial on its use can be found here:
<http://java.sun.com/docs/books/tutorial/reflect/special/arrayInstance.html>
|
Here's how `ArrayList` does it:
```
public <T> T[] toArray(T[] a) {
if (a.length < size)
a = (T[])java.lang.reflect.Array.newInstance(a.getClass().getComponentType(), size);
System.arraycopy(elementData, 0, a, 0, size);
if (a.length > size)
a[size] = null;
return a;
}
```
|
How to instantiate a Java array given an array type at runtime?
|
[
"",
"java",
"arrays",
"collections",
""
] |
I've got a page with a normal form with a submit button, plus some jQuery that binds to the form's submit event, overrides it with `e.preventDefault()`, and runs an AJAX call instead. This works fine when the submit button is clicked, but when a link with `onclick='document.formName.submit();'` is clicked, the event is not caught by the AJAX submit handler. Any ideas why not, or how to get this working without binding to all the `a` elements?
|
A couple of suggestions:
* Overwrite the submit function to do your evil bidding
```
var oldSubmit = form.submit;
form.submit = function() {
  $(form).trigger("submit");
  oldSubmit.apply(form, arguments);
};
```
* Why not bind to all the `<a>` tags? Then you don't have to do any monkey patching, and it could be as simple as (assuming all the links are inside the form tag):
```
$("form a").click(function() {
$(this).parents().filter("form").trigger("submit");
});
```
|
If you are using jQuery, you should be attaching events via its own [event mechanism](http://docs.jquery.com/Events) and not by using "on" properties (`onclick` etc.). It also has its own event triggering method, aptly named '[trigger](http://docs.jquery.com/Events/trigger#typedata)', which you should use to activate the form submission event.
|
jQuery override form submit not working when submit called by javascript on a element
|
[
"",
"javascript",
"jquery",
""
] |
I'm trying to direct a browser to a different page. If I wanted a GET request, I might say
```
document.location.href = 'http://example.com/q=a';
```
But the resource I'm trying to access won't respond properly unless I use a POST request. If this were not dynamically generated, I might use the HTML
```
<form action="http://example.com/" method="POST">
<input type="hidden" name="q" value="a">
</form>
```
Then I would just submit the form from the DOM.
But really I would like JavaScript code that allows me to say
```
post_to_url('http://example.com/', {'q':'a'});
```
What's the best cross browser implementation?
I need a solution that changes the location of the browser, just like submitting a form. If this is possible with [XMLHttpRequest](http://en.wikipedia.org/wiki/XMLHttpRequest), it is not obvious. And this should not be asynchronous, nor use XML, so Ajax is not the answer.
|
## Dynamically create `<input>`s in a form and submit it
```
/**
* sends a request to the specified url from a form. this will change the window location.
* @param {string} path the path to send the post request to
 * @param {object} params the parameters to send as hidden form fields
* @param {string} [method=post] the method to use on the form
*/
function post(path, params, method='post') {
// The rest of this code assumes you are not using a library.
// It can be made less verbose if you use one.
const form = document.createElement('form');
form.method = method;
form.action = path;
for (const key in params) {
if (params.hasOwnProperty(key)) {
const hiddenField = document.createElement('input');
hiddenField.type = 'hidden';
hiddenField.name = key;
hiddenField.value = params[key];
form.appendChild(hiddenField);
}
}
document.body.appendChild(form);
form.submit();
}
```
Example:
```
post('/contact/', {name: 'Johnny Bravo'});
```
**EDIT**: Since this has gotten upvoted so much, I'm guessing people will be copy-pasting this a lot. So I added the `hasOwnProperty` check to fix any inadvertent bugs.
|
This would be a version of the selected answer using [jQuery](http://en.wikipedia.org/wiki/JQuery).
```
// Post to the provided URL with the specified parameters.
function post(path, parameters) {
var form = $('<form></form>');
form.attr("method", "post");
form.attr("action", path);
$.each(parameters, function(key, value) {
    var field = $('<input>');
field.attr("type", "hidden");
field.attr("name", key);
field.attr("value", value);
form.append(field);
});
// The form needs to be a part of the document in
// order for us to be able to submit it.
$(document.body).append(form);
form.submit();
}
```
|
JavaScript post request like a form submit
|
[
"",
"javascript",
"forms",
"post",
"submit",
""
] |