Q: Tool: ETL from an ODBC to SQL 05? I just want a simple tool that will help me quickly write scripts/packages that I can schedule to run. My transform requirements are pretty simple (changing column names, merging/splitting tables, and date conversions). I'll sacrifice flexibility for speed of development on this one.
Any come to mind?
A: I like the Pentaho Data Integration tool (also known as Kettle), although that might be a bit "too big" for you... but worth checking out!
A: SQL Server 2005? SSIS (though I wouldn't call it simple; then again, out of the list of things you gave, only the column renaming is really simple).
You can actually script package creation using C# (.NET/whatever, presumably even PowerShell).
Resources here and here
EDIT: (hope you don't mind me butting in here Cade) Check that your ODBC driver plays nicely with SSIS. Some drivers (DB2/400 and some versions of Sybase for example) don't work all that well.
A: At least with the older versions of SQL Server, they shipped with DTS. Not the simplest, but it does work with ODBC and SQL Server, and you may already have it.
A: Some SQL development tools, like Advanced Query Tool (AQT), let you do lightweight ETL and schedule with command files...
We have Informatica, but I used AQT to write from Oracle to files for a short-term request.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Typing generic values (C#) When I try this with a generic class where this.value is T:
if (this.value.GetType() == typeof(int))
{
    ((int)this.value)++;
}
else
{
    throw new InvalidOperationException
        ("T must be an int to perform this operation");
}
I get a compile-time error: "Cannot convert type 'T' to 'int'"
What should I do to perform an integral operation on this.value when it's an int?
Note that this is just an example. The code does type conversions with generics, and "int" is just an example of one type for T.
A: C#'s static typing will not let you do this, but you can fool it by casting through object. I would not recommend doing this; it probably indicates an architectural problem. But anyway:
using System;

class Foo<T>
{
    public T value;

    public void Increment()
    {
        if (value is int) value = (T)(object)(((int)(object)value) + 1);
    }
}

static class Program
{
    static void Main()
    {
        Foo<int> x = new Foo<int>();
        x.Increment();
        x.Increment();
        Console.WriteLine(x.value);
    }
}
A: Unfortunately, it is very hard to convince the compiler about specific T implementations. One (nasty) approach is to cast to object in the middle (note this will box and unbox value-types):
int i = (int)(object)this.value;
i++;
this.value = (T)(object)i;
Ugly but it works. In .NET 3.5 I have some better wrappers for generic arithmetic, here. The Operator class is part of MiscUtil; at the simplest level, I suspect AddAlternative would work very well:
this.value = Operator.AddAlternative(this.value, 1);
This should infer the implicit <T,int> automatically, or you can add them yourself:
this.value = Operator.AddAlternative<T,int>(this.value, 1);
Benefit: This is preferable to the original code as it doesn't actually care about the original T - it will work for any type (even your own) that supports "T +(T,int)".
I think there is also a ChangeType hiding around somewhere in there...
[edit] Collin K and others make a valid remark about the architectural implications - but being pragmatic there are times when the T really does matter that much... but I'd agree with avoiding this type of specialization unless really necessary. That said (as per my comment on Collin's post), the ability to perform things like basic arithmetic (increment, Int32 division, etc) on (for example) a Matrix<T> [for T in decimal/float/int/double/etc] is often highly valuable.
A: In my opinion, type-specific code in a generic class is a code smell. I would refactor it to get something like this:
public class MyClass<T>
{
    ...
}

public class IntClass : MyClass<int>
{
    public void IncrementMe()
    {
        this.value++;
    }
}
A: I don't think I understand what you are after. If you are requiring that something be a specific type, then you probably shouldn't use Generics. You could, it just seems silly. This will do what you are asking, but I don't recommend it.
namespace GenericsOne
{
    using System;

    class Program
    {
        static void Main(string[] args)
        {
            Sample<int> one = new Sample<int>();
            one.AddFive(10);

            // yes, this will fail; it is to show why the approach is generally not a good one
            Sample<DateTime> two = new Sample<DateTime>();
            two.AddFive(new DateTime());
        }
    }
}

namespace GenericsOne
{
    using System;

    public class Sample<T>
    {
        public int AddFive(T number)
        {
            int junk = 0;
            try
            {
                junk = Convert.ToInt32(number);
            }
            catch (Exception)
            {
                Console.WriteLine("Nope");
            }
            return junk + 5;
        }
    }
}
A: This is why I really want numeric or operation constraints in C#4.
@Marc Gravell's answer is the best way round this (+1), but it is frustrating that this is an issue for generics.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Does Objective-C compile to native code or byte-code? On OS X, does Objective-C compile to native code or byte-code?
Can Objective-C programs be compiled on Linux?
A: It's native.
There is GNUstep, which is an Open Source implementation of the NeXT OpenStep specification, written in Objective-C. Their implementation of the Cocoa API is not a direct match, so a direct compilation without porting is not possible.
A: Yes, Objective-C compiles to machine code.
Objective-C compilers exist for Linux, but Cocoa is an OS X-only technology. I've heard of an open replacement called GNUstep, but don't know much about it.
A: Objective-C is a variant of C. It compiles to native code.
A: Objective-C is compiled to native code by either GCC or LLVM[*]. You can compile ObjC programs on Linux (the generic GCC will happily support ObjC, though it uses a different runtime library than either of the Apple ones). For a cross-platform API similar to Cocoa (i.e. derived from Cocoa) which will happily work on Linux and let you port some code between OS X and Linux, check out GNUstep: http://www.gnustep.org
[*]In fact, LLVM internally compiles the Objective-C to an internal bitcode representation, then to code for the target machine, so perhaps the answer is "both"…
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: A platform for easy creation of custom websites? I'm in a small business of hosting my clients' websites, and a lot of times they ask me to create one from scratch. I'm no designer, but I can code CSS/HTML, AJAX, PHP. However, I'm not willing to create sites from scratch, knowing how much is involved.
In the past, I've tried using design templates, but they proved to be inefficient - customers would request changes and I'd be unable to assist them, since I didn't write the template. Customers themselves would not be able to make changes.
So, the question is: Is there web-based software that allows for easy creation of custom websites, with skins/layouts/templates? It should allow non-technical person to add content and make basic modifications. I've seen a few websites use Wordpress for that purpose, but don't know if it's a good choice.
A: If it has to be dead easy, then look at something like Joomla. If you want a bit more control, try Drupal. I think that both will suit you better than WordPress if my hunch is right.
A: No; Joomla is huge and heavy, not a reliable solution. I recommend Movable Type. It's written in Perl but has native support for multiple sites and multiple domains, which is kinda cool :) Also, you can try ExpressionEngine, which also has some nice features for multi-domain users with one setup.
Anyhow, the best solution is to build a custom solution for this, but of course, you won't be only a hosting company anymore :D
A: I think you are basically looking for a CMS. Now the problem is that there are so many CMSes out there that it can be overwhelming to pick one. Another thing I've noticed over the years is that, with the vast number of CMS systems to pick from, there is a lot of personal taste involved. To pick the one that suits you best you need to try out a bunch of them. I suggest you head over to CMSMatrix, browse through the assortment, and familiarize yourself with some of them.
(PS: Don't forget to try out Drupal)
A: I've used WordPress for a very simple site, which proved to be the right decision for it. You can check out that site at Chhobi.net. If you don't have many complex requirements, I think WordPress is not a bad idea. If you need to expand the site later on and add different functionality (like forums, a gallery, polls, etc.), you'd better go for a proper CMS solution like Joomla, Drupal, and the like.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I store an XML value in my .NET App.Config file I am trying to store an xml value in my app.config file.
The app.config does not like this and I cannot use the <![CDATA[ construct to ignore the XML'ness of my value.
Is there a way to do it?
Value example: <FieldRef Name='LinkfileName' Nullable='True'/><FieldRef Name='Web' Nullable='True'/>
A: You can save an XML document as text in an attribute value if you escape the character entities:
&lt;FieldRef Name='LinkfileName' ...
You can then use XmlDocument.Load() to parse the text value.
Note that this won't work for your example because your value is an XML document fragment and not a well-formed XML document. You either need to wrap it in an enclosing document element (whose markup will still be escaped) or use a properly-initialized XmlReader to process the value once you've retrieved it from the configuration.
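For example, the fragment could be wrapped in a single root element and stored escaped in appSettings; the key name and the enclosing <Fields> element below are made-up names for illustration, and the escaping is the important part:

```xml
<configuration>
  <appSettings>
    <!-- Escaped form of <Fields><FieldRef .../><FieldRef .../></Fields> -->
    <add key="fieldRefs"
         value="&lt;Fields&gt;&lt;FieldRef Name='LinkfileName' Nullable='True'/&gt;&lt;FieldRef Name='Web' Nullable='True'/&gt;&lt;/Fields&gt;" />
  </appSettings>
</configuration>
```

Reading the value back via ConfigurationManager.AppSettings returns the unescaped markup, which can then be handed to XmlDocument.LoadXml or an XmlReader.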
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Is there an agreed ideal schema for tagging I have a photo website and I want to support tags, as my original category bucketing is starting to fail (some pictures are both family and vacations, or both school and friends). Is there an agreed tagging DB schema?
I still want to support having photos as part of an album.
Right now I have a few tables:
Photos
* PhotoID
* PhotoAlbumID
* Caption
* Date

Photo Album
* AlbumID
* AlbumName
* AlbumDate
A: There are various schemas which are effective, each with their own performance implications for the common queries you'll need as the number of tagged items grows:
* http://howto.philippkeller.com/2005/04/24/Tags-Database-schemas/
* http://howto.philippkeller.com/2005/06/19/Tagsystems-performance-tests/
Personally, I like having a tag table and a link table which associates tags with items, as it's normalized (no duplication of tag names) and I can store additional information in the link table (such as when the item was tagged) when necessary.
You can also add some denormalised data if you're feeling frisky and want simple selects, at the cost of the additional data maintenance: storing usage counts in the tag table, or storing the tag names in the item table itself to avoid hitting the link table and tag table for each item. That's useful for displaying multiple items with all their tags, and for simple tag versioning... if you're into that sort of thing ;)
A: A quick note on how to handle tags:
Tagging systems can vary from very rigidly defined tags, where creating new ones require explicit extra work (think gmail labels) to very loose systems where adding as many tags as possible is encouraged (think flickr, or tagging audio content where a transcription may be applied directly as tags).
In general, an easily-indexable media (text!) should have a more rigid system, since the content is already searchable. Additional tags exist more for categorization only, and categorization is only helpful when different users are broadly assigning things into the same categories. If you have raw text, it should take nearly an act of God to create a new tag.
On the other hand, media that's more difficult to index (images, video, audio) should have a flexible system that encourages many tags, since they and other metadata are your only hope when searching.
This is important because the database schema you want could change somewhat depending on which end of that spectrum you find yourself.
A: Something like this comes to my mind: add those two tables:

Tags
* TagID
* TagName
* TagDescription

PhotoTags
* PhotoID
* TagID
You can extend this to albums too, having an intersection table between Photo Albums and Tags.
A: I suggest looking to see how established open-source software does it. For example, Gallery stores its meta-data in a database like you do, and is pretty rich.
I don't think you'll find a "standard" schema, though. The closest thing I can think of is the EXIF meta-data format, which is embedded within image files themselves (by cameras, etc).
A: If you want real performance with millions of records, you could store tags in one field, comma separated, and retrieve records with a full-text index/search daemon like Sphinx. All you have to add is a table listing all tags with a count value, so you know how often each is attached to an item.
I know it's not the usual way and a little more complicated than a pure database solution, but it's really, really fast to search tag-related items.
You could use the full-text search functionality of your database engine too, but when there are lots of records, most engines tend to be slow.
If it's for a small project, you can go your own way; that seems a good and proper way to do it. But I wanted to share this other solution with you. What do you think of it?
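As a sketch of the count table mentioned above, the counts can be rebuilt from the comma-separated tag fields themselves (the field contents here are invented for illustration):

```python
from collections import Counter

def tag_counts(tag_fields):
    """tag_fields: one comma-separated tag string per item."""
    counts = Counter()
    for field in tag_fields:
        # Split each item's tag field and ignore empty entries.
        counts.update(t.strip() for t in field.split(",") if t.strip())
    return counts

counts = tag_counts(["beach, family", "family", "school,friends"])
# counts["family"] is 2, counts["beach"] is 1
```

A nightly or incremental job along these lines keeps the tag/count table in sync with the items.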
A: I've done this in a small system without very many users, but I've wondered before if there was an "accepted" way to manage tags. After reading through the links posted by insin and plenty of other blog posts on tagging, it seems that the accepted way is to store it fully normalized and cache certain things if your data set gets too big.
Since it's a many-many relationship (each tag can belong to any number of photos - each photo can have many tags), relational database theory has you create a photo table, a tag table and a cross-reference table to link them.
photos
    photoid
    caption
    filename
    date

tags
    tagid
    tagname

phototags
    photoid
    tagid
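If it helps to see the three tables in action, here is a small runnable sketch using SQLite; the column types and sample data are made up for illustration:

```python
import sqlite3

# The normalized photo/tag/link schema from above, realized in SQLite.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE photos    (photoid INTEGER PRIMARY KEY, caption TEXT,
                            filename TEXT, date TEXT);
    CREATE TABLE tags      (tagid INTEGER PRIMARY KEY, tagname TEXT UNIQUE);
    CREATE TABLE phototags (photoid INTEGER REFERENCES photos(photoid),
                            tagid   INTEGER REFERENCES tags(tagid),
                            PRIMARY KEY (photoid, tagid));
""")
conn.execute("INSERT INTO photos VALUES (1, 'Beach day', 'beach.jpg', '2008-10-05')")
conn.execute("INSERT INTO tags VALUES (1, 'vacation')")
conn.execute("INSERT INTO phototags VALUES (1, 1)")

# "All photos tagged 'vacation'" is a join through the link table.
rows = conn.execute("""
    SELECT p.caption
    FROM photos p
    JOIN phototags pt ON pt.photoid = p.photoid
    JOIN tags t       ON t.tagid    = pt.tagid
    WHERE t.tagname = 'vacation'
""").fetchall()
```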
This has scaling problems selecting from really large datasets, but so do all the less-normalized schemas (sorting and filtering by a text field will probably always be slower than using an integer, for example). If you grow as large as delicious or maybe even StackOverflow you'll probably have to do some caching of your tag sets.
Another issue you'll have to face is the issue of tag normalization. This doesn't have anything to do with database normalization - it's just making sure that (for example) the "StackOverflow", "stackoverflow" and "stack overflow" tags are the same. Lots of places disallow whitespace or automatically strip it out. Sometimes you'll see the same thing for punctuation - making "StackOverflow" the same as "Stack-Overflow". Auto-lowercasing is pretty standard. You'll even see special case normalization - like making "c#" the same as "csharp".
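A sketch of those normalization rules in Python; the exact rule set and the special-case table are choices for your own site, not a standard:

```python
import re

def normalize_tag(raw):
    # Lowercase and strip surrounding whitespace.
    tag = raw.strip().lower()
    # Collapse internal whitespace: "stack overflow" -> "stackoverflow".
    tag = re.sub(r"\s+", "", tag)
    # Hypothetical special-case table: "c#" -> "csharp".
    special_cases = {"c#": "csharp"}
    tag = special_cases.get(tag, tag)
    # Strip punctuation such as "-": "stack-overflow" -> "stackoverflow".
    return re.sub(r"[^a-z0-9#+]", "", tag)
```

With these rules, "StackOverflow", "stackoverflow" and "Stack Overflow" all normalize to the same tag.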
Happy tagging!
A: In my app BugTracker.NET, I make an assumption that there won't be TOO many bugs. Maybe tens of thousands, but not tens of millions. That assumption allows me to cache the tags and the ids of the items they reference.
In the database, the tags are stored as they are entered, with the bugs, in a comma delimited text field.
When a tag field is added or changed, that kicks off a background thread that selects all bugids and their tags, parses the text, building a map where the key is the tag and the value is a list of all the ids that have that tag. I then cache that map in the Asp.Net Application object.
Below is the code I've just described.
The code could be optimized so that instead of going through all the bugs it just incrementally modified the cached map, but even unoptimized, it works fine.
When somebody does a search using a tag, I look up the value in the map, get the list of ids, and then fetch those bugs using SQL with "where id in (1, 2, 3...)" clause.
public static void threadproc_tags(object obj)
{
    System.Web.HttpApplicationState app = (System.Web.HttpApplicationState)obj;

    SortedDictionary<string, List<int>> tags = new SortedDictionary<string, List<int>>();

    // update the cache
    DbUtil dbutil = new DbUtil();
    DataSet ds = dbutil.get_dataset("select bg_id, bg_tags from bugs where isnull(bg_tags,'') <> ''");

    foreach (DataRow dr in ds.Tables[0].Rows)
    {
        string[] labels = btnet.Util.split_string_using_commas((string)dr[1]);

        // for each tag label, build a list of bugids that have that label
        for (int i = 0; i < labels.Length; i++)
        {
            string label = normalize_tag(labels[i]);
            if (label != "")
            {
                if (!tags.ContainsKey(label))
                {
                    tags[label] = new List<int>();
                }
                tags[label].Add((int)dr[0]);
            }
        }
    }

    app["tags"] = tags;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Design Tab Control with Visual Studio 2008 (without SP1) Is there any way (maybe directly editing resource files) to configure a Tab Control (add/remove tabs and their captions and contents) at design time with Visual Studio 2008 without SP1 (I heard that SP1 has such feature)?
P.S.: I use C++ with WTL.
A: The form designer for Windows Forms applications allows you to configure the tab control at design time, but the resource editor for native C++ applications doesn't allow this. You can only add tabs to the tab control at run time. (I'm using VS2008 SP1.)
A: I don't think it's possible. The dialog script does not support the Tab control directly; instead, it inserts a generic "CONTROL" statement in which the control class "SysTabControl32" is used. You need to assign the pages in code.
The new feature in VS 2008 SP1 has to do with the WPF controls, but since you mention that you work with WTL I assume you work with win32 dialog resources.
Dave
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Operator Overloading with C# Extension Methods I'm attempting to use extension methods to add an operator overload to the C# StringBuilder class. Specifically, given StringBuilder sb, I'd like sb += "text" to become equivalent to sb.Append("text").
Here's the syntax for creating an extension method for StringBuilder:
public static class sbExtensions
{
    public static StringBuilder blah(this StringBuilder sb)
    {
        return sb;
    }
}
It successfully adds the blah extension method to the StringBuilder.
Unfortunately, operator overloading does not seem to work:
public static class sbExtensions
{
    public static StringBuilder operator +(this StringBuilder sb, string s)
    {
        return sb.Append(s);
    }
}
Among other issues, the keyword this is not allowed in this context.
Are adding operator overloads via extension methods possible? If so, what's the proper way to go about it?
A: It appears this isn't currently possible - there's an open feedback issue requesting this very feature on Microsoft Connect:
http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=168224
suggesting it might appear in a future release but isn't implemented for the current version.
A: If you control the places where you want to use this "extension operator" (which you normally do with extension methods anyway), you can do something like this:
class Program
{
    static void Main(string[] args)
    {
        StringBuilder sb = new StringBuilder();
        ReceiveImportantMessage(sb);
        Console.WriteLine(sb.ToString());
    }

    // the important thing is to use StringBuilderWrapper!
    private static void ReceiveImportantMessage(StringBuilderWrapper sb)
    {
        sb += "Hello World!";
    }
}

public class StringBuilderWrapper
{
    public StringBuilderWrapper(StringBuilder sb) { StringBuilder = sb; }

    public StringBuilder StringBuilder { get; private set; }

    public static implicit operator StringBuilderWrapper(StringBuilder sb)
    {
        return new StringBuilderWrapper(sb);
    }

    public static StringBuilderWrapper operator +(StringBuilderWrapper sbw, string s)
    {
        sbw.StringBuilder.Append(s);
        return sbw;
    }
}
The StringBuilderWrapper class declares an implicit conversion operator from a StringBuilder and declares the desired + operator. This way, a StringBuilder can be passed to ReceiveImportantMessage, which will be silently converted to a StringBuilderWrapper, where the + operator can be used.
To make this fact more transparent to callers, you can declare ReceiveImportantMessage as taking a StringBuilder and just use code like this:
private static void ReceiveImportantMessage(StringBuilder sb)
{
    StringBuilderWrapper sbw = sb;
    sbw += "Hello World!";
}
Or, to use it inline where you're already using a StringBuilder, you can simply do this:
StringBuilder sb = new StringBuilder();
StringBuilderWrapper sbw = sb;
sbw += "Hello World!";
Console.WriteLine(sb.ToString());
I created a post about using a similar approach to make IComparable more understandable.
A: Though it's not possible to do the operators, you could always just create Add (or Concat), Subtract, and Compare methods....
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Whatever.Test
{
    public static class Extensions
    {
        public static int Compare(this MyObject t1, MyObject t2)
        {
            if (t1.SomeValueField < t2.SomeValueField)
            {
                return -1;
            }
            else if (t1.SomeValueField > t2.SomeValueField)
            {
                return 1;
            }
            else
            {
                return 0;
            }
        }

        public static MyObject Add(this MyObject t1, MyObject t2)
        {
            var newObject = new MyObject();
            // do something
            return newObject;
        }

        public static MyObject Subtract(this MyObject t1, MyObject t2)
        {
            var newObject = new MyObject();
            // do something
            return newObject;
        }
    }
}
A: Hah! I was looking up "extension operator overloading" with exactly the same desire, for sb += (thing).
After reading the answers here (and seeing that the answer is "no"), for my particular needs, I went with an extension method which combines sb.AppendLine and sb.AppendFormat, and looks tidier than either.
public static class SomeExtensions
{
    public static void Line(this StringBuilder sb, string format, params object[] args)
    {
        string s = String.Format(format + "\n", args);
        sb.Append(s);
    }
}
And so,
sb.Line("the first thing is {0}", first);
sb.Line("the second thing is {0}", second);
Not a general answer, but may be of interest to future seekers looking at this kind of thing.
A: This is not currently possible, because extension methods must be in static classes, and static classes can't have operator overloads. But the feature is being discussed for some future release of C#. Mads talked a bit more about implementing it in this video from 2017.
On why it isn't currently implemented, Mads Torgersen, C# Language PM says:
...for the Orcas release we decided to take the cautious approach and add only regular extension methods, as opposed to extension properties, events, operators, static methods, etc. etc. Regular extension methods were what we needed for LINQ, and they had a syntactically minimal design that could not be easily mimicked for some of the other member kinds.
We are becoming increasingly aware that other kinds of extension members could be useful, and so we will return to this issue after Orcas. No guarantees, though!
Further below in the same article:
I am sorry to report that we will not be doing this in the next release. We did take extension members very seriously in our plans, and spent a lot of effort trying to get them right, but in the end we couldn't get it smooth enough, and decided to give way to other interesting features. This is still on our radar for future releases. What will help is if we get a good amount of compelling scenarios that can help drive the right design.
A: It's possible to rig it up with a wrapper and extensions, but impossible to do it properly. You end up with garbage, which totally defeats the purpose. I have a post up somewhere on here that does it, but it's worthless.
Btw, all numeric conversions create garbage in StringBuilder that needs to be fixed. I had to write a wrapper for that which does work, and I use it. That is worth perusing.
A: The previous answers still seem to be correct, this is not implemented in C#. You can, however, get very close with a little more code.
Option 1, wrapper class around StringBuilder
Something like this:
public static class ExtStringBuilder
{
    public class WStringBuilder
    {
        private StringBuilder _sb { get; }

        public WStringBuilder(StringBuilder sb) => _sb = sb;

        public static implicit operator StringBuilder(WStringBuilder sbw) => sbw._sb;
        public static implicit operator WStringBuilder(StringBuilder sb) => new WStringBuilder(sb);

        public static WStringBuilder operator +(WStringBuilder sbw, string s) => sbw._sb.Append(s);
    }

    public static WStringBuilder wrap(this StringBuilder sb) => sb;
}
usage like this:
StringBuilder sb = new StringBuilder();
// requires a method call which is inconvenient
sb = sb.wrap() + "string1" + "string2";
Or:
WStringBuilder sbw = new StringBuilder();
sbw = sbw + "string1" + "string2";
// you can pass sbw as a StringBuilder parameter
methodAcceptsStringBuilder(sbw);
// but you cannot call methods on it
// this fails: sbw.Append("string3");
You can make a wrapper for string instead of StringBuilder in the exact same fashion, which might be a better fit, but in this case, you'll be wrapping possibly many string objects instead of 1 StringBuilder object.
As another answer points out, this strategy seems to defeat a lot of the purpose of having an operator since depending on the use case, the way you use it changes. So it feels a little awkward.
In this case, I agree, this is far too much overhead just to replace a method like StringBuilder's Append method. However, if you are putting something more complicated behind the operator (something requiring a for loop, for instance), you may find this is a convenient solution for your project.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "189"
} |
Q: Creating a News Ticker which is updated from an RSS Feed - Javascript/any language I need to create a news ticker that is updated via an RSS feed. Any ideas on how to implement this?
I would prefer Javascript but any language is acceptable.
A: There are several good examples of this on this DynamicDrive page, although one of the requirements is that you can run PHP on your site. PHP here is used to fetch the actual feeds and allow your ticker to access them locally via an AJAX call.
There are several other projects out there built on JQuery, and the basic approach taken by each is:
* use a PHP (or ASP.NET) script to fetch the feed to your server
* access this local file via repeated AJAX calls, making use of setTimeout
* update the display (ticker) with the latest data fetched
The file is fetched to your local server for the AJAX calls due to the Same Origin Policy:
It prevents a document or script loaded from one "origin" from getting or setting properties of a document from a different "origin".
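The display-update half of that loop can be sketched in plain JavaScript; the names here are illustrative, and the repeated AJAX calls described above would hand their parsed feed items to refresh():

```javascript
// Minimal ticker state: cycles through headlines and can be refreshed
// with a newly fetched set from the next AJAX poll.
function makeTicker(headlines) {
  var index = 0;
  return {
    current: function () { return headlines[index]; },
    advance: function () { index = (index + 1) % headlines.length; },
    refresh: function (fresh) { headlines = fresh; index = 0; }
  };
}

var ticker = makeTicker(["First headline", "Second headline"]);
// In the page you might then do something like:
//   setInterval(function () { ticker.advance(); el.textContent = ticker.current(); }, 4000);
```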
Further examples include:
* JTicker from Jason's Toolbox
* jQuery feed plugin - need to read the comments on this one, as the code originally posted doesn't seem to work out of the box
A: If you really have a nice niche market where your news ticker could be very popular, you might want to 'be on their desktop' and develop a widget with Adobe Air.
You can create a nice scrolling ticker then with any javascript you like (or flash/flex, that's supported too)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: WCF DataContracts I have a WCF service hosted for internal clients - we have control of all the clients. We will therefore be using a data contracts library to negate the need for proxy generation. I would like to use some readonly properties and have some datacontracts without default constructors.
Thanks for your help...
A: Readonly properties are fine as long as you mark the (non-readonly) field as the [DataMember], not the property. Unlike XmlSerializer, IIRC DataContractSerializer doesn't use the default ctor - it uses a separate reflection mechanism to create uninitialized instances. Except on mono's "Olive" (WCF port), where it does use the default ctor (at the moment, or at some point in the recent past).
Example:
using System;
using System.IO;
using System.Runtime.Serialization;

[DataContract]
class Foo
{
    [DataMember(Name = "Bar")]
    private string bar;

    public string Bar { get { return bar; } }

    public Foo(string bar) { this.bar = bar; }
}

static class Program
{
    static void Main()
    {
        DataContractSerializer dcs = new DataContractSerializer(typeof(Foo));
        MemoryStream ms = new MemoryStream();
        Foo orig = new Foo("abc");
        dcs.WriteObject(ms, orig);
        ms.Position = 0;
        Foo clone = (Foo)dcs.ReadObject(ms);
        Console.WriteLine(clone.Bar);
    }
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Getting quickly up to speed on ASP.NET for an experienced coder I have a contract in the offing from a client to develop an intranet application for capturing/manipulating/displaying a fairly complex set of marketing data. I've done this sort of thing before, so the analysis, database etc. hold no issues for me, but the client would most likely prefer ASP.NET as they have some (small) amount of experience with this.
My current default language for web apps is PHP, although I have coded some ASP some time ago. I've been coding for 20 years and have a stack of languages and technologies under my belt - including Perl, Java, VB, Delphi and C - so learning a new environment doesn't worry me, but nevertheless I'd like to get up to speed with the least effort and as quickly as possible.
What books, websites or other resources would you recommend to most efficiently achieve this?
A: Since you're an experienced coder, I would suggest simply diving in and you'll pick it up as you go. From your list of skills, it is most like Delphi (Event-Driven programming).
http://www.codeproject.com is very helpful and has many articles on ASP.NET, including ones for beginners and experts.
I would also suggest an alternative: there is another implementation of ASP.NET called ASP.NET MVC (http://www.asp.net/mvc/). With your background, I have a feeling this will make more sense, and I personally believe it is a much better platform than WebForms. I also believe this site (Stack Overflow) was built using it.
Hope this helps...
A: The thing with learning ASP.NET is that you should first gain an understanding of the .NET framework and how it works, as there is a lot more going on behind the scenes than in the other interpreted languages you're familiar with. After this you'll need to decide which version of the runtime (1.1 or 2.0; only pick 1.1 for legacy reasons) you'll be coding in.
Once you're ready to get your feet wet, set yourself up with a book (for reference) and all the website references and you'll be good to go. For a book I would recommend either O'Reilly's ASP.NET book or The ASP.NET 2.0 Anthology (if you're going to do ASP.NET 2.0 and code in C#).
A: The official ASP.NET forums (http://forums.asp.net/) will be a big help for quick references and answers.
Then, I always like to go to W3Schools for quick tutorials (http://www.w3schools.com/ASPNET/). These code snippets will let you quickly become familiar with ASP.NET concepts.
Pro ASP.NET 3.5 in C# 2008 is a good book you should use as a reference, and you should get familiar with the ASP.NET AJAX components. For that I suggest Beginning ASP.NET 2.0 AJAX.
A: If you must use Web Forms for ASP.NET, the event model will be the source of much pain and suffering for you, since you are coming from a web developer mindset thinking in request/response simplicity (which is a good thing).
First thing to get deep into is the lifecycle of an asp.net request. Learn about IHttpModules, IHttpHandlers, then the asp.net Page (just an implementation of IHttpHandler).
Otherwise I would also recommend looking into the MVC implementation as well.
A: asp.net is a great site with tons of resources and videos for getting started. I'd download VS Express and start cranking through the videos.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How to tell if the user's using 24 hour time using Cocoa I'm trying to determine if the user is using 24 hour or 12 hour time, and there doesn't seem to be a good way to figure this out other than creating an NSDateFormatter and searching the format string for the period field ('a' character)
Here's what I'm doing now:
NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
[formatter setTimeStyle:NSDateFormatterShortStyle];
NSRange range = [[formatter dateFormat] rangeOfString:@"a"];
BOOL is24HourFormat = range.location == NSNotFound && range.length == 0;
[formatter release];
Which works, but feels kinda fragile. There has to be a better way, right?
A: This information is provided in NSUserDefaults. Maybe under the NSShortTimeDateFormatString key? (Still requires parsing of course).
(Use
NSLog(@"%@", [[NSUserDefaults standardUserDefaults] dictionaryRepresentation]);
to dump all the pre-defined user defaults).
Not quite sure why you want to do this - I bet you have a good reason - but maybe it's best just to use the defaults in NSDateFormatter instead?
A: NSShortTimeDateFormatString is deprecated, and I believe it also effectively doesn't work in 10.5 and later. See the 10.4 and 10.5 Foundation release notes. Dumping all defaults is a good way to depend on stuff that is not guaranteed by the interface, though it can be interesting to sort of see what's going on under the hood.
Why do you need this? As schwa says, everything works much better if you can get away from needing to know anything about the localized presentation.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: In a project, is there a choice of database systems? In almost all projects, the choice of the database system is 'almost' automatic .. if you're coding "in the Microsoft stack" you'll go with SQLServer, in the Linux world the default is MySQL, and for corporate in-house project most shops have decided on some default like Oracle or IBM DB2.
What are your thoughts?
A: I tried all three approaches. I still use MySQL for web sites, but for in-house projects I rather choose Firebird or Postgres.
The reason is that they are free (both as in beer and speech), much less bloated (Firebird installer is just few MB, for example) and still do the job very well.
The main benefit you get is that the same thing scales from embedded to enterprise level, so there's no "first try is free, but you'll pay a lot later" kind of story behind it. I've seen Firebird databases of 200+ GB working just as fine as 1MB one in an embedded application I make.
A: My customers often stipulate what database engine we will be using. We write .net apps against non-SQLServer dbs regularly. In the long run, it's better for the customer because they get to maintain what they know.
A: Those seem to be the logical choices if the client or shop has no preference. If you're not 100% sure of the deployment environment, it's a good idea to test your application against multiple databases to ensure that you're not using any special features of one particular database. If you do decide that you must use a special feature that 'locks' you into a database, it's best that you know about it early and have made an informed decision, rather than fall into the trap of using a feature that unknowingly locks you into a particular database.
A: I don't know that I agree that the choice is so cut and dry in the Unix world. Postgres and MySQL go toe to toe all the time and the choice is not as clear as you made it seem. That said, there are plenty of other DBs on Unix that are used (for example, SQLite powers many embedded systems and even has a place on the desktop, e.g. in Fedora's YUM package configuration utility).
A: I work on a gov contract, and while we code in the "Microsoft Stack", we can't use SQL Server because the government gets Oracle for free and wants us to use that instead.
A: Most environments in which I have worked have used a variety of stacks. I have not seen any "stack lock in" effect to any great degree. The Microsoft stack does favor SQL Server when no other spec is given and LAMP favors MySQL but I don't see it as being that strong of a bond. The three main firms where I have worked this past several years:
Medical Software Firm: ASP.NET C# Stack on IIS with MySQL
Investment Bank: Java on *NIX with Sybase, Oracle and DB2
Major Software Vendor (and one of the major DB vendors mentioned here!!): Java on RHEL with PostgreSQL
I think that most good shops evaluate their needs on a project by project basis and don't choose database products solely on their stack integration. If this were so, Oracle would not be the largest DB maker and DB2 would be much smaller than it is.
A: Maybe it's about comfort. MS SQL Server Express Edition comes bundled with Visual Studio, so it's easier to set up an application to work with SQL Server. People using Linux are used to installing MySQL, SQLite or PostgreSQL in a second, maybe because they don't need Windows to run a DBMS. Corporate projects are another story: there, comfort matters less and the choice should be driven by DBMS features.
A: It will depend on what types of projects you pursue, and what platforms are acceptable to the enterprise using the technology.
Good database design - 3rd normal form - will be the criteria against which you will be judged in most cases. Larger enterprises may force you to use Oracle with the MS stack. Mid-sized businesses will more than likely use SQL Server, but if they are consumers of demographic data from sources such as Claritas you'll get Oracle bundled with the application.
From the perspective of an employer locating someone with a skill set, MS SQL server is more prevalent with the businesses that have an in-house development dept.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172706",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Calculating time diff across midnight This is the one thing I could never get to work.
My problem is to detect the end of one day and the start of the next and then splitting the diff into each day.
Imagine you want to calculate a pay rate but it has to span across midnight.
It also applies to calculating time to run on timed system, or time diff it should've run.
A: I prefer to record all times in Unix Epoch, that way it's as simple as
hoursWorked = ((stopTime - startTime)/60)/60
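For comparison, the same epoch arithmetic in Python (a sketch; it assumes both values are already Unix-epoch seconds, so the midnight boundary never matters):

```python
def hours_worked(start_epoch, stop_epoch):
    # Same formula as above: seconds -> minutes -> hours.
    return (stop_epoch - start_epoch) / 60 / 60

# 2008-12-31 22:00 UTC to 2009-01-01 06:00 UTC: crosses midnight, still 8 hours.
print(hours_worked(1230760800, 1230789600))
```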
Here is a full solution; it's in PHP but should be easy enough to port to any language:
<?php
$startTime = "12/31/2008 22:02"; //No AM/PM we'll use 24hour system
$endTime = "01/01/2009 06:27"; //Again no AM/PM, and we spanned the midnight time gap.
/*
Use this to test a normal shift that does not cross midnight.
$startTime = "01/01/2008 06:02"; //No AM/PM we'll use 24hour system
$endTime = "01/01/2008 14:27"; //Same day this time, so no midnight gap to span.
*/
$startTime = split(' ', $startTime);
$endTime = split(' ', $endTime);
$startTime[1] = split(':', $startTime[1]);
$endTime[1] = split(':', $endTime[1]);
/*
$startTime[0] contains the date
$startTime[1][0] contains the hours
$startTime[1][1] contains the minutes
same is true for endTime
*/
if($startTime[0] != $endTime[0])
{
if(date2epoch($endTime[0]) > date2epoch($startTime[0]))
{
$minsToMidnight = (24 * 60) - (($startTime[1][0] * 60) + $startTime[1][1]); //Minutes from start of shift until midnight
$hoursWorked1 = floor($minsToMidnight / 60); //Hours from start of shift to midnight
$minutesWorked1 = $minsToMidnight % 60; //Remaining minutes (the original "59 - minutes" was off by one)
$hoursWorked2 = $endTime[1][0]; //Hours from midnight to end of shift
$minutesWorked2 = $endTime[1][1]; //Minutes from midnight to end of shift
echo 'Before midnight you worked ' . $hoursWorked1 . ':' . $minutesWorked1 . "\nAfter midnight you worked " . $hoursWorked2 . ':' . $minutesWorked2 . '.';
}
else
{
//SOMETHING MAJOR BAD HAS HAPPENED WHILE LOGGING THE CLOCKINS AND CLOCKOUTS
}
}
else
{
echo 'Today you worked ' . ($endTime[1][0] - $startTime[1][0] - ($endTime[1][1] < $startTime[1][1] ? 1 : 0)) . ':' . ((($endTime[1][1] - $startTime[1][1]) + 60) % 60); //Borrow an hour when end minutes are smaller than start minutes
}
function date2epoch($date, $format='m/d/Y')
{
$date = split('/', $date);
return mktime('0', '0', '0', $date[0], $date[1], $date[2]);
}
?>
A: This is obviously a trivial problem in systems where the starting and ending times are data structures that contain both date and time. But there are plenty of systems that don't do this. It's very common for timekeeping systems to have a single record that contains a date, a start time, and an end time. In this case, the problem is less trivial.
There are, I think, two questions you're asking:
*
*How can I tell if a time span crosses midnight?
*If a time span crosses midnight, how much of the time happened on the first day, and how much on the second?
The first question is easily answered: if the end time is before the start time, the time span has crossed midnight.
If this is the case, then you have two amounts to calculate. The amount of the time span that occurred on the date is the time between the start time and midnight. The amount of the time span that occurred on the next date is the time between midnight and the end time (i.e. just the end time).
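That two-part calculation can be sketched in a few lines (Python here, not the answer's original context; times are (hour, minute) pairs and the span is assumed to be under 24 hours):

```python
def split_at_midnight(start, end):
    """Return (minutes worked on the first date, minutes worked on the next date)."""
    s = start[0] * 60 + start[1]
    e = end[0] * 60 + end[1]
    if e >= s:                       # end time not before start time: no crossing
        return (e - s, 0)
    return (24 * 60 - s, e)          # crossed midnight: start->midnight, midnight->end

print(split_at_midnight((22, 2), (6, 27)))   # -> (118, 387)
print(split_at_midnight((6, 2), (14, 27)))   # -> (505, 0)
```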
A: If you have a date available, you should be working with the combination of date+time, and it won't be a problem.
If you know the time span will be less than 24 hours:
if endtime < starttime then endtime = endtime + 24 hours
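The same wrap-around trick in Python, with times as minutes past local midnight (a sketch under the same under-24-hours assumption):

```python
def elapsed_minutes(start, end):
    if end < start:        # wrapped past midnight
        end += 24 * 60     # "endtime = endtime + 24 hours"
    return end - start

print(elapsed_minutes(22 * 60, 6 * 60))   # 22:00 to 06:00 -> 480
print(elapsed_minutes(9 * 60, 17 * 60))   # 09:00 to 17:00 -> 480
```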
A: In most languages, the Date/DateTime/etc. classes have methods for things like this. When using these, it's best-practice to convert all dates to UTC before performing arithmetic.
e.g., C#:
// In UTC, preferably
DateTime ClockIn;
DateTime ClockOut;
ClockIn = ...;
ClockOut = ...;
TimeSpan TimeWorked = ClockOut.Subtract(ClockIn);
double HoursWorked = TimeWorked.TotalHours; // TotalHours is a property (a double), not a method
A: And if both times are not in the same time zone, you'll need to convert them to UTC before doing the calculation :) May not be applicable here, but it's a common issue.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Speeding Up Python This is really two questions, but they are so similar, and to keep it simple, I figured I'd just roll them together:
*
*Firstly: Given an established python project, what are some decent ways to speed it up beyond just plain in-code optimization?
*Secondly: When writing a program from scratch in python, what are some good ways to greatly improve performance?
For the first question, imagine you are handed a decently written project and you need to improve performance, but you can't seem to get much of a gain through refactoring/optimization. What would you do to speed it up in this case short of rewriting it in something like C?
A: Cython and pyrex can be used to generate c code using a python-like syntax. Psyco is also fantastic for appropriate projects (sometimes you'll not notice much speed boost, sometimes it'll be as much as 50x as fast).
I still reckon the best way is to profile your code (cProfile, etc.) and then just code the bottlenecks as C functions for Python.
A: I'm surprised no one mentioned ShedSkin: http://code.google.com/p/shedskin/, it automagically converts your python program to C++ and in some benchmarks yields better improvements than psyco in speed.
Plus anecdotal stories on the simplicity: http://pyinsci.blogspot.com/2006/12/trying-out-latest-release-of-shedskin.html
There are limitations though, please see: http://tinyurl.com/shedskin-limitations
A: I hope you've read: http://wiki.python.org/moin/PythonSpeed/PerformanceTips
Summarizing what's already there, there are usually 3 principles:
*
*write code that gets transformed into better bytecode: use locals, avoid unnecessary lookups/calls, use idiomatic constructs (if there's natural syntax for what you want, use it; it's usually faster. e.g. don't do "for key in some_dict.keys()", do "for key in some_dict")
*whatever is written in C is considerably faster, abuse whatever C functions/modules you have available
*when in doubt, import timeit, profile
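The third principle is easy to act on; for example, timing the two dict-iteration idioms from the first bullet with timeit (the absolute numbers are machine-dependent, this only shows the measuring pattern):

```python
import timeit

some_dict = {i: i for i in range(1000)}

with_keys = timeit.timeit("for key in some_dict.keys(): pass",
                          globals={"some_dict": some_dict}, number=1000)
direct = timeit.timeit("for key in some_dict: pass",
                       globals={"some_dict": some_dict}, number=1000)

print(f"some_dict.keys(): {with_keys:.4f}s   some_dict: {direct:.4f}s")
```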
A: Regarding "Secondly: When writing a program from scratch in python, what are some good ways to greatly improve performance?"
Remember the Jackson rules of optimization:
*
*Rule 1: Don't do it.
*Rule 2 (for experts only): Don't do it yet.
And the Knuth rule:
*
*"Premature optimization is the root of all evil."
The more useful rules are in the General Rules for Optimization.
*
*Don't optimize as you go. First get it right. Then get it fast. Optimizing a wrong program is still wrong.
*Remember the 80/20 rule.
*Always run "before" and "after" benchmarks. Otherwise, you won't know if you've found the 80%.
*Use the right algorithms and data structures. This rule should be first. Nothing matters as much as algorithm and data structure.
Bottom Line
You can't prevent or avoid the "optimize this program" effort. It's part of the job. You have to plan for it and do it carefully, just like the design, code and test activities.
A: This won't necessarily speed up any of your code, but is critical knowledge when programming in Python if you want to avoid slowing your code down. The "Global Interpreter Lock" (GIL), has the potential to drastically reduce the speed of your multi-threaded program if its behavior is not understood (yes, this bit me ... I had a nice 4 processor machine that wouldn't use more than 1.2 processors at a time). There's an introductory article with some links to get you started at SmoothSpan.
A: Run your app through the Python profiler.
Find a serious bottleneck.
Rewrite that bottleneck in C.
Repeat.
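Step one of that loop with the stdlib profiler (cProfile ships with CPython; slow_concat here is just a stand-in bottleneck for the demo):

```python
import cProfile
import io
import pstats

def slow_concat(n):
    # Deliberately quadratic: repeated string concatenation.
    s = ""
    for i in range(n):
        s += str(i)
    return s

pr = cProfile.Profile()
pr.enable()
slow_concat(10000)
pr.disable()

out = io.StringIO()
pstats.Stats(pr, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())   # the top rows are the bottleneck candidates
```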
A: People have given some good advice, but you have to be aware that when high performance is needed, the python model is: punt to C. Efforts like psyco may in the future help a bit, but python just isn't a fast language, and it isn't designed to be. Very few languages have the ability to do the dynamic stuff really well and still generate very fast code; at least for the foreseeable future (and some of the design works against fast compilation) that will be the case.
So, if you really find yourself in this bind, your best bet will be to isolate the parts of your system that are unacceptably slow in (good) python, and design around the idea that you'll rewrite those bits in C. Sorry. Good design can help make this less painful. Prototype it in python first, though; then you've easily got a sanity check on your C, as well.
This works well enough for things like numpy, after all. I can't emphasize enough how much good design will help you though. If you just iteratively poke at your python bits and replace the slowest ones with C, you may end up with a big mess. Think about exactly where the C bits are needed, and how they can be minimized and encapsulated sensibly.
A: It's often possible to achieve near-C speeds (close enough for any project using Python in the first place!) by replacing explicit algorithms written out longhand in Python with an implicit algorithm using a built-in Python call. This works because most Python built-ins are written in C anyway. Well, in CPython of course ;-) https://www.python.org/doc/essays/list2str/
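For instance, replacing a longhand accumulation loop with the built-in sum (the timings below only illustrate the comparison; exact ratios vary by interpreter):

```python
import timeit

def manual_sum(n):
    total = 0
    for i in range(n):
        total += i
    return total

n = 100_000
assert manual_sum(n) == sum(range(n))   # same result either way

loop_t = timeit.timeit(lambda: manual_sum(n), number=20)
builtin_t = timeit.timeit(lambda: sum(range(n)), number=20)
print(f"explicit loop: {loop_t:.3f}s   sum(): {builtin_t:.3f}s")
```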
A: Just a note on using psyco: in some cases it can actually produce slower run-times, especially when trying to use psyco with code that was written in C. I can't remember the article I read this in, but the map() and reduce() functions were mentioned specifically. Luckily you can tell psyco not to handle specified functions and/or modules.
A: This is the procedure that I try to follow:
*
* import psyco; psyco.full()
* If it's not fast enough, run the code through a profiler, see where the bottlenecks are. (DISABLE psyco for this step!)
* Try to do things such as other people have mentioned to get the code at those bottlenecks as fast as possible.
*
*Stuff like [str(x) for x in l] or [x.strip() for x in l] is much, much slower than map(str, l) or map(str.strip, l).
* After this, if I still need more speed, it's actually really easy to get PyRex up and running. I first copy a section of python code, put it directly in the pyrex code, and see what happens. Then I twiddle with it until it gets faster and faster.
A: Rather than just punting to C, I'd suggest:
Make your code count. Do more with fewer executions of lines:
*
*Change the algorithm to a faster one. It doesn't need to be fancy to be faster in many cases.
*Use python primitives that happens to be written in C. Some things will force an interpreter dispatch where some wont. The latter is preferable
*Beware of code that first constructs a big data structure followed by its consumption. Think of the difference between range and xrange. In general it is often worth thinking about the memory usage of the program. Using generators can sometimes bring O(n) memory use down to O(1).
*Python is generally non-optimizing. Hoist invariant code out of loops, eliminate common subexpressions where possible in tight loops.
*If something is expensive, then precompute or memoize it. Regular expressions can be compiled for instance.
*Need to crunch numbers? You might want to check numpy out.
*Many python programs are slow because they are bound by disk I/O or database access. Make sure you have something worthwhile to do while you wait on the data to arrive rather than just blocking. A weapon could be something like the Twisted framework.
*Note that many crucial data-processing libraries have C-versions, be it XML, JSON or whatnot. They are often considerably faster than the Python interpreter.
If all of the above fails for profiled and measured code, then begin thinking about the C-rewrite path.
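Two of the bullets above (precompute/memoize, and compiling regular expressions once) in a short sketch using only the standard library:

```python
import re
from functools import lru_cache

# Compile once, up front, instead of re-parsing the pattern on every call.
WORD = re.compile(r"[A-Za-z]+")

@lru_cache(maxsize=None)   # memoize: each distinct argument is computed only once
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(bool(WORD.fullmatch("hello")))   # -> True
print(fib(200))                        # instant; naive recursion would never finish
```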
A: The usual suspects -- profile it, find the most expensive line, figure out what it's doing, fix it. If you haven't done much profiling before, there could be some big fat quadratic loops or string duplication hiding behind otherwise innocuous-looking expressions.
In Python, two of the most common causes I've found for non-obvious slowdown are string concatenation and generators. Since Python's strings are immutable, doing something like this:
result = u""
for item in my_list:
result += unicode (item)
will copy the entire string twice per iteration. This has been well-covered, and the solution is to use "".join:
result = "".join (unicode (item) for item in my_list)
Generators are another culprit. They're very easy to use and can simplify some tasks enormously, but a poorly-applied generator will be much slower than simply appending items to a list and returning the list.
Finally, don't be afraid to rewrite bits in C! Python, as a dynamic high-level language, is simply not capable of matching C's speed. If there's one function that you can't optimize any more in Python, consider extracting it to an extension module.
My favorite technique for this is to maintain both Python and C versions of a module. The Python version is written to be as clear and obvious as possible -- any bugs should be easy to diagnose and fix. Write your tests against this module. Then write the C version, and test it. Its behavior should in all cases equal that of the Python implementation -- if they differ, it should be very easy to figure out which is wrong and correct the problem.
A: The canonical reference to how to improve Python code is here: PerformanceTips. I'd recommend against optimizing in C unless you really need to though. For most applications, you can get the performance you need by following the rules posted in that link.
A: First thing that comes to mind: psyco. It runs only on x86, for the time being.
Then, constant binding. That is, make all global references (and global.attr, global.attr.attr…) be local names inside of functions and methods. This isn't always successful, but in general it works. It can be done by hand, but obviously is tedious.
You said apart from in-code optimization, so I won't delve into this, but keep your mind open for typical mistakes (for i in range(10000000) comes to mind) that people do.
A: If using psyco, I'd recommend psyco.profile() instead of psyco.full(). For a larger project it will be smarter about the functions that got optimized and use a ton less memory.
I would also recommend looking at iterators and generators. If your application is using large data sets this will save you many copies of containers.
A: Besides the (great) psyco and the (nice) shedskin, I'd recommend trying Cython, a great fork of Pyrex.
Or, if you are not in a hurry, I recommend to just wait. Newer python virtual machines are coming, and unladen-swallow will find its way into the mainstream.
A: A couple of ways to speed up Python code were introduced after this question was asked:
*
*Pypy has a JIT-compiler, which makes it a lot faster for CPU-bound code.
*Pypy is written in Rpython, a subset of Python that compiles to native code, leveraging the LLVM tool-chain.
A: For an established project, I feel the main performance gain will come from making use of Python's internal libraries as much as possible.
Some tips are here: http://blog.hackerearth.com/faster-python-code
A: There is also Python → 11l → C++ transpiler, which can be downloaded from here.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "46"
} |
Q: Create/Use User-defined functions in System.Data.SQLite?
User-Defined Functions & Collating Sequences
Full support for user-defined functions and collating sequences means that in many cases if SQLite doesn't have a feature, you can write it yourself in your favorite .NET language. Writing UDF's and collating sequences has never been easier
I spotted this bit about the C# SQLite ADO.NET provider I found here, and was having problems understanding the documentation on how to implement/use user-defined functions.
Could anyone explain how to, or provide any working examples for this lost newbie?
A: Robert Simpson has a great example of a REGEX function you can use in your sqlite queries:
// taken from http://sqlite.phxsoftware.com/forums/p/348/1457.aspx#1457
[SQLiteFunction(Name = "REGEXP", Arguments = 2, FuncType = FunctionType.Scalar)]
class MyRegEx : SQLiteFunction
{
public override object Invoke(object[] args)
{
return System.Text.RegularExpressions.Regex.IsMatch(Convert.ToString(args[1]),Convert.ToString(args[0]));
}
}
// example SQL: SELECT * FROM Foo WHERE Foo.Name REGEXP '$bar'
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "49"
} |
Q: Looking for DES algorithm tutorial I am studying cryptography and I somehow got stuck on understanding how DES works. Because it has been around for a long time, there should be nice tutorials with fancy diagrams, videos, etc. around the net. I searched but with no luck. Has anyone spotted anything "easy-to-digest" for the brain?
A: have you checked Wikipedia?
It also points to the FIPS standard.
Note that modern cryptography and "easy-on-the-brain" don't necessarily go hand in hand...
I am certain there are open source implementations you could check out if that is what you are interested in.
A: Bruce Schneier's Applied Cryptography is probably the funniest analysis you will find of it, but it is certainly not easy.
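For intuition before diving into the full standard: DES is a 16-round Feistel network. Each round splits the block into halves and computes L' = R, R' = L xor F(R, K). A toy Python sketch (the round function f below is made up; real DES uses the expansion E, the S-boxes, and the permutation P inside F) shows why decryption is just the same rounds with the subkeys reversed:

```python
def feistel_encrypt(left, right, round_keys, f):
    for k in round_keys:
        left, right = right, left ^ f(right, k)
    return left, right

def feistel_decrypt(left, right, round_keys, f):
    for k in reversed(round_keys):
        left, right = right ^ f(left, k), left
    return left, right

# Toy round function; F need not be invertible for the cipher to be.
f = lambda half, key: (half * 31 + key) & 0xFFFFFFFF

keys = [0x0F, 0x1E, 0x2D, 0x3C]
ct = feistel_encrypt(0x01234567, 0x89ABCDEF, keys, f)
print(feistel_decrypt(*ct, keys, f) == (0x01234567, 0x89ABCDEF))   # -> True
```

(Real DES also applies the initial/final permutations IP and IP^-1 and swaps the halves after the last round, but the Feistel structure above is the core idea.)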
A: DES/3DES encryption/decryption simulated with strings in Java:
public class Filippakoc_Des {
static int[][] sb1={
{14 , 4 , 13 , 1 , 2 , 15 , 11 , 8 , 3 , 10 , 6 , 12 , 5 , 9 , 0 , 7 },
{0 , 15, 7 , 4 , 14, 2 , 13, 1, 10, 6, 12, 11, 9, 5, 3, 8},
{4 , 1, 14, 8, 13, 6, 2, 11, 15, 12, 9, 7, 3, 10, 5, 0},
{15 , 12, 8, 2, 4, 9, 1, 7, 5, 11, 3, 14, 10, 0, 6, 13}
};
static int[][] sb2={
{15 , 1 , 8 , 14, 6 , 11, 3, 4, 9 , 7 , 2, 13 , 12, 0, 5 , 10},
{3 , 13, 4, 7, 15, 2, 8, 14, 12, 0, 1, 10, 6, 9, 11, 5},
{0 , 14, 7, 11, 10, 4, 13, 1, 5 , 8, 12, 6, 9, 3, 2, 15},
{13 , 8 , 10, 1, 3, 15, 4, 2, 11, 6, 7, 12, 0 , 5, 14, 9}
};
static int[][] sb3={
{10 , 0 , 9,14, 6, 3, 15, 5, 1, 13, 12, 7 , 11, 4, 2, 8},
{13 , 7, 0, 9, 3, 4, 6, 10, 2, 8, 5, 14, 12, 11, 15, 1},
{13 , 6, 4, 9, 8, 15, 3, 0,11, 1, 2, 12, 5, 10, 14, 7},
{1 , 10 ,13, 0, 6, 9, 8, 7, 4, 15, 14, 3, 11, 5, 2, 12}
};
static int[][] sb4={
{ 7 , 13, 14, 3, 0, 6, 9, 10, 1, 2, 8, 5, 11, 12, 4, 15},
{13 , 8, 11, 5, 6, 15, 0, 3, 4, 7, 2, 12, 1, 10, 14, 9},
{10 , 6, 9, 0, 12, 11, 7, 13,15, 1, 3, 14, 5, 2, 8, 4},
{3 , 15, 0, 6, 10, 1, 13, 8, 9, 4, 5, 11, 12, 7, 2, 14}
};
static int[][] sb5={
{ 2, 12, 4, 1, 7, 10, 11, 6, 8, 5, 3, 15, 13, 0, 14, 9},
{14, 11, 2, 12, 4, 7, 13, 1, 5, 0, 15, 10, 3, 9, 8, 6},
{4 , 2, 1, 11, 10, 13, 7, 8, 15, 9, 12, 5, 6, 3, 0, 14},
{11, 8, 12, 7 , 1, 14, 2, 13, 6, 15, 0, 9, 10, 4, 5, 3}
};
static int[][] sb6={
{ 12, 1, 10, 15, 9, 2, 6, 8, 0, 13, 3, 4, 14, 7, 5, 11},
{10, 15, 4, 2, 7, 12, 9, 5, 6, 1, 13, 14, 0, 11, 3, 8},
{9 , 14, 15, 5, 2, 8, 12, 3, 7, 0, 4, 10, 1, 13,11, 6},
{4, 3, 2, 12, 9, 5, 15, 10, 11, 14, 1, 7, 6, 0, 8, 13} //S6 row 3 per FIPS 46-3; the original had a duplicated 12
};
static int[][] sb7={
{ 4,11, 2, 14, 15, 0, 8, 13, 3, 12, 9, 7, 5, 10, 6, 1},
{13, 0, 11, 7, 4, 9, 1, 10, 14, 3, 5, 12, 2, 15, 8, 6},
{1 , 4, 11, 13, 12, 3, 7, 14, 10, 15, 6, 8, 0, 5, 9, 2},
{6, 11, 13, 8, 1, 4, 10, 7, 9, 5, 0, 15, 14, 2, 3, 12}
};
static int[][] sb8={
{13, 2, 8, 4, 6, 15, 11, 1, 10, 9, 3,14, 5, 0, 12, 7},
{ 1, 15, 13, 8, 10, 3, 7, 4, 12, 5, 6,11, 0, 14, 9, 2},
{7 , 11, 4, 1, 9, 12, 14, 2, 0, 6, 10,13, 15, 3, 5, 8},
{ 2, 1, 14, 7, 4, 10, 8, 13, 15, 12, 9, 0, 3, 5, 6, 11}
};
public String Filippakoc_Des(String ptxt,String key,boolean encdes){
//String a=String.format("%64s", Integer.toBinaryString(255)).replace(' ', '0');
//String b="1011011111001001100001110000101110110000010001110101111101001010";
String ipa=IPpermutation(ptxt);
String ipb=IPpermutation(ptxt);
String[] keys=Keygenerate(key);
String[] rounds=new String[16];
String cripttext="";
if(encdes==true){
for(int i=0;i<rounds.length;i++){
if(i<1){
rounds[i]=round(ipa,keys[i]);
}else{
rounds[i]=round(rounds[i-1],keys[i]);
}
}
cripttext=IPpermutation_(rounds[15]);
//System.out.println(ptxt);
//System.out.println(cripttext);
}else{
for(int i=rounds.length-1;i>=0;i--){
if(i>14){
rounds[i]=dround(ipb,keys[i]);
}else{
rounds[i]=dround(rounds[i+1],keys[i]);
}
}
cripttext=IPpermutation_(rounds[0]);
//System.out.println(ptxt);
//System.out.println(cripttext);
}
return cripttext;
//String a=String.format("%32s", Integer.toBinaryString(255)).replace(' ', '0');
//String b=String.format("%48s", Integer.toBinaryString(21)).replace(' ', '0');
//System.out.println(functionf(a, b));
//System.out.println(xor(a,b));
}
public String Filippakoc_3des(String ptxt,String key1,String key2,String key3,boolean encdes){
String En="";
if(encdes){
En=Filippakoc_Des(ptxt, key1, true);
En=Filippakoc_Des(En, key2, false);
En=Filippakoc_Des(En, key3, true);
}else{
En=Filippakoc_Des(ptxt, key1, false);
En=Filippakoc_Des(En, key2, true);
En=Filippakoc_Des(En, key3, false);
}
return En;
}
public String round(String plain64,String keyi){
String li=plain64.substring(0,32);
String ri=plain64.substring(32);
String ro=xor(li,functionf(ri,keyi));
String lo=ri;
String resultround=lo+ro;
return resultround;
}
public String dround(String plain64,String keyi){
String li=plain64.substring(0,32);
String ri=plain64.substring(32);
String lo=xor(ri,functionf(li,keyi));
String ro=li;
String resultround=lo+ro;
return resultround;
}
public String xor(String a,String b){
String res="";
for(int i=0;i<a.length();i++){
if(a.charAt(i)=='0' && b.charAt(i)=='0'){
res+='0';
}else if(a.charAt(i)=='0' && b.charAt(i)=='1'){
res+='1';
}else if(a.charAt(i)=='1' && b.charAt(i)=='0'){
res+='1';
}else if(a.charAt(i)=='1' && b.charAt(i)=='1'){
res+='0';
}
}
return res;
}
public String sboxesrtn(int ff){
String sb=Integer.toBinaryString(ff);
if(ff<2){
sb="000"+sb;
}else if(ff<4){
sb="00"+sb;
}else if(ff<8){
sb="0"+sb;
}
return sb;
}
public String functionf(String r32,String k48){
String r48=Epermutation(r32);
String xorout=xor(r48, k48);
String[] splitto6=new String[8];
splitto6[0]=xorout.substring(0,6);
splitto6[1]=xorout.substring(6,12);
splitto6[2]=xorout.substring(12,18);
splitto6[3]=xorout.substring(18,24);
splitto6[4]=xorout.substring(24,30);
splitto6[5]=xorout.substring(30,36);
splitto6[6]=xorout.substring(36,42);
splitto6[7]=xorout.substring(42);
String rnew32="";
String row;
String column="";
for(int i=0;i<splitto6.length;i++){
row=""+splitto6[i].charAt(0)+splitto6[i].charAt(5);
column=splitto6[i].substring(1,5);
//System.out.println(splitto6[i]+" "+row+" "+column);
int irow=Integer.parseInt(row, 2);
int icolumn=Integer.parseInt(column, 2);
if(i==0){
//rnew32+=Integer.toBinaryString(sb1[irow][icolumn]);
rnew32+=String.format("%4s", Integer.toBinaryString(sb1[irow][icolumn])).replace(' ', '0');
}else if(i==1){
rnew32+=String.format("%4s", Integer.toBinaryString(sb2[irow][icolumn])).replace(' ', '0');
}else if(i==2){
rnew32+=String.format("%4s", Integer.toBinaryString(sb3[irow][icolumn])).replace(' ', '0');
}else if(i==3){
rnew32+=String.format("%4s", Integer.toBinaryString(sb4[irow][icolumn])).replace(' ', '0');
}else if(i==4){
rnew32+=String.format("%4s", Integer.toBinaryString(sb5[irow][icolumn])).replace(' ', '0');
}else if(i==5){
rnew32+=String.format("%4s", Integer.toBinaryString(sb6[irow][icolumn])).replace(' ', '0');
}else if(i==6){
rnew32+=String.format("%4s", Integer.toBinaryString(sb7[irow][icolumn])).replace(' ', '0');
}else if(i==7){
rnew32+=String.format("%4s", Integer.toBinaryString(sb8[irow][icolumn])).replace(' ', '0');
}
}
//System.out.println(rnew32.length());
//System.out.println(rnew32);
// System.out.print(splitto6[0]+" "+splitto6[1]+" "+splitto6[2]+" "+splitto6[3]+" "+splitto6[4]+" "+splitto6[5]+" "+splitto6[6]+" "+splitto6[7]+"\n");
rnew32=Ppermutation(rnew32);
return rnew32;
}
public String Epermutation(String r32){
r32=" "+r32;
String r48=""+r32.charAt(32)+r32.charAt(1)+r32.charAt(2)+r32.charAt(3)+r32.charAt(4)+r32.charAt(5)+r32.charAt(4)+r32.charAt(5)+r32.charAt(6)+r32.charAt(7)+r32.charAt(8)+r32.charAt(9)+r32.charAt(8)+r32.charAt(9)+r32.charAt(10)+r32.charAt(11)+r32.charAt(12)+r32.charAt(13)+r32.charAt(12)+r32.charAt(13)+r32.charAt(14)+r32.charAt(15)+r32.charAt(16)+r32.charAt(17)+r32.charAt(16)+r32.charAt(17)+r32.charAt(18)+r32.charAt(19)+r32.charAt(20)+r32.charAt(21)+r32.charAt(20)+r32.charAt(21)+r32.charAt(22)+r32.charAt(23)+r32.charAt(24)+r32.charAt(25)+r32.charAt(24)+r32.charAt(25)+r32.charAt(26)+r32.charAt(27)+r32.charAt(28)+r32.charAt(29)+r32.charAt(28)+r32.charAt(29)+r32.charAt(30)+r32.charAt(31)+r32.charAt(32)+r32.charAt(1); //first E-table entry is bit 32 (original had 31)
return r48;
}
public String Ppermutation(String r32){
r32=" "+r32;
String r32p=""+r32.charAt(16)+r32.charAt(7)+r32.charAt(20)+r32.charAt(21)+r32.charAt(29)+r32.charAt(12)+r32.charAt(28)+r32.charAt(17)+r32.charAt(1)+r32.charAt(15)+r32.charAt(23)+r32.charAt(26)+r32.charAt(5)+r32.charAt(18)+r32.charAt(31)+r32.charAt(10)+r32.charAt(2)+r32.charAt(8)+r32.charAt(24)+r32.charAt(14)+r32.charAt(32)+r32.charAt(27)+r32.charAt(3)+r32.charAt(9)+r32.charAt(19)+r32.charAt(13)+r32.charAt(30)+r32.charAt(6)+r32.charAt(22)+r32.charAt(11)+r32.charAt(4)+r32.charAt(25); //first P-table entry is 16 (original started with 17 twice)
return r32p;
}
public String IPpermutation(String r64){
r64=" "+r64;
String r64p=""+r64.charAt(58)+r64.charAt(50)+r64.charAt(42)+ r64.charAt(34)+r64.charAt(26)+r64.charAt(18)+r64.charAt(10)+r64.charAt(2)
+r64.charAt(60)+r64.charAt(52)+r64.charAt(44)+r64.charAt(36)+r64.charAt(28)+r64.charAt(20)+r64.charAt(12)+r64.charAt(4)+
r64.charAt(62)+r64.charAt(54)+r64.charAt(46)+r64.charAt(38)+r64.charAt(30)+r64.charAt(22)+r64.charAt(14)+r64.charAt(6)+
r64.charAt(64)+r64.charAt(56)+r64.charAt(48)+r64.charAt(40)+r64.charAt(32)+r64.charAt(24)+r64.charAt(16)+r64.charAt(8)+
r64.charAt(57)+r64.charAt(49)+ r64.charAt(41)+r64.charAt(33)+r64.charAt(25)+r64.charAt(17)+r64.charAt(9)+r64.charAt(1)+
r64.charAt(59)+r64.charAt(51)+r64.charAt(43)+r64.charAt(35)+r64.charAt(27)+r64.charAt(19)+r64.charAt(11)+r64.charAt(3)+
r64.charAt(61)+r64.charAt(53)+r64.charAt(45)+r64.charAt(37)+r64.charAt(29)+r64.charAt(21)+r64.charAt(13)+r64.charAt(5)+
r64.charAt(63)+r64.charAt(55)+r64.charAt(47)+r64.charAt(39)+r64.charAt(31)+r64.charAt(23)+r64.charAt(15)+r64.charAt(7);
return r64p;
}
public String IPpermutation_(String r64){
r64=" "+r64;
String r64p=""+
r64.charAt(40)+r64.charAt(8)+r64.charAt(48)+ r64.charAt(16)+r64.charAt(56)+r64.charAt(24)+r64.charAt(64)+r64.charAt(32)+
r64.charAt(39)+r64.charAt(7)+r64.charAt(47)+r64.charAt(15)+r64.charAt(55)+r64.charAt(23)+r64.charAt(63)+r64.charAt(31)+
r64.charAt(38)+r64.charAt(6)+r64.charAt(46)+r64.charAt(14)+r64.charAt(54)+r64.charAt(22)+r64.charAt(62)+r64.charAt(30)+
r64.charAt(37)+r64.charAt(5)+r64.charAt(45)+r64.charAt(13)+r64.charAt(53)+r64.charAt(21)+r64.charAt(61)+r64.charAt(29)+
r64.charAt(36)+r64.charAt(4)+ r64.charAt(44)+r64.charAt(12)+r64.charAt(52)+r64.charAt(20)+r64.charAt(60)+r64.charAt(28)+
r64.charAt(35)+r64.charAt(3)+r64.charAt(43)+r64.charAt(11)+r64.charAt(51)+r64.charAt(19)+r64.charAt(59)+r64.charAt(27)+
r64.charAt(34)+r64.charAt(2)+r64.charAt(42)+r64.charAt(10)+r64.charAt(50)+r64.charAt(18)+r64.charAt(58)+r64.charAt(26)+
r64.charAt(33)+r64.charAt(1)+r64.charAt(41)+r64.charAt(9)+r64.charAt(49)+r64.charAt(17)+r64.charAt(57)+r64.charAt(25);
return r64p;
}
public String[] Keygenerate(String key){
String[] keys=new String[16];
String key56=pc_1(key);
String lkey28=key56.substring(0,28);
String rkey28=key56.substring(28);
for(int i=0;i<keys.length;i++){
if(i==0 || i==1 || i==8 || i==15){
lkey28=Lbitrotate(lkey28, 1);
rkey28=Lbitrotate(rkey28, 1);
keys[i]=pc_2(lkey28+rkey28);
}else{
lkey28=Lbitrotate(lkey28, 2);
rkey28=Lbitrotate(rkey28, 2);
keys[i]=pc_2(lkey28+rkey28);
}
}
return keys;
}
public String pc_1(String key){
String key56="";
key56=""+key.charAt(56)+key.charAt(48)+key.charAt(40)+key.charAt(32)+key.charAt(24)+key.charAt(16)+key.charAt(8)+key.charAt(0)+key.charAt(57)+key.charAt(49)+key.charAt(41)+key.charAt(33)+key.charAt(25)+key.charAt(17)+key.charAt(9)+key.charAt(1)+key.charAt(58)+key.charAt(50)+key.charAt(42)+key.charAt(34)+key.charAt(26)+key.charAt(18)+key.charAt(10)+key.charAt(2)+key.charAt(59)+key.charAt(51)+key.charAt(43)+key.charAt(35)+key.charAt(62)+key.charAt(54)+key.charAt(46)+key.charAt(38)+key.charAt(30)+key.charAt(23)+key.charAt(14)+key.charAt(6)+key.charAt(61)+key.charAt(53)+key.charAt(45)+key.charAt(37)+key.charAt(29)+key.charAt(21)+key.charAt(13)+key.charAt(5)+key.charAt(60)+key.charAt(52)+key.charAt(44)+key.charAt(36)+key.charAt(28)+key.charAt(20)+key.charAt(12)+key.charAt(4)+key.charAt(27)+key.charAt(19)+key.charAt(11)+key.charAt(3);
return key56;
}
public String pc_2(String key){
String key48="";
key48=""+key.charAt(13)+key.charAt(16)+key.charAt(10)+key.charAt(23)+key.charAt(0)+key.charAt(4)+key.charAt(2)+key.charAt(27)+key.charAt(14)+key.charAt(5)+key.charAt(20)+key.charAt(9)+key.charAt(22)+key.charAt(18)+key.charAt(11)+key.charAt(3)+key.charAt(25)+key.charAt(7)+key.charAt(15)+key.charAt(6)+key.charAt(26)+key.charAt(19)+key.charAt(12)+key.charAt(1)+key.charAt(40)+key.charAt(51)+key.charAt(30)+key.charAt(36)+key.charAt(46)+key.charAt(54)+key.charAt(29)+key.charAt(39)+key.charAt(50)+key.charAt(44)+key.charAt(32)+key.charAt(47)+key.charAt(43)+key.charAt(48)+key.charAt(38)+key.charAt(55)+key.charAt(33)+key.charAt(52)+key.charAt(45)+key.charAt(41)+key.charAt(49)+key.charAt(35)+key.charAt(28)+key.charAt(31);
return key48;
}
public String Lbitrotate(String bin,int times){
String binary=bin;
String binary2="";
String binarynew="";
for(int i=0;i<times;i++){
if(i<1){
binary2=binary.substring(1)+binary.charAt(0);
}else{
binary2=binarynew.substring(1)+binarynew.charAt(0);
}
binarynew=binary2;
}
return binary2;
}
}
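Aside: the `Lbitrotate` helper above just rotates a bit-string left by `times` positions (note it assumes `times >= 1`, which holds for the DES key schedule). A compact Python equivalent, shown only as a sketch:

```python
def lbitrotate(bits: str, times: int) -> str:
    """Rotate a bit-string left by `times` positions,
    e.g. for the DES key-schedule shifts."""
    times %= len(bits)
    return bits[times:] + bits[:times]

# e.g. lbitrotate("0001100", 2) -> "0110000"
```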
A: The DES Algorithm Illustrated is the best simple summary (easy to digest) of DES I have seen.
As for an actual picture see How DES Works or a Mirror of How DES Works
For AES there is A Stick Figure Guide to the Advanced Encryption Standard (AES).
A: I have checked Wikipedia, but it seems I needed more. I also think that Schneier's book is perfect; I have it back home and could not transport it to where I study :-( shame....
Nobody said this was going to be easy; perhaps I should just read it many times and write it down on paper until it sticks in my brain (or have Applied Cryptography posted). Thanks for the very quick answers.
A: I found out that the more repetitions I do the better it gets! Also found out the hard way, that doing it day after day helps more than all-in-once. At least I could get a copy of Schneier's book which helps a lot.
Writing my own is not an option since it is a new algorithm each week, but it is a very interesting, helpful (and obvious sometimes) idea.
It has nothing to do with DES, but you convinced me to learn Python :-)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Play a single note with DirectMusic I'm using DirectMusic for MIDI playback in an application I'm developing. Does anyone know if it's possible to use DirectMusic to play individual notes? Currently, I'm converting an in-memory data structure that represents entire 'songs' into a MIDI buffer and playing it back through DirectMusic. I'd like to be able to play individual notes without having to generate a MIDI buffer for it, loading it and playing it. Is this type of thing even possible with DirectMusic?
This is my first excursion into the world of DirectMusic so hopefully I'm not too uninformed of its capabilities...
A: I believe that stuffing your note messages into a DirectMusicBuffer8 and then playing that is indeed the simplest way to do it.
I assume you're aware that DirectMusic is deprecated, not recommended for new development etc. etc.
A: Hmm, I'll see if I can dig more info up on that on MSDN.
I am aware DirectMusic is deprecated, however, my understanding is that XAudio2 has very poor support for MIDI. Unless I'm mistaken on that - I would switch to XAudio2 in an instant if it supports MIDI, as I'm only in the early stages of integrating DirectMusic into my application.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How to show fullscreen popup window in javascript? Is there a way to make a popup window maximised as soon as it is opened? If not that, at least make it screen-sized? This:
window.open(src, 'newWin', 'fullscreen="yes"')
apparently only worked for old version of IE.
A: What about this:
var popup = window.open(URL);
if (popup == null)
alert('Please change your popup settings');
else {
popup.moveTo(0, 0);
popup.resizeTo(screen.width, screen.height);
}
A: What about this? I gave the width and height big values and it works:
window.open("https://www.w3schools.com", "_blank","toolbar=yes,scrollbars=yes,resizable=yes,top=500,left=500,width=4000,height=4000");
A: More than bad design - this "feature" is a recipe for UI disaster. There were a number of malicious web sites which exploited the full screen view features in JavaScript to hijack browser windows and display a screen indistinguishable from the user's desktop. While there may still be a way to do this, please for the love of all things decent, do not implement this.
A: Use screen.availWidth and screen.availHeight to calculate a suitable size for the height and width parameters in window.open()
Although this is likely to be close, it will not be maximised, nor accurate for everyone, especially if all the toolbars are shown.
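The calculation is simple to factor out. A small sketch with a hypothetical `popupFeatures` helper (note that `screen.availWidth`/`screen.availHeight` only exist in a browser):

```javascript
// Hypothetical helper: build the feature string (third argument of
// window.open) from the available screen size.
function popupFeatures(availWidth, availHeight) {
  return 'width=' + availWidth + ',height=' + availHeight + ',top=0,left=0';
}

// In a browser you would then call:
//   window.open(url, 'newWin', popupFeatures(screen.availWidth, screen.availHeight));
```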
A: Try this. It works for me with any link you want, or anything else in the popup.
Anything you choose will be shown at full screen size in a popup window.
<script language="JavaScript">
function Full_W_P(url) {
params = 'width='+screen.width;
params += ', height='+screen.height;
params += ', top=0, left=0'
params += ', fullscreen=yes';
params += ', directories=no';
params += ', location=no';
params += ', menubar=no';
params += ', resizable=no';
params += ', scrollbars=no';
params += ', status=no';
params += ', toolbar=no';
newwin=window.open(url,'FullWindowAll', params);
if (window.focus) {newwin.focus()}
return false;
}
</script>
<input type="button" value="Open as Full Window PopUp" onclick="javascript:Full_W_P('http://www.YourLink.com');">
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "48"
} |
Q: XML DOM vs ADO DataSet I have the option of storing data in memory either as an XML document or multi-table ADO dataset. The web page utilizing this object will be selectively retrieving data items based on keys.
Does either of these objects have a clear performance advantage over the other?
A: If you decide to go down the XML route and if you're using .NET 3.5 consider looking at the new XDocument, XElement (and friends) classes in the System.Xml.Linq namespace. You can use Linq to XML to query your XML documents and it's rather good.
A: A DataSet is already an XML Document, sort of. One of the additional perks you get with the dataset is relationship between tables, so you can actually query the DataSet as an in-memory database.
XML is really good, especially in certain situations, but not for searching lots of data.
If it's items based on keys with no relation to other tables then go with the XML, but be careful with large subsets of data.
A: Picking up on what Saif said, your biggest challenge with XML will be relating keys from one document to the other. You can create unique keys for all the nodes in the document, but you may get key collisions if you have another document of similar structure, since the documents are distinct. In other words, you would have to ensure there was no overlap, by either merging documents temporarily or creating another scheme that concatenates additional identifiers onto the dynamically generated keys.
Depending on your comfort level with XPath and XSLT, you may want to create tables and relations with ADO, as this is more straightforward. You can do grouping of keys, and Jeni Tennison, who posts at StackOverflow, has a great series on these methods.
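To make the collision point concrete, here is a small sketch, in Python rather than ADO/XML DOM, with a hypothetical `node_keys` helper that prefixes each generated node key with a document identifier, so two structurally identical documents can never collide:

```python
import xml.etree.ElementTree as ET

def node_keys(doc_id: str, xml_text: str) -> dict:
    """Map 'docid:index' keys to tag names; keys from two
    structurally identical documents stay disjoint."""
    root = ET.fromstring(xml_text)
    return {f"{doc_id}:{i}": el.tag for i, el in enumerate(root.iter())}

a = node_keys("docA", "<items><item/><item/></items>")
b = node_keys("docB", "<items><item/><item/></items>")
# identical structure, but the key sets do not overlap
assert not (a.keys() & b.keys())
```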
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Embedding JavaScript engine into .NET just wondering if anyone has ever tried embedding and actually integrating any js engine into the .net environment. I could find and actually use (after a LOT of pain and effort, since it's pretty outdated and not quite finished) spidermonkey-dotnet project. Anyone with experience in this area? Engines like SquirrelFish, V8..
Not that I'm not satisfied with Mozilla's Spidermonkey (using it for Rails-like miniframework for custom components inside the core ASP.NET application), but I'd still love to explore a bit further with the options. The command-line solutions are not what I'd need, I cannot rely on anything else than CLR, I need to call methods from/to JavaScript/C# objects.
// c# class
public class A
{
public string Hello(string msg)
{
return msg + " whatewer";
}
}
// js snippet
var a = new A();
console.log(a.Hello('Call me')); // i have a console.log implemented, don't worry, it's not a client-side code :)
Just to clarify - I'm not trying to actually program the application itself in server-side javascript. It's used solely for writing custom user subapplications (it can be seen as some sort of DSL). It's much easier (and safer) to let normal people program in js than in C#.
A: Try Javascript .NET. It is hosted on GitHub (it was originally hosted on CodePlex).
Project discussions: http://javascriptdotnet.codeplex.com/discussions
It implements Google V8. You can compile and run JavaScript directly from .NET code with it, and supply CLI objects to be used by the JavaScript code as well. It generates native code from JavaScript.
A: The open source JavaScript interpreter Jint (http://jint.codeplex.com) does exactly what you are looking for.
Edit:
The project has been entirely rewritten and is now hosted on Github at https://github.com/sebastienros/jint
A: You can try IronJS; it looks promising, although it is in heavy development. https://github.com/fholm/IronJS
A: I guess I am still unclear about what it is you are trying to do, but JScript.NET might be worth looking into, though Managed JScript seems like it may be more appropriate for your needs (it is more like JavaScript than JScript.NET).
Personally, I thought it would be cool to integrate V8 somehow, but I didn't get past downloading the source code; wish I had the time to actually do something with it.
A: I came up with a much simpler solution instead.
I built a .dll from JavaScript by compiling it with the JavaScript compiler, which is available in a VS2013 developer command prompt.
Once we have the .dll, we simply add it to the \Support folder and then reference it in the project that needs to eval JavaScript statements.
Detailed Steps to create a .dll:
*
*Create a file in Notepad with only these contents:
class EvalClass { function Evaluate(expression: String) { return eval(expression); } }
*Save the file as C:\MyEval.js
*Open a VS2005 Command Prompt (Start, Programs, VS2005, VS2005 Tools)
*Type Cd\ to get to C:\
*Type
jsc /t:library C:\MyEval.js
*A new file is created named MyEval.dll.
*Copy MyEval.dll to the project and reference it (also reference Microsoft.Jscript.dll).
*Then you should be able to call it like this:
Dim jScriptEvaluator As New EvalClass
Dim objResult As Object
objResult = jScriptEvaluator.Evaluate("1==1 && 2==2")
objResult is True.
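What the compiled JScript class does boils down to a one-line eval wrapper; sketched here in plain JavaScript (the `evaluate` function is just an illustration, not part of the generated dll):

```javascript
// What EvalClass.Evaluate amounts to: delegate the expression
// string to the engine's own eval().
function evaluate(expression) {
  return eval(expression);
}

// evaluate("1==1 && 2==2") -> true
```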
A: You might also be interested in Microsoft ClearScript
which is hosted on GitHub and published under the Ms-Pl licence.
I am no Microsoft fanboy, but I must admit that the V8 support has about the same functionality as Javascript.Net, and more importantly, the project is still maintained. As far as I am concerned, the support for delegates also works better than with Spidermonkey-dotnet.
ps: It also supports JScript and VBScript, but we were not interested in this old stuff.
ps: It is compatible with .NET 4.0 and 4.5+
A: If the language isn't a problem (any sandboxed scripted one) then there's LUA for .NET. The Silverlight version of the .NET framework is also sandboxed afaik.
A: Hey, take a look at Javascript .NET on CodePlex (http://javascriptdotnet.codeplex.com/); with version 0.3.1 there are some pretty sweet new features that will probably interest you.
Check out some sample code:
// Initialize the context
JavascriptContext context = new JavascriptContext();
// Setting the externals parameters of the context
context.SetParameter("console", new SystemConsole());
context.SetParameter("message", "Hello World !");
context.SetParameter("number", 1);
// Running the script
context.Run("var i; for (i = 0; i < 5; i++) console.Print(message + ' (' + i + ')'); number += i;");
// Getting a parameter
Console.WriteLine("number: " + context.GetParameter("number"));
A: You can use the Chakra engine in C#. Here is an article on msdn showing how:
http://code.msdn.microsoft.com/windowsdesktop/JavaScript-Runtime-Hosting-d3a13880
A: Anybody just tuning in, check out Jurassic as well:
http://jurassic.codeplex.com/
edit: this has moved to github (and seems active at first glance)
https://github.com/paulbartrum/jurassic
A: I just tried RemObjects Script for .Net.
It works, although I had to use a static factory (var a=A.createA();) from JavaScript instead of the var a=new A() syntax. (ExposeType function only exposes statics!)
Not much documentation and the source is written with Delphi Prism, which is rather unusual for me and the RedGate Reflector.
So: easy to use and set up, but not much help for advanced scenarios.
Also having to install something instead of just dropping the assemblies in a directory is a negative for me...
A: Microsoft's documented way to add script extensibility to anything is IActiveScript. You can use IActiveScript from within any .NET app to call script logic. The logic can party on .NET objects that you've placed into the scripting context.
This answer provides an application that does it, with code:
*
*Will the IE10 Chakra JScript engine available as stand alone accessible from C#?
A: There is an implementation of an ActiveX Scripting Engine Host in C# available here: parse and execute JS by C#
It allows to use Javascript (or VBScript) directly from C#, in native 32-bit or 64-bit processes. The full source is ~500 lines of C# code. It only has an implicit dependency on the installed JScript (or VBScript) engine DLL.
For example, the following code:
Console.WriteLine(ScriptEngine.Eval("jscript", "1+2/3"));
will display 1.66666666666667
A: There is also MsieJavaScriptEngine, which uses Internet Explorer's Chakra engine.
A: I believe all the major open-source JS engines (JavaScriptCore, SpiderMonkey, V8, and KJS) provide embedding APIs. The only one I am actually directly familiar with is JavaScriptCore (which is the name of the JS engine SquirrelFish lives in), which provides a pure C API. If memory serves (it's been a while since I used .NET), .NET has fairly good support for linking in C APIs.
I'm honestly not sure what the API's for the other engines are like, but I do know that they all provide them.
That said, depending on your purposes JScript.NET may be best, as all of these other engines will require you to include them with your app, as JSC is the only one that actually ships with an OS, but that OS is MacOS :D
A: I know I'm opening up an old thread, but I've done a lot of work on smnet (spidermonkey-dotnet) in recent years. Its main development focus has been seamless embedding of .net objects into the spidermonkey engine. It supports a wide variety of conversions from js values to .net objects, some of those including delegates and events.
Just saying it might be worth checking into now that there's some steady development on it :).
I do keep the SVN repo up to date with bug fixes and new features. The source and project solution files are configured to successfully build on download. If there are any problems using it, feel free to open a discussion.
I do understand the desire to have a managed javascript solution, but all the managed JavaScripts I've used are very lacking in some key features that help make them both robust and easy to work with. I myself am waiting on IronJS to mature a little. While I wait, I have fun playing with spidermonkey-dotnet =)
spidermonkey-dotnet project and download page
Edit: created documentation wiki page this afternoon.
A: Try ReoScript, an open-source JavaScript interpreter implemented in C#.
ReoScript lets your application execute JavaScript. It has a wide variety of extension methods such as SetVariable, Function Extension, using CLR Type, .Net Event Binding, etc.
Hello World:
ScriptRunningMachine srm = new ScriptRunningMachine();
srm.Run(" alert('hello world!'); ");
And here is an example of script that creates a winform and show it.
import System.Windows.Forms.*; // import namespace
var f = new Form(); // create form
f.click = function() { f.close(); }; // close when user clicked on form
f.show(); // show
A: Use JScript.NET to get a library (dll) of the JS. Then reference this dll in your .NET application and you are there. DONE!
A: V8.NET is a new kid on the block (as of April 2013) that more closely wraps the native V8 engine functionality. It allows for more control over the implementation.
A: You can use Rhino, Mozilla's JavaScript engine written in Java, together with IKVM. Here are some instructions:
https://www.codeproject.com/Articles/41792/Embedding-JavaScript-into-C-with-Rhino-and-IKVM
A: It's possible now with the ASP.NET MVC4 Razor view engine. The code will be this:
// c# class
public class A
{
public string Hello(string msg)
{
return msg + " whatewer";
}
}
// js snippet
<script type="text/javascript">
var a = new A();
console.log('@a.Hello("Call me")'); // I have a console.log implemented, don't worry, it's not client-side code :)
</script>
And Razor isn't just for MVC4 or other web applications; you can use it in offline desktop applications.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "222"
} |
Q: MVC Frameworks for Windows Mobile Native Code Are there any good MVC frameworks for native Windows Mobile code?
Barring that could someone link to an open source Windows Mobile or CE project that uses the MVC pattern?
A: Perhaps you could try Qt. It provides some classes for MVC programming. Here is a link for MVC in Qt: http://doc.trolltech.com/4.4/model-view-programming.html
A: Inspired by akam129, this is a link from Qt Quarterly on MVC.
Overview: the controls in Qt use MVC internally, and it is possible to do MVC programming at the application level by using the signal-slot mechanism.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Combined post-operators? We're all familiar with the pre- and post-increment operators, e.g.
c++; // c = c + 1
++c; // ditto
and the "combined operators" which extend this principle:
c += 5; // c = c + 5
s .= ", world"; // s = s . ", world"; e.g. PHP
I've often had a need for a 'post-combined operator', which would allow:
s =. "Hello "; // s = "Hello " . s
Obviously, this is only really useful with non-commutative operators, and the meaning is altered from pre-/post-increment, even though the syntax is borrowed.
Are you aware of any language that offers such an operator, and why isn't it more common?
A: A basic objection to the example as-given is that it'd create ambiguity:
a=-5; //'a = -5' or 'a =- 5'?
b=*p; //'b = *p' or 'b =* p'?
c=.5; //'c = .5' or 'c =. 5'?
Edit: But no, I'm not aware of any languages that use them. Presumably this is because they were omitted from C from whence most other languages derive their basic operators.
And yeah, I'd love to see them in the languages I use.
A: Combined 'op=' operators are shorthand for var = var op predicate, but op doesn't have to be anything in particular. The '.' operator happens to be shorthand for an "append" operation in a number of languages, but there's nothing stopping you from defining a "prepend" operation — or even changing append to prepend — in any language that allows overloading an operator on whatever data type you want to work with (or on any object you might use to contain that data otherwise). If you want to make textvar /= newtext shorthand for textobj = textobj->prepend(newtext) there's nothing stopping you.
That said, you should probably never do this, as there's no way to change an existing operator without causing loads of confusion. It is substantially easier (and clearer) to change your statement order so that you can go left to right, or use a specific function such as unshift to prepend things without any ambiguity.
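The overloading idea is easy to demonstrate outside .NET. A Python sketch, arbitrarily repurposing `<<=` as a prepend-assign on a small hypothetical wrapper class (both the class and the operator choice are illustrations, not an established convention):

```python
class Text:
    """Tiny wrapper demonstrating an overloaded 'prepend-assign' operator."""
    def __init__(self, s: str):
        self.s = s

    def __ilshift__(self, other: str):
        # t <<= "Hello " prepends rather than shifting
        self.s = other + self.s
        return self

t = Text("world")
t <<= "Hello "
assert t.s == "Hello world"
```

As the answer notes, hijacking an existing operator like this is clearer in a toy than in real code; a named `prepend` method avoids the confusion.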
A: None that I know about, and I don't think that there will be, as it's a meta-meta-command.
To explain, the original operators (for numbers) came from C where they mapped directly to machine code operations. This allowed the programmer to do optimization, since the early compiler didn't.
So,
x=x+1;
and
x+=1;
and
x++;
would generate three different assembler outputs.
Now, adding += for string is a meta-command. It doesn't map to an opcode, it just follows and extends the pattern.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Subversion large repos import/checkout My normal work flow to create a new repository with subversion is to create a new repos, do a checkout of the repos root, create my branches tags and trunk folders and place in the trunk my initial files. Then I do a commit of this "initial import", delete the checked out repos from my hard drive and do a checkout of the trunk. Then I can start working.
However, when dealing with a large import, think hundreds of megs, and off-site version control hosting (http based) this initial import can take quite a while to commit. What's worse, after committing I need to checkout this massive trunk all over again.
Is there a way with subversion to use the local copy of the trunk without doing a checkout all over again of data that is already there?
A: I agree on the "in-place import" procedure and also on using a script for the TTB structure (upvoted both).
Just a small hint:
You should not import a huge number (tens of thousands) of files in a single commit if you use http(s), as the time to display the version history scales with the number of added entries. The reason for this behaviour is that Apache has to authenticate all added paths against the svnaccess file (of course, only if you enabled path-based authorization). This can render your repository unusable, as every svn log will have to wait on this big rev.
You should divide huge imports by directory level.
A: I usually use "svn mkdir" to create the trunk/tags/branches directly on the server immediately after creating the repository. Then I can check out the empty trunk, move my initial files into that directory, add and commit them, and start working.
A: svn checkout --force lets you checkout a workingcopy 'over' an existing path. It keeps your old files and adds files that are only in your repository.
For creating your repository: You can perform multiple mkdir commands to a repository in a single commit using the 'svnmucc' command that is available in most Subversion distributions (e.g. SlikSVN).
Type svnmucc without arguments for some help.
A: There is - it's called an "in-place import", and it's covered in the Subversion FAQ here:
http://subversion.tigris.org/faq.html#in-place-import
What you're really doing is creating a new empty project in the repository, checking out the empty project to your local folder - which turns your folder into a working copy - and then adding all your (existing) files to that 'empty' project, so they're added to the repository when you do an svn commit.
A: If you've checked out a single folder, copied your files into it, and run svn add and svn commit, you shouldn't need to delete the files and re-checkout.
Use the files in place: once they've been committed as you describe, they're ready to be worked on.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Good dynamic programming language for .net recommendation Well, after a long time writing .net programs in C# I started to feel angry and frustrated about all that syntactic trash that comes with statically typed languages. Now I want to change to dynamically typed languages, that takes advantage of CLI.
So I searched around a bit and I didn't like what I saw. I first looked at IronPython, but the project feels disorganized; it just didn't look good at all. Then I heard about Boo. I liked Boo's ideals and all, but it kinda feels like the project is halted. Then I heard of IronRuby, but the project is still in beta, so I decided to wait until it gets more mature.
So as I couldn't find a good CLR-compatible dynamic language, I'm asking you guys: what do you (or would you) use?
Since people started asking what the reason is for not choosing IronPython: well, the reason, like I stated earlier, is that it seems disorganized. Why?
1- The homepage points to another page at CodePlex; the homepage should be clean and just point out the advantages of IronPython. Take the IronRuby page as an example, http://www.ironruby.net/ , which hides language-development stuff from the user (even though the user has to access IronRuby's svn prior to using it).
What kind of issues was IPython trying to address when it was created? Is there a page with that kind of information?
Well, the page is there, but it's hidden on the 'More Information' page among a bunch of meaningless links to articles.
On Boo's page it's in clear sight, named the Manifesto: http://boo.codehaus.org/BooManifesto.pdf .
There is more, but the feeling I have is that IPython is just a home-brewed interpreter, despite the quality it may actually have.
I felt that it was safer to download Boo and use it (but no worries, Microsoft, I also downloaded IPython).
About duck typing and Boo's static typing: they both seem to work fine for me.
A: I'd still use Boo. I'm not sure why you believe Boo has been halted. Development sometimes seems slow, but there are multiple people currently working on bug fixes as demonstrated by this list of recently fixed issues (bugs).
For those unfamiliar with Boo, it's very similar to Python, but includes things that Python does not (like string interpolation and syntatic macros). You can compile Boo programs or use Boo through the "Boo Interactive Shell" booish.
By the way, I didn't like IronPython either when I looked at it a couple years ago. To me it looked like a straight port of Python to the CLI, but as far as I could tell it did not include new features typical .NET development requires.
EDIT: IronPython does seem to have progressed since I first looked at it (thanks to Curt for pointing this out). However I have not bothered to look at IronPython again since I found Boo.
A: In terms of practical usability IronPython is going to be your best bet right now.
Why are people suggesting Boo? It's a statically typed language, which is not what this question is asking for. Yes, I know it has optional duck typing, but really if Boo-related information is acceptable to the question author then the question should really be edited to make that clear.
Regarding IronPython, you said you didn't like it, but there really isn't any response I can give to critical comments that are so vague :)
Alternatively, I would suggest that you take a look at cPython. A couple points:
*
*You can build .exe files with py2exe and other tools.
*Much larger access to 3rd-party Python libraries and frameworks
*Access to Windows APIs via pywin32
*Python extensions written in C are usable (unlike with IronPython, though there are efforts underway to improve this situation)
You will find that you really don't need .NET for most things. Of course this all depends on your application. For integrating with existing .NET code, obviously you have to use IronPython.
A: You say you want "dynamic", but if the motivation is only to avoid "all that syntactic trash", you should have a look at F#. It's statically-typed, but has the light syntax feel of a dynamic language, as well as an interactive mode (REPL loop).
A: I agree with your sentiment that IronPython seems disorganized, but I have been using it for (small) projects and I have been pretty happy with it so far.
If you haven't seen it yet, you should check out IronPython Studio.
A: You can also take a look at the Fan language. It is not purely dynamic; it's a mixture of both static & dynamic. And like most of the newer languages, it is purely OO with a mixture of functional in it. It also runs on both the JVM and CLR platforms.
Syntax-wise it is closer to C#, with a lot of sugar syntax, so it kind of looks like C# blended with Ruby/Python.
Since the language is new, only 3 years old, its performance is not stellar yet.
Update 21-Feb-2014: Fan had changed its name to Fantom
A: I've been down a similar path. Checked out Boo, dotLISP, IronPython, etc.
I recommend IronPython. BUT if you don't already have any Python experience, then you'll probably learn core Python quicker by loading CPython and using that for the examples & tutorials.
Once you have CPython understanding, IronPython will be a lot easier to understand. Of course, you still need to understand some C# and have access to the .Net SDK documentation. Without it, IronPython is extremely difficult to get useful things done with.
A: Take a look at Clojure. The CLR version is still in the early stages, but you could probably get the java version working in .Net via IKVM
A:
...syntactic trash that comes with statically typed languages...
If your concern is syntactic trash then you might also like to check out statically typed languages with type inference, most notably F#.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Lisp in the real world I have experimented with Lisp (actually Scheme) and found it to be a very beautiful language that I am interested in learning more about. However, it appears that Lisp is never used in serious projects, and I haven't seen it listed as a desired skill on any job posting. I am interested in hearing from anyone who has used Lisp or seen it used in the "real world", or who knows whether it is considered a purely academic language.
A: Peter Christensen has compiled a great list of (financially) successful lisp companies.
http://www.pchristensen.com/blog/lisp-companies/
A: Franz, Inc. provides a non-exhaustive list of success stories on their website. However:
Please don't assume Lisp is only
useful for Animation and Graphics, AI,
Bioinformatics, B2B and E-Commerce,
Data Mining, EDA/Semiconductor
applications, Expert Systems, Finance,
Intelligent Agents, Knowledge
Management, Mechanical CAD, Modeling
and Simulation, Natural Language,
Optimization, Research, Risk Analysis,
Scheduling, Telecom, and Web Authoring
just because these are the only things
they happened to list. — Kent Pitman
We can find other success stories here: http://lisp-lang.org/success/
and a list of current companies using Common Lisp: https://github.com/azzamsa/awesome-lisp-companies
A: There are plenty of companies, projects, and products that use Lisp in a variety of roles — I've done work for several of them.
There are two relevant points:
*
*you may never know that your latest piece of consumer electronics was built with, or even programmed in, Common Lisp, or that some service you use is powered by a Lisp server. It would be incorrect to conclude that Lisp is "never used".
*… and, like so many domains, those jobs never appeared on Monster.com. Just because you've never seen a job posting for it doesn't mean that there are no Lisp-required or right-tool-for-the-job opportunities out there.
A: The GIMP's plug-in system is based on Scheme, I believe. I don't know if this is completely "real world", but it seems to be a practical application of Lisp, at the very least.
A: Look up ACL2. It's a lisp based formal logic engine that has been used for a number of "real world" project like formal methods in software security and proofs of correctness for Floating point hardware.
A: I just realized now that Maxima, a program for symbolic algebra, is written in Common Lisp. I've been using that for quite some time and I think it's also a very good real life example.
A: A far-from-exhaustive list: http://www.franz.com/success/all_customer_apps.lhtml
A: If my plans work out, we will all be using Scheme in 5 years from now! ;p
A: I was quite impressed when I found out that the PRISM («The Prism project is a long term project to build software tools for radiation therapy planning, including artificial intelligence tools as well as manual simulation systems.») is written in Common Lisp.
At my job I am writing software that uses DICOM and I must say that writing a good DICOM implementation is a hard task. In their report they describe how Common Lisp let them build a good DICOM implementation that is better (at least in some ways) than other implementations, with less effort.
A: Lisp is used in real-world algorithmic music composition with the Common Music library. Rick Taube's Notes from the Metalevel is a great introductory text to the subject which has a bunch of examples in Lisp for composing. See the examples directory here and a copy of the text here.
A: Matthew Eric Bassett on using Racket in the film industry:
http://www.youtube.com/watch?v=37owCjWnkK0
Daniel Liebgold on Racket and PS3:
http://www.youtube.com/watch?v=oSmqbnhHp1c
A: Does Emacs' elisp count? That's the most "real world" use that I am familiar with (although I'm not sure that Emacs counts as "real world" either).
A: Well, it's hardly mainstream, but I use lisp for as much of my research code as is manageable. It's by far the best language I've found for the balance of dynamism & expressiveness while still generating decent performance for numerics, etc..
A: Google App Inventor is written in Scheme
A: ITA Software uses Common Lisp for its QPX low-fare search engine which powers sites like Orbitz, Kayak, and American and United Airlines among many others. It's also used in part for its upcoming passenger reservation system for Air Canada. Paul Graham has written a little bit about Lisp at ITA in the past.
(Disclaimer: I work there.)
A: GNU Make is extensible with scheme. A case for real world programming :)
https://www.gnu.org/software/make/manual/html_node/Guile-Integration.html
A: Lisp attempted the jump to lightspeed in the early 80's. Before there were PCs,
there were commercially produced "Lisp Machines" which superficially look a lot
like modern workstations, but which were lisp "all the way down". Lisp hardware
eventually lost out to Intel (as did everything else). Lisp software eventually
lost out to C/C++. There are a variety of theories about why all this is so.
http://www.andromeda.com/people/ddyer/lisp/
A: The story of the rise and fall of Lisp at the Jet Propulsion Lab
A: as a small startup we've built up something some people call an "application server". but in fact it's just a bunch of integrated common lisp libraries for sql connectivity and web applications. some details are available at cl-dwim project page
using that we have developed and operate a web application for the hungarian government that collect data from the local governments and calculates the relevant part of the budget of the country. this is the second budget we are planning now.
it has about 4000 users, and it runs on a cluster of computers.
as of "academic language": we are playing with things like persistent continuations for business process modelling. it's some random lisp code with a few extra process-related primitives and a few constraints. it can stop at random points in the code and fall asleep (get committed into the database) while it waits for some external event.
is it practical or academic? you decide... :)
A: Reddit was originally written in Lisp and then later rewritten in Python. There's a good analysis of the switch and what it means for Lisp over at Finding Lisp.
A: Paul Graham has used and written about ViaWeb that was written in LISP
Read about it here - Beating the Average
A: Scheme programming language is used as a scripting language by FLUENT Flow Modelling Software (computational fluid dynamics, CFD).
A: For the AutoCAD application AutoLISP/Visual LISP are used a lot for real projects and there is a large community of users.
A: I see a few people have already mentioned it but lisp is widely used in custom Autocad development. Autocad includes a built-in lisp interpreter. It is one of the simplest ways to extend the product and provides the ability to quickly enhance your productivity.
No compiling is required, on the user side, and 1, or more, line lisp expressions can be entered on the command line and executed immediately on the drawing. For designers and draftsman willing to take even a small step to learning the basics of lisp it can provide a huge productivity boon.
Autocad does provide a number of other ways to customize their products; ObjectARX (C++), VB, C#, etc.. The lisp interface is by far the easiest to learn and implement. And the majority of other dev environments use lisp in some fashion.
The lisp interpreter was made available in a very early version of Autocad and was called Variables and expressions. It was fairly limited but was such a success with the users that additional functionality was quickly added. A full blown visual IDE came later on (in version 2000 I think).
I would hate to guess how many millions (billions?) of lines of lisp code are available for Autocad. A google search on "autocad .lsp" returns 2.3 million hits.
Ok, enough typing, it's back to work for me, writing more lisp for my current project :)
A: Algorithmic Composition Toolbox from Paul Berg:
http://www.koncon.nl/downloads/ACToolbox/
A: My company has its software written in Scheme (PLT). The software is used to act as an email firewall for big companies.
A: http://echowaves.com is built in clojure with compojure. The site was built as a learning exercise to see if it's practical to use clojure for building web apps. The answer is -- yes! Thumbs up for clojure on the web. Learn clojure by all means -- it will improve your career.
The code is opensource, if anyone wants to see example what are the typical moving parts for a typical compojure app https://github.com/echowaves/echowaves
A: As previously said, the computer algebra system "Maxima" is written in Lisp, but other CAS are also written in Lisp, for instance Axiom and its forks (OpenAxiom and Fricas).
A: Walmart uses clojure to process purchases real-time
A: The Hubble Space Telescope is scheduled using Lisp planning tools. The Space Shuttle was. The Webb telescope will be. The company I write Lisp for analyzes billions of dollars of health insurance claims and has been growing at ~30% per year even through the recession. We've been bought by a huge company, and one of our programmers matched (actually improved upon) the output of (huge company)'s software for analyzing Medicare claims, starting from scratch, by himself, in a year. (huge company)'s code, not in Lisp, took 6 years and several programmers. The trouble, career-wise, is that too many listen to the twaddle about "lots of irritating silly parentheses" and so on. Most managers don't "get it" and would rather have a project in a language familiar enough that they can micro-manage. They think "Lisp=AI" and don't even want to entertain the possibility that it's a good general purpose language. They just plug their ears. There aren't polished tools for doing M$-friendly websites or clustering or pipelining existing Java apps, and that's 90% of what IT cares about in these days of growth by acquisition. I could go on, but it would just get me bitter. :)
A: ITA software uses a fair amount of CL.
http://www.itasoftware.com/careers/l_e_t_lisp.html?catid=8
A: A fairly recent open-source project that is still enjoying consistent and considerable development activity is LilyPond.
It's a music notation program that takes an easy-to-write text file as input and converts it into beautiful sheet music (pdf files). Offers all kinds of ways to fiddle with the output if you want to. It can even produce decent sounding midi files. I use it whenever I need to produce nice sheet music that other musicians will read from. I think it's better than Finale and it's free!
In the commercial category, there is also Notehead's Igor
Engraver. Unfortunately, the site doesn't allow me to post a direct link to the page that talks about Lisp, so go to downloads and look at the bottom for a "Lisp" link.
There's also Naughty Dog (a computer game company) who use Lisp in their games. This article talks about that and even shows some code.
And there are many others that have been mentioned and linked to, but these are the main ones that resonate with me (being a composer/programmer/gamer/... type).
A:
If I started up my very own major software project now, I would make my language decision based on the criteria above. Sure, I love Lisp, CLOS is awesome, real lexical scoping rocks, Lisp macros are way cool (when used as directed), and personally I really like Lisp syntax. […] But it would take a lot, or require special circumstances, to persuade me to choose Lisp for a major software project, if I were in charge of making the choice. - Dan Weinreb
A: I believe Autocad has extensions that use Lisp to extend the product. See AutoLISP.
A: Some more recent ones:
*
*Thanandar, a German browser game: http://www.thanandar.de/
*Aula Polska, a Polish entrepreneur community: http://www.aulapolska.pl/
*LAMsight, a medical survey application: https://www.lamsight.org/
*Wigflip, a playground of silly gfx: http://wigflip.com/ :)
*Clutu, multiplayer AJAX Crossword Puzzles: http://www.clutu.com/
The first three of those were written using Weblocks, a CL web framework.
Wigflip and Clutu use pure Hunchentoot.
Now get coding! :)
A: Just adding to all the very wise comments above: look at the Corman Lisp tool and discover how to embed VERY INTELLIGENT FUNCTIONS into an embedded system!
A: http://www.gensym.com/ - a real-time business rules engine with many industrial clients.
Internally it is written in Common Lisp
A: It's a wonderful language, but it's crippled because (in my opinion as a software business owner and programmer) there are very few commercial Lisp packages, and the few that are out there demand a run-time fee (because a proper Lisp package can be used by end-users to write Lisp programs too).
I use Steel Bank Common Lisp to prototype code under Windows and Linux, and I love it -- but I would never consider shipping a product written with it. There's no easy way to set up single-click access to the programs, so that the end user will never be confronted with a Lisp prompt. There's no way to ship a compiled product so that the user can't disassemble it, make some changes to remove your name, and sell it as his own. I've seen mention of Lisp systems that both of these can be done in, but they're commercial ones where you have to pay run-time fees for each end-user of your program, which is ridiculous.
Lisp may come into its own some day (and I fervently hope that it does), but it isn't viable for most commercial software yet. The only exception is something where it's always going to be running on systems that you have complete control over, like a web server (and I've only heard of a couple companies using it even for that).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "148"
} |
Q: Is there a way I can tell whether an SMTP server is expecting a client to connect using "implicit" SSL versus "explicit" SSL? SSL can either be "explicit" or "implicit" as explained by this link:
http://help.globalscape.com/help/secureserver2/Explicit_versus_implicit_SS.htm
System.Net.Mail only support "explicit" SSL, as explained here:
http://blogs.msdn.com/webdav_101/archive/2008/06/02/system-net-mail-with-ssl-to-authenticate-against-port-465.aspx
So, I'm trying to use System.Net.Mail to connect to an SMTP server that I don't have any control over, and it's failing. How can I know for sure that it's failing because the server wants an "implicit" SSL connection? What test can I do from the client side?
A: Ordinarily this is governed by convention. SMTP running on port 25 (the normal case) uses explicit SSL. SMTPS running on port 465 uses implicit SSL. Mail submission running on port 587 uses explicit SSL.
To tell for sure, telnet to the port, as in "telnet mail.example.com 25". If you see a plain text banner where the server identifies itself, then you are dealing with explicit SSL. If you connect successfully and see nothing, then you are dealing with implicit SSL.
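The telnet test above can also be scripted. Here is a rough sketch in Python (illustrative only; the function names are mine, not from the answer): open a plain TCP connection and wait briefly. A plaintext "220 ..." greeting means the server speaks SMTP first (explicit SSL / STARTTLS), while silence before the timeout suggests it is waiting for a TLS handshake (implicit SSL).

```python
import socket

def classify_banner(banner):
    # A plaintext "220 ..." greeting => server speaks SMTP first (explicit SSL).
    # No banner before the timeout => server waits for a TLS handshake (implicit SSL).
    if banner is None:
        return "implicit"
    if banner.startswith(b"220"):
        return "explicit"
    return "unknown"

def probe_smtp(host, port, timeout=5.0):
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            banner = s.recv(1024)
        except socket.timeout:
            banner = None
    return classify_banner(banner)

# e.g. probe_smtp("mail.example.com", 465) would typically return "implicit"
```

Per the conventions mentioned above, port 25 and 587 would normally come back "explicit" and port 465 "implicit".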
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172811",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How can I override a table width property in CSS I have the following style in an external CSS file called first.css
table { width: 100%; }
This makes the tables fill their container. If there are only two small columns they appear too far from each other.
To force the columns to appear nearer I have added this style
table { width: 50%; }
to a new file called second.css and linked it into the html file.
Is there any way to override the width property in first.css without the need to specify a width in second.css?
I would like the html behave as if there has never been a width property, but I do not want to modify first.css
A: You can use:
table { width: auto; }
in second.css, to strictly make "the html behave as if there was never been a width property". But I'm not 100% sure this is exactly what you want - if not, please clarify!
A: You could also add a style="width: auto" attribute to the table - this way only the html of the page will be modified.
A: table { width: 50% !important; }
Adding !important to a declaration makes it override other normal declarations for the same property, regardless of specificity or source order. (Note that !important goes before the semicolon, inside the declaration.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172812",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: .NET Firebug like UI assistance I am tasked with a project whereby I need to create a scaled down version of a Firebug like UI where the user can load an HTML page, and as they hover the mouse over the elements they'll be highlighted. The app will allow the users to select a table to be screen-scraped....haven't got to that part as yet.
Any advice?
Thanks
A: Well, I haven't used the Firebug UI, but I have done exactly what you describe using the .NET 2.0 WebBrowser control in a WinForms app.
Basically I added the WebBrowser and a Timer control to the form then in the timer elapsed event, I query the mouse position using the GetCursorPos native function and use the WebBrowser.Document's (HtmlDocument class) GetElementFromPoint method (adjusting the x and y position to be relative to the browser control).
This returns whatever HtmlElement is under the mouse position. Here's the meat of the method:
HtmlElement GetCurrentElement()
{
if (Browser.ReadyState == WebBrowserReadyState.Complete && Browser.Document != null)
{
Win32Point mouseLoc = HtmlScan.Win32.Mouse.GetPosition();
Point mouseLocation = new Point(mouseLoc.x, mouseLoc.y);
// modify location to match offset of browser window and control position:
mouseLocation.X = ((mouseLocation.X - 4) - this.Left) - Browser.Left;
mouseLocation.Y = ((mouseLocation.Y - 31) - this.Top) - Browser.Top;
HtmlElement element = Browser.Document.GetElementFromPoint(mouseLocation);
return element;
}
return null;
}
After you get the HtmlElement, you can get the InnerHTML to parse as you see fit.
Richard
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Detecting when a div's height changes using jQuery I've got a div that contains some content that's being added and removed dynamically, so its height is changing often. I also have a div that is absolutely positioned directly underneath with javascript, so unless I can detect when the height of the div changes, I can't reposition the div below it.
So, how can I detect when the height of that div changes? I assume there's some jQuery event I need to use, but I'm not sure which one to hook into.
A: In response to user007:
If the height of your element is changing due to items being appended to it using .append() you shouldn't need to detect the change in height. Simply add the reposition of your second element in the same function where you are appending the new content to your first element.
As in:
Working Example
$('.class1').click(function () {
$('.class1').append("<div class='newClass'><h1>This is some content</h1></div>");
$('.class2').css('top', $('.class1').offset().top + $('.class1').outerHeight());
});
A: Use a resize sensor from the css-element-queries library:
https://github.com/marcj/css-element-queries
new ResizeSensor(jQuery('#myElement'), function() {
console.log('myelement has been resized');
});
It uses a event based approach and doesn't waste your cpu time. Works in all browsers incl. IE7+.
A: I wrote a plugin some time back for an attrchange listener, which basically adds a listener function on attribute change. Even though I call it a plugin, actually it is a simple function written as a jQuery plugin.. so if you want.. strip off the plugin-specific code and use the core functions.
Note: This code doesn't use polling
check out this simple demo http://jsfiddle.net/aD49d/
$(function () {
var prevHeight = $('#test').height();
$('#test').attrchange({
callback: function (e) {
var curHeight = $(this).height();
if (prevHeight !== curHeight) {
$('#logger').text('height changed from ' + prevHeight + ' to ' + curHeight);
prevHeight = curHeight;
}
}
}).resizable();
});
Plugin page: http://meetselva.github.io/attrchange/
Minified version: (1.68kb)
(function(e){function t(){var e=document.createElement("p");var t=false;if(e.addEventListener)e.addEventListener("DOMAttrModified",function(){t=true},false);else if(e.attachEvent)e.attachEvent("onDOMAttrModified",function(){t=true});else return false;e.setAttribute("id","target");return t}function n(t,n){if(t){var r=this.data("attr-old-value");if(n.attributeName.indexOf("style")>=0){if(!r["style"])r["style"]={};var i=n.attributeName.split(".");n.attributeName=i[0];n.oldValue=r["style"][i[1]];n.newValue=i[1]+":"+this.prop("style")[e.camelCase(i[1])];r["style"][i[1]]=n.newValue}else{n.oldValue=r[n.attributeName];n.newValue=this.attr(n.attributeName);r[n.attributeName]=n.newValue}this.data("attr-old-value",r)}}var r=window.MutationObserver||window.WebKitMutationObserver;e.fn.attrchange=function(i){var s={trackValues:false,callback:e.noop};if(typeof i==="function"){s.callback=i}else{e.extend(s,i)}if(s.trackValues){e(this).each(function(t,n){var r={};for(var i,t=0,s=n.attributes,o=s.length;t<o;t++){i=s.item(t);r[i.nodeName]=i.value}e(this).data("attr-old-value",r)})}if(r){var o={subtree:false,attributes:true,attributeOldValue:s.trackValues};var u=new r(function(t){t.forEach(function(t){var n=t.target;if(s.trackValues){t.newValue=e(n).attr(t.attributeName)}s.callback.call(n,t)})});return this.each(function(){u.observe(this,o)})}else if(t()){return this.on("DOMAttrModified",function(e){if(e.originalEvent)e=e.originalEvent;e.attributeName=e.attrName;e.oldValue=e.prevValue;s.callback.call(this,e)})}else if("onpropertychange"in document.body){return this.on("propertychange",function(t){t.attributeName=window.event.propertyName;n.call(e(this),s.trackValues,t);s.callback.call(this,t)})}return this}})(jQuery)
A: These days you can also use the Web API ResizeObserver.
Simple example:
const myElement = document.querySelector('#myElement');
const resizeObserver = new ResizeObserver(() => {
console.log('size of myElement changed');
});
resizeObserver.observe(myElement);
A: You can use the DOMSubtreeModified event
$(something).bind('DOMSubtreeModified' ...
But this will fire even if the dimensions don't change, and reassigning the position whenever it fires can take a performance hit. In my experience using this method, checking whether the dimensions have changed is less expensive and so you might consider combining the two.
Or if you are directly altering the div (rather than the div being altered by user input in unpredictable ways, like if it is contentEditable), you can simply fire a custom event whenever you do so.
Downside: IE and Opera don't implement this event.
A: This is how I recently handled this problem:
$('#your-resizing-div').bind('getheight', function() {
$('#your-resizing-div').height();
});
function your_function_to_load_content() {
/*whatever your thing does*/
$('#your-resizing-div').trigger('getheight');
}
I know I'm a few years late to the party, just think my answer may help some people in the future, without having to download any plugins.
A: You can use MutationObserver class.
MutationObserver provides developers a way to react to changes in a DOM. It is designed as a replacement for Mutation Events defined in the DOM3 Events specification.
Example (source)
// select the target node
var target = document.querySelector('#some-id');
// create an observer instance
var observer = new MutationObserver(function(mutations) {
mutations.forEach(function(mutation) {
console.log(mutation.type);
});
});
// configuration of the observer:
var config = { attributes: true, childList: true, characterData: true };
// pass in the target node, as well as the observer options
observer.observe(target, config);
// later, you can stop observing
observer.disconnect();
A: You can make a simple setInterval.
function someJsClass()
{
  var _resizeInterval = null;
  var _lastHeight = 0;
  var _lastWidth = 0;
  this.Initialize = function(){
    // no "var" here: assign the outer handle so Stop() can clear it
    _resizeInterval = setInterval(_resizeIntervalTick, 200);
  };
  this.Stop = function(){
    if(_resizeInterval != null)
      clearInterval(_resizeInterval);
  };
  var _resizeIntervalTick = function () {
    if ($(yourDiv).width() != _lastWidth || $(yourDiv).height() != _lastHeight) {
      _lastWidth = $(yourDiv).width();
      _lastHeight = $(yourDiv).height();
      DoWhatYouWantWhenTheSizeChange();
    }
  };
}
var checker = new someJsClass(); // "class" is a reserved word in JavaScript, so use another name
checker.Initialize();
EDIT:
This is a example with a class. But you can do something easiest.
A: You can use this, but it only supports Firefox and Chrome.
$(element).bind('DOMSubtreeModified', function () {
var $this = this;
var updateHeight = function () {
var Height = $($this).height();
console.log(Height);
};
setTimeout(updateHeight, 2000);
});
A: Pretty basic but works:
function editoHeightSize() {
    var height = jQuery('.edito-content').height(); // placeholder selector for the element whose height is tracked
    jQuery('.edito-wrapper').css('height', height);
}
editoHeightSize();
jQuery(window).resize(function () {
    editoHeightSize();
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "147"
} |
Q: How often do you worry about how many if cases will need to be processed? If you have the following:
$var = 3; // we'll say it's set to 3 for this example
if ($var == 4) {
// do something
} else if ($var == 5) {
// do something
} else if ($var == 2) {
// do something
} else if ($var == 3) {
// do something
} else {
// do something
}
If say 80% of the time $var is 3, do you worry about the fact that it's going through 4 if cases before finding the true case?
I'm thinking on a small site it's not a big deal, but what about when that if statement is going to run 1000s of times a second?
I'm working in PHP, but I'm thinking the language doesn't matter.
A: A classic case of this occurring (with literally 5 options as in your post) was in ffmpeg, in the decode_cabac_residual function. This was rather important, as profiling (very important--don't optimize before profiling!) showed it accounted for upwards of 10-15% of the time spent in H.264 video decoding. The if statement controlled a set of statements which was calculated differently for the various types of residuals to be decoded--and, unfortunately, too much speed was lost due to code size if the function was duplicated 5 times for each of the 5 types of residual. So instead, an if chain had to be used.
Profiling was done on many common test streams to order them in terms of likelihood; the top was the most common, the bottom the least. This gave a small speed gain.
Now, in PHP, I suspect that there's a lot less of the low-level style speed gain that you'd get in C, as in the above example.
A: Using a switch/case statement is the definitely the way to go here.
This gives the compiler (interpreter) the chance to utilize a jump table to get to the right branch without having to do N comparisons. Think of it creating an array of addresses indexed as 0, 1, 2, .. then it can just look the correct one up in the array in a single operation.
Plus, since there is less syntactic overhead in a case statement, it reads more easily too.
Update: if the comparisons are suitable for a switch statement then this is an area where profile guided optimizations can help. By running a PGO build with realistic test loads the system can generate branch usage information, and then use this to optimize the path taken.
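To illustrate the jump-table idea outside of PHP (the names below are mine, purely for illustration): in Python the closest analogue is a dict of callables, which resolves the branch with one hash lookup instead of N comparisons.

```python
def when_2(): return "two"
def when_3(): return "three"
def when_4(): return "four"
def when_5(): return "five"

# The "jump table": value -> handler, one lookup no matter how many cases exist.
table = {2: when_2, 3: when_3, 4: when_4, 5: when_5}

def dispatch(var):
    # .get with a fallback plays the role of the default/else branch
    return table.get(var, lambda: "default")()

print(dispatch(3))   # three
print(dispatch(42))  # default
```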
A: Here's how we did it when I used to write software for radar systems. (Speed matters in radar. It's one of the few places where "real time" actually means "real" instead of "fast".)
[I'll switch to Python syntax, it's easier for me and I'm sure you can interpret it.]
if var <= 3:
if var == 2:
# do something
elif var == 3:
# do something
else:
raise Exception
else:
if var == 4:
# do something
elif var == 5:
# do something
else:
raise Exception
Your if-statements form a tree instead of a flat list. As you add conditions to this list, you jiggle around the center of the tree. The flat sequence of n comparisons takes, on average, n/2 steps. The tree leads to a sequence of comparisons that takes log(n) comparisons.
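For larger sets of case values, the same tree-of-comparisons effect can be had mechanically instead of hand-writing the nesting; here is an illustrative Python sketch (not from the original answer) using a binary search over sorted case values, which also takes O(log n) comparisons.

```python
import bisect

def handle_2(): return "two"
def handle_3(): return "three"
def handle_4(): return "four"
def handle_5(): return "five"

keys = [2, 3, 4, 5]                        # case values, kept sorted
handlers = [handle_2, handle_3, handle_4, handle_5]

def dispatch(var):
    i = bisect.bisect_left(keys, var)      # binary search: O(log n) comparisons
    if i < len(keys) and keys[i] == var:
        return handlers[i]()
    raise ValueError("unhandled value: %r" % var)  # the "raise Exception" branch

print(dispatch(3))  # three
```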
A: Well, I believe that almost all the time, the legibility of, say, having numerically ordered values would override any tiny benefits you may gain by reducing the number of comparison instructions.
Having said that, as with all optimisation:
*
*Make it work
*Measure it
*If it's fast enough, leave it alone
*If it's too slow, THEN optimise it
Oh, and I'd probably use a switch/case from the get-go! ;-)
A: If the code has to do additional tests then it will certainly run more slowly. If performance is critical in this section of code then you should put the most common case(s) first.
I normally agree with the "measure, then optimize" method when you're not sure if the performance will be fast enough, but if the code simply needs to run as fast as possible AND the fix is as easy as rearranging the tests, then I would make the code fast now and do some measuring after you go live to ensure that your assumption (e.g. that 3 is going to happen 80% of the time) is actually correct.
A: Rather than answer the PHP question, I'll answer a bit more generally. It doesn't apply directly to PHP as it will go through some kind of interpretation.
Many compilers can convert if-elif-elif-... blocks to switch blocks (and back) if needed, when the tests in the elif-parts are simple enough (and the rest of the semantics happens to be compatible). For 3-4 tests there is not necessarily anything to gain by using a jump table.
The reason is that the branch-predictor in the CPU is really good at predicting what happens. In effect the only thing that happens is a bit higher pressure on instruction fetching but it is hardly going to be world-shattering.
In your example however, most compilers would recognize that $var is a constant 3 and then replace $var with 3 in the if..elif.. blocks. This in turn makes the expressions constant so they are folded to either true of false. All the false branches is killed by the dead-code eliminator and the test for true is eliminated as well. What is left is the case where $var == 3. You can't rely on PHP being that clever though. In general you can't do the propagation of $var but it might be possible from some call-sites.
A: You could try having an array of code blocks, which you call into. Then all code blocks have the same overhead.
Perl 6:
our @code_blocks = (
{ 'Code Block 0' },
{ 'Code Block 1' },
{ 'Code Block 2' },
{ 'Code Block 3' },
{ 'Code Block 4' },
{ 'Code Block 5' },
);
if 0 <= $var < @code_blocks.elems {
    @code_blocks[$var]();
}
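For illustration (a hypothetical sketch, not from the original poster), the same jump-table idea works in any language with first-class function objects; here each branch body becomes a Runnable, so every case pays the same lookup cost:

```java
// Dispatch table: index into an array of Runnables instead of testing
// each value in turn. All branches have identical overhead.
public class DispatchTable {
    static String last; // records which block ran (for demonstration only)

    static final Runnable[] CODE_BLOCKS = {
        () -> last = "Code Block 0",
        () -> last = "Code Block 1",
        () -> last = "Code Block 2",
        () -> last = "Code Block 3",
    };

    static void dispatch(int var) {
        if (var >= 0 && var < CODE_BLOCKS.length) {
            CODE_BLOCKS[var].run(); // constant-time lookup, no chain of tests
        }
    }

    public static void main(String[] args) {
        dispatch(3);
        System.out.println(last); // prints "Code Block 3"
    }
}
```

In PHP the analogous structure would be an array of closures indexed by $var.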
A: With code where it is purely an equality analysis I would move it to a switch/case, as that provides better performance.
$var = 3; // we'll say it's set to 3 for this example
switch($var)
{
case 4:
//do something
break;
case 5:
//do something
break;
default:
//do something when none of the provided cases match (same as using a final else after the elseifs)
}
Now, if you're doing more complicated comparisons, I would either nest them in the switch or just use the elseif.
A: In object-oriented languages, if a value gives rise to massive if chains, that usually means you should move the behavior (e.g., your //do something blocks) into the object containing the value.
A: Only you can tell if the performance difference of optimizing the order, or rearranging it to in effect be a binary tree, would make a significant difference. But I suspect you'll have to run it millions of times per second, not thousands, to even bother thinking about it in PHP (and even more so in some other languages).
Time it. See how many times a second you can run the above if/else if/else statement with no action being taken and $var not being one of the choices.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Selection overridden by formatter I have applied a Formatter to a JFormattedTextField using a FormatterFactory, when a user clicks into the text field I want to select the contents.
A focus listener does not work as expected because the formatter gets called, which eventually causes the value to be reset which ultimately de-selects the fields contents. I think what is happening is that after the value changes, the Caret moves to the rightmost position and this deselects the field.
Does anyone have any knowledge of how to get around this and select the fields contents correctly?
A: A quick and dirty workaround is to use
EventQueue.invokeLater from your focusListener.
EventQueue.invokeLater(new Runnable(){
public void run() { yourTextField.selectAll();}
});
A: Which JDK are you using - any chance this is a bug in it?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do you specify that an exception should be expected using Boost.Test? I have a Boost unit test case which causes the object under test to throw an exception (that's the test, to cause an exception). How do I specify in the test to expect that particular exception.
I can specify that the test should have a certain number of failures by using BOOST_AUTO_TEST_CASE_EXPECTED_FAILURES but that seems rather unspecific. I want to be able to say at a specific point in the test that an exception should be thrown and that it should not be counted as a failure.
A: Doesn't this work?
BOOST_CHECK_THROW (expression, an_exception_type);
That should cause the test to pass if the expression throws the given exception type or fail otherwise. If you need a different severity than 'CHECK', you could also use BOOST_WARN_THROW() or BOOST_REQUIRE_THROW() instead. See the documentation
A: You can also use BOOST_CHECK_EXCEPTION, which allows you to specify test function which validates your exception.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: What are the greatest benefits of LLVM? Does anyone have experience with LLVM, llvm-gcc, or Clang?
The whole idea behind llvm seems very intriguing to me and I'm interested in seeing how it performs. I just don't want to dump a whole lot of time into trying the tools out if the tools are not ready for production.
If you have experience with the tools, what do you think of them? What major limitations have you encountered? What are the greatest benefits?
Many thanks!
A: I can't say enough good things about LLVM. It is so easy to work with compared to other compiler projects I have looked at. I am not a compiler guy, but when I get frustrated with some limitation of LLVM or clang it is usually pretty easy to dive in and change it.
We (Nate Begeman, myself, and a few others) wrote the PPC backend with no real experience in compiler design, but it looked simple enough that non-experts could approach it. We were pretty familiar with PPC assembly, but it was still pretty incredible we managed to get LLVM-gcc outputting PPC code in a few weeks of our spare time. Definitely one of the most satisfying Hello World's I have ever compiled.
A: I have been playing with LLVM on and off for many months now. I wrote two OCaml Journal articles covering the use of LLVM from the OCaml programming language. That is particularly interesting because the OCaml language is ideal for writing compilers and has a wealth of powerful and mature tools and libraries for parsing and so on.
Overall, my experience has been extremely positive. LLVM does what it says on the tin and is very easy to use. The performance of the generated code is superb. One of the programs I wrote was a simple little Brainf*ck compiler that generates some of the fastest executables of any compiler I tested (including GCC).
I have only two gripes with LLVM. Firstly, it uses abort() whenever anything goes wrong instead of raising an exception. This was a deliberate design decision by its authors who are striving to remove all uses of exceptions from LLVM but it makes it impossible to get backtraces from OCaml when trying to debug a compiler that uses LLVM: your program just dies with a textual explanation from LLVM but no clue as to where the error occurred in your source. Secondly, LLVM's compiled library is monstrously big (20Mb). I assume this is due to the bloat incurred by C++ but it makes compilation painfully slow.
EDIT: My work on LLVM culminated in the creation of a high-performance high-level garbage-collected virtual machine. Free download here and check out the corresponding benchmarks (wow!). @Alex: I'll get that BF compiler up for you somewhere ASAP.
A: I've had an initial play around with LLVM and working through this tutorial left me very very excited about its potential; the idea that I can use it to build a JIT into an app with relative ease has me stoked.
I haven't gone deep enough to be able to offer any kind of useful opinion on its limitations, stability, performance and suchlike. I understand that it's good on all counts but that's purely hearsay.
A: You asked about tools and I would like to mention that there is an LLVM plugin for Eclipse CDT (for Windows, Linux and Mac). It integrates LLVM nicely into the IDE, and the user does not need to know anything about LLVM. Pressing the build button is enough to produce .bc and executable files (and intermediate files in the background, not visible to the user).
The latest version is available via official Eclipse update site: http://download.eclipse.org/releases/mars
It is under Programming Languages and is named "C/C++ LLVM-Family Compiler Build Support".
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
} |
Q: Is there an easier way to track scope changes in ExtremePlanner? I am about to join a new software team midway through a project. They are using ExtremePlanner to track their progress.
While they are tracking completed tasks, they are not tracking how the estimated size of the project is changing over time. In the short time I have been monitoring the project, this estimate has changed faster than the completion rate of the tasks. My gut feeling says that this is not just a blip but a problem that has been consistent throughout the project lifetime.
But how do I prove or disprove this?
I have not found the ExtremePlanner metrics useful for this. I have been exporting the data to MS Excel but the exported task and story info is missing important data like the creation date. Working around this is a bit of work. Is there a better way of doing this?
Alternately, am I making too much of this? It has been argued by some of my prospective team mates that since no new features have been added, the scope has not changed and it is not a problem. However, I argue that since new work within the features is continually being found, the scope is changing, and this needs to be taken into account when estimating the release date.
A: If this is an Agile project (e.g., using Scrum, XP, etc.), then presumably you are working in iterations (or sprints).
So my question would be - do things change so frequently within a single iteration that you need to measure those? Typically an agile project steers by freezing functionality within a single iteration. Yes, you may discover new possible implementation details or technical hurdles, but those are just details within a two-week iteration.
If your iterations get too long, I do see your concerns, since you'd be waiting a month or six weeks between chances to course-correct.
I guess I'd ask - what would you do if you had these reports, and what would be on them?
A baseline of what the task estimates were originally and what they are now? ExtremePlanner does provide information on Original Estimates for tasks so you can compare those against the current state of the iteration (see the Task view - you may need to click the "customize" link for that view to display the original estimate column).
If it's something more, I'd be interested in what you would find useful there (we use ExtremePlanner also and haven't run into this need, though we use 2 week iterations).
Hope this helps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to determine if a JNI (jogl) is available at runtime? I'm working on a source-code visualization project that uses the Processing core library. The processing library has the option to use the jogl OpenGL library to render graphics which really improves performance. However, the JNI files that jogl uses aren't necessarily available at runtime, depending on who is using the project and on what platform.
Currently we just have the user specify if they want to use OpenGL, but it would obviously be much nicer if we could use OpenGL by default and only fall back to software rendering when it's not available. The Processing libraries don't seem to make this easy, you're only supposed to specify a renderer once, and changing renderers gives… novel behavior.
Any idea how to figure out if the necessary JNIs for jogl are available and working at runtime?
A: Simple: just try to load the class with your ClassLoader using loadClass, and catch ClassNotFoundException and UnsatisfiedLinkError to fall back to software rendering.
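A minimal sketch of that check (hypothetical; the class name you probe for is an assumption - substitute whichever jogl entry class your Processing version actually loads):

```java
// Probe for a class at runtime; fall back to software rendering if the
// class is missing or its JNI native library fails to load.
public class JoglProbe {
    static boolean isAvailable(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException | LinkageError e) {
            // LinkageError covers UnsatisfiedLinkError and
            // NoClassDefFoundError thrown during class loading/initialization.
            return false;
        }
    }

    public static void main(String[] args) {
        // Class name below is illustrative, not guaranteed for your jogl version.
        boolean useOpenGL = isAvailable("javax.media.opengl.GLCanvas");
        System.out.println(useOpenGL ? "using OPENGL renderer" : "using software renderer");
    }
}
```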
A: The other neat way to make sure you get to use Jogl is to deploy via JNLP. You can include Jogl as a remote dependency and the Java launcher will automatically fetch the appropriate native version.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to manage documents for many tasks/projects I'm probably asking for the world, but is there any Windows-based software for easily managing lots of tasks/projects and all the associated documents (spreadsheets, Word documents, other files) that can be quickly navigated/searched.
I deal with probably 10-20 active projects and tasks, some lasting only a day or so, some lasting weeks, and some that go for months. I like to keep everything about any project in a single folder, but it seems like I spend forever navigating folder trees when saving a spreadsheet, SQL script, or if I want to drag a website shortcut to a folder so I can find it when I need it.
I've tried various folder organizations (i.e. a top level project directory, then broken down by department or person or application), but no matter what I try, it's still too time consuming to find what I'm looking for. The closest to what I'm looking for is like Lotus had in their Agenda program back in the DOS days, where you could organize and view data multiple ways, I'd like to be able to do the same thing with a nice fast Windows app that would fully integrate with Explorer so it would know what projects I'm actively working on when asking me where to save something, and maybe even do some minimal project management so as I mark things complete, their directories would no longer appear, or if something is marked as high priority, it would show at the top or be color coded.
Edit:
Adding user-defined labels to Explorer, with filtering, would probably go 90% of the way; if I could define 10-20 labels and have an option in Explorer to show a single label, which would then show only the folders that have content marked with that label. It'd be even slicker if you could have a tabbed interface, one tab for each label. Macs have had color-coded user labels on files for at least 10 years. I hate to ask if Vista has this, but I'd take a look at it if it did.
A: I've had this same problem.
I believe the answer hasn't yet been invented (if I had the time, I'd write it myself and make millions).
Essentially we need to steal GMail's idea of 'labels' and use it to completely replace windows explorer for management of digital resources.
The idea would be simply that you would 'label' or 'tag' each resource as many times as you need (e.g. 'Project Albatross', 'Analysis', 'Phase 1', etc).
The application would then allow you to browse by label/tag dynamically (selecting "Project Albatross" would show subfolders of "Analysis" and "Phase 1", selecting "Phase 1" would show "Analysis" and vice versa), rather than being confined to a static tree.
Some web sites (like this one) and Outlook already have a similar idea. In fact, I would argue that integration with Outlook and web site tagging would go a long way to making this application a must-have. Unfortunately, for web site integration, we almost need some sort of open standard for tagging, which might be a fair bit of work to take on...
A: Omea Pro
It is free.
A: Vista does have some tagging capabilities that might help with docs: Tag Files and Save Searches in Windows Vista (LifeHacker).
A: You should take a look at Tele-Support HelpDesk. It has a known issues database that now allows attachments that you can use for your projects. The priority system allows color coding, and it has an extensive filtering system. You can even attach "contacts" to each known issue to give you a quick method of contacting the person the project is for.
While there is no integration with the shell the way you mention, links can be added to a known issue or inquiry using a simple drag/drop procedure.
A: It seems that the root of the problem is with locating the right file.
Why not try Windows Search (obviously it integrates with Office products, including Outlook) or Google Desktop?
Simple folder structure, tagging files and search do help me in my software dev projects.
A: I agree with Rinat, Google Desktop or Windows Desktop search will get you about 80% of the way there. The other thing you need is some kind of revision control - CVS, RCS, SVN, GIT, Clearcase...
A: I think Directory Opus would do what you want (good review here). It's a replacement for Windows Explorer and has been around for at least 20 years, maybe more. It supports very sophisticated filtering and can also tag and colour files, as shown in this thread. You can set the colours for filtered files so that files ending in .jpg are a different colour, for example, or to show files modified in the past 1 hour, things like that. You can use regexes as well. Details of the label system shown here. You can also use custom icons for labels.
You can also create 'collections' of files, which are 'virtual folders' that bring different files and folders together in one folder. More on file collections here.
In addition to these features you get complete customisation of the interface (yes, everything, including dual-pane views etc), search, synchronising, scripting, FTP - it just goes on and on AND it's still being actively developed and improved. I have been a satisfied user for many years.
A: Yes, Microsoft Team Foundation Server does all of this (it ties in with and uses SharePoint web sites), of course it will cost you a bit. :-)
Add Project Server into the mix and you have Microsoft Project capabilities published to share point viewable sites as well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to disable/enable network connection in c# Basically I'm running some performance tests and don't want the external network to be the drag factor. I'm looking into ways of disabling network LAN. What is an effective way of doing it programmatically? I'm interested in c#. If anyone has a code snippet that can drive the point home that would be cool.
A: If you are looking for a very simple way to do it, here you go:
System.Diagnostics.Process.Start("ipconfig", "/release"); //For disabling internet
System.Diagnostics.Process.Start("ipconfig", "/renew"); //For enabling internet
Make sure you run as administrator. I hope you found this helpful!
A: Found this thread while searching for the same thing, so, here is the answer :)
The best method I tested in C# uses WMI.
http://www.codeproject.com/KB/cs/EverythingInWmi02.aspx
Win32_NetworkAdapter on msdn
C# Snippet : (System.Management must be referenced in the solution, and in using declarations)
SelectQuery wmiQuery = new SelectQuery("SELECT * FROM Win32_NetworkAdapter WHERE NetConnectionId IS NOT NULL");
ManagementObjectSearcher searchProcedure = new ManagementObjectSearcher(wmiQuery);
foreach (ManagementObject item in searchProcedure.Get())
{
if (((string)item["NetConnectionId"]) == "Local Network Connection")
{
item.InvokeMethod("Disable", null);
}
}
A: For Windows 10, change this:
for disable
("netsh", "interface set interface name=" + interfaceName + " admin=DISABLE")
and for enable
("netsh", "interface set interface name=" + interfaceName + " admin=ENABLE")
And use the program as Administrator
static void Disable(string interfaceName)
{
//set interface name="Ethernet" admin=DISABLE
System.Diagnostics.ProcessStartInfo psi = new System.Diagnostics.ProcessStartInfo("netsh", "interface set interface name=" + interfaceName + " admin=DISABLE");
System.Diagnostics.Process p = new System.Diagnostics.Process();
p.StartInfo = psi;
p.Start();
}
static void Enable(string interfaceName)
{
System.Diagnostics.ProcessStartInfo psi = new System.Diagnostics.ProcessStartInfo("netsh", "interface set interface name=" + interfaceName + " admin=ENABLE");
System.Diagnostics.Process p = new System.Diagnostics.Process();
p.StartInfo = psi;
p.Start();
}
And use the Program as Administrator !!!!!!
A: Using netsh Command, you can enable and disable “Local Area Connection”
interfaceName is “Local Area Connection”.
static void Enable(string interfaceName)
{
System.Diagnostics.ProcessStartInfo psi =
new System.Diagnostics.ProcessStartInfo("netsh", "interface set interface \"" + interfaceName + "\" enable");
System.Diagnostics.Process p = new System.Diagnostics.Process();
p.StartInfo = psi;
p.Start();
}
static void Disable(string interfaceName)
{
System.Diagnostics.ProcessStartInfo psi =
new System.Diagnostics.ProcessStartInfo("netsh", "interface set interface \"" + interfaceName + "\" disable");
System.Diagnostics.Process p = new System.Diagnostics.Process();
p.StartInfo = psi;
p.Start();
}
A: In VB.Net , You can also use it for toggle Local Area Connection
Note: I use it myself in Windows XP, where it works properly, but in Windows 7 it does not work properly.
Private Sub ToggleNetworkConnection()
Try
Const ssfCONTROLS = 3
Dim sConnectionName = "Local Area Connection"
Dim sEnableVerb = "En&able"
Dim sDisableVerb = "Disa&ble"
Dim shellApp = CreateObject("shell.application")
Dim WshShell = CreateObject("Wscript.Shell")
Dim oControlPanel = shellApp.Namespace(ssfCONTROLS)
Dim oNetConnections = Nothing
For Each folderitem In oControlPanel.items
If folderitem.name = "Network Connections" Then
oNetConnections = folderitem.getfolder : Exit For
End If
Next
If oNetConnections Is Nothing Then
MsgBox("Couldn't find 'Network and Dial-up Connections' folder")
WshShell.quit()
End If
Dim oLanConnection = Nothing
For Each folderitem In oNetConnections.items
If LCase(folderitem.name) = LCase(sConnectionName) Then
oLanConnection = folderitem : Exit For
End If
Next
If oLanConnection Is Nothing Then
MsgBox("Couldn't find '" & sConnectionName & "' item")
WshShell.quit()
End If
Dim bEnabled = True
Dim oEnableVerb = Nothing
Dim oDisableVerb = Nothing
Dim s = "Verbs: " & vbCrLf
For Each verb In oLanConnection.verbs
s = s & vbCrLf & verb.name
If verb.name = sEnableVerb Then
oEnableVerb = verb
bEnabled = False
End If
If verb.name = sDisableVerb Then
oDisableVerb = verb
End If
Next
If bEnabled Then
oDisableVerb.DoIt()
Else
oEnableVerb.DoIt()
End If
Catch ex As Exception
MsgBox(ex.Message)
End Try
End Sub
A: The best solution is disabling and enabling all network adapters, regardless of the interface name, using this snippet (admin rights are needed for running, otherwise IT WON'T WORK):
static void runCmdCommad(string cmd)
{
System.Diagnostics.Process process = new System.Diagnostics.Process();
System.Diagnostics.ProcessStartInfo startInfo = new System.Diagnostics.ProcessStartInfo();
//startInfo.WindowStyle = System.Diagnostics.ProcessWindowStyle.Hidden;
startInfo.FileName = "cmd.exe";
startInfo.Arguments = $"/C {cmd}";
process.StartInfo = startInfo;
process.Start();
}
static void DisableInternet(bool enable)
{
string disableNet = "wmic path win32_networkadapter where PhysicalAdapter=True call disable";
string enableNet = "wmic path win32_networkadapter where PhysicalAdapter=True call enable";
runCmdCommad(enable ? enableNet :disableNet);
}
A: I modified the top-voted solution from Kamrul Hasan into one method and added a wait for the process to exit, because my unit test code ran faster than the process could disable the connection.
private void Enable_LocalAreaConection(bool isEnable = true)
{
var interfaceName = "Local Area Connection";
string control;
if (isEnable)
control = "enable";
else
control = "disable";
System.Diagnostics.ProcessStartInfo psi =
new System.Diagnostics.ProcessStartInfo("netsh", "interface set interface \"" + interfaceName + "\" " + control);
System.Diagnostics.Process p = new System.Diagnostics.Process();
p.StartInfo = psi;
p.Start();
p.WaitForExit();
}
A: Looking at the other answers here, whilst some work, some do not. Windows 10 uses a different netsh command to the one that was used earlier in this chain. The problem with other solutions, is that they will open a window that will be visible to the user (though only for a fraction of a second). The code below will silently enable/disable a network connection.
The code below can definitely be cleaned up, but this is a nice start.
*** Please note that it must be run as an administrator to work ***
//Disable network interface
static public void Disable(string interfaceName)
{
System.Diagnostics.ProcessStartInfo startInfo = new System.Diagnostics.ProcessStartInfo();
startInfo.FileName = "netsh";
startInfo.Arguments = $"interface set interface \"{interfaceName}\" disable";
startInfo.RedirectStandardOutput = true;
startInfo.RedirectStandardError = true;
startInfo.UseShellExecute = false;
startInfo.CreateNoWindow = true;
System.Diagnostics.Process processTemp = new System.Diagnostics.Process();
processTemp.StartInfo = startInfo;
processTemp.EnableRaisingEvents = true;
try
{
processTemp.Start();
}
catch (Exception e)
{
throw;
}
}
//Enable network interface
static public void Enable(string interfaceName)
{
System.Diagnostics.ProcessStartInfo startInfo = new System.Diagnostics.ProcessStartInfo();
startInfo.FileName = "netsh";
startInfo.Arguments = $"interface set interface \"{interfaceName}\" enable";
startInfo.RedirectStandardOutput = true;
startInfo.RedirectStandardError = true;
startInfo.UseShellExecute = false;
startInfo.CreateNoWindow = true;
System.Diagnostics.Process processTemp = new System.Diagnostics.Process();
processTemp.StartInfo = startInfo;
processTemp.EnableRaisingEvents = true;
try
{
processTemp.Start();
}
catch (Exception e)
{
throw;
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: F# defining/using a type/module in another file in the same project This will hopefully be an easy one. I have an F# project (latest F# CTP) with two files (Program.fs, Stack.fs). In Stack.fs I have a simple namespace and type definition
Stack.fs
namespace Col
type Stack=
...
Now I try to include the namespace in Program.fs by declaring
open Col
This doesn't work and gives me the error "The namespace or module Col is not defined." Yet it's defined within the same project. I've got to be missing something obvious
A: What order are the files in the .fsproj file? Stack.fs needs to come before Program.fs for Program.fs to be able to 'see' it.
See also the start of
http://lorgonblog.spaces.live.com/blog/cns!701679AD17B6D310!444.entry
and the end of
http://lorgonblog.spaces.live.com/blog/cns!701679AD17B6D310!347.entry
A: I had the same problems, and you are right, the order of the files is taken in account by the compiler. Instead of the Remove and Add pattern, you can use the Move Up / Move Down items in the context menu associated to the .fs files. (Alt-Up and Alt-Down are the shortcut keys in most of the standard key-bindings)
A: All of the above are correct, but how to do this in VS2013 is another question. I had to edit my .fsproj file manually and set the files in the exact order within an ItemGroup node. In this case it would look like this:
<ItemGroup>
<Compile Include="Stack.fs" />
<Compile Include="Program.fs" />
<None Include="App.config" />
</ItemGroup>
A: I had the same issue and it was indeed the ordering of the files. However, the links above didn't describe how to fix it in Visual Studio 2008 F# 1.9.4.19.
If you open a module, make sure your source file comes after the dependency in the solution explorer. Just right click your source and select Remove. Then re-add it. This will make it appear at the bottom of the list. Hopefully you don't have circular dependencies.
A: I'm using Visual Studio for Mac - 8.1.4 and i've noticed that some .fs files are not marked as "Compile". You can see this by Viewing Build Output and see if all your files are there and in the correct order.
I've had to manually make sure certain files are marked with "Compile", and have had to move them up and down manually until it "takes".
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "84"
} |
Q: Table Creation DDL from Microsoft Access Is there any easy way to retrieve table creation DDL from Microsoft Access (2007) or do I have to code it myself using VBA to read the table structure?
I have about 30 tables that we are porting to Oracle and it would make life easier if we could create the tables from the Access definitions.
A: I've done this:
There's a tool for "upsizing" from Access to SQL Server. Do that, then use the excellent SQL Server tools to generate the script.
http://support.microsoft.com/kb/237980
A: Thanks for the other suggestions. While I was waiting I wrote some VBA code to do it. It's not perfect, but did the job for me.
Option Compare Database
Public Function TableCreateDDL(TableDef As TableDef) As String
Dim fldDef As Field
Dim FieldIndex As Integer
Dim fldName As String, fldDataInfo As String
Dim DDL As String
Dim TableName As String
TableName = TableDef.Name
TableName = Replace(TableName, " ", "_")
DDL = "create table " & TableName & "(" & vbCrLf
With TableDef
For FieldIndex = 0 To .Fields.Count - 1
Set fldDef = .Fields(FieldIndex)
With fldDef
fldName = .Name
fldName = Replace(fldName, " ", "_")
Select Case .Type
Case dbBoolean
fldDataInfo = "nvarchar2(1)"
Case dbByte
fldDataInfo = "number"
Case dbInteger
fldDataInfo = "number"
Case dbLong
fldDataInfo = "number"
Case dbCurrency
fldDataInfo = "number"
Case dbSingle
fldDataInfo = "number"
Case dbDouble
fldDataInfo = "number"
Case dbDate
fldDataInfo = "date"
Case dbText
fldDataInfo = "nvarchar2(" & Format$(.Size) & ")"
Case dbLongBinary
fldDataInfo = "****"
Case dbMemo
fldDataInfo = "****"
Case dbGUID
fldDataInfo = "nvarchar2(16)"
End Select
End With
If FieldIndex > 0 Then
DDL = DDL & ", " & vbCrLf
End If
DDL = DDL & " " & fldName & " " & fldDataInfo
Next FieldIndex
End With
DDL = DDL & ");"
TableCreateDDL = DDL
End Function
Sub ExportAllTableCreateDDL()
Dim lTbl As Long
Dim dBase As Database
Dim Handle As Integer
Set dBase = CurrentDb
Handle = FreeFile
Open "c:\export\TableCreateDDL.txt" For Output Access Write As #Handle
For lTbl = 0 To dBase.TableDefs.Count - 1
'If the table name is a temporary or system table then ignore it
If Left(dBase.TableDefs(lTbl).Name, 1) = "~" Or _
Left(dBase.TableDefs(lTbl).Name, 4) = "MSYS" Then
'~ indicates a temporary table
'MSYS indicates a system level table
Else
Print #Handle, TableCreateDDL(dBase.TableDefs(lTbl))
End If
Next lTbl
Close Handle
Set dBase = Nothing
End Sub
I never claimed to be a VB programmer.
A: Use Oracle's SQL Developer Migration Workbench.
There's a full tutorial on converting Access databases to Oracle available here. If its only the structures you're after, then you can concentrate on section 3.0.
A: You can use the export feature in Access to export tables to an ODBC data source. Set up an ODBC data source to the Oracle database and then right click the table in the Access "Tables" tab and choose export. ODBC is one of the "file formats" - it will then bring up the usual ODBC dialog.
A: You might want to look into ADOX to get at the schema information. Using ADOX you can get things such as the keys, views, relations, etc.
Unfortunately I am not a VB programmer, but there are plenty of examples on the web using ADOX to get at the table schema.
A: A bit late to the party, but I use RazorSQL to generate DDL for Access databases.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Using Linux, how to specify which ethernet interface data is transmitted on I'm working on a Linux based server system in which there are two network interfaces, both on the same subnet (for now, let's just say they are 172.17.32.10 & 172.17.32.11). When I send data to a host on the network, I would like to specify which interface on my server the data is transmitted on. I need to be able to switch from one interface to the other (or maybe even transmit on both) in software (static routing rules won't work for this application).
I found a related question in StackOverflow that suggested using the netlink library to modify routes on the fly. This intuitively seems like it should work, but I was wondering if there were any other options to accomplish this same result.
A: No offense intended, but the answer about using bind() is quite wrong. bind() will control the source IP address placed within the packet IP header. It does not control which interface will be used to send the packet: the kernel's routing table will be consulted to determine which interface has the lowest cost to reach a particular destination. (*see note)
Instead, you should use an SO_BINDTODEVICE sockopt. This does two things:
Packets will always egress from the interface you specified, regardless of what the kernel routing tables says.
Only packets arriving on the specified interface will be handed to the socket. Packets arriving on other interfaces will not.
If you have multiple interfaces you want to switch between, I'd suggest creating one socket per interface. Because you'll also only receive packets to the interface you've bound to, you'll need to add all of these sockets to your select()/poll()/whatever you use.
#include <string.h>
#include <stdio.h>
#include <sys/socket.h>
#include <net/if.h>

/* s is an already-created socket descriptor */
struct ifreq ifr;
memset(&ifr, 0, sizeof(ifr));
strncpy(ifr.ifr_name, "eth1", sizeof(ifr.ifr_name));
if (setsockopt(s, SOL_SOCKET, SO_BINDTODEVICE,
               (void *)&ifr, sizeof(ifr)) < 0) {
    perror("SO_BINDTODEVICE failed");
}
(*note)
Bind() to an interface IP address can lead to confusing but nonetheless correct behavior. For example if you bind() to the IP address for eth1, but the routing table sends the packet out eth0, then a packet will appear on the eth0 wire but carrying the source IP address of the eth1 interface. This is weird but allowed, though packets sent back to the eth1 IP address would be routed back to eth1. You can test this using a Linux system with two IP interfaces. I have one, and did test it, and bind() is not effective in steering the packet out a physical interface.
Though technically allowed, depending on topology this may nonetheless not work. To dampen distributed denial of service attacks where the attackers use forged IP source addresses, many routers now perform Reverse Path Forwarding (RPF) checks. Packets with a source IP address on the "wrong" path may be dropped.
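For quick experiments, the same SO_BINDTODEVICE idea can be sketched in Python (illustrative only — the option is Linux-specific and normally requires CAP_NET_RAW, so this helper simply reports whether the kernel accepted it rather than failing hard):

```python
import socket

def bind_to_device(sock, ifname):
    """Try to pin the socket's egress to one interface; True on success."""
    # SO_BINDTODEVICE is Linux-only; 25 is its numeric value on Linux.
    opt = getattr(socket, "SO_BINDTODEVICE", 25)
    try:
        sock.setsockopt(socket.SOL_SOCKET, opt, ifname.encode() + b"\0")
        return True
    except OSError:
        # Typically EPERM without CAP_NET_RAW, or ENODEV for a bad name.
        return False

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
print("pinned to lo:", bind_to_device(s, "lo"))
s.close()
```

Running it as an unprivileged user prints False; with CAP_NET_RAW it prints True and subsequent sends on the socket leave via the named interface.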
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Parsing XML with multi-line records I'd like to take XML in the format below and load each code record into a domain object in my BootStrap.groovy. I want to preserve the formatting of each snippet of code.
XML
<records>
<code>
<language>Groovy</language>
<snippet>
println "This is Groovy"
println "A very powerful language"
</snippet>
</code>
<code>
<language>Groovy</language>
<snippet>
3.times {
println "hello"
}
</snippet>
</code>
<code>
<language>Perl</language>
<snippet>
@foo = split(",");
</snippet>
</code>
</records>
Domain Object
Code {
String language
String snippet
}
BootStrap.groovy
new Code(language: l, snippet: x).save()
A: roughly something like this:
def CODE_XML = '''
<records>
<code>
<language>Groovy</language>
<snippet>
println "This is Groovy"
println "A very powerful language"
</snippet>
</code>
<code>
<language>Groovy</language>
<snippet>
3.times {
println "hello"
}
</snippet>
</code>
<code>
<language>Perl</language>
<snippet>
@foo = split(",");
</snippet>
</code>
</records>
'''
def records = new XmlParser().parseText(CODE_XML)
records.code.each() { code ->
    new Code(language: code.language.text(), snippet: code.snippet.text()).save()
}
A: If you can specify a DTD or similar and your XML parser obeys it, I think you can specify the contents of the snippet element to be CDATA and always get it as-is.
A: Try adding xml:space="preserve" attribute to <snippet> elements.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Calling .NET COM web service wrapper from Excel I'm trying to call a web service from an Excel 2003 module.
The way I've implemented it is by creating a .NET COM library with all the classes/methods I need to be exposed.
When I try to call a method that queries a web service from Excel, the execution just stops on that line without any error.
Maybe it has to do with references? I'm using Microsoft.Web.Services2.dll. I have tried putting it in C:\WINDOWS\SYSTEM32 - no luck.
A: I am not sure if I get the entire picture, but hopefully some of this will help. I think you have Excel VBA calling into .NET via a COM interface and then into a SOAP web service.
You should have the correct PIA installed and referenced by your .NET assembly. Your COM interface should look something like this:
[Guid("123Fooetc...")]
[InterfaceType(ComInterfaceType.InterfaceIsIDispatch)]
public interface IBar
{
[DispId(1)]
void SomeMethod(Excel.Range someRange);
}
Implement the interface with a class something like this:
[Guid("345Fooetc..")]
[ClassInterface(ClassInterfaceType.None)]
[ProgId("MyNameSpace.MyClass")]
public class MyClass : IBar
{
public void SomeMethod(Excel.Range someRange)
{...}
}
The first thing I would do is to replace your web service call with a real simple method in your .NET code, to make sure your interface and interop wrapper are working right.
Once your skeleton is working right you may want to consider calling your service with an HTTP method instead of using SOAP. For example, here is a simple call using HTTP GET:
string resource = yourUrl;
using (WebClient web = new WebClient())
{
web.Credentials = CredentialCache.DefaultCredentials;
someXml = web.DownloadString(resource);
}
return someXml; // or do something interesting with Excel range
A: To solve the problem I used Access instead of Excel. In Access it was showing me errors. It turned out that the location of all the reference assemblies should be the location of the caller application (in this case it was C:\Program Files\Microsoft Office\OFFICE11). Secondly, my web service proxies were loading the endpoint URLs from a .config file, which in that context was C:\Program Files\Microsoft Office\OFFICE11\MSACCESS.EXE.config
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Error when running XNA code from another computer I have developed an XNA game on computer 1. When I send it to computer 2 (which has everything needed to run XNA code), I get an InvalidOperationException when the program executes game.Run.
I didn't try running code from computer 2 on computer 1, but I know that both machines can run the code I've written on them.
Do you have any idea?
EDIT: Oh, I added the answer, but I can't select my post as the answer...
CallStack :
App.exe!App.Program.Main(string[] args = {Dimensions:[0]}) Line 14 C#
And here is the code
static class Program
{
/// <summary>
/// The main entry point for the application.
/// </summary>
static void Main(string[] args)
{
using (Game1 game = new Game1())
{
game.Run();
}
}
}
And the same code run on another machine
A: I finally found the problem. For some reason, the hardware acceleration setting was set to None, so the project wouldn't start.
Thanks for all your reply.
A: The docs say Game.Run will throw that exception if Game.Run is called more than once. What does the rest of the exception say? i.e. Message, StackTrace, etc?
A: My first question would be, what is the rest of the error? Without that it'll be hard to diagnose this. If I were to give an educated guess, I'd have to say you either don't have the proper XNA runtimes installed, or your video card doesn't support Shader Model 2.0.
A: Are there any .dll files that you need to package with the project that the other computer may be missing? Dependency Walker might be useful for determining which (if any) these are.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Div 100% height works on Firefox but not in IE I have a container div that holds two internal divs; both should take 100% width and 100% height within the container.
I set both internal divs to 100% height. That works fine in Firefox, however in IE the divs do not stretch to 100% height but only the height of the text inside them.
The following is a simplified version of my style sheet.
#container
{
height: auto;
width: 100%;
}
#container #mainContentsWrapper
{
float: left;
height: 100%;
width: 70%;
margin: 0;
padding: 0;
}
#container #sidebarWrapper
{
float: right;
height: 100%;
width: 29.7%;
margin: 0;
padding: 0;
}
Is there something I am doing wrong? Or any Firefox/IE quirks I am missing out?
A: I think "works fine in Firefox" applies in Quirks-mode rendering only.
In Standards-mode rendering, it might not work in Firefox either.
A percentage height is calculated with respect to the "containing block", not the viewport.
CSS Specification says
The percentage is calculated with
respect to the height of the generated
box's containing block. If the height
of the containing block is not
specified explicitly (i.e., it depends
on content height), and this element
is not absolutely positioned, the
value computes to 'auto'.
so
#container { height: auto; }
#container #mainContentsWrapper { height: n%; }
#container #sidebarWrapper { height: n%; }
means
#container { height: auto; }
#container #mainContentsWrapper { height: auto; }
#container #sidebarWrapper { height: auto; }
To stretch to 100% height of viewport, you need to specify the height of the containing block (in this case, it's #container).
Moreover, you also need to specify the height to body and html, because initial Containing Block is "UA-dependent".
All you need is...
html, body { height:100%; }
#container { height:100%; }
A: Its hard to give you a good answer, without seeing the html that you are actually using.
Are you outputting a doctype / using standards mode rendering? Without actually being able to look into a html repro, that would be my first guess for a html interpretation difference between firefox and internet explorer.
A: I'm not sure what problem you are solving, but when I have two side by side containers that need to be the same height, I run a little javascript on page load that finds the maximum height of the two and explicitly sets the other to the same height. It seems to me that height: 100% might just mean "make it the size needed to fully contain the content" when what you really want is "make both the size of the largest content."
Note: you'll need to resize them again if anything happens on the page to change their height -- like a validation summary being made visible or a collapsible menu opening.
A: I've been successful in getting this to work when I set the margins of the container to 0:
#container
{
margin: 0;
}
in addition to all your other styles
A: You might have to put one or both of:
html { height:100%; }
or
body { height:100%; }
EDIT: Whoops, didn't notice they were floated. You just need to float the container.
A: I've done something very similar to what 'tvanfosson' said, that is, actually using JavaScript to constantly monitor the available height in the window via events like onresize, and use that information to change the container size accordingly (as pixels rather than percentage).
Keep in mind, this does mean a JavaScript dependency, and it isn't as smooth as a CSS solution. You'd also need to ensure that the JavaScript function is capable of correctly returning the window dimensions across all major browsers.
Let us know if one of the previously mentioned CSS solutions work, as it sounds like a better way to fix the problem.
A: I don't think IE supports the use of auto for setting height / width, so you could try giving this a numeric value (like Jarett suggests).
Also, it doesn't look like you are clearing your floats properly. Try adding this to your CSS for #container:
#container {
height:100%;
width:100%;
overflow:hidden;
/* for IE */
zoom:1;
}
A: Try this..
#container
{
height: auto;
min-height:100%;
width: 100%;
}
#container #mainContentsWrapper
{
float: left;
height: auto;
min-height: 100%;
width: 70%;
margin: 0;
padding: 0;
}
#container #sidebarWrapper
{
float: right;
height: auto;
min-height: 100%;
width: 29.7%;
margin: 0;
padding: 0;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "46"
} |
Q: Haskell FFI / C MPFR library wrapper woes In order to create an arbitrary precision floating point / drop in replacement for Double, I'm trying to wrap MPFR using the FFI but despite all my efforts the simplest bit of code doesn't work. It compiles, it runs, but it crashes mockingly after pretending to work for a while. A simple C version of the code happily prints the number "1" to (640 decimal places) a total of 10,000 times. The Haskell version, when asked to do the same, silently corrupts (?) the data after only 289 print outs of "1.0000...0000" and after 385 print outs, it causes an assertion failure and bombs. I'm at a loss for how to proceed in debugging this since it "should work".
The code can be perused at http://hpaste.org/10923 and downloaded at http://www.updike.org/mpfr-broken.tar.gz
I'm using GHC 6.8.3 on FreeBSD 6 and GHC 6.8.2 on Mac OS X. Note you will need MPFR (tested with 2.3.2) installed with the correct paths (change the Makefile) for libs and header files (along with those from GMP) to successfully compile this.
Questions
*
*Why does the C version work, but the Haskell version flake out? What else am I missing when approaching the FFI? I tried StablePtrs and had the exact same results.
*Can someone else verify if this is a Mac/BSD only problem by compiling and running my code? (Does the C code "works" work? Does the Haskell code "noworks" work?) Can anyone on Linux and Windows attempt to compile/run and see if you get the same results?
C code: (works.c)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <gmp.h>
#include <mpfr.h>
#include "mpfr_ffi.c"
int main()
{
int i;
mpfr_ptr one;
mpf_set_default_prec_decimal(640);
one = mpf_set_signed_int(1);
for (i = 0; i < 10000; i++)
{
printf("%d\n", i);
mpf_show(one);
}
}
Haskell code: (Main.hs --- doesn't work)
module Main where
import Foreign.Ptr ( Ptr, FunPtr )
import Foreign.C.Types ( CInt, CLong, CULong, CDouble )
import Foreign.StablePtr ( StablePtr )
data MPFR = MPFR
foreign import ccall "mpf_set_default_prec_decimal"
c_set_default_prec_decimal :: CInt -> IO ()
setPrecisionDecimal :: Integer -> IO ()
setPrecisionDecimal decimal_digits = do
c_set_default_prec_decimal (fromInteger decimal_digits)
foreign import ccall "mpf_show"
c_show :: Ptr MPFR -> IO ()
foreign import ccall "mpf_set_signed_int"
c_set_signed_int :: CLong -> IO (Ptr MPFR)
showNums k n = do
print n
c_show k
main = do
setPrecisionDecimal 640
one <- c_set_signed_int (fromInteger 1)
mapM_ (showNums one) [1..10000]
A: Judah Jacobsen answered this on the Haskell-cafe mailing list:
This is a known issue with GHC because of the way GHC uses GMP internally (to maintain Integers).
Apparently C data in the heap is left alone by GHC in basically all cases except code that uses the FFI to access GMP or any C library that relies on GMP (like MPFR that I wanted to use). There are some workarounds (painful) but the "right" way would be to either hack GHC (hard) or get the Simons to remove GHC's dependence on GMP (harder).
A: I see the problem too, on a
$ uname -a
Linux burnup 2.6.26-gentoo-r1 #1 SMP PREEMPT Tue Sep 9 00:05:54 EDT 2008 i686 Intel(R) Pentium(R) 4 CPU 2.80GHz GenuineIntel GNU/Linux
$ gcc --version
gcc (GCC) 4.2.4 (Gentoo 4.2.4 p1.0)
$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 6.8.3
I also see the output changing from 1.0000...000 to 1.0000...[garbage].
Let's see, the following does work:
main = do
setPrecisionDecimal 640
mapM_ (const $ c_set_signed_int (fromInteger 1) >>= c_show) [1..10000]
which narrows down the problem to parts of one being somehow clobbered during runtime. Looking at the output of ghc -C and ghc -S, though, isn't giving me any hints.
Hmm, ./noworks +RTS -H1G also works, and ./noworks +RTS -k[n]k, for varying values of [n], demonstrate failures in different ways.
I've got no solid leads, but there are two possibilities that jump to my mind:
*
*GMP, which the GHC runtime uses, and MPFR having some weird interaction
*stack space for C functions called within the GHC runtime is limited, and MPFR not dealing well
That being said... is there a reason you're rolling your own bindings rather than using HMPFR?
A: Aleš Bizjak, maintainer of HMPFR posted to haskell-cafe and showed how to keep GHC from controlling allocation of the limbs (and hence leaving them alone, instead of GCing them and clobbering them):
mpfr_ptr mpf_new_mpfr()
{
mpfr_ptr result = malloc(sizeof(__mpfr_struct));
if (result == NULL) return NULL;
/// these three lines:
mp_limb_t * limb = malloc(mpfr_custom_get_size(mpfr_get_default_prec()));
mpfr_custom_init(limb, mpfr_get_default_prec());
mpfr_custom_init_set(result, MPFR_NAN_KIND, 0, mpfr_get_default_prec(), limb);
return result;
}
To me, this is much easier than joining the effort to write a replacement for GMP in GHC, which would be the only alternative if I really wanted to use any library that depends on GMP.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do databases work internally? I've been working with databases for the last few years and I'd like to think that I've gotten fairly competent with using them. However I was reading recently about Joel's Law of Leaky Abstractions and I realised that even though I can write a query to get pretty much anything I want out of a database, I have no idea how the database actually interprets the query. Does anyone know of any good articles or books that explain how databases work internally?
Some specific things I'm interested in are:
*
*What does a database actually do to find out what matches a select statement?
*How does a database interpret a join differently to a query with several "where key1 = key2" statements?
*How does the database store all its memory?
*How are indexes stored?
A:
What does a database actually do to
find out what matches a select
statement?
To be blunt, it's a matter of brute force. Simply, it reads through each candidate record in the database and matches the expression to the fields. So, if you have "select * from table where name = 'fred'", it literally runs through each record, grabs the "name" field, and compares it to 'fred'.
Now, if the "table.name" field is indexed, then the database will (likely, but not necessarily) use the index first to locate the candidate records to apply the actual filter to.
This reduces the number of candidate records to apply the expression to, otherwise it will just do what we call a "table scan", i.e. read every row.
But fundamentally, however it locates the candidate records is separate from how it applies the actual filter expression, and, obviously, there are some clever optimizations that can be done.
How does a database interpret a join
differently to a query with several
"where key1 = key2" statements?
Well, a join is used to make a new "pseudo table", upon which the filter is applied. So, you have the filter criteria and the join criteria. The join criteria is used to build this "pseudo table" and then the filter is applied against that. Now, when interpreting the join, it's again the same issue as the filter -- brute force comparisons and index reads to build the subset for the "pseudo table".
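As a toy illustration of that "pseudo table" idea (Python here purely for brevity — real engines operate on disk pages and indexes, not in-memory lists), a nested loop join builds every pair of rows whose keys match, and the filter is then applied against the joined rows:

```python
orders = [(1, "alice"), (2, "bob"), (3, "alice")]   # (order_id, customer)
customers = [("alice", "US"), ("bob", "UK")]        # (customer, country)

def nested_loop_join(left, right, key_left, key_right):
    """Build the 'pseudo table': every pair of rows whose keys match."""
    return [l + r for l in left for r in right
            if l[key_left] == r[key_right]]

joined = nested_loop_join(orders, customers, 1, 0)
# The filter ("where" clause) is applied against the joined rows.
us_orders = [row for row in joined if row[3] == "US"]
print(us_orders)  # [(1, 'alice', 'alice', 'US'), (3, 'alice', 'alice', 'US')]
```

Real optimizers replace the inner loop with an index probe or a hash lookup when they can, but the logical result is the same.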
How does the database store all its
memory?
One of the keys to a good database is how it manages its I/O buffers. But it basically matches RAM blocks to disk blocks. With the modern virtual memory managers, a simpler database can almost rely on the VM as its memory buffer manager. The high-end DBs do all this themselves.
How are indexes stored?
B+Trees typically, you should look it up. It's a straightforward technique that has been around for years. Its benefit is shared with most any balanced tree: consistent access to the nodes, plus all the leaf nodes are linked so you can easily traverse from node to node in key order. So, with an index, the rows can be considered "sorted" for specific fields in the database, and the database can leverage that information to its benefit for optimizations. This is distinct from, say, using a hash table for an index, which only lets you get to a specific record quickly. In a B-Tree you can quickly get not just to a specific record, but to a point within a sorted list.
The actual mechanics of storing and indexing rows in the database are really pretty straightforward and well understood. The game is managing buffers, and converting SQL into efficient query paths to leverage these basic storage idioms.
Then, there's the whole multi-user, locking, logging, and transaction complexity on top of the storage idiom.
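The scan-versus-index distinction above can be sketched in a few lines of Python (a toy illustration only — a sorted key list stands in for the B-Tree, which on disk is a very different beast):

```python
import bisect

table = [{"id": i, "name": f"user{i}"} for i in range(1000)]

def table_scan(rows, name):
    """Brute force: compare the predicate against every row."""
    return [r for r in rows if r["name"] == name]

# An "index": sorted keys plus the row positions they point at.
index_keys = sorted((r["name"], pos) for pos, r in enumerate(table))
names = [k for k, _ in index_keys]

def index_lookup(rows, name):
    """Binary search the sorted keys, then fetch only the matching rows."""
    i = bisect.bisect_left(names, name)
    out = []
    while i < len(names) and names[i] == name:
        out.append(rows[index_keys[i][1]])
        i += 1
    return out

print(table_scan(table, "user42") == index_lookup(table, "user42"))  # True
```

The scan touches all 1000 rows; the lookup touches about log2(1000) ≈ 10 keys plus the matching rows — the same asymptotic win a B-Tree index gives a real engine.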
A: *
*What does a database actually do to find out what matches a select statement?
DBs use indexes (see below)
*How does a database interpret a join differently to a query with several "where key1 = key2" statements?
Join operations can be translated into binary tree operations by merging trees.
*How does the database store all its memory?
Memory-mapped files for faster access to their data
*How are indexes stored?
Internally, DBs use B-Trees for indexing.
This is explained in greater detail on Wikipedia:
http://en.wikipedia.org/wiki/B-tree
http://en.wikipedia.org/wiki/Database
A: In addition to reading, it can be instructive to use the DB tools to examine the execution plan that the database uses on your queries. In addition to getting insight into how it is working, you can experiment with techniques to optimize the queries with a better feedback loop.
A: Saif, excellent link. A bird's eye overview that manages to cover most topics, and provide details on specific vendor implementations.
I made three tries at writing an explanation, but this is really too big a topic. Check out the Hellerstein article (the one on the berkeley server that Saif linked to), and then ask about specifics.
It's worth noting that only a subset of "known good ideas" is implemented in any given DBMS. For example, SQLite doesn't even do hash joins, it only does nested loops (ack!!). But then, it's an easily embeddable dbms, and it does its work very well, so there's something to be said for the lack of complexity.
Learning about how a DBMS gathers statistics and how it uses them to construct query plans, as well as learning how to read the query plans in the first place, is an invaluable skill -- if you have to choose one "database internals" topic to learn, learn this. It will make a world of difference (and you will never accidentally write a Cartesian product again... ;-)).
A: If you want to know more in detail, I'd recommend getting the sqlite sources and having a look at how it does it. It's complete, albeit not at the scale of the larger open source and commercial databases. For more depth, I recommend The Definitive Guide to SQLite, which is not only a great explanation of sqlite, but also one of the most readable technical books I know. On the MySQL side, you could learn from MySQL Performance Blog as well as, on the book front, the O'Reilly High Performance MySQL (V2), of which the blog author is one of the authors.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "83"
} |
Q: Best 2D animation library/tech for "iPhone" style animation on WIN32? All,
I have built a nifty demo application that displays data about our internal systems as a full-screen "billboard" style display. You could think of this as something like an application displaying the national deficit - rapidly increasing numbers, animating very quickly, all day.
The problem is that the demo works really well and the client would like me to build an industrial strength version!
I'd like to do this in C++ but it could be Java or maybe C# (though I'd prefer not to use C#, as I'm not so strong in that env).
I'm toying with SDL or Allegro, but I have no experience in either, so I'm open to the best (and ideally easiest) toolkit out there.
When I say "iPhone"-style, I mean simple but elegant transitions between panels. The iPhone makes exellent use of slides, fades and blends. My app doesn't need to do any 3D style animations. In terms of graphics, I really only need simple things: 90% text, some images, and simple primitives like lines, rectangles and gradient fills.
Of course, I could implement this in "plain old" DirectDraw or OpenGL, but I really don't want to think about writing timer classes and choosing timing methods for animation - some toolkit out there should be just right for this.
Thanks for any help!
RF
A: I think you are looking for the Clutter Toolkit. It's free, cool, and multi-platform. Works on top of OpenGL by implementing all those timers and stuff you can't be arsed to implement yourself, and wrapping them on a very convenient and awesome API.
http://clutter-project.org/
A: Qt has support for SVG and easy to use animations. (apart from being a great all-around cross-platform GUI framework)
A: The Windows Presentation Foundation (or WPF) is the first thing that came to mind reading your question. From what I've seen it's Microsoft's answer to the slick UI that all of Apple's products are dripping with. Everything is DirectX accelerated, making for very smooth transitions and animations.
As far as I know only the .NET languages support WPF (with a heavy emphasis on C#), so that may be a downside in your case. If you find that it doesn't suit your needs, then I also recommend QT as a very nice C++ framework. Just be sure to check out the licensing first to make sure it meets your needs.
A: I would give Processing a spin. "Ease of maintenance" could overcome the C++ requisite. I know many designers that love to tinker with it.
A: SDL is an excellent toolkit to work with. If you already understand the principles of "plain old DirectDraw or OpenGL," you should have no trouble with SDL. I haven't seen every graphics framework out there, but of the ones I have seen, I'd definitely recommend SDL. It's designed by experienced game programmers who know what they're doing because they've been there and done that, and its interface is very intuitive. And unlike WPF, it's designed from the ground up for cross-platform compatibility.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Displaying an IGrouping<> with nested ListViews I need to retrieve a set of Widgets from my data access layer, grouped by widget.Manufacturer, to display in a set of nested ASP.NET ListViews.
The problem is that (as far as I can tell) the nested ListView approach requires me to shape the data before using it, and I can't figure out the best approach to take. The best I've been able to come up with so far is to put a LINQ query in my data access layer like so:
var result = from widget in GetAllWidgets(int widgetTypeID)
group widget by widget.Manufacturer into groupedWidgets
let widgets = from widgetGroup in groupedWidgets
select widgetGroup
select new { Manufacturer = groupedWidgets.Key, Widgets = widgets };
Of course, anonymous types can't be passed around, so that doesn't work. Defining a custom class to enclose data seems like the wrong way to go. Is there some way I can perform the grouping on the ASP.NET side of things? I'm using ObjectDataSources to access the DAL.
Updated: OK, I'm not creating an anonymous type anymore, and instead my DAL passes an IEnumerable<IGrouping<Manufacturer, Widget>> to the ASP.NET page, but how can I use this in my ListViews? I need to render the following HTML (or something pretty much like it)
<ul>
<li>Foo Corp.
<ol>
<li>Baz</li>
<li>Quux</li>
</ol>
</li>
<li>Bar Corp.
<ol>
<li>Thinger</li>
<li>Whatsit</li>
</ol>
</li>
</ul>
Originally, I had a ListView within a ListView like so:
<asp:ListView ID="ManufacturerListView">
<LayoutTemplate>
<ul>
<asp:Placeholder ID="itemPlaceholder" runat="server" />
</ul>
</LayoutTemplate>
<ItemTemplate>
<li><asp:Label Text='<%# Eval("Manufacturer.Name") %>' />
<li>
<asp:ListView ID="WidgetsListView" runat="server" DataSource='<%# Eval("Widgets") %>'>
<LayoutTemplate>
<ol>
<asp:PlaceHolder runat="server" ID="itemPlaceholder" />
</ol>
</LayoutTemplate>
<ItemTemplate>
<li><asp:Label Text='<%# Eval("Name") %>'></li>
</ItemTemplate>
</asp:ListView>
</li>
</ItemTemplate>
</asp:ListView>
Note how the DataSource property of WidgetsListView is itself databound. How can I duplicate this functionality without reshaping the data?
This is getting kind of complicated, sorry if I should have just made a separate question instead.
A: I've just spent quite a while on this. Eventually found the solution and it's so simple.
var enumerableData = myData.Tables[0].AsEnumerable();
var groupedData = enumerableData.GroupBy(x => x["GroupingColumn"]);
myParentRepeater.DataSource = groupedData;
myParentRepeater.DataBind();
<asp:Repeater ID="myParentRepeater" runat="server">
<ItemTemplate>
<h3><%#Eval("Key") %></h3>
<asp:Repeater ID="myChildRepeater" runat="server" DataSource='<%# Container.DataItem %>'>
<ItemTemplate>
<%#((DataRow)Container.DataItem)["ChildDataColumn1"] %>
<%#((DataRow)Container.DataItem)["ChildDataColumn2"] %>
</ItemTemplate>
<SeparatorTemplate>
<br />
</SeparatorTemplate>
</asp:Repeater>
</ItemTemplate>
</asp:Repeater>
Eval("Key") returns the grouped value.
When retrieving child info, Container.DataItem is of type IGrouping but you simply cast it to the correct type.
Hope this helps someone else.
A: When you're using Linq to group, you can get a strongly typed object without that shaping:
List<int> myInts = new List<int>() { 1, 2, 3, 4, 5 };
IEnumerable<IGrouping<int, int>> myGroups = myInts.GroupBy(i => i % 2);
foreach (IGrouping<int, int> g in myGroups)
{
Console.WriteLine(g.Key);
foreach (int i in g)
{
Console.WriteLine(" {0}", i);
}
}
Console.ReadLine();
In your case, you'd have:
IEnumerable<IGrouping<Manufacturer, Widget>> result =
GetAllWidgets(widgetTypeId).GroupBy(w => w.Manufacturer);
This will let you return the result from the method.
A: Ok, I'm going to contradict my prior statement. Since eval wants some kind of property name in the nested control, we should probably shape that data.
public class CustomGroup<TKey, TValue>
{
public TKey Key {get;set;}
public IEnumerable<TValue> Values {get;set;}
}
// and use it thusly...
IEnumerable<CustomGroup<Manufacturer, Widget>> result =
GetAllWidgets(widgetTypeId)
.GroupBy(w => w.Manufacturer)
  .Select(g => new CustomGroup<Manufacturer, Widget>() { Key = g.Key, Values = g });
/// and even later...
<asp:ListView ID="ManufacturerListView">
<LayoutTemplate>
<ul>
<asp:Placeholder ID="itemPlaceholder" runat="server" />
</ul>
</LayoutTemplate>
<ItemTemplate>
<li><asp:Label Text='<%# Eval("Key.Name") %>' />
<li>
<asp:ListView ID="WidgetsListView" runat="server" DataSource='<%# Eval("Values") %>'>
<LayoutTemplate>
<ol>
<asp:PlaceHolder runat="server" ID="itemPlaceholder" />
</ol>
</LayoutTemplate>
<ItemTemplate>
<li><asp:Label Text='<%# Eval("Name") %>'></li>
</ItemTemplate>
</asp:ListView>
</li>
</ItemTemplate>
</asp:ListView>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Executing code stored as a list After understanding (quote), I'm curious as to how one might cause the statement to execute. My first thought was
(defvar x '(+ 2 21))
`(,@x)
but that just evaluates to (+ 2 21), or the contents of x. How would one run code that was placed in a list?
A: (eval '(+ 2 21))
A: @Christián Romo:
Backtick example: you can kinda implement apply using eval and backtick, because you can splice arguments into a form. Not going to be the most efficient thing in the world, but:
(eval `(and ,@(loop for x from 1 upto 4 collect `(evenp ,x))))
is equivalent to
(eval '(and (evenp 1) (evenp 2) (evenp 3) (evenp 4)))
Incidentally, this has the same result as the (much more efficient)
(every 'evenp '(1 2 3 4))
Hope that satisfies your curiosity!
A: Take a look at funny Lisp tutorial at http://lisperati.com/. There are versions for Common Lisp and Emacs Lisp, and it demonstrates use of quasiquote and macros.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Implementation of ISupportErrorInfo - what does it mean? What does the ISupportErrorInfo interface mean? I'm at a bit of a loss to understand it. From MSDN:
This interface ensures that error
information can be propagated up the
call chain correctly. Automation
objects that use the error handling
interfaces must implement
ISupportErrorInfo.
This method indicates whether or not
an interface supports the IErrorInfo
interface.
HRESULT InterfaceSupportsErrorInfo(
REFIID riid
);
What does it mean to return S_OK in InterfaceSupportsErrorInfo? Should you return S_OK for all interfaces? Just some?
A: My understanding of it (based on some related MSDN pages) is that by implementing ISupportErrorInfo, you are indicating that one or more interfaces on your class returns error information by calling SetErrorInfo, as opposed to just returning a failure HRESULT.
To that end, your implementation of ISupportErrorInfo::InterfaceSupportsErrorInfo should return S_OK only for those interfaces on your class that actually use SetErrorInfo to return error information to the caller, and only those interfaces.
For example, say you have a class that implements an interface you wrote called IFoo that has a DoSomething method. If someone else creates an instance of your class and calls IFoo::DoSomething, they are supposed to do the following if DoSomething returns a failure HRESULT (paraphrasing from various MSDN pages, but I started from here: http://msdn.microsoft.com/en-us/library/ms221510.aspx):
*
*Call QueryInterface on the IFoo pointer to get the ISupportErrorInfo interface for the object that is implementing IFoo
*If the called object doesn't implement ISupportErrorInfo,
then the caller will have
to handle the error based on the
HRESULT value, or pass it up the call stack.
*If the called object does implement ISupportErrorInfo, then the caller should call ISupportErrorInfo::InterfaceSupportsErrorInfo, passing in a REFIID for the interface that returned the error. In this case, the DoSomething method of the IFoo interface returned an error, so you would pass IID_IFoo (assuming it's defined) to InterfaceSupportsErrorInfo.
*If InterfaceSupportsErrorInfo
returns S_OK, then the caller
knows at this point that it can
retrieve more detailed information
about the error by calling
GetErrorInfo. If InterfaceSupportsErrorInfo returns S_FALSE, the caller can assume the called interface doesn't supply detailed error information, and will have to rely on the returned HRESULT to figure out what happened.
The reason for this somewhat confusing/convoluted error-handling API seems to be flexibility (as far as I can tell anyway. This is COM after all ;). With this design, a class can support multiple interfaces, but not every interface is required to use SetErrorInfo to return error information from its methods. You can have certain, select interfaces on your class return detailed error information via SetErrorInfo, while other interfaces can continue to use normal HRESULTs to indicate errors.
In summary, the ISupportErrorInfo interface is a way to inform the calling code that at least one of the interfaces your class implements can return detailed error information, and the InterfaceSupportsErrorInfo method tells the caller whether a given interface is one of those interfaces. If so, then the caller can retrieve the detailed error information by calling GetErrorInfo.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: In Microsoft Visual Studio 2005 is it possible to set the size of an edit window when opened? I have a very large monitor. When I open a text file in MSVC, the file defaults to a width of about 80% of my screen space. For most bits of code that's about twice the size I need. Is there a way to set the default size of a newly opened file?
A: In the regular paned UI you can choose to split the code window horizontally or vertically; try that. You then get two code windows, each with its own tab bar.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: PHP Libraries: SVN Access Are there any decent PHP libraries available for accessing SVN repositories? Right now I just have some scripts executing commands to the command line and parsing the feedback, I'd love to find something less dependent on the command line and more tightly integrated. Any ideas?
A: A quick Google search:
http://au2.php.net/svn
http://php-svn-client.tigris.org
http://pecl.php.net/package/svn
A: I think you are fine just the way you are. WebSvn, from websvn.tigris.org, the Subversion people themselves, does it the same way. I also shell out to the command line and parse the responses in my app BugTracker.NET.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Creating a winforms app that allows drag and dropping of custom 'widgets' I want to create a winforms application; this application will be similar to the vs.net winforms designer (in certain aspects).
Basically it is going to be a blank page, where the user can drag and drop a bunch of 'widget's onto the screen. The widgets are basically custom images that I will create, that the user can resize, and it has some text on it by default which the user can double click on it to change the text.
Is this doable in winforms? If yes, what are the key aspects that I have to learn?
A: This is pretty easy to do in WinForms. Check out basic drag and drop. It's targeted toward 2.0. You'll use the DoDragDrop() method and capture data in drag and drop events. It requires some verbosity, but it gets the job done.
If you're keen on using WPF, take a look at MSDN's topic.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Oracle Cursor Issue Anyone have any idea how to do the following?
declare cursor
open cursor
fetch cursor
<< Start reading the cursor in a LOOP >>
Let's say the cursor has 10 records.
Read until the 5th record, then go to the 6th record and do some checking.
Now, is it possible to go back to the 5th record from the 6th record?
A: Depends on the requirements.
You can use the LAG() and LEAD() analytic functions to get information for the next and prior rows, i.e.
SQL> ed
Wrote file afiedt.buf
1 select ename,
2 sal,
3 lead(sal) over (order by ename) next_sal,
4 lag(sal) over (order by ename) prior_sal
5 from emp
6* order by ename
SQL> /
ENAME SAL NEXT_SAL PRIOR_SAL
---------- ---------- ---------- ----------
ADAMS 1100 1600
ALLEN 1600 2850 1100
BLAKE 2850 2450 1600
CLARK 2450 3000 2850
FORD 3000 950 2450
JAMES 950 2975 3000
JONES 2975 5000 950
KING 5000 1250 2975
MARTIN 1250 1300 5000
MILLER 1300 3000 1250
SCOTT 3000 800 1300
ENAME SAL NEXT_SAL PRIOR_SAL
---------- ---------- ---------- ----------
SMITH 800 1500 3000
TURNER 1500 1250 800
WARD 1250 1500
14 rows selected.
If you don't want to use analytic functions, you can use PL/SQL collections, BULK COLLECT the data into those collections (using the LIMIT clause if you have more data than you want to store in your PGA) and then move forward and backward through your collections.
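A minimal PL/SQL sketch of that collection approach, reusing the EMP table from the example above (the LIMIT clause is omitted for brevity; add one when the result set is large):

```sql
DECLARE
  TYPE sal_tab IS TABLE OF emp.sal%TYPE INDEX BY PLS_INTEGER;
  sals  sal_tab;
  i     PLS_INTEGER := 6;   -- the "current" (6th) row
BEGIN
  SELECT sal BULK COLLECT INTO sals
    FROM emp
   ORDER BY ename;

  -- do some checking on the 6th row, then freely revisit the 5th:
  IF sals(i) > sals(i - 1) THEN
    dbms_output.put_line('6th salary exceeds the 5th');
  END IF;
END;
/
```

Once the rows are in the PL/SQL table, you can index backward and forward at will, which a plain cursor loop does not allow.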
A: How far do you need to go back? If you only need a look-ahead of one row, you could buffer just the previous row in your loop (application-side).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Add (external) relationship (tagging) to existing Hibernate entities I need to add a new many-to-many relationship to an existing Hibernate entity.
I do not want to touch the original Hibernate entity bean or its configuration.
What I am adding is a "tagging" feature that can be viewed as an external contribution and not part of the entity's data itself.
I want to have a simple join table with only two columns, the entity primary key, and the tag id.
Can I use Hibernate to manage this table without introducing a new (artificial) entity type that contains a single tag mapping?
Or am I misguided and should actually want to have this "relationship-entity", so that I can add attributes (such as timestamps) later?
A: The idea behind Hibernate is being able to traverse a Java object tree from a given starting point. If you want to define a many-to-many relationship from the original object to a tag object, you can still define it on the tag object, and it will properly allow you to get a list of the original objects that have that tag.
The drawback is that you won't be able to query the original object for its list of tags (that would require an annotation to reverse the relationship and an accessor that returns a Set of tag objects). You will however be able to retrieve a list of the original objects marked with a given tag.
Here's an example ... Let's assume that the object to be tagged is a Post. If so, here's the code to add a many-to-many relationship to the Tag, so that you can look up a list of Posts that have a specific Tag:
@ManyToMany
@JoinTable(
name = "TAG-POST",
joinColumns = {@JoinColumn(name = "TAG-ID")},
inverseJoinColumns = {@JoinColumn(name = "POST-ID")}
)
private Set<Post> posts = new HashSet<Post>();
Normally, you'd also want to be able to look up all the Tags related to a Post, but you can leave out the reverse mapping. If you do need the reverse mapping, you'll need to add something like this to your Post object:
@ManyToMany(mappedBy = "posts")
private Set<Tag> tags = new HashSet<Tag>();
Now you can also look up the tags that are related to a Post.
After rereading your post (and viewing your comment), I realize that you're also interested in skipping the creation of a Tag entity. If there is nothing but the tag name, you could conceivably only use the table you've described, but you need to shift your mindset a bit. What you're really describing is a one-to-many relationship between the Post and its Tag entries. In this case, you'll need to map a post to a series of tag records that have two columns: a POST-ID and a TAG-NAME. Without altering the original object, you can still query the table for a list of Posts with a specific TAG-NAME or for a list of TAG-NAME rows that are related to a specific Post.
Note that this doesn't actually eliminate an entity ... You won't have the Tag entity, but the many-to-many lookup table will have to be created as a many-to-one relationship, which makes it an entity itself. This approach does however use one less table.
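As a side note, Hibernate's XML mappings can also express that two-column table as a value-typed collection of strings, which avoids even a tag entity class. A sketch, assuming hypothetical table and column names:

```xml
<!-- Inside the mapping for Post: a collection of plain string values.
     POST_TAGS holds only POST_ID and TAG_NAME; no Tag class is needed. -->
<set name="tags" table="POST_TAGS">
    <key column="POST_ID"/>
    <element column="TAG_NAME" type="string"/>
</set>
```

The trade-off is that a value-typed element has no identity of its own, so attributes such as timestamps can't be added later without promoting it to a full entity.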
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: hashing sensitive data I need to scramble the names and logins of all the users in a UAT database we have. (because of the data protection act)
However, there is a catch.
The testers still need to be able to login using the hashed login names
so if a user login is "Jesse.J.James" then the hash should be something like
Ypois.X.Qasdf
i.e. approximately the same length, with the dots in the same place
so MD5, sha1 etc would not be suitable as they would create very long strings and also add their own special characters such as + and = which are not allowed by the validation regex.
So I'm looking for some suggestions as to how to achieve this
I guess I need to roll my own hashing algorithm.
Has anyone done anything similar?
I am using C#, but I guess that is not so important to the algorithm.
Thanks a lot.
ADDED -
Thanks for all the answers. I think I am responsible for the confusion by using the word "Hash" when that is not what needs to be done.
A: You do not need to hash the data. You should just randomize it so it has no relation to the original data.
For example, update all the login names, and replace each letter with another random letter.
A: I think you are taking the wrong approach here. The idea of a hash is that it is one-way; no one should be able to use that hash to access the system (and if they can, then you are likely still in violation of the data protection act). Also, testers should not be using real accounts unless those accounts are their own.
You should have the testers using mock accounts in a separated environment. By using mock accounts in a separate environment there is no danger in giving the testers the account information.
A: Testers should NOT be logging in as legitimate users. That would clearly violate the non-repudiation requirement of whatever data protection act you're working under.
The system should not allow anyone to log in using the hashed value. That defeats the whole purpose of hashing!
I'm sorry I am not answering your specific question, but I really think your whole testing system should be reevaluated.
ADDED:
The comments below by JPLemme shed a lot of light on what you are doing, and I'm afraid that I completely misunderstood (as did those who voted for me, presumably).
Part of the confusion is based on the fact that hashes are typically used to scramble passwords so that no one can discover what another person's password is, including those working on the system. That is, evidently, the wrong context (and now I understand why you are hashing usernames instead of just passwords). As JPLemme has pointed out, you are actually working with a completely separate parrallel system into which live data has been copied and anonymized, and the secure login process that uses hashed (and salted!) passwords will not be molested.
In that case, WW's answer below is more relevant, and I recommend everyone to give your up votes to him/her instead. I'm sorry I misunderstood.
A: Generally speaking, it is ill-advised to roll your own encryption/hashing algorithms. The existing algorithms do what they do for a reason.
Would it really be so bad to either give the testers an access path that hashed the user names for them or just have them copy/paste SHA-1 hashes?
A: Hashes are one-way, by definition.
If all you are trying to protect from is casual perusal of the data (so the encryption level is low), do something simple like a substitution cypher (a 1-1 mapping of characters to one another -- A becomes J, B becomes '-', etc). Or even just shift everything by one (IBM becomes HAL).
But do recognize that this is by no means a guarantee of privacy or security. If those are qualities you are looking for, you can't have testers impersonating real users, by definition.
A: Did this recommendation go through your organization's auditing department? You might want to talk to them if not, it's not at all clear the scheme you're using protects your organization from liability.
A: Why not use a test data generator for the data that could identify an individual?
Creating test data in a database
A: To give you some more information:
I need to test a DTS package that imports all the users of the system from a text file into our database. I will be given the live data.
However, once the data is in the database it must be scrambled so that it doesn't make sense to the casual reader, but still allows testers to log in to the system.
A: Thanks for all the answers. I think you are almost certainly right about our test strategy being wrong.
I'll see if I can change the minds of the powers that be
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Detecting Back Button/Hash Change in URL I just set up my new homepage at http://ritter.vg. I'm using jQuery, but very minimally.
It loads all the pages using AJAX - I have it set up to allow bookmarking by detecting the hash in the URL.
//general functions
function getUrl(u) {
return u + '.html';
}
function loadURL(u) {
$.get(getUrl(u), function(r){
$('#main').html(r);
}
);
}
//allows bookmarking
var hash = new String(document.location).indexOf("#");
if(hash > 0)
{
page = new String(document.location).substring(hash + 1);
if(page.length > 1)
loadURL(page);
else
loadURL('news');
}
else
loadURL('news');
But I can't get the back and forward buttons to work.
Is there a way to detect when the back button has been pressed (or detect when the hash changes) without using a setInterval loop? When I tried those with .2 and 1 second timeouts, it pegged my CPU.
A: The answers here are all quite old.
In the HTML5 world, you should the use onpopstate event.
window.onpopstate = function(event)
{
alert("location: " + document.location + ", state: " + JSON.stringify(event.state));
};
Or:
window.addEventListener('popstate', function(event)
{
alert("location: " + document.location + ", state: " + JSON.stringify(event.state));
});
The latter snippet allows multiple event handlers to exist, whereas the former will replace any existing handler which may cause hard-to-find bugs.
A: Use the jQuery hashchange event plugin instead. Regarding your full ajax navigation, try to have SEO-friendly ajax. Otherwise your pages show nothing in browsers with JavaScript limitations.
A: Another great implementation is balupton's jQuery History which will use the native onhashchange event if it is supported by the browser, if not it will use an iframe or interval appropriately for the browser to ensure all the expected functionality is successfully emulated. It also provides a nice interface to bind to certain states.
Another project worth noting as well is jQuery Ajaxy, which is pretty much an extension for jQuery History to add ajax to the mix. As when you start using ajax with hashes it gets quite complicated!
A: jQuery BBQ (Back Button & Query Library)
A high quality hash-based browser history plugin and very much up-to-date (Jan 26, 2010) as of this writing (jQuery 1.4.1).
A: HTML5 has included a much better solution than using hashchange which is the HTML5 State Management APIs - https://developer.mozilla.org/en/DOM/Manipulating_the_browser_history - they allow you to change the url of the page, without needing to use hashes!
Though the HTML5 State Functionality is only available to HTML5 Browsers. So you probably want to use something like History.js which provides a backwards compatible experience to HTML4 Browsers (via hashes, but still supports data and titles as well as the replaceState functionality).
You can read more about it here:
https://github.com/browserstate/History.js
A: I do the following; if you want to use it, paste it in somewhere, set your handler code in locationHashChanged(qs) where commented, and then call changeHashValue(hashQuery) every time you load an ajax request.
It's not a quick-fix answer (and there are none), so you will need to think about it and pass sensible hashQuery args (i.e. a=1&b=2) to changeHashValue(hashQuery), and then cater for each combination of said args in your locationHashChanged(qs) callback ...
// Add code below ...
function locationHashChanged(qs)
{
var q = parseQs(qs);
// ADD SOME CODE HERE TO LOAD YOUR PAGE ELEMS AS PER q !!
// YOU SHOULD CATER FOR EACH hashQuery ATTRS COMBINATION
// THAT IS PASSED TO changeHashValue(hashQuery)
}
// CALL THIS FROM YOUR AJAX LOAD CODE EACH LOAD ...
function changeHashValue(hashQuery)
{
stopHashListener();
hashValue = hashQuery;
location.hash = hashQuery;
startHashListener();
}
// AND DONT WORRY ABOUT ANYTHING BELOW ...
function checkIfHashChanged()
{
var hashQuery = getHashQuery();
if (hashQuery == hashValue)
return;
hashValue = hashQuery;
locationHashChanged(hashQuery);
}
function parseQs(qs)
{
var q = {};
var pairs = qs.split('&');
for (var idx in pairs) {
var arg = pairs[idx].split('=');
q[arg[0]] = arg[1];
}
return q;
}
function startHashListener()
{
hashListener = setInterval(checkIfHashChanged, 1000);
}
function stopHashListener()
{
if (hashListener != null)
clearInterval(hashListener);
hashListener = null;
}
function getHashQuery()
{
return location.hash.replace(/^#/, '');
}
var hashListener = null;
var hashValue = '';//getHashQuery();
startHashListener();
A: Try simple & lightweight PathJS lib.
Simple example:
Path.map("#/page").to(function(){
alert('page!');
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/172957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "63"
} |
Q: Converting mysql TIME from 24 HR to AM/PM format I want to display the TIME field from my mysql table on my website, but rather than showing 21:00:00 etc I want to show 8:00 PM. I need a function/code to do this or even any pointers in the right direction. Will mark the first reply with some code as the correct reply.
A: You can also select the column as a unix timestamp using MYSQL's UNIX_TIMESTAMP() function. Then format it in PHP. IMO this is more flexible...
select a, b, c, UNIX_TIMESTAMP(instime) as unixtime;
Then in PHP use the date() function & format it any way you want.
<?php echo date('Y/m/d', $row->unixtime); ?>
The reason I like this method as opposed to formatting it in SQL is b/c, to me, the date's format is a display decision & (in my opinion) formatting the date in SQL feels wrong... why put display logic in your SQL?
Now - if you're not processing the data in PHP and are doing adhoc queries then DATE_FORMAT() is the way to go. But if you're gonna have the data show up on the web I'd go with UNIX_TIMESTAMP() and do the formatting in PHP...
I mean... lets say you want to change how the date & time are displayed on the page... wouldn't it feel "off" to have to modify your SQL for a display tweak?
my 2 cents
A: Check this out: http://dev.mysql.com/doc/refman/5.0/en/date-and-time-functions.html
I'd imagine you'd want date_format().
Example: DATE_FORMAT($date, "%r")
A: I had been trying to do the same and got this page returned from a Google search. I worked out a solution for the time 21:00:00:
*
*using DATE_FORMAT(<field>,'%l.%i%p') which returned 9.00PM
*putting a LOWER() function around it to return 9.00pm
So the full code is: LOWER(DATE_FORMAT(<field>,'%l.%i%p'))
Worked OK for me ...
A: Show the date & time data in AM/PM format with the following example...
SELECT DATE_FORMAT(`t`.`date_field`,'%h:%i %p') AS `date_field` FROM `table_name` AS `t`
OR
SELECT DATE_FORMAT(`t`.`date_field`,'%r') AS `date_field` FROM `table_name` AS `t`
Both are working properly.
A: Use DATE_FORMAT()
DATE_FORMAT(<field>,'%h:%i:%s %p')
or
DATE_FORMAT(<field>,'%r')
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Apache: Multiple log files? What are the access.log.* files?
A: Apache, I believe, does log rotation. So these would be the older log files with the access.log file being the current one.
A: Apache / apache2 itself doesn't do its own log rotation. On *nix systems, logs (including logs by Apache) are usually rotated via logrotate, a command which looks like a service but is actually only a script triggered by cron in defined intervals. (@nobody already pointed that out in comments). One default logrotate configuration appends ".1" to an older, rotated log, so a file like access.log.1 would end up in your logs directory. This is what you are probably seeing.
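As an illustration, a typical Debian-style logrotate stanza for Apache looks something like the following (paths and schedule are examples, not your exact configuration); with the default numeric scheme this is exactly what produces access.log.1, access.log.2, and so on:

```conf
/var/log/apache2/*.log {
        weekly
        rotate 4
        missingok
        notifempty
        compress
        delaycompress
        sharedscripts
        postrotate
                /etc/init.d/apache2 reload > /dev/null
        endscript
}
```

With compress enabled, older rotations gain a .gz suffix as well (access.log.2.gz), which also matches the access.log.* pattern from the question.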
Is it possible to have apache log to multiple log files?
The question's title can be ambiguous. For anyone coming here to learn if it is possible to make Apache write to multiple log files simultaneously, the answer is: Yes.
The TransferLog or CustomLog directives are used to define logfiles. These directives can be repeated to make Apache write to more than one log file. This also works for VHOSTs / Virtual Host entries. Only ancient Apache releases were limited to only one logfile per server configuration.
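For example (file names are illustrative), repeating the directive in one server or vhost section sends every request to both logs:

```apache
LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog /var/log/apache2/access.log common
CustomLog /var/log/apache2/access-audit.log common
```

Each CustomLog line can even use a different LogFormat nickname if the two files should record different fields.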
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do you make a WPF slider snap only to discrete integer positions? All too often I want a WPF slider that behaves like the System.Windows.Forms.TrackBar of old. That is, I want a slider that goes from X to Y but only allows the user to move it in discrete integer positions.
How does one do this in WPF since the Value property on the Slider is double?
A: For those that want to snap to specific positions, you can also use the Ticks property:
<Slider Minimum="1" Maximum="500" IsSnapToTickEnabled="True" Ticks="1,100,200,350,500" />
A: The simple answer is that you take advantage of the IsSnapToTickEnabled and TickFrequency properties. That is, turn snapping to ticks on and set the tick frequency to 1.
Or, in other words ... take advantage of ticks ... but you don't necessarily have to show the ticks that you are snapping to.
Check out the following piece of xaml:
<Slider
Orientation="Vertical"
Height="200"
Minimum="0"
Maximum="10"
Value="0"
IsSnapToTickEnabled="True"
TickFrequency="1"
/>
A: The snap trick is handy but has limitations, for instance if you want to only show a subset of valid ticks. I've had success with two alternatives: either bind to an integer or round the new value. Here is a combined example:
public int MyProperty { get; set; }
private void slider1_ValueChanged(object sender,
RoutedPropertyChangedEventArgs<double> e)
{
(sender as Slider).Value = Math.Round(e.NewValue, 0);
}
<Slider
Name="slider1"
TickPlacement="TopLeft"
AutoToolTipPlacement="BottomRight"
ValueChanged="slider1_ValueChanged"
Value="{Binding MyProperty}"
Minimum="0" Maximum="100" SmallChange="1" LargeChange="10"
Ticks="0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100"/>
I have no idea how the performance of either compares to the snap trick but I haven't had any trouble*.
*If you also bind the value of the slider to a type of text field you will experience that, every once in a while if using the mouse, the text field will show decimals. If you also bind to an int at the same time the empty string will cause a conversion exception to be thrown that briefly bogs down the UI. These issues haven't been severe enough for me to look for solutions.
A: If you set your tick marks in the right way, you can use IsSnapToTickEnabled. This worked pretty well for me. See MSDN for details.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "130"
} |
Q: Why can't log4j generate backup files? I'm running some Java processes on Windows 2003 Server R2.
I'm using Apache log4j 1.2.8. All my processes are called via
one jar file with a different parameter, for example:
java -jar process.jar one
java -jar process.jar two
java -jar process.jar three
And I configured log4j.properties as follows:
#===============================
# Declare Variables
#===============================
logpath=${user.dir}/log/
simple_pattern=%d{yyyy-MM-dd HH:mm:ss.SSS}%-5x - %m%n
backup_pattern='.'yyyy-MM-dd
#===============================
# PROCESS & STANDARD OUTPUT
#===============================
log4j.logger.process.Process=NULL,proclog,procstdout
log4j.appender.proclog=org.apache.log4j.DailyRollingFileAppender
log4j.appender.proclog.File=${logpath}process.log
log4j.appender.proclog.DatePattern=${backup_pattern}
log4j.appender.proclog.layout=org.apache.log4j.PatternLayout
log4j.appender.proclog.layout.conversionPattern=${simple_pattern}
log4j.appender.procstdout=org.apache.log4j.ConsoleAppender
log4j.appender.procstdout.layout=org.apache.log4j.PatternLayout
log4j.appender.procstdout.layout.ConversionPattern=${simple_pattern}
#===============================
# ONE
#===============================
log4j.logger.process.log.One=NULL,one
log4j.appender.one=org.apache.log4j.DailyRollingFileAppender
log4j.appender.one.File=${logpath}one.log
log4j.appender.one.DatePattern=${backup_pattern}
log4j.appender.one.layout=org.apache.log4j.PatternLayout
log4j.appender.one.layout.conversionPattern=${simple_pattern}
#===============================
# TWO
#===============================
log4j.logger.process.log.Two=NULL,two
log4j.appender.two=org.apache.log4j.DailyRollingFileAppender
log4j.appender.two.File=${logpath}two.log
log4j.appender.two.DatePattern=${backup_pattern}
log4j.appender.two.layout=org.apache.log4j.PatternLayout
log4j.appender.two.layout.conversionPattern=${simple_pattern}
#===============================
# THREE
#===============================
log4j.logger.process.log.Three=NULL,three
log4j.appender.three=org.apache.log4j.DailyRollingFileAppender
log4j.appender.three.File=${logpath}three.log
log4j.appender.three.DatePattern=${backup_pattern}
log4j.appender.three.layout=org.apache.log4j.PatternLayout
log4j.appender.three.layout.conversionPattern=${simple_pattern}
At first I used the process appender as a single logger, and now I have separated it
into the ONE, TWO and THREE loggers.
My processes are executed by the Windows scheduler every minute.
So I have a big problem:
I don't know why log4j cannot generate backup files,
but when I execute manually from the command line it's OK.
A: Is your log4j.properties file in the classpath when executed by the scheduler? I had a similar problem in the past, and it was due to the configuration file not being in the classpath.
You can include it in your process.jar file, or specify its location like this:
java
-Dlog4j.configuration=file:///path/to/log4j.properties
-jar process.jar one
A: Appending should be the default, according to the javadocs, but it's worth specifying it in your config file to remove the ambiguity. With luck, it might fix your problem
log4j.appender.three.Append=true
A: Many thanks, I will try your solution again.
Now my schedule executes my processes via
bgprocess.bat
bgprocess.bat
@echo off
set CLASSPATH=.;%CLASSPATH%
set path=C:\j2sdk1.4.2\bin;%path%
javaw -jar process.jar %1
process.jar manifest.mf
Manifest-Version: 1.0
Ant-Version: Apache Ant 1.6.2
Created-By: 1.4.2 (IBM Corporation)
Main-Class: process.Process
Class-Path: ./lib/Utility.jar ./lib/DB2LibRAD.jar ./lib/rowset.jar ./l
ib/log4j-1.2.8.jar ./lib/com.ibm.mq.jar .
process directory
- process.jar
- bgprocess.bat
- lib <dir>
- log4j-1.2.8.jar
- com.ibm.mq.jar
- connector.jar
- DB2LibRAD.jar
- rowset.jar
- Utility.jar
- log <dir>
- one.log
- two.log
- three.log
- process.log
All the log files work normally, but when the backup time passes,
the log is truncated and a new log begins at the first line.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Ghostdoc-like plugin for IntelliJ IDEA I've become lazy in my old age. For my C# work I've become quite reliant on Roland Weigelt's excellent GhostDoc plugin for Visual Studio.
Is anyone aware of a similar plugin for Java work in IntelliJ IDEA?
A: The builtin javadoc completion does some of this - if you type /** and press it puts parameters and return types into a skeleton javadoc.
Not quite the same as Ghostdoc, but -q on the method name brings up its javadoc with a link to the relevant super type docs, if applicable.
I presume you have looked in the plugin repository.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Can anyone think of some good reasons *not* to use an Object-Oriented DBMS to back a website? Say you're coding some kind of web application. Something where people can contribute content, e.g. a simple photo-sharing site.
How many good reasons can you think of to not go with an object-oriented database (e.g. db4o)?
A: An OODBMS is better if you only need to access your data through your objects. If your solution requires additional pathways to your data (e.g. ad-hoc queries, reporting, other applications that need data access but can't make use of your objects), then a traditional RDBMS system is better.
Note: OODBMSes have made a lot of improvement in this area.
A: I don't know how big your plans are, but the availability of experienced and skilled people to hire (or just to lend a hand) would factor into my decision, as well as just a generally large body of knowledge regarding all of the ins and outs of the DB.
Oracle or MySQL have their flaws, but odds are if you have a problem 100 other people have had the same problem and can tell you how to solve it.
A: I would point to the fact that, if you are considering something like db4o, they don't appear to have enterprise examples of powering websites, and are mostly in use for embedded applications.
See my other post on this.
(Example websites using db4o)
Nothing technically in the way here, just adoption it seems. However, for speed of development, maintenance and design flexibility, OODBs are pretty unbeatable.
Heavy reporting etc. can be done by syncing with a relational back end if required, which I know db4o supports.
A: This is a bit of a stretch, but to paraphrase a Joel post, plan for success. What if your app becomes really popular?
For example, what if you are hosting your app on your own machine, but decide to go to a formal hosting site, or even a server farm. What are the chances they will support an OODB versus MySQL?
A: I'd only recommend going for OODBMSs if your application design is really, really heavily object oriented, and the complexity presents a need for it. A photo-sharing site doesn't sound like it's heavy on the OO side, so I don't see the point of going for db4o.
However if you really just want to learn the ins and outs of using an OODBMS out of a pet project, it's fine to use one.
A: Another good reason is relative longevity. db40 is an excellent product for what it does, but its user base is small and it isn't likely to outlive something like SQL Server.
Of course, I also used to say there was no way Java was going to survive.
A: Size of data (If I'm dealing with millions and millions of rows, I'm sticking with what I know)
Reporting (Typically difficult enough in normalized databases, worse in OO databases)
Availability of expertise/experience (RDBMS clearly have more adherents)
Large amounts of ETL (Most people import and export in flat files, unless you're getting/sending XML, you're talking plain old tables)
None of these sound like obstacles for your project
A: My personal opinion: where there's data ... there's reporting.
No OODB is going to give your data the appropriate storage model to be available to your reporting applications.
A: Need for speed when all you got is a pedal bike.
Scenarios include data capture (e.g. logging) where after the event the captured data is often processed at a later stage and probably broken up into its object constituents anyway.
A: Maybe you guys also want to check this article:
http://microsoft.apress.com/asptodayarchive/74063/using-an-object-oriented-d
"Using an OODB in a Website" by Jim Paterson
Best!
A: For a complex application with modest data needs, you can't beat GLASS (Gemstone, Seaside and Smalltalk). Reporting is definitely something you want to do OO in Smalltalk.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: NOT IN vs NOT EXISTS Which of these queries is the faster?
NOT EXISTS:
SELECT ProductID, ProductName
FROM Northwind..Products p
WHERE NOT EXISTS (
SELECT 1
FROM Northwind..[Order Details] od
WHERE p.ProductId = od.ProductId)
Or NOT IN:
SELECT ProductID, ProductName
FROM Northwind..Products p
WHERE p.ProductID NOT IN (
SELECT ProductID
FROM Northwind..[Order Details])
The query execution plan says they both do the same thing. If that is the case, which is the recommended form?
This is based on the NorthWind database.
[Edit]
Just found this helpful article:
http://weblogs.sqlteam.com/mladenp/archive/2007/05/18/60210.aspx
I think I'll stick with NOT EXISTS.
A: Also be aware that NOT IN is not equivalent to NOT EXISTS when it comes to null.
This post explains it very well
http://sqlinthewild.co.za/index.php/2010/02/18/not-exists-vs-not-in/
When the subquery returns even one null, NOT IN will not match any
rows.
The reason for this can be found by looking at the details of what the
NOT IN operation actually means.
Let’s say, for illustration purposes that there are 4 rows in the
table called t, there’s a column called ID with values 1..4
WHERE SomeValue NOT IN (SELECT AVal FROM t)
is equivalent to
WHERE SomeValue != (SELECT AVal FROM t WHERE ID=1)
AND SomeValue != (SELECT AVal FROM t WHERE ID=2)
AND SomeValue != (SELECT AVal FROM t WHERE ID=3)
AND SomeValue != (SELECT AVal FROM t WHERE ID=4)
Let’s further say that AVal is NULL where ID = 4. Hence that !=
comparison returns UNKNOWN. The logical truth table for AND states
that UNKNOWN and TRUE is UNKNOWN, UNKNOWN and FALSE is FALSE. There is
no value that can be AND’d with UNKNOWN to produce the result TRUE
Hence, if any row of that subquery returns NULL, the entire NOT IN
operator will evaluate to either FALSE or NULL and no records will be
returned
A: In your specific example they are the same, because the optimizer has figured out what you are trying to do is the same in both examples. But it is possible that in non-trivial examples the optimizer may not do this, and in that case there are reasons to prefer one to other on occasion.
NOT IN should be preferred if you are testing multiple rows in your outer select. The subquery inside the NOT IN statement can be evaluated at the beginning of the execution, and the temporary table can be checked against each value in the outer select, rather than re-running the subselect every time as would be required with the NOT EXISTS statement.
If the subquery must be correlated with the outer select, then NOT EXISTS may be preferable, since the optimizer may discover a simplification that prevents the creation of any temporary tables to perform the same function.
A: I was using
SELECT * from TABLE1 WHERE Col1 NOT IN (SELECT Col1 FROM TABLE2)
and found that it was giving wrong results (By wrong I mean no results). As there was a NULL in TABLE2.Col1.
While changing the query to
SELECT * from TABLE1 T1 WHERE NOT EXISTS (SELECT Col1 FROM TABLE2 T2 WHERE T1.Col1 = T2.Col2)
gave me the correct results.
Since then I have started using NOT EXISTS everywhere.
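The anecdote above is easy to reproduce. A small sketch using Python's built-in sqlite3 rather than SQL Server (the three-valued NULL logic is standard SQL; the table and column names are made up):

```python
import sqlite3

# TABLE2.Col1 contains a NULL, which silently empties the NOT IN result,
# while NOT EXISTS behaves as expected.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1 (col1 INTEGER);
    CREATE TABLE t2 (col1 INTEGER);
    INSERT INTO t1 VALUES (1), (2), (3);
    INSERT INTO t2 VALUES (1), (NULL);
""")

not_in = con.execute(
    "SELECT col1 FROM t1 "
    "WHERE col1 NOT IN (SELECT col1 FROM t2) ORDER BY col1"
).fetchall()

not_exists = con.execute(
    "SELECT col1 FROM t1 WHERE NOT EXISTS "
    "(SELECT 1 FROM t2 WHERE t2.col1 = t1.col1) ORDER BY col1"
).fetchall()

print(not_in)      # [] -- every comparison against the NULL is UNKNOWN
print(not_exists)  # [(2,), (3,)]
```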
A: Database table model
Let’s assume we have the following two tables in our database, that form a one-to-many table relationship.
The student table is the parent, and the student_grade is the child table since it has a student_id Foreign Key column referencing the id Primary Key column in the student table.
The student table contains the following two records:
id | first_name | last_name | admission_score
---+------------+-----------+----------------
 1 | Alice      | Smith     | 8.95
 2 | Bob        | Johnson   | 8.75
And, the student_grade table stores the grades the students received:
id | class_name | grade | student_id
---+------------+-------+-----------
 1 | Math       | 10    | 1
 2 | Math       | 9.5   | 1
 3 | Math       | 9.75  | 1
 4 | Science    | 9.5   | 1
 5 | Science    | 9     | 1
 6 | Science    | 9.25  | 1
 7 | Math       | 8.5   | 2
 8 | Math       | 9.5   | 2
 9 | Math       | 9     | 2
10 | Science    | 10    | 2
11 | Science    | 9.4   | 2
SQL EXISTS
Let’s say we want to get all students that have received a 10 grade in Math class.
If we are only interested in the student identifier, then we can run a query like this one:
SELECT
student_grade.student_id
FROM
student_grade
WHERE
student_grade.grade = 10 AND
student_grade.class_name = 'Math'
ORDER BY
student_grade.student_id
But, the application is interested in displaying the full name of a student, not just the identifier, so we need info from the student table as well.
In order to filter the student records that have a 10 grade in Math, we can use the EXISTS SQL operator, like this:
SELECT
id, first_name, last_name
FROM
student
WHERE EXISTS (
SELECT 1
FROM
student_grade
WHERE
student_grade.student_id = student.id AND
student_grade.grade = 10 AND
student_grade.class_name = 'Math'
)
ORDER BY id
When running the query above, we can see that only the Alice row is selected:
id | first_name | last_name
---+------------+----------
 1 | Alice      | Smith
The outer query selects the student row columns we are interested in returning to the client. However, the WHERE clause is using the EXISTS operator with an associated inner subquery.
The EXISTS operator returns true if the subquery returns at least one record and false if no row is selected. The database engine does not have to run the subquery entirely. If a single record is matched, the EXISTS operator returns true, and the associated other query row is selected.
The inner subquery is correlated because the student_id column of the student_grade table is matched against the id column of the outer student table.
SQL NOT EXISTS
Let’s consider we want to select all students that have no grade lower than 9. For this, we can use NOT EXISTS, which negates the logic of the EXISTS operator.
Therefore, the NOT EXISTS operator returns true if the underlying subquery returns no record. However, if a single record is matched by the inner subquery, the NOT EXISTS operator will return false, and the subquery execution can be stopped.
To match all student records that have no associated student_grade with a value lower than 9, we can run the following SQL query:
SELECT
id, first_name, last_name
FROM
student
WHERE NOT EXISTS (
SELECT 1
FROM
student_grade
WHERE
student_grade.student_id = student.id AND
student_grade.grade < 9
)
ORDER BY id
When running the query above, we can see that only the Alice record is matched:
id | first_name | last_name
---+------------+----------
 1 | Alice      | Smith
So, the advantage of using the SQL EXISTS and NOT EXISTS operators is that the inner subquery execution can be stopped as long as a matching record is found.
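The example can be reproduced end-to-end. A sketch of the same schema and the NOT EXISTS query using Python's built-in sqlite3 (SQLite syntax rather than the SQL Server used above, but the semantics match):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE student (id INTEGER PRIMARY KEY,
                          first_name TEXT, last_name TEXT);
    CREATE TABLE student_grade (id INTEGER PRIMARY KEY,
                                class_name TEXT, grade REAL,
                                student_id INTEGER REFERENCES student(id));
    INSERT INTO student VALUES (1, 'Alice', 'Smith'), (2, 'Bob', 'Johnson');
    INSERT INTO student_grade VALUES
        (1, 'Math', 10, 1), (2, 'Math', 9.5, 1), (3, 'Math', 9.75, 1),
        (4, 'Science', 9.5, 1), (5, 'Science', 9, 1), (6, 'Science', 9.25, 1),
        (7, 'Math', 8.5, 2), (8, 'Math', 9.5, 2), (9, 'Math', 9, 2),
        (10, 'Science', 10, 2), (11, 'Science', 9.4, 2);
""")

# Students with no grade lower than 9: the subquery finds Bob's 8.5 in
# Math, so NOT EXISTS rejects him and only Alice comes back.
rows = con.execute("""
    SELECT id, first_name, last_name
    FROM student
    WHERE NOT EXISTS (SELECT 1 FROM student_grade
                      WHERE student_grade.student_id = student.id
                        AND student_grade.grade < 9)
    ORDER BY id
""").fetchall()

print(rows)  # [(1, 'Alice', 'Smith')]
```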
A: I always default to NOT EXISTS.
The execution plans may be the same at the moment but if either column is altered in the future to allow NULLs the NOT IN version will need to do more work (even if no NULLs are actually present in the data) and the semantics of NOT IN if NULLs are present are unlikely to be the ones you want anyway.
When neither Products.ProductID or [Order Details].ProductID allow NULLs the NOT IN will be treated identically to the following query.
SELECT ProductID,
ProductName
FROM Products p
WHERE NOT EXISTS (SELECT *
FROM [Order Details] od
WHERE p.ProductId = od.ProductId)
The exact plan may vary but for my example data I get the following.
A reasonably common misconception seems to be that correlated sub queries are always "bad" compared to joins. They certainly can be when they force a nested loops plan (sub query evaluated row by row) but this plan includes an anti semi join logical operator. Anti semi joins are not restricted to nested loops but can use hash or merge (as in this example) joins too.
/*Not valid syntax but better reflects the plan*/
SELECT p.ProductID,
p.ProductName
FROM Products p
LEFT ANTI SEMI JOIN [Order Details] od
ON p.ProductId = od.ProductId
If [Order Details].ProductID is NULL-able the query then becomes
SELECT ProductID,
ProductName
FROM Products p
WHERE NOT EXISTS (SELECT *
FROM [Order Details] od
WHERE p.ProductId = od.ProductId)
AND NOT EXISTS (SELECT *
FROM [Order Details]
WHERE ProductId IS NULL)
The reason for this is that the correct semantics, if [Order Details] contains any NULL ProductIds, is to return no results. See the extra anti semi join and row count spool that is added to the plan to verify this.
If Products.ProductID is also changed to become NULL-able the query then becomes
SELECT ProductID,
ProductName
FROM Products p
WHERE NOT EXISTS (SELECT *
FROM [Order Details] od
WHERE p.ProductId = od.ProductId)
AND NOT EXISTS (SELECT *
FROM [Order Details]
WHERE ProductId IS NULL)
AND NOT EXISTS (SELECT *
FROM (SELECT TOP 1 *
FROM [Order Details]) S
WHERE p.ProductID IS NULL)
The reason for that one is because a NULL Products.ProductId should not be returned in the results except if the NOT IN sub query were to return no results at all (i.e. the [Order Details] table is empty). In which case it should. In the plan for my sample data this is implemented by adding another anti semi join as below.
The effect of this is shown in the blog post already linked by Buckley. In the example there the number of logical reads increase from around 400 to 500,000.
Additionally the fact that a single NULL can reduce the row count to zero makes cardinality estimation very difficult. If SQL Server assumes that this will happen but in fact there were no NULL rows in the data the rest of the execution plan may be catastrophically worse, if this is just part of a larger query, with inappropriate nested loops causing repeated execution of an expensive sub tree for example.
This is not the only possible execution plan for a NOT IN on a NULL-able column however. This article shows another one for a query against the AdventureWorks2008 database.
For the NOT IN on a NOT NULL column or the NOT EXISTS against either a nullable or non nullable column it gives the following plan.
When the column changes to NULL-able the NOT IN plan now looks like
It adds an extra inner join operator to the plan. This apparatus is explained here. It is all there to convert the previous single correlated index seek on Sales.SalesOrderDetail.ProductID = <correlated_product_id> to two seeks per outer row. The additional one is on WHERE Sales.SalesOrderDetail.ProductID IS NULL.
As this is under an anti semi join if that one returns any rows the second seek will not occur. However if Sales.SalesOrderDetail does not contain any NULL ProductIDs it will double the number of seek operations required.
A: They are very similar but not really the same.
In terms of efficiency, I've found the LEFT JOIN ... IS NULL approach more efficient (when an abundance of rows are to be selected, that is)
A: If the execution planner says they're the same, they're the same. Use whichever one will make your intention more obvious -- in this case, the second.
A: If the optimizer says they are the same then consider the human factor. I prefer to see NOT EXISTS :)
A: Actually, I believe this would be the fastest:
SELECT p.ProductID, p.ProductName
FROM Northwind..Products p
LEFT OUTER JOIN Northwind..[Order Details] od ON p.ProductId = od.ProductId
WHERE od.ProductId IS NULL
A: I have a table which has about 120,000 records and need to select only those which does not exist (matched with a varchar column) in four other tables with number of rows approx 1500, 4000, 40000, 200. All the involved tables have unique index on the concerned Varchar column.
NOT IN took about 10 mins, NOT EXISTS took 4 secs.
I have a recursive query which might have had some untuned section which might have contributed to the 10 mins, but the other option taking 4 secs explains, at least to me, that NOT EXISTS is far better, or at least that IN and EXISTS are not exactly the same and are always worth a check before going ahead with code.
A: It depends..
SELECT x.col
FROM big_table x
WHERE x.key IN( SELECT key FROM really_big_table );
would not be relatively slow; there isn't much to limit the size of what the query checks to see if the key is in, so EXISTS would not be preferable in this case.
But, depending on the DBMS's optimizer, this could be no different.
As an example of when EXISTS is better
SELECT x.col
FROM big_table x
WHERE EXISTS( SELECT key FROM really_big_table WHERE key = x.key )
AND x.id = very_limiting_criteria;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "617"
} |
Q: Is there a simple JavaScript slider? I need to create a custom volume slider for a WMP object. The current slider is complicated to modify and use; is there a simple way to generate a slider on an HTML page that can have its value passed to a javascript function?
A: Here is another light JavaScript Slider that seems to fit your needs.
A: Yahoo UI library has also a slider control...
A: script.aculo.us has a slider control that might be worth checking out.
A: hey i've just created my own JS slider because I had enough of the heavy Jquery UI one. Interested to hear people's thoughts. Been on it for 5 hours, so really really early stages.
jsfiddle_slider
A: jQuery UI Slider (API docs)
A: There's a nice javascript slider, it's very easy to implement. You can download the zip package here: http://ruwix.com/javascript-volume-slider-control/
P.S. here is the simplified version of the above script:
DEMO link
A: HTML 5 with Webforms 2 provides an <input type="range"> which will make the browser generate a native slider for you. Unfortunately not all browsers have support for this; however, Google has implemented all Webforms 2 controls with js. IIRC the js is intelligent enough to know if the browser has implemented the control, and triggers only if there is no native implementation.
From my point of view it should be considered best practice to use the browsers native controls when possible.
A: The lightweight MooTools framework has one: http://demos.mootools.net/Slider
A: I recommend Slider from Filament Group, It has very good user experience
http://www.filamentgroup.com/lab/update_jquery_ui_slider_from_a_select_element_now_with_aria_support/
A: A simple slider: I have just tested this in pure HTML5, and it's so simple !
<input type="range">
It works like a charm on Chrome. I've not tested other browsers yet.
A: Here is a simple slider object for easy to use
pagecolumn_webparts_sliders
A: The Carpe Slider has newer versions also:
v1.5 carpe_ambiprospect_slider
v2.0b ...slider/drafts/v2.0/
A: The code below should be enough to get you started. Tested in Opera, IE and Chrome.
<script>
var l = 0;
function f(i){
  // restore the previously hovered bar to full height
  var d = document.getElementById('i' + l);
  d.height = 99;
  // show the selected value in the text box
  document.getElementById('t1').value = i;
  // flatten the bar under the cursor to mark the position
  d = document.getElementById('i' + i);
  d.height = 1;
  l = i;
}
</script>
<center>
<form id='f1'>
<input type=text value=0 id='t1'>
</form>
<script>
for (var i = 0; i <= 50; i++)
{
  var s = "<img src='j.jpg' height=99 width=9 onMouseOver='f(" + i + ")' id='i" + i + "'>";
  document.write(s);
}
</script>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44"
} |
Q: parameterised jsp:includes of stripes actions? I've been trying to solve this, and have been getting stuck, so I thought I'd ask.
Imagine two ActionBeans, A and B.
A.jsp has this section in it:
...
<jsp:include page="/B.action">
<jsp:param name="ponies" value="on"/>
</jsp:include>
<jsp:include page="/B.action">
<jsp:param name="ponies" value="off"/>
</jsp:include>
...
Take it as read that the B ActionBean does some terribly interesting stuff depending on whether the "ponies" parameter is set to either on or off.
The parameter string "ponies=on" is visible when you debug into the request, but it's not what's getting bound into the B ActionBean. Instead what's getting bound are the parameters to the original A.action.
Is there some way of getting the behaviour I want, or have I missed something fundamental?
A: So are you saying that in each case ${ponies} on your JSP page prints out "on"?
Because it sounds like you are confusing JSP parameters with Stripes action beans. Setting a JSP parameter simply sets a parameter on that JSP page, that you can reference as shown above, it doesn't actually set anything on the stripes action bean.
A: The reason that this wasn't working was because of massaging done by our implementation of HttpServletRequest.
It works fine with the "normal" implementation.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Does anyone write really long, complex PHP apps? It's hard for me to imagine a php script that is more than a few hundred lines of code. It seems that, for a non-persistent environment, web-based scripting is usually done in small chunks and used for the purpose of delivering a portion of a website to the end user. I'd like to know if people are developing any type of large, or persistent, or complex apps with php, and what is it exactly you are working on. I've only done small projects for small websites, so I don't know what can be accomplished on a larger scale. It would also be nice to know what libraries you are using, and what other technologies you are integrating with. Please enlighten me so I can start to dream of bigger things!
A: For my day-job we run everything in PHP - our front-end website, our backend for agents and employees, inventory, server control interfaces, etc. These are everything from spiffy new AJAX-enabled Zend Framework apps to legacy code that we haven't ported yet. On top of that we use things like Mantis (bug tracking built in PHP), Mediawiki, and phpMyAdmin.
The only thing that isn't PHP are vendor apps because vendors love Java. The one ASP.NET application we have was actually abandoned by the vendor during the project (not really a knock against ASP.NET, that app was just the perfect definition of a runaway project and would have failed no matter what language it was written in).
With mature frameworks like Zend Framework, CodeIgnitor, and CakePHP creating just about anything in PHP is possible.
A: The biggest problem developing large scale programs is definitely keeping them maintainable in the long-term. Initially, a program starts out all full of ideal methods and ideas, but keeping the integrity intact, especially, over time fails, in my opinion, more often than not.
In addition, scope creep is your enemy. You HAVE to rein that in ASAP.
As far as large scale programs go the company I work for has a few internal programs constantly under development. One example is our proprietary website engine. It's a very large code-base that includes a dozen modules (user management, survey system, blogs, user galleries, etc) that allows us to build our clients sites rapidly.
We also develop our own internal project management program for managing our clients work.
You should definitely be thinking in terms of scale in the long term. In almost every project I've worked on there's a permission/group element for users involved. You might want to start thinking about the possibilities and issues involved in that and work up to more complicated functionality.
A: I would look at some of the well-known open source web apps that use PHP to get a good sense of what can be accomplished, and how PHP is used in each of them. The advantage is that since they are all open source, you can actually look at the PHP code to see how various functionality was implemented.
Some good examples to look at include:
*
*WordPress
*TextPattern
*MediaWiki
*PhpBB *questionable code quality
*SugarCRM
*Joomla
*Drupal
Also look at some of the popular frameworks to see what kind of functionality they offer (this should give you a good sense of what types of things PHP is most often used for):
*
*Zend Framework
*CakePHP
*CodeIgniter
*Symfony
A: MediaWiki is one of the largest public PHP apps, and it's got very nice code. I know some larger ones, but they're utterly awful and you'd learn nothing by reading them.
A: There are lots of complex OpenSource php applications. For example, the Drupal CMS, which can be considered a platform in its own right for developing other web sites.
You can browse through the source code online: http://cvs.drupal.org/viewvc.py/drupal/drupal/
A: +1 for Wilco
I have a software I use for some of my clients, it's a CMS, Blog, eCommerce beast, the code base is HUGE, but everything cooperates with each other nicely.
A: My company works on educational software. We've recently started doing web-based content delivery, including video and audio, with the backend written entirely in PHP using MySQL. We have two primary apps, one which lives on our servers and one which is delivered to the customer. One clocks in at ~42,000 lines of code (using a physical line count) and one at ~68,000 lines.
We use PEAR extensively and a recently started project is using the Zend Framework.
A: We use PHP at our company. (We do online language learning: http://www.livemocha.com. You should go take a look at the site. Yes, it's sort of a shameless plug, but it's also topical. :-) )
I can't give you a precise number of users, but we put out a press release a while back celebrating hitting the 3 million mark. That's a pretty large scale as web apps go.
We build on the CakePHP framework, which is based on an MVC architecture... at least in theory. In practice, they auto-generate certain methods for the models which tend to have the result of pushing some pieces of model code (caching, deciding which DB to use) into the controllers. They also have a few localization issues in 1.2 that make me think this part of the framework hasn't really reached maturity yet. That said, I find CakePHP pretty comfortable to work with overall, and you should at least take a look at it if you're considering implementing a large-scale web app in PHP. It has some excellent documentation available as well (google for "CakePHP bakery").
A: Get CodeIgniter and rebuild Amazon or Ebay. If you can dream it you can build it in PHP, but you might not be able to maintain it because it is so easy to create bad code that works. PHP.net is your friend. Whatever framework you use, make sure you read the User Guide and let it guide you.
A: I can't believe nobody has mentioned the MVC pattern yet. IMO, it's one of the best things you can use to help you maintain large codebases.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: 2.9999999999999999 >> .5? I heard that you could right-shift a number by .5 instead of using Math.floor(). I decided to check its limits to make sure that it was a suitable replacement, so I checked the following values and got the following results in Google Chrome:
2.5 >> .5 == 2;
2.9999 >> .5 == 2;
2.999999999999999 >> .5 == 2; // 15 9s
2.9999999999999999 >> .5 == 3; // 16 9s
After some fiddling, I found out that the highest possible value of two which, when right-shifted by .5, would yield 2 is 2.9999999999999997779553950749686919152736663818359374999999¯ (with the 9 repeating) in Chrome and Firefox. The number is 2.9999999999999997779¯ in IE.
My question is: what is the significance of the number .0000000000000007779553950749686919152736663818359374? It's a very strange number and it really piqued my curiosity.
I've been trying to find an answer or at least some kind of pattern, but I think my problem lies in the fact that I really don't understand the bitwise operation. I understand the idea in principle, but shifting a bit sequence by .5 doesn't make any sense at all to me. Any help is appreciated.
For the record, the weird digit sequence changes with 2^x. The highest possible values of the following numbers that still truncate properly:
for 0: 0.9999999999999999444888487687421729788184165954589843749¯
for 1: 1.9999999999999999888977697537484345957636833190917968749¯
for 2-3: x+.99999999999999977795539507496869191527366638183593749¯
for 4-7: x+.9999999999999995559107901499373838305473327636718749¯
for 8-15: x+.999999999999999111821580299874767661094665527343749¯
...and so forth
A: If you wanna go deeper, read "What Every Computer Scientist Should Know About Floating-Point Arithmetic": https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
A: Actually, you're simply ending up doing a floor() on the first operand, without any floating point operations going on. Since the left shift and right shift bitwise operations only make sense with integer operands, the JavaScript engine is converting the two operands to integers first:
2.999999 >> 0.5
Becomes:
Math.floor(2.999999) >> Math.floor(0.5)
Which in turn is:
2 >> 0
Shifting by 0 bits means "don't do a shift" and therefore you end up with the first operand, simply truncated to an integer.
The SpiderMonkey source code has:
switch (op) {
case JSOP_LSH:
case JSOP_RSH:
if (!js_DoubleToECMAInt32(cx, d, &i)) // Same as Math.floor()
return JS_FALSE;
if (!js_DoubleToECMAInt32(cx, d2, &j)) // Same as Math.floor()
return JS_FALSE;
j &= 31;
d = (op == JSOP_LSH) ? i << j : i >> j;
break;
You're seeing a "rounding up" with certain numbers due to the fact that the JavaScript engine can't handle decimal digits beyond a certain precision, and therefore your number ends up getting rounded to the nearest representable value, which here is the next integer. Try this in your browser:
alert(2.999999999999999);
You'll get 2.999999999999999. Now try adding one more 9:
alert(2.9999999999999999);
You'll get a 3.
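The two-step conversion described above can be emulated outside JavaScript. A Python sketch of the same pipeline, with a simplified ToInt32 that only handles small in-range values (the real operation also wraps the result modulo 2^32):

```python
import math

def to_int32(d):
    # Simplified ECMAScript ToInt32: truncate toward zero, as the
    # js_DoubleToECMAInt32 call in the SpiderMonkey snippet does.
    return math.trunc(d)

def js_rsh(a, b):
    # ECMAScript '>>': both operands go through ToInt32 and the
    # shift count is masked to 5 bits (the 'j &= 31' in the C code).
    return to_int32(a) >> (to_int32(b) & 31)

print(js_rsh(2.5, 0.5))                # 2
print(js_rsh(2.999999999999999, 0.5))  # 2 (15 nines: still below 3)
print(js_rsh(2.9999999999999999, 0.5)) # 3 (16 nines: parses as exactly 3.0)
```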
A: Try this javascript out:
alert(parseFloat("2.9999999999999997779553950749686919152736663818359374999999"));
Then try this:
alert(parseFloat("2.9999999999999997779553950749686919152736663818359375"));
What you are seeing is simple floating point inaccuracy. For more information about that, see this for example: http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems.
The basic issue is that the closest that a floating point value can get to representing the second number is greater than or equal to 3, whereas the closest that a float can get to the first number is strictly less than three.
As for why right shifting by 0.5 does anything sane at all, it seems that 0.5 is just itself getting converted to an int (0) beforehand. Then the original float (2.999...) is getting converted to an int by truncation, as usual.
A: I don't think your right shift is relevant. You are simply beyond the resolution of a double precision floating point constant.
In Chrome:
var x = 2.999999999999999777955395074968691915273666381835937499999;
var y = 2.9999999999999997779553950749686919152736663818359375;
document.write("x=" + x);
document.write(" y=" + y);
Prints out: x = 2.9999999999999996 y=3
A: The shift right operator only operates on integers (both sides). So, shifting right by .5 bits should be exactly equivalent to shifting right by 0 bits. And, the left hand side is converted to an integer before the shift operation, which does the same thing as Math.floor().
A: This is possibly the single worst idea I have ever seen. Its only possible purpose for existing is for winning an obfusticated code contest. There's no significance to the long numbers you posted -- they're an artifact of the underlying floating-point implementation, filtered through god-knows how many intermediate layers. Bit-shifting by a fractional number of bytes is insane and I'm surprised it doesn't raise an exception -- but that's Javascript, always willing to redefine "insane".
If I were you, I'd avoid ever using this "feature". Its only value is as a possible root cause for an unusual error condition. Use Math.floor() and take pity on the next programmer who will maintain the code.
Confirming a couple suspicions I had when reading the question:
*
*Right-shifting any fractional number x by any fractional number y will simply truncate x, giving the same result as Math.floor() while thoroughly confusing the reader.
*2.999999999999999777955395074968691915... is simply the largest number that can be differentiated from "3". Try evaluating it by itself -- if you add anything to it, it will evaluate to 3. This is an artifact of the browser and local system's floating-point implementation.
A: I suspect that converting 2.9999999999999997779553950749686919152736663818359374999999 to its binary representation would be enlightening. It's probably only 1 bit different from true 3.
A:
I suspect that converting 2.9999999999999997779553950749686919152736663818359374999999 to its binary representation would be enlightening. It's probably only 1 bit different from true 3.
Good guess, but no cigar.
As the double precision FP number has 53 bits, the last FP number before 3 is actually
(exact): 2.999999999999999555910790149937383830547332763671875
But why is it
2.9999999999999997779553950749686919152736663818359375
(and this is exact, not 49999... !)
which is higher than the last representable double below 3? Rounding. The conversion routine (string to number) is simply programmed correctly to round the input to the nearest floating point number.
2.999999999999999555910790149937383830547332763671875
.......(values between, increasing) -> round down
2.9999999999999997779553950749686919152736663818359375
....... (values between, increasing) -> round up to 3
3
The conversion input must use full precision. If the number is exactly halfway between
those two fp numbers (which is 2.9999999999999997779553950749686919152736663818359375)
the rounding depends on the rounding-mode flags. The default rounding is round-to-even, meaning that the number will be rounded to the neighbor whose last mantissa bit is even.
Now
3 = 11. (binary)
2.999... = 10.11111111111...... (binary)
All mantissa bits are set, so the last bit is 1 — the value is odd. That means the exact halfway value will be rounded up to the even neighbor, 3, and you get the strange .....49999 tail because the input must be smaller than the exact halfway point to be distinguishable from 3.
A: And to add to John's answer, the odds of this being more performant than Math.floor are vanishingly small.
JavaScript numbers are IEEE 754 double-precision floating point, so you're going to get rounding errors on an operation like this -- even though the operation itself is well defined.
A: It should be noted that the gap here — 3 minus 2.9999999999999997779…, which is 2^-52 ≈ 2.22e-16 — is exactly the machine Epsilon for doubles, defined as "the smallest number E such that (1+E) > 1."
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Flash/Actionscript2 - Can't get comboBox "change" event to fire I'm trying to use the combobox component for Flash. I can't get the change event to fire. My code is pretty much straight off of the adobe site (link below). The box gets populated but changing the value produces no trace output. What am I doing wrong?
http://livedocs.adobe.com/flash/mx2004/main_7_2/wwhelp/wwhimpl/js/html/wwhelp.htm?href=00002149.html#3138459
myCombo.addItem("hi1", "hi5");
myCombo.addItem("h2", "hi6");
myCombo.addItem("hi3", "hi7");
myCombo.addItem("h4", "hi8");
var form = new Object();
form.change = function(eventObj){
trace("Value changed to " + eventObj.target.value);
}
myCombo.addEventListener("change", form);
A: I pasted your code into an AS2 project and it worked as expected for me. No other output? Try adding a trace before and after the addEventListener to make sure it's getting called. Try using a name other than form for your object. Try running it in debug and set a breakpoint in the change function.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: C# .NET 3.0/3.5 features in 2.0 using Visual Studio 2008 What are some of the new features that can be used in .NET 2.0 that are specific to C# 3.0/3.5 after upgrading to Visual Studio 2008? Also, what are some of the features that aren't available?
Available
*
*Lambdas
*Extension methods (by declaring an empty System.Runtime.CompilerServices.ExtensionAttribute)
*Automatic properties
*Object initializers
*Collection Initializers
*LINQ to Objects (by implementing IEnumerable extension methods, see LinqBridge)
Not Available
*
*Expression trees
*WPF/Silverlight Libraries
A: Pretty much everything! Daniel Moth covers this here and here. That only leaves runtime support: LINQ-to-Objects is provided by LINQBridge - which leaves just bigger APIs like Expression support, and tools like LINQ-to-SQL. These are too big to be reasonably ported back to .NET 2.0, so I'd use .NET 3.5 for these.
A: I cover this in an article on my site.
Almost all C# 3.0 features are available when targeting .NET 2.0. For extension methods, you need to define an extra attribute. Expression trees aren't available at all. Query expression support is based on a translation followed by "normal" C# rules, so you'll need something to provide the Select, Where etc methods. LINQBridge is the de facto standard "LINQ to Objects in .NET 2.0" implementation. You may well want to declare the delegates in the Func and Action delegate families to make it easier to work with lambda expressions - and then remove them if/when you move to .NET 3.5
A: To define extension methods, you'll need to supply the following class if you're targeting .NET 2.0:
namespace System.Runtime.CompilerServices {
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class | AttributeTargets.Assembly)]
sealed class ExtensionAttribute : Attribute { }
}
A: There was a previous discussion about something similar you may also want to read too:
Targeting .NET Framework 3.5, Using .NET 2.0 Runtime. Caveats?
A: You can use Mono's version of the System.Core which fully supports LINQ & Expression Trees.
I compiled its source against .net 2.0, and now I can use it in my .net2.0 projects.
This is great for projects that needs to be deployed on win2k, where .net3.5 is not available.
A: You can use any new C# 3.0 feature that is handled by the compiler by emitting 2.0-compatible IL and doesn't reference any of the new 3.5 assemblies:
*
*Lambdas (used as Func<..>, not Expression<Func<..>> )
*Extension methods (by declaring an empty System.Runtime.CompilerServices.ExtensionAttribute)
*Automatic properties
*Object Initializers
*Collection Initializers
*LINQ to Objects (by implementing IEnumerable<T> extension methods, see LinqBridge)
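To make the "why this works" concrete, here is a minimal sketch (my own cut-down illustration, not LINQBridge itself) of everything a .NET 2.0-targeted project needs for `Where` plus a lambda to compile under the VS2008 C# 3.0 compiler:

```csharp
using System.Collections.Generic;

namespace System.Runtime.CompilerServices
{
    // Re-declared so the C# 3.0 compiler accepts 'this' parameters on .NET 2.0.
    [AttributeUsage(AttributeTargets.Method | AttributeTargets.Class | AttributeTargets.Assembly)]
    public sealed class ExtensionAttribute : Attribute { }
}

// .NET 2.0 has no Func<T, TResult>, so declare it ourselves.
public delegate TResult Func<T, TResult>(T arg);

public static class Enumerable20
{
    // A bare-bones Where; LINQBridge provides the full operator set.
    public static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> predicate)
    {
        foreach (T item in source)
            if (predicate(item))
                yield return item;
    }
}
```

With that in place, `new[] { 1, 2, 3 }.Where(x => x > 1)` compiles and runs fine on the 2.0 runtime, because lambdas and extension methods are purely compiler features.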
A: Lambdas & Extension methods are handled purely by the compiler and can be used with the .Net 2.0 framework.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Custom Date Formatting in J2ME I would like to format a J2ME Date object to only show the date part, not the date and time. What would be the easiest way? Would probably need to include an external library to do this.
A: java.util.Calendar has all the methods required to extract the date fields, which you can use to format the date-only output yourself.
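For example, a hand-rolled formatter along these lines (names are my own; Calendar, Date, and TimeZone are all available in CLDC/MIDP, while java.text.SimpleDateFormat is not):

```java
import java.util.Calendar;
import java.util.Date;
import java.util.TimeZone;

class DateOnlyFormatter {
    // Builds "YYYY-MM-DD" by hand from Calendar fields.
    static String format(Date date, TimeZone zone) {
        Calendar c = Calendar.getInstance(zone);
        c.setTime(date);
        int month = c.get(Calendar.MONTH) + 1; // Calendar.MONTH is zero-based
        int day = c.get(Calendar.DAY_OF_MONTH);
        StringBuffer sb = new StringBuffer();  // StringBuffer, not StringBuilder, on CLDC
        sb.append(c.get(Calendar.YEAR)).append('-');
        if (month < 10) sb.append('0');
        sb.append(month).append('-');
        if (day < 10) sb.append('0');
        sb.append(day);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(format(new Date(), TimeZone.getDefault()));
    }
}
```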
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173085",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Election 2008 data files - how to know which candidate received the financial contribution? I was trying to programmatically go through presidential campaign contributions to see which web 2.0 people contributed to which candidates. You can get the data file indiv08.zip on the site http://www.fec.gov/finance/disclosure/ftpdet.shtml#a2007_2008
I can parse out who contributed and how much they contributed, but I cannot figure out who the contribution went to. There seems to be some ID, but it does not match the candidate IDs in any other file I can find. If you can help with this, I think I could make a pretty nice page showing who contributed to which candidate.
update -- I wonder if I just misunderstand this contribution system. Perhaps candidates cannot receive contributions at all, only committees? For example I see "C00431445OBAMA FOR AMERICA" received a lot of contributions. That makes it a bit more complicated, then, to associate those committees with candidates. Basically I want to know who supported Obama and who supported McCain.
A: Page 3 of the tutorial linked at the top of the page you linked to contains the column names, including "Candidate Identification".
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: mootools or javascript : what does $tmp stand for or what does it pertain too I'm currently working on the Tips.js from mootools library and my code breaks on the line that has those el.$tmp, and console says it's undefined
Can anybody help me?
A: I'd suggest taking your question and posting it, along with a link to the page to either/or/and:
http://mooforum.net
http://groups.google.com/group/mootools-users/topics
That's the community that swarms with it.
Now as for answering it here - I'd need a lot more information (code example?)
A: in 1.11 (haven't checked in 1.2+) $tmp is a property attached to the element to hold temporary per-element state, created and used internally by the garbage collector:
var Garbage = {
elements: [],
collect: function(el){
if (!el.$tmp){
Garbage.elements.push(el);
el.$tmp = {'opacity': 1};
}
return el;
},
trash: function(elements){
for (var i = 0, j = elements.length, el; i < j; i++){
if (!(el = elements[i]) || !el.$tmp) continue;
if (el.$events) el.fireEvent('trash').removeEvents();
for (var p in el.$tmp) el.$tmp[p] = null;
for (var d in Element.prototype) el[d] = null;
Garbage.elements[Garbage.elements.indexOf(el)] = null;
el.htmlElement = el.$tmp = el = null;
}
Garbage.elements.remove(null);
},
empty: function(){
Garbage.collect(window);
Garbage.collect(document);
Garbage.trash(Garbage.elements);
}
};
the lines el.$tmp = {'opacity': 1}; (in collect method above) and el.htmlElement = el.$tmp = el = null; (in trash method above) are the only places in the source where this property is assigned that i could find, although it's called by various other methods, such as Element.setOpacity and Element.getStyle (specifically, only to return opacity value), as well as methods in the Tips class
1.2 might not have this issue, but in any case, hope that helps and sorry i couldn't help more
A: Hmmm. I'm not exactly sure what el.$tmp is a reference to in MooTools but a message stating "console is undefined" is probably because someone was trying to log to the Firebug (or another) console and that object does not exist if you don't have Firebug and friends.
If you don't have Firebug (http://getfirebug.com) installed for Firefox then you might give it a shot. See if you can find the console statement and remove it. Also, if you aren't using Firefox, you can use Firebug Lite in IE, Safari, or Opera.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to reinterpret cast a float to an int? Is there a non-static conversion operator or user-defined assignment operator for conversion on 'this'?
1.
How can I reinterpret cast a float to an int (or a double to a long)?
float f = 2.0f;
int i = (int)f; // causes conversion
I only want to copy the bit-pattern from f to i. How can this be done?
2.
The implicit and explicit operators in C# use one intermediate object because the operator function is static
public static implicit operator MyClass(double s)
{
return new MyClass(s);
}
..
..
MyClass m = 2.2; // this code uses 'm' and one intermediate object.
This is fine for reference types, but for value-types which are big (say 20-30 bytes), this will cause an unnecessary data copy. Is my understanding correct? And if yes, then why doesn't C# have a non-static conversion operator or user-defined assignment operator so that the conversion/assignment takes place on 'this'? If it does, what's the way to do it?
A: *
*The BitConverter class can retrieve the bytes for any primitive type, which you can then use to create an int. Another option is Buffer.BlockCopy if you have large amounts of converting to do.
float ff = 2.0f;
int ii = BitConverter.ToInt32(BitConverter.GetBytes(ff), 0);
float[] ff = new float[...];
int[] ii = new int[ff.Length];
Buffer.BlockCopy(ff, 0, ii, 0, ff.Length * 4); // byte-wise copy of ff into ii
*No, there is no other option in C#; however, I think that while you're correct that a copy will be made, any sufficiently simple implementation will have JIT optimizations applied, possibly removing the need for a copy.
A: This approach, while unsafe, works as a generic solution
static unsafe TDest ReinterpretCast<TSource, TDest>(TSource source)
{
var tr = __makeref(source);
TDest w = default(TDest);
var trw = __makeref(w);
*((IntPtr*)&trw) = *((IntPtr*)&tr);
return __refvalue(trw, TDest);
}
A: This should be the fastest and cleanest way to do it:
public static class ReinterpretCastExtensions {
public static unsafe float AsFloat( this int n ) => *(float*)&n;
public static unsafe int AsInt( this float n ) => *(int*)&n;
}
public static class MainClass {
public static void Main( string[] args ) {
Console.WriteLine( 1.0f.AsInt() );
Console.WriteLine( 1.AsFloat() );
}
}
A: 1: BitConverter (as sixlettervariables) is an option; as is unsafe code (which doesn't need an intermediate buffer):
float f = 2.0f;
int i;
// perform unsafe cast (preserving raw binary)
unsafe
{
float* fRef = &f;
i = *((int*)fRef);
}
Console.WriteLine(i);
// prove same answer long-hand
byte[] raw = BitConverter.GetBytes(f);
int j = BitConverter.ToInt32(raw, 0);
Console.WriteLine(j);
2: note that you should limit the size of structs. I can't find a citation for it, but the number "16 bytes" (max, as a recommendation) seems to stick in my mind. Above this, consider an immutable reference-type (class).
A: Barring unsafe code - this is the fastest method I know of to perform a reinterpret:
[StructLayout(LayoutKind.Explicit)]
private struct IntFloat
{
[FieldOffset(0)]
public int IntValue;
[FieldOffset(0)]
public float FloatValue;
}
private static float Foo(float x)
{
var intFloat = new IntFloat { FloatValue = x };
var floatAsInt = intFloat.IntValue;
...
Hope this helps someone.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: What's the best way to only allow a PHP file to be included? I want to make sure people can't type the name of a PHP script in the URL and run it. What's the best way of doing this?
I could set a variable in the file that will be including this file, and then check that variable in the file being included, but is there an easier way?
A: You could check the URI and see if that file is being called directly by examining
$_SERVER['SCRIPT_FILENAME']
or you could move the file outside the public folder; this is the better solution.
A: I have long kept everything except directly viewable scripts outside the web root. Then configure PHP to include your script directory in the path. A typical set up would be:
appdir
include
html
In the PHP config (either the global PHP config or in a .htaccess file in the html directory) add this:
include_path = ".:/path/to/appdir/include:/usr/share/php"
or (for Windows)
include_path = ".;c:\path\to\appdir\include;c:\php\includes"
Note that this line is probably already in your php.ini file, but may be commented out allowing the defaults to work. It might also include other paths. Be sure to keep those, as well.
If you are adding it to a .htaccess file, the format is:
php_value include_path .:/path/to/appdir/include:/usr/share/php
Finally, you can add the path programatically with something like this:
$parentPath = dirname(dirname(__FILE__));
$ourPath = $parentPath . DIRECTORY_SEPARATOR . 'include';
$includePath = ini_get('include_path');
$includePaths = explode(PATH_SEPARATOR, $includePath);
// Put our path between 'current directory' and rest of search path
if ($includePaths[0] == '.') {
array_shift($includePaths);
}
array_unshift($includePaths, '.', $ourPath);
$includePath = implode(PATH_SEPARATOR, $includePaths);
ini_set('include_path', $includePath);
(Based on working code, but modified, so untested)
This should be run in your frontend file (e.g. index.php). I put it in a separate include file which, after modifying the above, can be included with something like include '../includes/prepPath.inc';.
I've used all the versions I've presented here with success. The particular method used depends on preferences, and how the project will be deployed. In other words, if you can't modify php.ini, you obviously can't use that method
A: In a few of the open source applications I've poked around in, including Joomla and PHPBB, they declare a constant in the main includes file, and then verify that constant exists in each of the includes:
// index.php
require_once 'includes.php';
// includes.php
define('IN_MY_PROJECT', true);
include 'myInc.php';
// myInc.php
defined('IN_MY_PROJECT') || die("No direct access, plsktnxbai");
A: The Zend Framework recommends you keep the files outside the web root, as Unkwntech has suggested. I'd say this is the safest and most fool proof solution.
A: From a PHP Nuke module:
<?php
if (!eregi("modules.php", $PHP_SELF)) {
die ("You can't access this file directly...");
}
// more code ...
?>
Replace modules.php with your file name, and that file cannot be called directly.
A: One way I've seen a lot is to create a variable that has to be present in every included file and check first thing in every include:
if(!isset($in_prog)){
exit;
}
A: I guess the best way is to put the files you want to include inside an "/include" folder and set access rights 700 on the folder
A: This is an old question, but I found this one-liner quite effective:
$inc = get_included_files(); if(basename(__FILE__) == basename($inc[0])) exit();
get_included_files() returns an array with the files included on the script. The first position is the root file, and the other positions are the included files.
A: An alternative is to use this code:
// Prevent direct access, use exclusively as include only.
if (count(get_included_files()) == 1) {
http_response_code(403);
die();
}
If php file is called directly, then only one item is in included files array.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: conditional logic based on type Given:
interface I
{
}
class B: I
{
}
class C: I
{
}
class A
{
public void Method(B arg)
{
}
public void Method(C arg)
{
}
public void Method(I arg)
{
// THIS is the method I want to simplify.
if (I is B)
{
this.Method(arg as B);
}
else if (I is C)
{
this.Method(arg as C);
}
}
}
I know that there are better ways to design this type of interaction, but because of
details which would take too long to explain, this is not possible.
Since this pattern will be duplicated MANY times, I would like to replace the
conditional logic with a generic implementation which I could use just one line.
I can't see a simple way to implement this generic method/class, but my instincts tell me it should be possible.
Any help would be appreciated.
A: What you want is double dispatch, and visitor pattern in particular.
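For the code in the question, a visitor-style sketch (the Accept name is illustrative) would look something like this — each concrete class calls back into A, so overload resolution happens at compile time and the if/else chain disappears:

```csharp
interface I
{
    void Accept(A visitor); // double dispatch: the concrete type picks the overload
}

class B : I
{
    public void Accept(A visitor) { visitor.Method(this); } // statically binds to Method(B)
}

class C : I
{
    public void Accept(A visitor) { visitor.Method(this); } // statically binds to Method(C)
}

class A
{
    public void Method(B arg) { /* B-specific logic */ }
    public void Method(C arg) { /* C-specific logic */ }

    public void Method(I arg)
    {
        arg.Accept(this); // replaces the type-checking chain
    }
}
```

The cost is that I has to know about A (or a visitor interface), which may or may not be acceptable given the design constraints mentioned in the question.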
A: I would put the method inside the interface and then let polymorphism decide which method to call
interface I
{
void Method();
}
class B : I
{
public void Method() { /* previously A.Method(B) */}
}
class C : I
{
public void Method() { /* previously A.Method(C) */ }
}
class A
{
public void Method(I obj)
{
obj.Method();
}
}
Now when you need to add a new class, you only need to implement I.Method. You don't need to touch A.Method.
A: This is kinda ugly but it gets the job done:
public void Method(B arg)
{
if (arg == null) return;
...
}
public void Method(C arg)
{
if (arg == null) return;
...
}
public void Method(I arg)
{
this.Method(arg as B);
this.Method(arg as C);
}
I don't think I would do it this way, though. It actually hurts looking at that. I'm sorry I forced you all to look at this as well.
A: interface I
{
}
class B : I
{
}
class C : I
{
}
class A
{
public void Method(B arg)
{
Console.WriteLine("I'm in B");
}
public void Method(C arg)
{
Console.WriteLine("I'm in C");
}
public void Method(I arg)
{
Type type = arg.GetType();
MethodInfo method = typeof(A).GetMethod("Method", new Type[] { type });
method.Invoke(this, new I[] { arg });
}
}
A: It doesn't exist in a convenient form within C# - see here for an idea based on F#'s pattern matching, that does exactly what you want. You can do some things with reflection to select the overload at runtime, but that will be very slow, and has severe issues if anything satisfies both overloads. If you had a return value you could use the conditional operator;
return (arg is B) ? Method((B)arg) : ((arg is C) ? Method((C)arg) : 0);
Again - not pretty.
A: Easy. In Visual Basic I do this all the time using CallByName.
Sub MethodBase(value As Object)
    CallByName(Me, "RealMethod", CallType.Method, value)
End Sub
This will call the overload of RealMethod that most closely matches the runtime type of value.
I'm sure you can use CallByName from C# by importing Microsoft.VisualBasic.Interaction or by creating your own version using reflection.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173145",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Removing border/background from Crystal Report Viewer in Visual Studio 2008 Can someone please explain how to remove the background/borders off an embedded CrystalReportViewer control in Visual Studio 2008.
I'm trying to remove the light gray (below the "Crystal Report" heading) and then the darker gray underneath that. I want to be left with only the white box and the report inside this.
This is the output I'm currently getting:
http://img411.imageshack.us/my.php?image=screenshotml3.jpg
The HTML snippet is:
<div>
<h2>Crystal Report</h2>
<CR:CrystalReportViewer ID="CrystalReportViewer1" runat="server"
AutoDataBind="true" DisplayToolbar="False" />
</div>
The C# code snippet is:
string strReportName = "CrystalReport";
string strReportPath = Server.MapPath(strReportName + ".rpt");
ReportDocument rptDocument = new ReportDocument();
rptDocument.Load(strReportPath);
CrystalReportViewer1.HasCrystalLogo = false;
CrystalReportViewer1.HasDrilldownTabs = false;
CrystalReportViewer1.HasDrillUpButton = false;
CrystalReportViewer1.HasExportButton = false;
CrystalReportViewer1.HasGotoPageButton = false;
CrystalReportViewer1.HasPageNavigationButtons = false;
CrystalReportViewer1.HasPrintButton = false;
CrystalReportViewer1.HasRefreshButton = false;
CrystalReportViewer1.HasSearchButton = false;
CrystalReportViewer1.HasToggleGroupTreeButton = false;
CrystalReportViewer1.HasToggleParameterPanelButton = false;
CrystalReportViewer1.HasZoomFactorList = false;
CrystalReportViewer1.DisplayToolbar = false;
CrystalReportViewer1.EnableDrillDown = false;
CrystalReportViewer1.BestFitPage = true;
CrystalReportViewer1.ToolPanelView = CrystalDecisions.Web.ToolPanelViewType.None;
CrystalReportViewer1.BackColor = System.Drawing.Color.Red;
CrystalReportViewer1.BorderColor = System.Drawing.Color.Green;
CrystalReportViewer1.CssClass
CrystalReportViewer1.Height = 200;
CrystalReportViewer1.Width = 500;
CrystalReportViewer1.ReportSource = rptDocument;
A: Your code worked for me in Visual Studio 2008 with the Crystal Reports XI Release 2 Developer Edition (stand-alone product). I had no visible gray bars or background. In fact, the white space of the report itself showed up as the assigned BackColor, Red. Are you using the bundled CrystalReportViewer that comes with Visual Studio 2008? It might be worth trying to set the BorderStyle property to BorderStyle.None to see if that has any effect.
There is a tutorial on MSDN about customizing the CrystalReportViewer control at: http://msdn.microsoft.com/en-us/library/ms227538.aspx
That's the one for VS2008/.NET 3.5, but I'm not sure how much the tutorial has actually changed from the previous version.
A: I had the same problem.
It was caused by another CSS file conflicting with the control's CSS file.
Once I made a master file for reports, without all the site's CSS file references, the background and taskbar were fine - they have a white background.
A: Try setting the DocumentView property to WebLayout instead of PrintLayout:
Code-Behind
CrystalReportViewer.DocumentView = CrystalDecisions.Shared.DocumentViewType.WebLayout
Web.config
<configSections>
<sectionGroup name="businessObjects">
<sectionGroup name="crystalReports">
<section name="printControl" type="System.Configuration.NameValueSectionHandler" />
<section name="crystalReportViewer" type="System.Configuration.NameValueSectionHandler" />
</sectionGroup>
</sectionGroup>
</configSections>
<businessObjects>
<crystalReports>
<crystalReportViewer>
<add key="documentView" value="weblayout" />
</crystalReportViewer>
</crystalReports>
</businessObjects>
SAP Note 1344534 - How to change the documentView for a Crystal Report web viewer
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do you stay focused and ship projects? I find way too many projects to get involved in, way to many languages to play with (and way too many cool features within those languages), and way too many books to read...
How do you guys stay focused and actually get anything done, rather than leaving a trail of partially complete "experiments?"
A: Money, and the feeling of accomplishment that goes along with actually finishing something. When I first thought about working for myself I started coming up with ideas of software that I would develop and then later sell. Of course, I really didn't know if what I was making would actually sell, so it was easy to get distracted and jump at new ideas.
So I decided to go with being a contractor/consultant. When you know that there is a buyer for what you're making, and that somebody is waiting on it, it gives you motivation. If it's an interesting or challenging project, there's a rush associated with finishing it. So that adds extra motivation because you want that rush more and more.
Once I got a fairly steady flow of work-for-hire projects, I found that I can stay focused on my side projects better because I have incentive to practice good time management. I give myself a certain amount of time every day or week to work on my side projects, and it helps me stay focused when I take that time.
Of course, I still go off on tangents occasionally and start new side projects as well, but the ones that I am most interested in I have been able to stick with.
Also, after you finish some projects, then you get a better feel for what it actually takes to go from conception to completion, and it makes it a lot easier to do it again and again.
A: Seems like there are two types of developers: Tinkerers and Entrepreneurs.
Tinkerers want to know how every little thing works. Once they get the hang of something, they're distracted by everything they don't know. The tech world is brutal for a Tinkerer because there's so much to learn and each new year creates more. Tinkerers are proud of their knowledge.
Entrepreneurs want to know enough to build something really great. They think in terms of features and end-user experiences. You never hear them argue about Python over .NET over Java over C because they just don't care. They're more interested in the result of a language versus the language itself. Entrepreneurs are proud of their user-base.
Sounds like you're struggling with your Tinkerer tendencies. I've got the same problem and have found only one thing that helps - find an Entrepreneur developer that you thoroughly respect. When you put the two together, it's unbeatable. The Tinkerer plumbs the depth of every technical nuance. They keep the Entrepreneur technically honest. In turn, the Entrepreneur creates focus and opportunity for the Tinkerer. When they catch you browsing the Scala site (assuming you're not a Scala developer), they reveal a new challenge in your existing project. Not only that, they're much better at understanding what non-Tinkerers want.
A: I think a good programmer may well have lots of unfinished "experiments" hanging around, this is a good thing.
Usually with a good manager, you will be held accountable if your work is simply not getting done. If you're a student, though, it's tougher. I realized that it is impossible to learn everything you want to.
I limit myself to only learning 1 or 2 new languages per year, and only 1 book per month. That seems to be a nice balance between programming chaos and getting my job done well.
Kudos for having a great learning attitude :)
A: Probably the best motivator (for a team or an individual) is to set goals early and often.
One of the best methods I've observed in project management was the introduction of "feature themed weeks" - where the team (or an individual) was set goals or deliverables which aligned under a general flavour, e.g "Customer Features", "Reporting and Metrics" etc. This kept the team/person focused on one area of delivery/effort. It also made it easy to communicate to the customer where progress was being made.
Also.. Try to make your (or your team's) progress visible. If you can establish an automated build process (or some other mechanism) and "publish" incremental implementation of work over a short period of time you can often gain traction and early by-in which can drive results faster (and help aid in early course correction).
A: 1) I leave a utterly MASIVE trail of unfinished stuff, all side projects of course.
2) When I need motivation to work I open my wallet... That usually does it for me.
A: I find that getting involved with the "business" side of the equation helps tremendously. When you see how much benefit the actual users of your program can get out of your creative solutions to their problems - it's an extreme motivation to provide those solutions to them. :-)
A: I'm building an app I plan on selling and see it as a way of making extra money or reducing the amount of time I spend working for other people.
My wife likes this idea and her encouragement has managed to keep me focused longer than normal as it's now "work" rather than "play"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Difference Between ViewData and TempData? I know what ViewData is and use it all the time, but in ASP.NET Preview 5 they introduced something new called TempData.
I normally strongly type my ViewData, instead of using the dictionary of objects approach.
So, when should I use TempData instead of ViewData?
Are there any best practices for this?
A: In one sentence: TempData are like ViewData with one difference: They only contain data between two successive requests, after that they are destroyed. You can use TempData to pass error messages or something similar.
Although outdated, this article has good description of the TempData lifecycle.
As Ben Scheirman said here:
TempData is a session-backed temporary storage dictionary that is available for one single request. It’s great to pass messages between controllers.
A: ViewData:
*
*ViewData is a dictionary type: public ViewDataDictionary ViewData { get; set; }
*It can be used to pass data from controller to view, one way only
*Its life lasts only for the duration of the current request
*If passing a string then no need to typecast
*If passing an object then you need to typecast it, but before that you need to check that it is not null
*It's a property on ControllerBase, which is the parent of the Controller class
*
*TempData internally uses TempDataDictionary: public TempDataDictionary TempData { get; set; }
*Once data is saved into the TempDataDictionary object:
*
*It persists there and can be read from any view or any action in any controller
*It can only be read once; once read, it becomes null
*It is stored in session state, so the data is lost when the session expires.
This behavior is new in ASP.NET MVC 2 and later versions.
In earlier versions of ASP.NET MVC, the values in TempData were available only until the next request.
*It stays alive until it is read or the session expires, and can be read from anywhere.
See the comparison of ViewData, ViewBag, TempData and Session in MVC in detail
A: I found this comparison useful: http://www.dotnet-tricks.com/Tutorial/mvc/9KHW190712-ViewData-vs-ViewBag-vs-TempData-vs-Session.html
One gotcha I came across is that, by default, TempData values are cleared after they are read. There are options; see the 'Peek' and 'Keep' methods on MSDN for more info.
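That read-once behaviour is easy to picture with a toy model. The sketch below is hypothetical JavaScript, not the real ASP.NET implementation (the real TempData is session-backed and scoped to requests); the peek() and keep() methods are only loosely modeled on TempData.Peek() and TempData.Keep():

```javascript
// Hypothetical sketch of a read-once dictionary mimicking TempData's
// default behaviour. Not the real ASP.NET implementation.
function createTempData() {
  const store = new Map();
  const kept = new Set();
  return {
    set(key, value) { store.set(key, value); },
    // Reading consumes the value unless it was marked with keep().
    get(key) {
      const value = store.get(key);
      if (!kept.has(key)) store.delete(key);
      return value;
    },
    // peek() reads without consuming, like TempData.Peek().
    peek(key) { return store.get(key); },
    // keep() preserves a key past the next read, like TempData.Keep().
    keep(key) { kept.add(key); }
  };
}

const tempData = createTempData();
tempData.set("message", "Saved!");
console.log(tempData.peek("message")); // "Saved!" - still available after a peek
console.log(tempData.get("message"));  // "Saved!" - first real read
console.log(tempData.get("message"));  // undefined - already consumed
```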
A: When an action returns a RedirectToAction result it causes an HTTP redirect (equivalent to Response.Redirect). Data can be preserved in the TempData property (dictionary) of the controller for the duration of a single HTTP redirect request.
A: ViewData is used when we want to pass data from a controller to its corresponding view.
ViewData has a very short life: it is destroyed when a redirection occurs.
Example(Controller):
public ViewResult try1()
{
ViewData["DateTime"] = DateTime.Now;
ViewData["Name"] = "Mehta Hitanshi";
ViewData["Twitter"] = "@hitanshi";
ViewData["City"] = "surat";
return View();
}
try1.cshtml
<table>
    <tr>
        <th>Name</th>
        <th>Twitter</th>
        <th>City</th>
    </tr>
    <tr>
        <td>@ViewData["Name"]</td>
        <td>@ViewData["Twitter"]</td>
        <td>@ViewData["City"]</td>
    </tr>
</table>
TempData transfers data between controllers or between actions.
It is used to store one-time messages, and its life is very short. We can use TempData.Keep() to make it available across subsequent actions, i.e. to make it persistent.
Example(Controller):
public ActionResult try3()
{
TempData["DateTime"] = DateTime.Now;
TempData["Name"] = "Ravina";
TempData["Twitter"] = "@silentRavina";
TempData["Email"] = "Ravina12@gmail.com";
TempData["City"] = "India";
TempData["MobNo"] = 9998975436;
return RedirectToAction("TempView1");
}
public ActionResult TempView1()
{
return View();
}
TempView1.cshtml
<table>
<tr>
<th>Name</th>
<th>Twitter</th>
<th>Email</th>
<th>City</th>
<th>Mobile</th>
</tr>
<tr>
<td>@TempData["Name"]</td>
<td>@TempData["Twitter"]</td>
<td>@TempData["Email"]</td>
<td>@TempData["City"]</td>
<td>@TempData["MobNo"]</td>
</tr>
</table>
A: Just a side note to TempData.
Data in it is kept not until the next request, but until the next read operation is called!
See:
TempData won't destroy after second request
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "102"
} |
Q: Can someone explain per-pixel collision detection? Can someone explain the pros and cons of it and any math involved with it?
A: It's more accurate than vertexes (or hit-boxes etc). I'm assuming you're talking about 2d here (3d would be box-model vs vertex). Per-pixel would allow you to have detailed sprites which small things (missiles, say) would collide with more realistically.
It's more math and slower than the conventional method, which would be to draw a box (or some other easy-to-compute shape like a circle) and say 'this is the actor; anything in here is him'. It is, however, more accurate.
A: When talking pros and cons you also have to consider collision response. What do you want to do when a collision is detected? If you're detecting an object hitting another object where the result is one or both of the objects being destroyed, then per-pixel collision detection is good and accurate. If you want the object to react in some other way, i.e. sliding against a wall, bouncing, etc... then you may want to work with some type of bounding rectangle/circle/oval which will make the collision response seem smoother and more consistent, with less chance of getting stuck.
A: For 2D:
You do not need any math for this problem; you need only a custom bitblit routine. You blit collision candidates onto a hidden surface by painting their collision masks onto it and checking whether any pixel you are about to draw is already set (pixel != 0). If so, you have a collision. Of course, you should first pre-check with bounding rectangles whether a collision can occur at all.
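The mask-based check described above can be sketched without any graphics API at all. This is a hypothetical illustration (plain 0/1 arrays stand in for the collision masks you would normally derive from sprite alpha channels), with the bounding-rectangle pre-check included:

```javascript
// Simplified per-pixel collision sketch: masks are 2D arrays of 0/1,
// positioned at (x, y) in world space with size (w, h).
function boundingBoxesOverlap(a, b) {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}

function pixelCollision(a, b) {
  // Cheap pre-check: no pixel test needed if the rectangles don't overlap.
  if (!boundingBoxesOverlap(a, b)) return false;
  // Intersection rectangle of the two sprites.
  const left   = Math.max(a.x, b.x);
  const right  = Math.min(a.x + a.w, b.x + b.w);
  const top    = Math.max(a.y, b.y);
  const bottom = Math.min(a.y + a.h, b.y + b.h);
  // Only pixels inside the intersection can collide.
  for (let y = top; y < bottom; y++) {
    for (let x = left; x < right; x++) {
      if (a.mask[y - a.y][x - a.x] && b.mask[y - b.y][x - b.x]) return true;
    }
  }
  return false;
}

const blob  = { x: 0, y: 0, w: 2, h: 2, mask: [[1, 0], [0, 1]] };
const spike = { x: 1, y: 0, w: 2, h: 2, mask: [[0, 1], [1, 0]] };
console.log(pixelCollision(blob, spike)); // true - both solid at world (1, 1)
```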
For 3D:
You will need math (a lot)!
basically you will check each surface of your actor against each surface of your enemy.
This is done by calculating plane-ray intersections. There is a lot of optimization possible here, but it depends on your 3D representation. This is also not per-pixel collision, but per-vertex collision.
A: I will start by answering the pros and cons of per-pixel collision detection, and then consider the mathematical aspects later.
Per-pixel collision detection, also known as pixel-perfect collision detection, and maybe more precisely image-based collision detection, is finding collisions between collision objects that are represented as images. This spatial method stands in contrast to more geometric methods, where polygons and other geometric shapes are used to represent the collision objects.
For 2D, there are generally 3 different options:
*
*Image-based
*Simple geometric shapes (axis-aligned bounding boxes, circles)
*Complex geometric shapes (convex polygons, concave polygons, ellipses, etc.)
Image-based collision detection is precise and easy to use and understand. For games that use images for drawing, using image-based collision detection means that whenever the sprites overlap on the screen, they also overlap in the collision detection system. It is also useful for games where deformable collision objects are needed, such as the destructible terrain seen in games like Worms, since there is generally little pre-computation involved. Its main drawback is that it is very inefficient compared to the other methods, especially when rotating and scaling the collision objects.
Simple geometric shapes are both simple to work with and very efficient, and they are a good choice if high precision is not needed or the collision objects fit well with simple geometric shapes (for instance, if your collision objects are balls, circles are a perfect fit, sometimes even better than images). Their main drawback is their precision. For high precision where the basic shapes don't fit, you either have to combine the simple shapes into more complex shapes, or you have to use more general and complex shapes. In either case, you end up at the third method.
Complex geometric shapes can be somewhat precise, and relatively efficient or inefficient depending on the complexity of the shape(s) used to represent a collision object. An important drawback is the ease of use. When the collision objects do not fit the available geometric shapes, either the precision has to suffer, or multiple, possibly different, shapes have to be used to represent them, which takes time. Furthermore, some of the shapes are complex and not easy to create, unless you can generate them from an image automatically. One important advantage is that rotation and scaling are generally efficient and easy, especially compared to image-based collision detection.
Image-based collision detection is generally seen as a bad solution, because it is frequently inefficient, especially when rotation and scaling are used. However, since it is so flexible, precise and easy to use, I decided to implement a library that seeks to solve the efficiency issue. The result is PoxelColl, which uses automatically precomputed convex hulls to speed up image-based collision detection. This gives ease of use, flexibility, precision and efficiency, and supports rotation and scaling. The main drawbacks are that it isn't efficient in all cases compared to the pure geometric solutions, and that it requires pre-computation, which means it isn't well suited for deformable collision objects.
For 3D, the options and advantages are somewhat similar:
*
*Volume-based
*Simple geometric shapes (axis-aligned bounding boxes, spheres)
*Complex geometric shapes (convex polyhedra, concave polyhedra, ellipsoids, etc.)
It should be noted that Peter Parker's answer is wrong for 3D; pixels (picture elements) in 2D correspond to voxels (volume elements) in 3D.
Some important differences are that the spatial method is much rarer for 3D than it is for 2D. One possible reason is that because 3D adds an extra dimension, the spatial solution becomes even less efficient, while the simple geometric solutions are still efficient. And in games, collision detection is generally an online operation, requiring some level of efficiency, making efficiency important. Volumes are therefore more often used in non-game applications where collisions do not need to be determined online.
For examples of collision detection with volume-based collision detection, see for instance Volumetric collision detection for deformable objects, where their use of volumes instead of geometric shapes means that they can handle deformable collision objects with arbitrarily shaped, closed surfaces.
As for the second question, the math involved in image-based collisions can range from simple to complex. The simple case is basically using axis-aligned bounding boxes for the images, finding their intersection, and then only check the images in the intersection. More complex solutions include the library I mentioned before, where convex polygon intersection is required. And for the 3D case, solutions range from simple to very complex.
A: The pros have already been mentioned: it's pixel-perfect and fair; there are no false positives or false negatives. The main disadvantage is that it's expensive to compute, but if you do a simple bounding-box check first, this should not be a big problem. In the age of OpenGL and DirectX there is one more problem: the sprite data are usually textures, which means they live in VRAM and you cannot easily check the pixel values yourself. In OpenGL you can use the glReadPixels function to get the intersected portion of two sprites back to RAM and check the collision, or you can use an occlusion query. The occlusion-query approach should have better performance, as you are not moving data back from the GPU, but occlusion queries are not supported everywhere (i.e. they are not supported in OpenGL ES; somebody please correct me if I am wrong).
A: Talking about OpenGL and the texture case: you could precalculate a bit matrix for the image and test whether two pixels overlap.
A: Per-pixel collision detection is a relic from the past, when graphics were simple and 2D hardware included free collision checking between sprites and backgrounds, because even basic distance calculations were computationally expensive. While today's 2D graphics are more complex, per-pixel collision checks are rarely used, especially because an object's visible shape and its collision shape are usually different. Circles or boxes are sufficient for most cases. Also, since OpenGL-based graphics hardware can't do collision checks anymore, you have to write additional rendering code, using the CPU for the sole purpose of collision checking, while keeping additional bitmap data in system memory, since graphics memory cannot be accessed directly.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Getting color of a data series from a flot chart After seeing the cool new "reputation" tab on the stackoverflow user page, I was inspired to play with the Flot charting library a little. I have a line chart that has several hundred series. Only a couple of these series will be visible at any given time. My data series are grouped into several "categories" and I assign the numeric color index based on that category. I'd like to be able to see what actual color was assigned by Flot to a particular color index value, for the ultimate purpose of creating a custom legend that relates the color to my "category" of data. How can I get these color values?
I see that I can provide my own array for colors, but I am reluctant to do this, because I am not sure how many categories I will have until I load the data. I suppose I could just create an array that's just way too big, but that seems wasteful if it's possible to ask Flot what color each series is.
A: There's an example at the bottom of http://flot.googlecode.com/svn/trunk/API.txt that does just that. Something like:
var plot = $.plot(placeholder, data, options)
var series = plot.getData();
for (var i = 0; i < series.length; ++i)
alert(series[i].color);
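Building on that, the custom legend the question asks about can be assembled from the same series objects. This is a sketch, not Flot API: Flot only supplies fields such as color and label on each series, so the category field and the #legend element here are assumptions you would wire up yourself when building your data series:

```javascript
// Sketch: turn the series array returned by plot.getData() into legend HTML.
// The "category" field is an assumption - attach it yourself to each series
// object when you build the data; Flot passes unknown fields through.
function buildLegendHtml(seriesArray) {
  var items = seriesArray.map(function (s) {
    return '<li><span style="background-color:' + s.color + '"></span> ' +
           (s.category || s.label) + '</li>';
  });
  return '<ul>' + items.join('') + '</ul>';
}

// Sample input shaped like plot.getData() output (colors are illustrative):
var sample = [
  { color: '#edc240', category: 'Group A' },
  { color: '#afd8f8', category: 'Group B' }
];
console.log(buildLegendHtml(sample));
```

In a page you would then call something like $('#legend').html(buildLegendHtml(plot.getData())) after plotting.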
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: JQuery's $ is in conflict with that of StringTemplate.Net in ASP.Net MVC I am exploring ASP.NET MVC and I wanted to add jQuery to make the site interactive. I used StringTemplate, ported to .NET, as my template engine to generate HTML and to send JSON. However, when I viewed the page, I could not see it. After debugging, I realized that the $ is used by StringTemplate to access properties, etc., and jQuery uses it too, to manipulate the DOM. Gee, I've looked at other template engines and most of them use the dollar sign :(.
Any alternative template engine for ASP.NET MVC? I want to retain jQuery because MSFT announced that it will be used in Visual Studio (2008?)
Thanks in Advance :)
Update
Please go to the answer in ASP.NET MVC View Engine Comparison question for a comprehensive list of Template engine for ASP.NET MVC, and their pros and cons
Update 2
At the end I'll just put the JavaScript code, including JQuery, in a separate script file, hence I wouldn't worry about the $ mingling in the template file.
Update 3
Changed the Title to reflect what I need to resolve. After all "The Best X in Y" is very subjective question.
A: JQuery can be disambiguated by using the jQuery keyword like this:
jQuery(
instead of this:
$(
I would consider this a best practice. It eliminates any possibility of clashing with another library, and makes the code more readable.
A: Perhaps jQuery.noConflict will work for you
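For illustration, here is the aliasing idea behind jQuery.noConflict() as a runnable sketch. The jQuery and $ values below are stand-ins so the snippet runs anywhere; in a real page, jQuery is the global the library defines, and jQuery.noConflict() hands $ back to its previous owner so you can keep using the alias:

```javascript
// Stand-ins so this sketch runs anywhere. StringTemplate's $ is only a
// template delimiter, but the same shadowing trick guards against any
// JavaScript library that claims the global $.
var jQuery = function (selector) { return 'jQuery selected ' + selector; };
var $ = 'claimed-by-another-library';

// Inside this closure, $ safely means jQuery; outside, it is untouched.
(function ($) {
  console.log($('#content')); // "jQuery selected #content"
})(jQuery);

console.log($); // still "claimed-by-another-library"
```

With real jQuery, the equivalent is var $j = jQuery.noConflict(); and then using $j (or the closure above) everywhere.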
A: Have a look at the mvccontrib project. They have 4 different view engines at the moment which are brail, nhaml, nvelocity and xslt.
http://www.codeplex.com/MVCContrib
A: In case you want to stick with StringTemplate (ST) see this article from the ST wiki. You may also change the behaviour totally by editing Antlr.StringTemplate.Language\DefaultTemplateLexer.cs and replacing the "$" with what you want.
A: I really like the syntax in Django, so I recommend NDjango :)
A: You can of course move your js logic into a .js file. But if you want it inline with your StringTemplate views, you can escape it using the \$ construct.
In addition, you can simply use the jQuery("selector"), instead of $("selector") construct if you want to avoid the escaping syntax.
Here's a good article on using StringTemplate as a View Engine in MVC.
There's also an accompanying OpenSource engine, along with some samples.
Also, as mentioned above, you can modify your Type Lexer. (make it an alternate character to the $).
A: I would highly recommend Spark. I've been using it for awhile now with jQuery and haven't ran into a single issue so far.
A: Have you tried $$ or /$ to escape the dollar signs in string template? I'm not sure about ST specifically but thats how most template engines work.
As for other templating engines, I really loved nVelocity when I used it on a project.
A: JsonFx.NET has a powerful client-side templating engine with familiar ASP.NET style syntax. The entire framework is specifically designed to work well with jQuery and ASP.NET MVC. You can get examples of how to build real world UI from: http://code.google.com/p/jsonfx-examples/
A: I've been using ANTLR StringTemplate for ASP.NET MVC project. However what I did was to extend the StringTemplate grammar (template.g) to recognize '%' (aspx.template.g) as delimiters. You can find these files if you download the StringTemplate.net version. I generated the corresponding files: AspxTemplateLexer.cs, AspxTemplateParser.cs, AspxTemplateParserTokenTypes.cs and AspxTemplateParserTokenTypes.txt.
In addition I altered StringTemplateLoader.cs to recognize the extensions .aspx and .ascx which Visual Studio recognizes. This way I am not stuck with the .st extension and clients don't know the difference.
Anyway, after rebuilding StringTemplate I have the behavior that I want. What I like about StringTemplate is that it does NOT permit ANY code to be embedded in the template. It looks like Spark, like the default ASP/MVC templates, is code-permissive, which makes the templates less portable.
I would prefer "<%" and "%>" as delimiters, but unfortunately the ANTLR grammar seems somewhat difficult and fragile to alter unless someone else has done it. On the other hand, StringTemplate has a great support community and a great approach to separation -- which is the point of MVC.
A: You could try jsRepeater.
A: You may need this .NET Template Engine. If you wish to use '$' character, simply use '$$'. See the code below:
{%carName = "Audi R8"/}
{%string str = "This is an $carName$"/}
$str$
$$str$$
the output will be
This is an Audi R8
$str$
A: If I understand StringTemplate version 4 correctly you can define your own escape char in Template (or TemplateGroup) constructor.
A: Found Mustache to be the most fool-proof, easiest-to-use, lightest full-featured templating engine for .Net projects (Web and backend)
Works well with .Net 3.5 (meaning it does not need dynamic type and .Net 4.0 to work for mixed type models, like Razor).
The part that I like the most is ability to nest arbitrary IDicts within and have the engine do the right thing. This makes the mandatory-for-all engines reboxing step super-simple:
var child = new {
nested = "nested value"
};
var parent = new {
SomeValue = "asdfadsf"
, down = child
, number = 123
};
var template = @"This is {{#down}}{{nested}}{{/down}}. Yeah to the power of {{number}}";
string output = Nustache.Core.Render.StringToString(template,parent);
// output:
// "This is nested value. Yeah to the power of 123"
What's most beautiful about Mustache is that same exact template works exactly same in pure JavaScript or any other of 20 or so supported languages.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: How do I connect to an .mdf (Microsoft SQL Server Database File) in a simple web project? Specifically, in VS 2008, I want to connect to a data source that you can have by right-clicking on the automatically-generated App_Data folder (an .mdf "database"). Seems easy, and it is once you know how.
A: So here's the answer from MSDN:
Choos[e] "Add New Data Source" from the
Data menu.[And follow the connection wizard]
Very easy, except that I have no Data menu. If you don't have a Data menu, do the following:
*
*Click on Tools >> Connect to Database...
*Select "Microsoft SQL Server Database File", take the default Data provider, and click OK
*On the next screen, browse to your Database file, which will be in your VS Solution folder structure somewhere.
Test the connection. It'll be good. If you want to add the string to the web.config, click the Advanced button, and copy the Data Source line (at the bottom of the dialog box), and paste it into a connection string in the appropriate place in the web.config file. You will have to add the "AttachDbFilename" attribute and value. Example:
The raw text from the Advanced panel:
Data Source=.\SQLEXPRESS;Integrated Security=True;Connect Timeout=30;User Instance=True
The actual entry in the web.config:
<add name="SomeDataBase" connectionString="Data Source=.\SQLEXPRESS;
AttachDbFilename=C:\Development\blahBlah\App_Data\SomeDataFile.mdf;
Integrated Security=True; Connect Timeout=30; User Instance=True" />
A: Just one more -- I've always kept a udl file on my desktop to easily create and test connection strings. If you've never done it before: create a new text file and rename it to connection.udl (the extension is the only important part). Open the file, start on the Provider tab and work your way through. Once you're happy with the connection, rename the file giving it a .txt extension. Open the file and copy the string -- it's relatively easy and lets you test the connection before using it.
A: <add name="Your Database" connectionString="metadata=res://*/Model1.csdl|res://*/Model1.ssdl|res://*/Model1.msl;provider=System.Data.SqlClient;provider connection string="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Expanse.mdf;Integrated Security=True;User Instance=True;MultipleActiveResultSets=True"" providerName="System.Data.EntityClient"/>
A: A great resource I always keep around is connectionstrings.com.
It's really handy for finding these connection strings when you can't find an example.
Particularly this page applied to your problem
Attach a database file on connect to a local SQL Server Express instance
Driver={SQL Native Client};Server=.\SQLExpress;AttachDbFilename=c:\asd\qwe\mydbfile.mdf; Database=dbname;Trusted_Connection=Yes;
A: In your Login.aspx.cs (the code-behind file for your login page, in the submit button click event) add:
// Either build the connection string inline:
string constr = @"Data Source=(LocalDB)\v11.0; AttachDbFilename=|DataDirectory|\myData.mdf; Integrated Security=True; Connect Timeout=30;";

// ...or read it from web.config:
string constr = ConfigurationManager.ConnectionStrings["myData"].ToString();

using (SqlConnection conn = new SqlConnection(constr))
{
    string sqlQuery = "Your query here";
    SqlCommand com = new SqlCommand(sqlQuery, conn);
    com.Connection.Open();
    string strOutput = (string)com.ExecuteScalar();
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: mysql_real_escape_string() leaving slashes in MySQL I just moved to a new hosting company and now whenever a string gets escaped using:
mysql_real_escape_string($str);
the slashes remain in the database. This is the first time I've ever seen this happen so none of my scripts use
stripslashes()
anymore.
This is on a CentOS 4.5 64bit running php 5.2.6 as fastcgi on a lighttpd 1.4 server. I've ensured that all magic_quotes options are off and the mysql client api is 5.0.51a.
I have the same issue on all 6 of my webservers.
Any help would be appreciated.
Thanks.
Edit:
Magic Quotes isn't on. Please don't recommend turning it off. THIS IS NOT THE ISSUE.
A: I can think of a number of things that could cause this, but it depends on how you are invoking SQL queries. If you have moved to parameterized queries, as with PDO, then escaping is unnecessary, which means the call to mysql_real_escape_string is adding the extra slashes.
If you are using mysql_query etc., then there must be some code somewhere, like addslashes(), that is doing this. It could be before the data goes into the database, or after it comes out.
Also you say you have disabled magic quotes... if you haven't already, just do a hard check in the code with something like this:
echo htmlentities($_GET['value']); // or $_POST, whichever is appropriate
Make sure there are no slashes in that value, then also check this:
echo "Magic quotes is " . (get_magic_quotes_gpc() ? "ON" : "OFF");
I know you've said multiple times it isn't magic quotes, but for us guys trying to help we need to be sure you have checked the actual PHP output rather than just changing the config (which might not have worked).
A: it sounds as though you have magic quotes turned on. Turning it off isn't too hard: just create a file in your root directory called .htaccess and put this line in it:
php_flag magic_quotes_gpc off
If that's not possible for whatever reason, or you want to change your application to be able to handle magic quotes, use this technique:
Instead of accessing the request variables directly, use a function instead. That function can then check if magic quotes is on or off and strip out slashes accordingly. Simply running stripslashes() over everything won't work, because you'll get rid of slashes which you actually want.
function getVar($key) {
if (get_magic_quotes_gpc()) {
return stripslashes($_POST[$key]);
} else {
return $_POST[$key];
}
}
$x = getVar('x');
Now that you've got that, all your incoming variables are ready to be escaped again and mysql_real_escape_string() won't stuff them up.
A:
the slashes remain in the database.
It means that your data gets double escaped.
There are 2 possible reasons:
*
*magic quotes are on, despite what you think. Double-check it
*There is some code in your application that just mimics magic quotes' behaviour, escaping all input.
It is a very common misconception that a general escaping function can "protect" all incoming data. Not only does it do no good at all, it is also responsible for cases like this.
If so, just find that function and wipe it out.
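The double-escaping effect is easy to reproduce without a database. This sketch uses JavaScript purely for illustration (a simplified addslashes-style escaper standing in for magic quotes and mysql_real_escape_string()); the point is what happens when two layers each escape the same value:

```javascript
// Simplified addslashes-style escaper: prefixes backslashes and quotes
// with a backslash, like magic quotes or mysql_real_escape_string() would.
function addSlashes(s) {
  return s.replace(/([\\'"])/g, '\\$1');
}

var input = "O'Reilly";
var escapedOnce  = addSlashes(input);        // what the database driver needs
var escapedTwice = addSlashes(escapedOnce);  // magic quotes + manual escaping

console.log(escapedOnce);   // O\'Reilly   -> stored in the DB as O'Reilly
console.log(escapedTwice);  // O\\\'Reilly -> stored in the DB as O\'Reilly (bad!)
```

The second case is exactly the "slashes remain in the database" symptom: the database unwraps one level of escaping, and the leftover slashes from the second level are stored as literal data.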
A: The host that you've moved to probably has magic_quotes_runtime turned on. You can turn it off with set_magic_quotes_runtime(0).
Please turn off magic_quotes_runtime, and then change your code to use bind variables, rather than using the string escaping.
A: You must probably have magic quotes turned on. Figuring out exactly how to turn it off can be quite a headache in PHP. While you can turn off magic quotes with set_magic_quotes_runtime(0), it isn't enough -- Magic quotes has already altered the input data at this point, so you must undo the change. Try with this snippet: http://talks.php.net/show/php-best-practices/26
Or better yet -- Disable magic quotes in php.ini, and any .htaccess files it may be set in.
A: I am not sure if I understand the issue correctly, but I had the very same problem. No matter what I did, the slashes were there when the string got escaped. Since I needed the inserted value to be in the exact same format as it was entered, I used
htmlentities($inserted_value)
this will leave all inserted quote marks unescaped but harmless.
A: What might be the problem (it was with us) is that you use mysql_real_escape_string() multiple times on the same variable. When you use it multiple times, it will add the extra slashes.
A: mysql_real_escape_string($str); is supposed to do exactly that: it is meant to add backslashes to special characters, especially when you want to pass the query to MySQL. Take note that it also takes the character set of MySQL into account.
For safer coding practices it would be good to edit your code and use stripslashes() to read out the data and remove the slashes.
A: Function below will correctly remove slashes before inserting into the database. I know you said magic quotes isn't on but something is adding slashes so try the following page and see the output. It'll help figure out where. Call with page.php?var=something-with'data_that;will`be|escaped
You will most likely see number three outputting more slashes than needed.
*Change the db details too.
<?php
$db = mysql_connect('host', 'user', 'pass');
$var = $_REQUEST['var'];
echo "1: $var :1<br />";
echo "2: ".stripslashes($var)." :2<br />";
echo "3: ".mysql_real_escape_string($var)." :3<br />";
echo "4: ".quote_smart($var)." :4<br />";
function quote_smart($value)
{
// Stripslashes is gpc on
if (get_magic_quotes_gpc())
{
$value = stripslashes($value);
}
// Quote if not a number or a numeric string
if ( !is_numeric($value) )
{
$value = mysql_real_escape_string($value);
}
return $value;
}
?>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: php-cgi runs as root I run php 5.2.6 as a cgi under lighttpd 1.4 and for some reason it's always running as root. All php-cgi processes are owned by root and all files written to the file system are owned by root.
I've tried setting the user in lighttpd as non-privileged, and confirmed it's running right; it's just php that runs as root.
How would I set php-cgi to run as a safer user?
A: *
*Ensure:
server.username = "nonprivuser"
server.groupname = "nonprivgroup"
*Stop lighttpd.
*Check for existing php processes and kill them.
*Start lighttpd.
*Check that the php processes are running as the non-privileged user.
If php is still running as root, then you possibly have a SETUID script somewhere loading them (you really shouldn't, but it's feasible).
If this is the case, check that the file 'bin-path' refers to doesn't have anything funky in it.
A: It is possible that you have a fastcgi process that was started on the server as root. If this is the case, then the fastcgi process will continue to run php processes called from lighttpd.
I suggest killing the fastcgi processes on your server and restarting lighttpd.
You might also want to take a look at any startup scripts that might launch the fastcgi daemon.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What Could Cause Intermittent Issues with Images Loading in Internet Explorer 6? I am having issues with a website that I am working on in which images and background-images fail to load in Internet Explorer 6.
Here is an example of a page on which you might experience this issue:
Example Page
So far I have looked at the following possible issues and pretty much ruled them out:
*
*XML/Extraneous data in the image files (google photoshop 7 internet explorer)
*Corrupt image files
I have not ruled out invalid markup.
I have noticed that there are validation errors in most of the pages where this problem has been reported and I am working on getting those fixed where appropriate.
The behavior I see is that the page will load and all elements other than the background image render. There are no javascript errors thrown. When using Fiddler, no request for the image is made. If the browser is pointed directly to the background-image, the cache is cleared and then the browser is pointed back at the HTML page, the background-image will load inside the HTML page.
Does anyone have any additional suggestions for ways to attack this issue?
A: Twice now I've had people have problems with photos not showing up, and it was because they were in an incorrect colorspace, using CMYK instead of RGB.
A: this is a weird issue with IE6. I just right click on the image and select "Show Picture" then the image loads properly.
A: I'm looking at this in IE6 and trying to replicate the problem, but I can't seem to get it to happen - it always seems to load.
Some thoughts on things to try, though: as there appear to be two other classes that the background assignment is competing with, try adding !important to it, so:
div.gBodyContainer {
background-image: url(/etc/medialib/europe/about_infiniti/environment.Par.7366.Image.964.992.direct.jpg) !important;
}
Another thing to try is getting rid of all the . in the filename and cut down the length of it, shouldn't matter, but it may be causing some problems, doesn't hurt to try it anyway.
The other thing you could try is making gBodyContainer an ID instead of a class, or give it an ID as well as a class and assign the background to the ID. Again, it shouldn't matter, but it doesn't hurt to try and see if it works, IE6 does a lot of funny things.
A: Is it only IE6 and not IE7 too? IE is sometimes pretty strict with HTML, whereas Firefox lets you get away with more. Not sure if this helps, but I just debugged some weird IE6/7 bugs by slowly taking away content. If it's intermittent, though -- as in it happens with the same code on and off -- that's a really weird one.
A: The problem is the "IE6" part ;-)
A: I think in some cases you could solve this issue by preloading the full-size image before the request and hiding it with style display: none; so IE6 will load the image from the cache.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What library should be included to use TransparentBlt? What library should be included to use TransparentBlt?
This is VC98 (Visual Studio 6) linking to the Gdi32.lib. (Other GDI functions such as BitBlt link as expected), and the compilers compiles with out error or warning.
Even though the Gdi32.lib is included, yet the linker returns this error:
mtcombo.obj : error LNK2001: unresolved external symbol __imp__TransparentBlt@44
C:\Work\Montel\Targ2_12\guitest.exe : fatal error LNK1120: 1 unresolved externals
What am I missing?
A: AFAIK, you will need the Msimg32.lib
http://msdn.microsoft.com/en-us/library/ms532303(VS.85).aspx
A: Msimg32.lib
FYI you can search the functions on http://msdn.microsoft.com/library and at the bottom it will tell you what library you need.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to continue executing other processes in command line window after breaking a running process using C# I have written a C# command window application. I'm running a bunch of processes on the command line inside main(). For example:
void main()
{
    process p1 = new process();
    // set p1 properties
    p1.Start();
    // --> the line in question:
    p1.StandardInput.WriteLine("start /WAIT cmd.exe /c BUILD -cZP");
}
This line will execute some program in a new command window. While that last line is executing I will break the execution using Ctrl+C and return control to the main program, which then loops through and writes output to the execution window:
p1.StandardInput.WriteLine("Done some action");
p1.WaitForExit();
p1.Close();
The above three lines are not executed. The problem is that p1 never closes, so the following lines in my program never run:
process p2 = new process();
...
p2.WaitForExit();
p2.Close();
Any insight into the above would be great. Thanks.
A: If I understand you correctly (which I admit I may not be understanding you), I believe the problem is that when you press CTRL-C to break into process p1, you are actually killing that process. Then you are trying to send text to the standard input for the process, which has just been killed. Since the process is no longer available to take your input, the main program hangs. That's my best guess.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Parsing and generating Microsoft Office 2007 files (.docx, .xlsx, .pptx) I have a web project where I must import text and images from a user-supplied document, and one of the possible formats is Microsoft Office 2007. There's also a need to generate documents in this format.
The server runs CentOS 5.2 and has PHP/Perl/Python installed. I can execute local binaries and shell scripts if I must. We use Apache 2.2 but will be switching over to Nginx once it goes live.
What are my options? Anyone had experience with this?
A: The python docx module can generate formatted Microsoft Office docx files from pure Python. Out of the box, it does headings, paragraphs, tables, and bullets, but the makeelement() function can be extended to handle arbitrary elements like images.
from docx import *
document = newdocument()
# This location is where most document content lives
docbody = document.xpath('/w:document/w:body',namespaces=wordnamespaces)[0]
# Append two headings
docbody.append(heading('Heading', 1))
docbody.append(heading('Subheading', 2))
docbody.append(paragraph('Some text'))
A: I have successfully used the OpenXML Format SDK in a project to modify an Excel spreadsheet via code. This would require .NET and I'm not sure about how well it would work under Mono.
A: You can probably check the code for Sphider. It indexes docs and PDFs, so I'm sure it can read them. Might also lead you in the right direction for other Office formats.
A: The Office 2007 file formats are open and well documented. Roughly speaking, all of the new file formats ending in "x" are zip compressed XML documents. For example:
To open a Word 2007 XML file:
*
*Create a temporary folder in which to store the file and its parts.
*Save a Word 2007 document, containing text, pictures, and other elements, as a .docx file.
*Add a .zip extension to the end of the file name.
*Double-click the file. It will open in the ZIP application. You can see the parts that comprise the file.
*Extract the parts to the folder that you created previously.
The other file formats are roughly similar. I don't know of any open source libraries for interacting with them as yet - but depending on your exact requirements, it doesn't look too difficult to read and write simple documents. Certainly it should be a lot easier than with the older formats.
If you need to read the older formats, OpenOffice has an API and can read and write Office 2003 and older documents with more or less success.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: How can I make my ad hoc iPhone application's icon show up in iTunes? I've got an iPhone app with icon file Icon.png.
This icon shows up properly when the app is on the phone itself, but it doesn't show up in the applications pane in iTunes.
What do I need to do to get it to show up properly?
A: In order to make it easier for those arriving at this post, here are the actual instructions (straight from the blog post linked from the accepted answer).
There has been some talk on twitter about how to create your own IPA file for your iPhone app, so I thought I would give the instructions that I have used to build an IPA before. Enjoy.
*
*Create a folder on your desktop called “working”. Open that and create another folder inside of it called “Payload” (case-sensitive)
*Move your iTunesArtwork file into the “working” folder and your .app into the Payload folder.
*Open Terminal and run the following command: chmod -R 775 ~/Desktop/working/Payload
*Go into your ProgName.app folder within Payload.
*Double-click the Info.plist file. Make sure there is an item called: SignerIdentity with a value of: Apple iPhone OS Application Signing. If there is not, add it.
*Zip it all up. Zip the iTunesArtwork and Payload folder. (So zip up what is inside of the working folder)
*Rename the zip file to have the name you want, and the extension of ipa.
*Double click to install with iTunes
A: The cleanest way to do this is described in the official Apple documentation, in a section called Publishing Applications for Testing. Below is the exact instructions given to you on that page:
The iTunes artwork your testers see should be your application’s icon. This artwork must be a 512 x 512 JPEG or PNG file named iTunesArtwork. Note that the file must not have an extension.
After generating the file of your application’s icon, follow these steps to add it to your application:
*
*Open your project in Xcode.
*In the Groups & Files list, select the Resources group.
*Choose Project > Add to Project, navigate to your iTunesArtwork file, and click Add.
*In the dialog that appears, select the ”Copy items” option and click Add.
Note that the PNG or JPEG file is just 'iTunesArtwork', with no suffix.
If you try to copy the file into the application bundle after you have built it, it will break the app signing, and you will get a verification error when trying to sync it to your device. Ensure that the artwork file is included in the "Copy Bundle Resources" folder, within your project's target in XCode (step 4, above).
A: Actually, it is possible to provide iTUnes icons for iPhone software released as ad-hoc. See this blog post for more information.
A: Create a 512x512 png of your icon, name it "iTunesArtwork" (no extension, no quotes) and add it to your project under Resources. Then build.
More details here:
http://developer.apple.com/library/ios/#documentation/Xcode/Conceptual/ios_development_workflow/000-Introduction/introduction.html
A: I'll just add my recent experience. I had fooled around trying to get my ad hoc app to show up in iTunes with an icon (strictly, iTunesArtwork). Finally, I was convinced I had followed the instructions to a 'T' but it still wouldn't show up in the grid view. However, my artwork was properly displayed in the Cover Flow view. I deleted and reinstalled my app from/to iTunes to no avail. Then I quit iTunes and restarted - and, voila! - my artwork was correct in all places. It appears there is some kind of caching that is not reset in Grid view.
A: If you see a black square instead of your icon in iTunes, be sure the file type of iTunesArtwork in Xcode isn't "image.png". If it is, the CopyPNGFile step in the copy resources build phase will mangle the file, leaving it invalid for iOS devices.
A: *
*Open your project in Xcode.
*Copy the iTunesArtwork.png file into the project folder.
*Rename iTunesArtwork.png to remove the .png extension, leaving just iTunesArtwork.
*Generate build.
You can see the image in iTunes.
A: The application icon only shows up in iTunes if your app is distributed through the app store.
I assume you are asking about a developer or ad hoc build. Those get the default black "A" icon.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44"
} |
Q: Writing ID3v2 Tag parsing code, need Good examples to test I am writing software to parse ID3v2 tags in Java. I need to find some files with good examples of the tag with lots of different frames. Ideally the tags will contain an embedded picture because that is what is kicking my butt right now.
Does anyone know where I can find some good free (legal) ID3v2 tagged files (ID3v2.2 and ID3v2.3)?
A: You can create the example files with a tagger by yourself. I'm the author of the Windows freeware tagger Mp3tag which is able to write ID3v2.3 (UTF-16 and ISO-8859-1) and ID3v2.4 tags in UTF-8 (both along with APIC frames). You can find a list of supported frames here.
To create ID3v2.2 tags, I think the only program out there is iTunes, which interprets the ID3 spec in its very own way and writes numerous iTunes-specific frames that are not in the spec.
A: This may be obvious and not what you are seeking, but what about ripping some of your legally obtained CDs and editing them using iTunes? iTunes would also allow you to add an embedded picture. There are of course many open source programs that will also do this.
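If you do end up generating test files yourself, the tag header is simple enough to build and check by hand. Here's a sketch in Python (your parser is Java, so treat this as executable pseudocode) of the 10-byte ID3v2 header with its syncsafe size field:

```python
def syncsafe_encode(n):
    # A 28-bit value packed into 4 bytes, high bit of each byte clear
    return bytes(((n >> shift) & 0x7F) for shift in (21, 14, 7, 0))

def syncsafe_decode(data):
    return (data[0] << 21) | (data[1] << 14) | (data[2] << 7) | data[3]

def id3v2_header(tag_size, major=3):
    # "ID3" magic, major/revision version bytes, flags, syncsafe tag size
    return b"ID3" + bytes([major, 0, 0]) + syncsafe_encode(tag_size)
```

An attached picture then goes in an APIC (v2.3/v2.4) or PIC (v2.2) frame inside the tag body, which is the part most hand-rolled parsers trip over.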
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Example code for Resizing an image using DirectX I know it is possible, and a lot faster than using GDI+. However I haven't found any good example of using DirectX to resize an image and save it to disk. I have implemented this over and over in GDI+, thats not difficult. However GDI+ does not use any hardware acceleration, and I was hoping to get better performance by tapping into the graphics card.
A: You can load the image as a texture, texture-map it onto a quad and draw that quad in any size on the screen. That will do the scaling. Afterwards you can grab the pixel-data from the screen, store it in a file or process it further.
It's easy. The basic texturing DirectX examples that come with the SDK can be adjusted to do just this.
However, it is slow. Not the rendering itself, but the transfer of pixel data from the screen to a memory buffer.
Imho it would be much simpler and faster to just write a little code that resizes an image using bilinear scaling from one buffer to another.
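For reference, here is what such bilinear scaling can look like, sketched in Python rather than C# (flat, row-major grayscale buffer; the names are illustrative):

```python
def bilinear_resize(src, sw, sh, dw, dh):
    # src: flat row-major list of grayscale pixels, sw x sh in size
    dst = []
    for dy in range(dh):
        for dx in range(dw):
            # map the destination pixel back into source space
            fx = dx * (sw - 1) / (dw - 1) if dw > 1 else 0.0
            fy = dy * (sh - 1) / (dh - 1) if dh > 1 else 0.0
            x0, y0 = int(fx), int(fy)
            x1, y1 = min(x0 + 1, sw - 1), min(y0 + 1, sh - 1)
            tx, ty = fx - x0, fy - y0
            # interpolate horizontally on both rows, then vertically
            top = src[y0 * sw + x0] * (1 - tx) + src[y0 * sw + x1] * tx
            bot = src[y1 * sw + x0] * (1 - tx) + src[y1 * sw + x1] * tx
            dst.append(top * (1 - ty) + bot * ty)
    return dst
```

The same inner loop translates directly to C# over the pixel buffer of a locked GDI+ bitmap, one pass per color channel.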
A: Do you really need to use DirectX? GDI+ does the job well for resizing images. In DirectX, you don't really need to resize images, as most likely you'll be displaying your images as textures. Since textures can only be applied to 3d objects (triangles/polygons/meshes), the size of the 3d object and viewport determines the actual image size displayed. If you need to scale your texture within the 3d object, just play with the texture coordinates or the texture matrix.
To manipulate the texture, you can use alpha blending, masking and all sorts of texture manipulation techniques, if that's what you're looking for. To manipulate individual pixels like GDI+, I still think GDI+ is the way to go. DirectX was never meant for image manipulation.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Apache POI HWPF - Output a table to Microsoft Word I've been Googling for quite awhile and haven't found a definitive answer. Is it possible to output a table using Apache POI? It looks like it hasn't been implemented, since the main developer stopped working on it like 5 years ago.
Is there an open source alternative to POI that can do this?
A: I think you're right in that Apache POI is dead in the water. Clearly it wasn't glamourous enough.
The only alternative that I'm aware of is iText, which can generate RTF documents, which MS Word (and every other similar application) can read. It includes full table support.
And, of course, iText can generate PDF also.
A: If docx and java are both ok for you, try docx4j
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Hooking syscalls from userspace on Windows I'm patching connect() to redirect network traffic as part of a library (NetHooker) and this works well, but it depends on ws2_32.dll remaining the same and doesn't work if the syscall is used directly. So what I'm wondering is if there's a way to catch the syscall itself without a driver. Anyone know if this is possible?
A: Cody,
Instead of system call hooking, you might want to look into writing a layered service provider.
http://www.microsoft.com/msj/0599/LayeredService/LayeredService.aspx
A: Cody, maybe you could take a look at http://research.microsoft.com/en-us/projects/detours/
Also, I wrote some code that, given the name of a dll export, will redirect it to another function pointer by patching the image in memory; let me know if you want the code.
A: Apart from Detours library, you might also take a look at easyhook library. Both libraries are designed to patch the image in memory.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Direct3D Camera aspect ratio/scaling problem I'm using SlimDX/C# to write a Direct3D application. I configured the camera as per textbox way:
private float cameraZ = 5.0f;
camera = new Camera();
camera.FieldOfView =(float)(Math.PI/2);
camera.NearPlane = 0.5f;
camera.FarPlane = 1000.0f;
camera.Location = new Vector3(0.0f, 0.0f, cameraZ);
camera.Target = Vector3.Zero;
camera.AspectRatio = (float)InitialWidth / InitialHeight;
The drawing and rotation use the usual Matrix.RotationYawPitchRoll and mesh.DrawSubset(0). Everything else appears normal.
My problem is that my 3d mesh (a thin square box), when looked at from the side, appears thicker standing vertically than lying horizontally. I've tried changing the AspectRatio to 1; it's worse. So through trial and error, I found that it looks much more normal when the AspectRatio is around 2.6. Why is that, and what could be wrong?
A: I've figured out the problem and answer already.
Apparently I did scale the mesh to match the aspect ratio, and I applied Matrix.Scaling after Matrix.RotationYawPitchRoll. When I rotated the mesh to face forward, I realized it looked the same whether vertical or horizontal: the scaling was stretching it sideways no matter how I rotated. Swapping the two matrices fixed my problem.
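To see why the order matters, here is a tiny numeric sketch (plain Python with 2x2 matrices in column-vector convention; SlimDX uses row vectors, so the multiplication order is mirrored there, but the principle is the same): a non-uniform scale applied after a rotation stretches along the fixed world axes, while scaling first stretches along the object's own axes.

```python
import math

def mat_mul(a, b):
    # 2x2 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(m, v):
    # apply a 2x2 matrix to a column vector
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

theta = math.pi / 2  # rotate 90 degrees
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta), math.cos(theta)]]
S = [[2, 0], [0, 1]]  # stretch x by 2

v = [1, 0]  # unit vector along the object's x axis
after = apply(mat_mul(S, R), v)   # scale applied after rotation
before = apply(mat_mul(R, S), v)  # scale applied before rotation
```

Here after comes out as roughly [0, 1] (the rotated vector escapes the world-x stretch entirely), while before is roughly [0, 2] (the stretch follows the object), which is the behavior the matrix swap restores.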
Thanks anyway
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: C#/.NET, what to look at? I see lots of job ads for C#/.NET programmers, so I thought it could be a good idea to have a look at it.
After looking at a few tutorials I found nothing really new to me. Just a language with a syntax somewhere between Java and C++ (arguably nicer than both though).
So, what features in particular should I look at? What are some special features? What's the reason that C#/.NET is so large? What are some killer features or perhaps some really evil language gotchas?
Links and code examples are very welcome.
I am using the Mono-implementation on Linux.
A: The .Net Framework library is more important than the language.
A: Compared with Java:
*
*The "using" statement (try/finally is rarely explicit in C#) (C# 1)
*Delegates as a first class concept (C# 1)
*Properties and events as first class concepts (C# 1)
*User-defined value types (C# 1)
*Operator overloading (use with care!) (C# 1)
*Iterator blocks (C# 2)
*Generics without type erasure (C# 2)
*Anonymous methods (C# 2)
*Partial types (good for code generation) (C# 2)
*Object and collection initializers (C# 3)
*Lambda expressions (C# 3)
*Extension methods (C# 3)
*Expression trees (C# 3)
*Query expressions (aka query comprehensions) (C# 3)
*Anonymous types (mostly used in query expressions) (C# 3)
They're the things I miss from C# when I write in Java, anyway. (That's not an exhaustive list of differences, of course.) Which ones are most important to you is subjective, of course. From a simple "getting things done" point of view the using statement is probably the single biggest pragmatic gain, even though it only builds a try/finally block for you.
EDIT: For quick examples of the C# 2 and 3 features, you might want to look at my Bluffer's Guide to C# 2 and the equivalent for C# 3.
A: Killer feature: super fast Windows programming with Visual Studio.
A: In C# 3.0 Linq (Language Integrated Query) is worth looking at.
A: Exception handling, garbage collection, reflection, a unified type system, machine architecture independence and performance are the main advantages the .NET CLR. The Base Class Libraries are quite comprehensive and comprehensible. Both C# and VB.NET are first class languages for building applications on this platform. Consider learning both.
A: You can find some of the not so obvious features here
Hidden Features of C#?
And yes, the framework is the largest selling point.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Is there a way to prevent a SystemExit exception raised from sys.exit() from being caught? The docs say that calling sys.exit() raises a SystemExit exception which can be caught in outer levels. I have a situation in which I want to definitively and unquestionably exit from inside a test case, however the unittest module catches SystemExit and prevents the exit. This is normally great, but the specific situation I am trying to handle is one where our test framework has detected that it is configured to point to a non-test database. In this case I want to exit and prevent any further tests from being run. Of course since unittest traps the SystemExit and continues happily on it's way, it is thwarting me.
The only option I have thought of so far is using ctypes or something similar to call exit(3) directly but this seems like a pretty fugly hack for something that should be really simple.
A: As Jerub said, os._exit(1) is your answer. But, considering it bypasses all cleanup procedures, including finally: blocks, closing files, etc, it should really be avoided at all costs. So may I present a safer(-ish) way of using it?
If your problem is SystemExit being caught at outer levels (i.e., unittest), then be the outer level yourself! Wrap your main code in a try/except block, catch SystemExit, and call os._exit() there, and only there! This way you may call sys.exit normally anywhere in the code, let it bubble out to the top level, gracefully closing all files and running all cleanups, and then calling os._exit.
You can even choose which exits are the "emergency" ones. The code below is an example of such approach:
import sys, os
EMERGENCY = 255 # can be any number actually
try:
    # wrap your whole code here ...
    # ... some code
    if x: sys.exit()
    # ... some more code
    if y: sys.exit(EMERGENCY)  # use only for emergency exits
    ...  # yes, this is valid python!
    # Might instead wrap all code in a function.
    # It's a common pattern to exit with main's return value, if any:
    sys.exit(main())
except SystemExit as e:
    if e.code != EMERGENCY:
        raise  # normal exit, let unittest catch it at the outer level
    else:
        os._exit(EMERGENCY)  # try to stop *that*!
As for e.code that some readers were unaware of, it is documented, as well as the attributes of all built-in exceptions.
A: You can call os._exit() to directly exit, without throwing an exception:
import os
os._exit(1)
This bypasses all of the python shutdown logic, such as the atexit module, and will not run through the exception handling logic that you're trying to avoid in this situation. The argument is the exit code that will be returned by the process.
A: You can also use quit, see example below:
while True:
    print('Type exit to exit.')
    response = input()
    if response == 'exit':
        quit(0)
    print('You typed ' + response + '.')
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "74"
} |
Q: python dictionary update method I have a list of strings called tag.
I am trying to initialize a dictionary with the keys as the tag strings and the values as the array indices.
for i, ithTag in enumerate(tag):
    tagDict.update(ithTag=i)
The above returns {'ithTag': 608} (608 being the last index).
My problem is that while the i is being interpreted as a variable, Python is treating the "ithTag" as a string instead of a variable.
I'm confused; it is kind of hard to google these kinds of specific questions. I hope I worded the title of this question correctly.
Thanks!
A: You actually want to do this:
for i, t in enumerate(tag):
    tagDict[t] = i
The .update() method is used for updating a dictionary using another dictionary, not for changing a single key/value pair.
A: I think this is what you want to do:
d = {}
for i, t in enumerate(tag):
    d[t] = i
A: Try
tagDict[ithTag] = i
A: I think what you want is this:
for i, ithTag in enumerate(tag):
    tagDict.update({ithTag: i})
A: If you want to be clever:
tagDict.update(map(reversed, enumerate(tag)))
Thanks to Brian for the update. This is apparently ~5% faster than the iterative version.
(EDIT: Thanks saverio for pointing out my answer was incorrect (now fixed). Probably the most efficient/Pythonic way would be Torsten Marek's answer, slightly modified:
tagDict.update((t, i) for (i,t) in enumerate(tag))
)
A: It's a one-liner:
tagDict = dict((t, i) for i, t in enumerate(tag))
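On Python 2.7 and newer, the same one-liner can be written as a dict comprehension:

```python
tag = ["red", "green", "blue"]  # example data
tagDict = {t: i for i, t in enumerate(tag)}
# tagDict == {"red": 0, "green": 1, "blue": 2}
```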
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Right-click to select a datagridview row How do you select a datagridview row on a right-click?
A: The cool thing is to add a menu on that right click, for example with options like "View client information", "Verify last invoices", "Add a log entry for this client", etc.
You just need to add a ContextMenuStrip object, add your menu entries, and in the DataGridView properties select that ContextMenuStrip.
This creates a new menu on the row the user right-clicked, with all the options; then all you need to do is work your magic :)
Remember that you need JvR's code to get which row the user was in; then grab the cell that contains the client ID, for example, and pass that info along.
Hope it helps improve your application.
http://img135.imageshack.us/img135/5246/picture1ku5.png
http://img72.imageshack.us/img72/6038/picture2lb8.png
A: Subclass the DataGridView and create a MouseDown event for the grid,
private void SubClassedGridView_MouseDown(object sender, MouseEventArgs e)
{
    // Set it up so a right mouse-down will select a cell
    DataGridView.HitTestInfo hti = this.HitTest(e.X, e.Y);
    // Clear all the previously selected rows
    this.ClearSelection();
    // Set as selected
    this.Rows[hti.RowIndex].Selected = true;
}
A: Make it behave similarly to the left mouse button? e.g.
private void dataGridView_CellMouseDown(object sender, DataGridViewCellMouseEventArgs e)
{
    if (e.Button == MouseButtons.Right)
    {
        dataGridView.CurrentCell = dataGridView[e.ColumnIndex, e.RowIndex];
    }
}
A: // Clear all the previously selected rows
foreach (DataGridViewRow row in yourDataGridView.Rows)
{
    row.Selected = false;
}
// Get the row under the cursor
DataGridView.HitTestInfo info = yourDataGridView.HitTest(e.X, e.Y);
// Set it as selected
yourDataGridView.Rows[info.RowIndex].Selected = true;
A: Converted from @Alan Christensen's code to VB.NET:
Private Sub dgvCustomers_CellMouseDown(sender As Object, e As DataGridViewCellMouseEventArgs) Handles dgvCustomers.CellMouseDown
    If e.Button = MouseButtons.Right Then
        dgvCustomers.CurrentCell = dgvCustomers(e.ColumnIndex, e.RowIndex)
    End If
End Sub
I tested it on VS 2017 and it works for me!
A: If e.Button = MouseButtons.Right Then
DataGridView1.CurrentCell = DataGridView1(e.ColumnIndex, e.RowIndex)
End If
code works in VS2019 too
A: You can use JvR's code in the MouseDown event of your DataGridView.
A: You have to do two things:
*
*Clear all rows and select the current one. I loop through all rows and use the Boolean expression i = e.RowIndex for this.
*If you have done Step 1 you still have a big pitfall:
DataGridView1.CurrentRow does not return your previously selected row (which is quite dangerous). Since CurrentRow is Readonly you have to do
Me.CurrentCell = Me.Item(e.ColumnIndex, e.RowIndex)
Protected Overrides Sub OnCellMouseDown(
        ByVal e As System.Windows.Forms.DataGridViewCellMouseEventArgs)
    MyBase.OnCellMouseDown(e)
    Select Case e.Button
        Case Windows.Forms.MouseButtons.Right
            If Me.Rows(e.RowIndex).Selected = False Then
                For i As Integer = 0 To Me.RowCount - 1
                    SetSelectedRowCore(i, i = e.RowIndex)
                Next
            End If
            Me.CurrentCell = Me.Item(e.ColumnIndex, e.RowIndex)
    End Select
End Sub
| {
"language": "en",
"url": "https://stackoverflow.com/questions/173295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |