Q: Does the cost of MSDN Subscriptions represent a deterrent to .NET adoption? I know just asking the question is a bit of heresy, but I'm curious...
Sure, there are the express editions. But when Microsoft is effectively competing for 'hearts and minds' in an OSS world, it seems more than a bit counterproductive to charge devs who wholeheartedly support .NET high subscription fees for Microsoft software. It's hard to imagine that, in the context of Microsoft's overall sales, dev licenses represent such a significant revenue stream as to justify the downsides.
So my question is: do you know of any instances where MSDN subscription rates have deterred a team from adopting .NET for a project - where cost played a role in a decision to go OSS instead?
A: I think MS has made huge strides in making .NET cheaper to access and work with. With competent Express versions of Visual Studio and SQL Server, the only thing you need to pay for is Windows itself (both in your dev environment and your server/production environment).
The only thing holding .NET back now is that it may not be the right tool for every job, regardless of cost.
A: It does to me. It makes me wonder whether I should try to become a Microsoft MVP, because they get all the software for free.
You can't get Expression Blend and Design at anything but the highest subscription level, and that just ticks me off.
A: I don't think so, especially with the Empower program for small ISVs -- $375 gets you 5 MSDN licenses and other goodies. After that there are Microsoft Action Packs as well as the entire partner program.
A: I've always gotten legal, free copies of Visual Studio. You can either download the Express versions which will handle most people's needs or go to the Launch events where they literally give out copies to everyone who shows up.
.NET Framework is a free download, so really the only thing left is a box running Windows.. and I'd be willing to bet that you've got one of those kicking around somewhere.
There's no reason to purchase an MSDN subscription.
A: It's not a barrier to entry, but it certainly represents a glass ceiling. You get a lot of things with the Express editions, but not EVERYTHING. There are a lot of little perks that come with the Pro versions - add-ins, for instance +cough+ ReSharper +cough+. I'd say you need Visual Studio 20xx Pro at a minimum to do any mid-range to Enterprise level development.
The cost of MS developer tools was the sole reason behind my Year of Linux. It's tough seeing all the free development tools for Linux, OS X and Java. If my job didn't depend on keeping up to date with .NET, I'd leave it for dead in a heartbeat.
A: In my view MSDN subscriptions are not a huge deterrent, as not only are there Express editions, there are also trial versions of most products, and I think a basic MSDN subscription is not that expensive.
However, licensing costs and the licensing complexity of certain developer tools and products can be a huge obstacle, which unfortunately is often not thought about at the beginning of projects.
I am aware of a number of projects which have chosen alternative technologies due to licensing costs and licensing complexity.
A: I'm currently on the Empower program, but I'll be paying the full fare when it runs out, for the same reason I pay A$800/year for an AutoCAD subscription - it easily saves me more time & hassle than it costs in the long run by having everything I need at my fingertips.
I think I'm getting value for money when I consider both the licenses as well as the community - most of which I suppose is free anyway.
I consider it a legitimate cost of running my business and it's tax-deductible anyway.
A: I am trying to restart my career, my life, etc., and my previous MSDN Universal sub expired in 2005. At the time it expired, I stopped working for a while. That coincided with Microsoft changing the cost and structure of that subscription program. To acquire a similar subscription today is out of the question. I do not have the funds. I am currently developing with old technology (VB6 and ASP) and will do so until I have the funds to purchase the MSDN sub that I want. I have downloaded the "express" versions of VS 2008 and SQL 2008 but, let's be frank, any serious developer is going to want to utilize the features that aren't available in "express". In exploring this issue here on StackOverflow, I have seen others talk about the Empower program. It looks promising and I shall investigate it.
But, yes, the barrier to entry is the cost. Hopefully Empower lowers that, for a while. I agree with the requirement that after it expires I will need to pay full fare. I think that's only fair.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Communicating with a Flash server using RTMP without Flash I want to talk to a Flash server which uses RTMP, but I don't want to use Flash - I'd rather use C# or Java.
I was looking at Red5 but their client API seems to be a bit wobbly.
Does anyone have any other ideas?
A: Take a look at the commercial JUV Client library (http://www.smaxe.com/juvclient.jsf), which lets you communicate with RTMP-enabled servers.
A: "RTMP: Flash video streaming protocol" discusses libraries and applications for communicating with RTMP servers.
The main protocol code from the RTMPDump utility for downloading RTMP video streams is now available in its own library, librtmp (used by FFmpeg, MPlayer, and XBMC media center).
Note: the RTMPDump utility was originally based on the libRTMP library, a part of the XBMC project.
A: There's a Python implementation of the RTMP protocol, RTMPy. Other than that and Red5, I don't know of any other RTMP client implementations. (Well, besides Flash itself, of course.)
What Flash server are you using? Some of them allow you to communicate with other protocols as well, such as text-based or XML-based, and those might be better to use than RTMP if your client is not Flash-based.
A: I also started developing a C++ RTMP server. I'll make a C++ client library as well in the near future and, of course, C#, Java, and Lua wrappers. Stay tuned on this site or you can become a group member here and get informed right away.
A: You can find a c# rtmp implementation at https://code.google.com/p/rtmp-mediaplayer/
It is tested to work on Windows, iOS and Android. You need BASS (http://www.un4seen.com/bass.html) to output audio.
A: If you like, you can use OpenCV. Then you can do all sorts of real-time video processing. I have answered the same kind of question here
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: What exactly does "attach context" mean in Eclipse Mylyn The Mylyn plugin in Eclipse (Trac connector) contains the option to attach and then retrieve the "context" of an issue. Attaching the context results in attaching a zipped XML file to the issue entry in the Trac system. However, I don't quite understand what this context is. Initially I thought that it was all the opened files and cursor positions in those files. But apparently I was wrong. Searching the net did not help.
A: The Mylyn context is a collection of "landmarks" in your code. A "landmark" is a source file, method, or resource file that is "interesting" (which typically means you have opened it while the task was active). The article Mylyn 2.0, Part 2: Automated context management may help clear up any confusion about what a context is.
Sharing the context should allow others to see what parts of the code you have been viewing. If you are having issues sharing a context, the Mylyn FAQ on Team support may help.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Can I test a floppy drive using WMI & System.Management namespace? I would like to find out the floppy drive's inserted state:
*
*no floppy inserted
*unformatted floppy inserted
*formatted floppy inserted
Can this be determined using WMI in the System.Management namespace?
If so, can I generate events when the floppy inserted state changes?
A: This comes from Scripting Center @ MSDN:
strComputer = "."
Set objWMIService = GetObject( _
"winmgmts:\\" & strComputer & "\root\cimv2")
Set colItems = objWMIService.ExecQuery _
("Select * From Win32_LogicalDisk Where DeviceID = 'A:'")
For Each objItem in colItems
intFreeSpace = objItem.FreeSpace
If IsNull(intFreeSpace) Then
Wscript.Echo "There is no disk in the floppy drive."
Else
Wscript.Echo "There is a disk in the floppy drive."
End If
Next
You'll also be able to tell if it's formatted or not, by checking other members of the Win32_LogicalDisk class.
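For completeness, here is a rough C# equivalent of the same check via System.Management (only a sketch; as in the script above, FreeSpace is null when no disk is present, and the FileSystem property should come back null or empty when a disk is present but unformatted):
using System;
using System.Management;
public static void ReportFloppyState()
{
    var query = "SELECT FreeSpace, FileSystem FROM Win32_LogicalDisk WHERE DeviceID = 'A:'";
    using( var searcher = new ManagementObjectSearcher( query ) )
    {
        foreach( ManagementObject disk in searcher.Get() ) {
            if( disk[ "FreeSpace" ] == null )
                Console.WriteLine( "No disk in the floppy drive." );            // no media at all
            else if( string.IsNullOrEmpty( disk[ "FileSystem" ] as string ) )
                Console.WriteLine( "Unformatted disk inserted." );              // media, but no file system
            else
                Console.WriteLine( "Formatted disk ({0}) inserted.", disk[ "FileSystem" ] );
        }
    }
}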
A: Using Bob King's idea I wrote the following method.
It works great on CDs, removable drives, and regular drives.
However, for a floppy it always returns "Not Available".
public static void TestFloppy( char driveLetter ) {
using( var searcher = new ManagementObjectSearcher( @"SELECT * FROM Win32_LogicalDisk WHERE DeviceID = '" + driveLetter + ":'" ) )
using( var logicalDisks = searcher.Get() ) {
foreach( ManagementObject logicalDisk in logicalDisks ) {
var fs = logicalDisk[ "FreeSpace" ];
Console.WriteLine( "FreeSpace = " + ( fs ?? "Not Available" ) );
logicalDisk.Dispose();
}
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to stop Eclipse from hanging on a long autocompletion list? I'm using the GL class from JOGL, which basically contains all OpenGL functions. Now I just installed the Javadoc for JOGL, because it's nice to have the parameter names if you can't remember the order.
However, with this Javadoc installed, it takes about half a minute to show the autocompletion list whenever I type GL.. Since I'm making a lot of OpenGL calls, this is hugely annoying.
Apart from uninstalling or disabling the JOGL Javadoc, is there any way to make the list appear faster, or not at all?
A: Preferences -> Java -> Editor -> Content Assist
Try some different settings in the "Auto-Activation" frame.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How can I create XML from Perl? I need to create XML in Perl. From what I read, XML::LibXML is great for parsing and using XML that comes from somewhere else. Does anyone have any suggestions for an XML Writer? Is XML::Writer still maintained? Does anyone like/use it?
In addition to feature-completeness, I am interested in an easy-to-use syntax, so please describe the syntax and any other reasons why you like that module in your answer.
Please respond with one suggestion per answer, and if someone has already answered with your favorite, please vote that answer up. Hopefully it will be easy to see what is most popular.
Thanks!
A: If you want to take a data structure in Perl and turn it into XML, XML::Simple will do the job nicely.
At its simplest:
my $hashref = { foo => 'bar', baz => [ 1, 2, 3 ] };
use XML::Simple;
my $xml = XML::Simple::XMLout($hashref);
As its name suggests, its basic usage is simple; however it does offer a lot of features if you need them.
Naturally, it can also parse XML easily.
EDIT: I wrote this back in Oct 2008, 14 years ago this year. Things have changed since then. XML::Simple's own documentation carries a clear warning:
The use of this module in new code is strongly discouraged. Other modules are available which provide more straightforward and consistent interfaces. In particular, XML::LibXML is highly recommended and you can refer to for a tutorial introduction. XML::Twig is another excellent alternative.
These days, I'd strongly recommend checking those out rather than using XML::Simple in new code.
A: I don't do much XML, but XML::Smart looks like it might do what you want. Take a look at the section Creating XML Data in the doc and it looks very simple and easy to use.
Paraphrasing the doc:
use XML::Smart;
## Create a null XML object:
my $XML = XML::Smart->new() ;
## Add a server to the list:
$XML->{server} = {
os => 'Linux' ,
type => 'mandrake' ,
version => 8.9 ,
address => [ '192.168.3.201', '192.168.3.202' ] ,
} ;
$XML->save('newfile.xml') ;
Which would put this in newfile.xml:
<server os="Linux" type="mandrake" version="8.9">
<address>192.168.3.201</address>
<address>192.168.3.202</address>
</server>
Cool. I'm going to have to play with this :)
A: Just for the record, here's a snippet that uses XML::LibXML.
#!/usr/bin/env perl
#
# Create a simple XML document
#
use strict;
use warnings;
use XML::LibXML;
my $doc = XML::LibXML::Document->new('1.0', 'utf-8');
my $root = $doc->createElement('my-root-element');
$root->setAttribute('some-attr'=> 'some-value');
my %elements = (
color => 'blue',
metal => 'steel',
);
for my $name (keys %elements) {
my $tag = $doc->createElement($name);
my $value = $elements{$name};
$tag->appendTextNode($value);
$root->appendChild($tag);
}
$doc->setDocumentElement($root);
print $doc->toString();
and this outputs:
<?xml version="1.0" encoding="utf-8"?>
<my-root-element some-attr="some-value">
<color>blue</color>
<metal>steel</metal>
</my-root-element>
A: XML::Writer is still maintained (at least, as of February of this year), and it's indeed one of the favorite Perl XML writers out there.
As for the syntax, it's best to look at the module's documentation (the link is already in the question). To wit:
use XML::Writer;
my $writer = new XML::Writer(); # will write to stdout
$writer->startTag("greeting",
"class" => "simple");
$writer->characters("Hello, world!");
$writer->endTag("greeting");
$writer->end();
# produces <greeting class='simple'>Hello, world!</greeting>
A: XML::Smart looks nice, but I don't remember it being available when I was using XML::Simple many years ago. Nice interface, and works well for reading and writing XML.
A: I like XML::TreeBuilder because it fits the way I think. Historically, I've used it more for parsing than emitting.
A few weeks after this question was posted, I had occasion to generate some XML from Perl. I surveyed the other modules listed here, assuming one of them would work better than XML::TreeBuilder, but TreeBuilder was still the best choice for what I wanted.
One drawback is that there is no way to represent processing instructions and declarations in an XML::TreeBuilder object.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: How do you manage growing eclipse configurations? I use eclipse for quite a lot of work, including:
*
*multiple "utility" projects that include code that most of my java work makes use of
*various plugin-related projects that I sync and use periodically (eg: the Git plugin)
*plugin projects I'm actually developing
*the occasional pydev / non-java project
*etc...
It is becoming quite difficult to keep all these things straight, particularly since I never need to use them all at once. I've tried using Mylyn (and I'm trying it again) but in the past it has caused eclipse to run extremely slow, and I am notoriously horrible at remembering to tell mylyn that I've switched tasks, so it tends to learn very odd (and largely useless) sets of resources.
I've considered using multiple workspaces, but that is problematic when multiple projects need to exist in multiple workspaces, and when I need to synchronize the eclipse metadata directories across workspaces.
What is the best way to manage complex working environments in eclipse? Other development environments aren't a viable option because there aren't any sane alternatives when it comes to developing eclipse plugins (and that is a requirement).
(I think a very similar question was asked a month or two ago, but I haven't been able to find it...)
A: It isn't quite clear to me what your need is. But have you tried using working sets in the Package Explorer?
Open the Package Explorer view, open its menu, and Select Working Set. That lets you give a name to a subset of all the projects loaded in your workspace.
Switch working sets using the package Explorer menu. Use working sets to limit the scope of Search, errors, problems, etc.
Define as many working sets as you need to group your projects. A project can be part of any number of working sets.
A: Here's a screencast about working sets -- this does look like the right answer.
http://www.peterfriese.de/eclipse-working-sets-part-i/
A: You want to use "Working Sets".
A: I would recommend using different workspaces, and then adding the common projects to each workspace (you can specify the location of the project to be outside of the workspace). I believe this will work, but I haven't tried it, so I can't be sure.
As @JesperE and @Dennis S suggested, working sets will help you organize your projects, but they may not make eclipse run any faster, since the projects will all still be loaded into the workspace.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Is there a way to handle a variable number of parameters in a template class? I have a set of callback classes that I use for handling callbacks with variable numbers of parameters. Right now I have about 6 different instances of it to handle differing numbers of arguments. Is there a way to make one implementation that can handle a variable number of arguments? Ultimately I would love to have each parameter be a POD type, a class pointer, or a struct pointer. Any ideas?
template <class T>
class kGUICallBackPtr
{
public:
kGUICallBackPtr() {m_obj=0;m_func=0;}
void Set(void *o,void (*f)(void *,T *));
inline void Call(T *i) {if(m_func) m_func(m_obj,i);}
inline bool IsValid(void) {return (m_func!=0);}
private:
void *m_obj;
void (*m_func)(void *,T *);
};
template <class T,class U>
class kGUICallBackPtrPtr
{
public:
kGUICallBackPtrPtr() {m_obj=0;m_func=0;}
void Set(void *o,void (*f)(void *,T *,U *));
inline void Call(T *i, U *j) {if(m_func) m_func(m_obj,i,j);}
inline bool IsValid(void) {return (m_func!=0);}
private:
void *m_obj;
void (*m_func)(void *,T *,U *j);
};
A: Not yet in the language itself but C++0x will have support for variadic templates.
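For illustration, here is a rough sketch of what such a callback could look like once variadic templates are available (the class name is made up, not from the question's code base). A single template with a parameter pack replaces the whole kGUICallBackPtr/kGUICallBackPtrPtr family:
// One callback template for any number of pointer parameters (C++0x/C++11 syntax)
template <class... Args>
class kGUICallBackVar
{
public:
    kGUICallBackVar() {m_obj=0;m_func=0;}
    void Set(void *o,void (*f)(void *,Args *...)) {m_obj=o;m_func=f;}
    inline void Call(Args *...i) {if(m_func) m_func(m_obj,i...);}
    inline bool IsValid(void) {return (m_func!=0);}
private:
    void *m_obj;
    void (*m_func)(void *,Args *...);
};
kGUICallBackVar<T> would then behave like kGUICallBackPtr<T>, and kGUICallBackVar<T,U> like kGUICallBackPtrPtr<T,U>.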
A: C++0x variadic templates are your best bet, but it will also be a while before you can use them.
If you need sequences of types today, take a look at MPL's vector of types, as well as other type sequence types. It's part of the Boost library. It allows you to provide a template argument that is a sequence of types, instead of just a single type.
A: How about sidestepping this issue through the use of Boost Bind? You could make your code accept a single argument, or none at all, and bind the arguments you need at the call site.
A: My first choice would be to use boost::bind, boost::function, or std::bind/std::function and/or C++11 lambdas to achieve your goal. But if you need to roll your own functor then I would use Boost Fusion to create a 'fused functor' that takes a single template argument.
http://www.boost.org/doc/libs/1_41_0/libs/fusion/doc/html/fusion/functional/generation/functions/mk_fused.html
Ultimately all of these libraries use preprocessor macros to enumerate all possible options to work around the lack of variadic templates.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Java TreeNode: How to prevent getChildCount from doing expensive operation? I'm writing a Java Tree in which tree nodes could have children that take a long time to compute (in this case, it's a file system, where there may be network timeouts that prevent getting a list of files from an attached drive).
The problem I'm finding is this:
*
*getChildCount() is called before the user specifically requests opening a particular branch of the tree. I believe this is done so the JTree knows whether to show a + icon next to the node.
*An accurate count of children from getChildCount() would need to perform the potentially expensive operation
*If I fake the value of getChildCount(), the tree only allocates space for that many child nodes before asking for an enumeration of the children. (If I return '1', I'll only see 1 child listed, despite that there are more)
The enumeration of the children can be expensive and time-consuming, I'm okay with that. But I'm not okay with getChildCount() needing to know the exact number of children.
Any way I can work around this?
Added: The other problem is that if one of the nodes represents a floppy drive (how archaic!), the drive will be polled before the user asks for its files; if there's no disk in the drive, this results in a system error.
Update: Unfortunately, implementing the TreeWillExpand listener isn't the solution. That can allow you to veto an expansion, but the number of nodes shown is still restricted by the value returned by TreeNode.getChildCount().
A: http://java.sun.com/docs/books/tutorial/uiswing/components/tree.html#data
Scroll down a little; there is an exact tutorial on how to create lazy-loading nodes for the JTree, complete with examples and documentation.
A: I'm not sure if it's entirely applicable, but I recently worked around problems with a slow tree by pre-computing the answers to methods that would normally require going through the list of children. I only recompute them when children are added or removed or updated. In my case, some of the methods would have had to go recursively down the tree to figure out things like 'how many bytes are stored' for each node.
A: If you need a lot of access to a particular feature of your data structure that is expensive to compute, it may make sense to pre-compute it.
In the case of TreeNodes, this means that your TreeNodes would have to store their child count. To explain it in a bit more detail: when you create a node n0, this node has a child count (cc) of 0. When you add a node n1 as a child of n0, n0's count increases by n1.cc + 1.
The tricky bit is the remove operation. You have to keep backlinks to parents and go up the hierarchy to subtract the cc of your current node.
In case you just want to have a hasChildren feature for your nodes or override getChildCount, a boolean might be enough and would not force you to go up the whole hierarchy in case of removal. Or you could remove the backlinks and just say that you lose precision on remove operations. The TreeNode interface actually doesn't force you to provide a remove operation, but you probably want one anyway.
Well, that's the deal. In order to come up with precomputed precise values, you will have to keep backlinks of some sorts. If you don't you'd better call your method hasHadChildren or the more amusing isVirgin.
A: There are a few parts to the solution:
*
*Like Lorenzo Boccaccia said, use the TreeWillExpandListener
*Also, you need to call nodesWereInserted on the tree, so the proper number of nodes will be displayed. See this code
*I have determined that if you don't know the child count, TreeNode.getChildCount() needs to return at least 1 (it can't return 0); a minimal sketch combining these points follows
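A minimal sketch of the placeholder approach (the class and method names are made up for illustration): give each node a dummy child up front so the JTree shows the expansion handle without enumerating anything, then swap in the real children from a TreeWillExpandListener and notify the model.
import javax.swing.tree.DefaultMutableTreeNode;
import javax.swing.tree.DefaultTreeModel;
class LazyFileNode extends DefaultMutableTreeNode {
    private boolean loaded = false;
    LazyFileNode(Object userObject) {
        super(userObject, true);
        // Placeholder child: getChildCount() now returns 1 without touching the file system.
        add(new DefaultMutableTreeNode("Loading..."));
    }
    // Call this from treeWillExpand() before the node is shown expanded.
    void loadChildren(DefaultTreeModel model) {
        if (loaded) return;
        loaded = true;
        removeAllChildren();
        // ...do the expensive enumeration here and add(new LazyFileNode(entry)) for each entry...
        model.nodeStructureChanged(this); // tells the JTree about the real child count
    }
}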
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How to tell when an MXML component has totally finished creation? An MXML component can be quite complex, containing many nested controls, including asynchronously loaded content such as Image/SWFLoader.
Is there one event I can watch for on my component that will only be raised when every control and sub-component has loaded, including SWFs and Images?
A: CreationComplete will NOT do the trick if you are talking about loading SWF content or anything really external like that. CreationComplete gets fired when the MXML components have been laid out as defined in MXML (i.e. nested components, buttons, boxes, canvases, etc.), so content that needs to get loaded externally (an image, a SWF) does not count.
What you need to do is keep track of everything that you're waiting for and fire off a custom event once all of those elements have loaded.
One possible hackish way to do it would be to listen for whatever load-complete event is relevant for each element, then have them all call back to the same function, which increments a counter until it reaches the number of components you're waiting for. This means you have to pay more attention if you're modifying it, but it also means you don't have to check a boolean for every element that needs to load (i.e. "if (image1Loaded && image2Loaded && swfLoaded)" etc.)
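A tiny ActionScript sketch of that counting idea (the counter value and the event name are placeholders, not from the original answer):
import flash.events.Event;
private var pendingLoads:int = 3; // however many Images/SWFLoaders you are waiting for
private function onChildLoaded(event:Event):void {
    pendingLoads--;
    if (pendingLoads == 0) {
        dispatchEvent(new Event("allContentLoaded")); // your custom "everything is ready" event
    }
}
Wire each Image/SWFLoader's complete handler to onChildLoaded and listen for "allContentLoaded" on the component.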
A: The onApplicationComplete event?
A: The creationComplete event should do the trick - creationComplete is called on the parent component after it is called on the children.
You can get some more info on the component lifecycle in the Adobe docs.
A: In some complex cases, like when your component is considered "finished" only when some data has been retrieved via HTTP or something like that, custom event is your best bet.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154800",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Priority of C++ operators "&" and "->" Given the following:
&row->count
Would &(row->count) be evaluated or (&row)->count be evaluated in C++?
EDIT: Here's a great link for C++ precedence.
A: &(row->count)
A: As far as precedence rules go, I've always liked the one put forth by Steve Oualline in "Practical C":
There are fifteen precedence rules in C (&& comes before || comes before ?:). The practical programmer reduces these to two:
1) Multiplication and division come before addition and subtraction.
2) Put parentheses around everything else.
A: This has already been asked. But here is a link.
Edit:
OK, this question is very similar. And possibly there is another one.
A: C operator precedence is explained here
As per the table, -> is higher priority than the & operator, so it's &(row->count)
A: May I suggest that you resolve such questions using a test program? That has the advantage that you will know for sure that the answer is correct for your implementation, and you are not exposed to the risk of badly answered questions.
A: &(row->count)
A: -> has a higher priority than & (address of). So your expression would be evaluated as &(row->count)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What's The Best Option For Rendering Complex Fonts? I'm working on a game (using Ruby) and planning to have it available in several languages. I was wondering what's the best option for rendering text. In particular, whatever I use should be able to render complex fonts (Arabic and Persian in particular).
I've been looking around and have stumbled upon FreeType, Graphite, and using Windows native API functions (I'm fine with it not being cross-platform), to name a few. What should I go with and what are the different trade-offs?
A: For a cross platform solution, check out Pango.
A: Check out www.freetype.org
It handles non-left-to-right fonts.
It generates the glyphs for you, but you must render them to the device (OpenGL/DirectX, etc).
A: If you're looking for a way to rasterise fonts into bitmaps, both Freetype and the Windows API will handle this nicely.
If you're looking for a way to draw text onscreen, go with Windows' native APIs if you can; bidirectional text and accents and all that stuff are quite difficult and time-consuming to get right, so you ideally want an API that handles it for you. Whether or not this is viable for your game depends on what graphics APIs you're using, which you didn't specify.
Edit: To rasterise entire strings of text, you need to use win32's DrawText, which means you need to get your hands slightly dirty.
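For instance, here is a bare-bones sketch of rasterising a right-to-left string with DrawText (the sample text, flags, and function name are just an example, not a full solution):
#include <windows.h>
void DrawArabicCaption(HDC hdc, RECT rc)
{
    const wchar_t *text = L"\u0645\u0631\u062D\u0628\u0627"; // an Arabic sample string
    // DT_RTLREADING lays the text out right-to-left; GDI takes care of the glyph shaping.
    DrawTextW(hdc, text, -1, &rc, DT_RTLREADING | DT_WORDBREAK);
}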
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I put a File (Excel) online (Apache Server) with Password Protection but with the Option for Users to alter the File and save the changes? I have to put an Excel file online on a web server (Apache) with some basic password protection (basic HTTP auth). Users, however, should be able to open the file from within Excel and save their changes back to the server.
Is there any simple and effective solution for it? I am not very experienced with webdav.
A: Maybe use Google Docs to share the document. That would be as turn-key as it gets. :)
A: Funny enough, if you just need "basic-level" security, you can set it up in the Excel sheet itself: go to the Tools -> Protection menu and you can lock down the whole sheet, or just specific ranges. Note: this is not "really good" security and will not stand up against an even moderately-skilled attack... but it will pass muster for the most basic uses.
A: Webdav can do this. Might be a hassle to set it up.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: LinqtoSQL filter and order by syntax My Techie Brethren (and Sistren, of course!),
I have a LinqToSql data model that has the following entities:
data model http://danimal.acsysinteractive.com/images/advisor.jpg
I need to retrieve all advisors for a specific office, ordered by their sequence within the office. I've got the first part working with a join:
public static List<Advisor> GetOfficeEmployees(int OfficeID)
{
List<Advisor> lstAdvisors = null;
using (AdvisorDataModelDataContext _context = new AdvisorDataModelDataContext())
{
var advisors = from adv in _context.Advisors
join advisoroffice in _context.OfficeAdvisors
on adv.AdvisorId equals advisoroffice.AdvisorId
where advisoroffice.OfficeId == OfficeID
select adv;
lstAdvisors = advisors.ToList();
}
return lstAdvisors;
}
However, I can't seem to wrap my weary brain around the order by clause. Can anyone give some suggestions?
A: from adv in _context.Advisors
where adv.OfficeAdvisor.Any(off => off.OfficeId == officeID)
orderby adv.OfficeAdvisor.First(off => off.OfficeId == officeID).Sequence
select adv;
A: public static List<Advisor> GetOfficeEmployees(int OfficeID)
{
List<Advisor> lstAdvisors = null;
using (AdvisorDataModelDataContext _context = new AdvisorDataModelDataContext())
{
var advisors = from adv in _context.Advisors
join advisoroffice in _context.OfficeAdvisors
on adv.AdvisorId equals advisoroffice.AdvisorId
where advisoroffice.OfficeId == OfficeID
group adv by adv.OfficeId into g
order by g.Sequence
select g;
lstAdvisors = advisors.ToList();
}
return lstAdvisors;
}
Note: I am not able to currently test this on Visual Studio but should work.
A: You can add an order by clause like this:
var advisors = from adv in _context.Advisors
join advisoroffice in _context.OfficeAdvisors
on adv.AdvisorId equals advisoroffice.AdvisorId
where advisoroffice.OfficeId == OfficeID
orderby advisoroffice.Sequence // < -----
select adv;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: App.config for dll We have an "engine" that loads dlls dynamically (whatever is located in a certain directory) and calls Workflow classes from them by way of reflection.
We now have some new Workflows that require access to a database, so I figured that I would put a config file in the dll directory.
But for some reason my Workflows just don't see the config file.
<configuration>
<appSettings>
<add key="ConnectString" value="Data Source=officeserver;Database=mydatabase;User ID=officeuser;Password=officeuser;" />
</appSettings>
</configuration>
Given the above config file, the following code prints an empty string:
Console.WriteLine(ConfigurationManager.AppSettings["ConnectString"]);
I think what I want is to just specify a config filename, but I'm having problems here. I'm just not getting results.
Anyone have any pointers?
A: If your code sample for reading the AppSettings is in your DLL, then it will attempt to read the config file for the application and not the config file for the DLL. This is because you're using Reflection to execute the code.
A: Funny, where I'm at we're doing something very similar and the config file loads just fine. In our case I think each new config file's name matches that of its associated assembly. So MyLibrary.dll would have a file named MyLibrary.dll.config with information for that assembly. Also, the example I have handy is using VB.Net rather than C# (we have some of each) and all the settings in there are for the VB-specific My.Settings namespace, so we don't use the ConfigurationManager class directly to read them.
The settings themselves look like this:
<applicationSettings>
<MyLibrary.My.MySettings>
<setting name="SomeSetting" serializeAs="String">
<value>12345</value>
</setting>
</MyLibrary.My.MySettings>
</applicationSettings>
A: I wrote this for a similar system. My recollection is that I used Assembly.GetExecutingAssembly to get the file path to the DLL, appended .config to that name, loaded it as an XmlDocument, navigated to the <appSettings> node and passed that to a NameValueSectionHandler's Create method.
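A rough C# sketch of that recollection (the class and method names are mine, not from the original code; it needs a reference to System.Configuration):
using System.Collections.Specialized;
using System.Configuration;
using System.Reflection;
using System.Xml;
static class DllConfig
{
    // Reads a key from the "<assembly path>.config" file sitting next to the DLL.
    public static string AppSetting(string key)
    {
        string configPath = Assembly.GetExecutingAssembly().Location + ".config";
        var doc = new XmlDocument();
        doc.Load(configPath);
        XmlNode appSettings = doc.SelectSingleNode("configuration/appSettings");
        // NameValueSectionHandler turns the <appSettings> node into a NameValueCollection.
        var handler = new NameValueSectionHandler();
        var values = (NameValueCollection)handler.Create(null, null, appSettings);
        return values[key];
    }
}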
A: Here is one way -
AppDomain.CurrentDomain.SetData ("APP_CONFIG_FILE", "path to config file");
Call in constructor.
A: If I recall correctly, the app.config will be loaded from your application directory, so if you are loading dlls from some other directory, you'll want the keys they need in your application's config file.
A: I'm not totally sure but I think that class only works with the path of the entry method of the AppDomain (the path of the exe most of the time) by default.
You need to call OpenExeConfiguration(string exePath) (Framework 2.0 and later) first to point to a different config file.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Inherit IEnumerable from Object consequences I posted an answer to this question, including a very short rant at the end about how String.Split() should accept IEnumerable<string> rather than string[].
That got me thinking. What if the base Object class from which everything else inherits provided a default implementation for IEnumerable such that everything now returns an Enumerator over exactly one item (itself) -- unless it's overridden to do something else like with collections classes.
The idea is that then, if methods like String.Split() accepted IEnumerable rather than an array, I could pass a single string to the function and it would just work, rather than having to muck about with creating a separator array.
I'm sure there are all kinds of reasons not to do this, not the least of which is that if everything implemented IEnumerable, then the few classes where the implementation strays from the default could behave differently than you'd expect in certain scenarios. But I still thought it would be a fun exercise: what other consequences would there be?
A: public static IEnumerable<object> ToEnumerable(this object someObject)
{
return System.Linq.Enumerable.Repeat(someObject, 1);
}
A: the index operator ([]) isn't part of the contract of IEnumerable<T>
After looking at the code in Reflector, the code uses the index operator heavily, which is always part of Array.
Well, simply put, Array will always have an enumerator.
The idea is that then if methods like String.Split() did accept IEnumerable rather than an array I could pass a single string to the function and it would just work, rather than having to muck about with creating a separator array.
That isn't true; string implements IEnumerable<char>, and you don't have generic variance, so you can't cast IEnumerable<char> to IEnumerable<string>. You would be getting the IEnumerable<char> version(s) of Split, rather than the required string version.
A: Have you thought about using a helper object and implementing a Split for the IEnumerable class?
I did once make a GetDescription for the Enum class so as to get the DescriptionAttribute easily. It works amazingly well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Secure communication between Flash and PHP script I have little knowledge of Flash, but for a little Flash game I have to store scores and successful tries of users in a database using PHP. Now the Flash runs locally on the user's computer and connects to a remote server. How can I secure against manipulation of game scores? Is there any best practice for this use case?
A: You might want to check these other questions:
*
*Q46415 Passing untampered data from Flash app to server?
*Q73947 What is the best way to stop people hacking the PHP-based highscore table of a Flash game.
*Q25999 Secure Online Highscore Lists for Non-Web Games
A: What you are asking is inherently impossible. The game runs on the client and is therefore completely at the user's mercy. The only way to be sure is running a real-time simulation of the game on the server based on the user's input (mouse movement, keypresses), which is absolutely ridiculous.
A: This topic has been covered here @ stackoverflow, at least in part
What is the best way to stop people hacking the PHP-based highscore table of a Flash game
A: As ssddw pointed out, this is fundamentally impossible. The code to send the score is running on the user's computer, and they have control over it and everything that runs there.
The best you can do is to periodically alter the encryption mechanism so that it takes score-manipulators a while to figure it out again. You can only minimize the damage, never eliminate it, but on a site like the one I work for, if we've got only a hundred people sending fake scores, out of the hundreds of thousands we see every day, we consider that well within the realm of acceptable. (We still crush those we catch cheating, but we don't consider it much of a problem.)
A: You could at least throw out scores that are above some threshold that you would deem legitimate. It still leaves room for more subtle manipulation of a high-scores list, but will at least help relieve the obvious frustration of seeing an impossible-to-achieve score topping the charts.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Simple way to trim Dollar Sign if present in C# I have a DataRow and I am getting one of the elements, which is an Amount with a dollar sign. I am calling a ToString on it. Is there another method I can call on it to remove the dollar sign if present?
So something like:
dr.ToString.Substring(1, dr.ToString.Length);
But more conditionally in case the dollar sign ever made an appearance again.
I am trying to do this without explicitly defining another string.
A: You could also use
string trimmed = (dr as string).Trim('$');
or
string trimmed = (dr as string).TrimStart('$');
A: If you are using C# 3.0 or greater you could use extension methods.
public static string RemoveNonNumeric(this string s)
{
return s.Replace("$", "");
}
Then your code could be changed to:
((String)dr[columnName]).RemoveNonNumeric();
This would allow you to change the implementation of RemoveNonNumeric later to remove things like commas or $ signs in foreign currency's, etc.
Also, if the object coming out of the database is indeed a string you should not call ToString() since the object is already a string. You can instead cast it.
A: Regex would work.
Regex.Replace(theString, @"\$", "");
But there are multiple ways to solve this problem.
A: dr[columnName].ToString().Replace("$", String.Empty)
A: Convert.ToString(dr(columnName)).Replace("$", String.Empty)
--
If you are working with a data table, then you have to unbox the value (it is Object by default) to a string, so you are already creating a string, and then another with the replacement. There is really no other way to get around it, but you will only see performance differences when dealing with tens of thousands of operations.
A: Why don't you update the database query so that it doesn't return the dollar sign? This way you don't have to futz with it in your C# code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to update an Access DB from the web? I'm looking for a way to create an online form that will update an Access database that has just a few tables. Does anyone know of a simple solution for this?
A: ASP.NET should be able to do it just fine.
A: It depends on what web technology you use.
With Classic ASP, you can connect to the database using the JET DB engine COM object that comes with any Windows machine.
With ASP.NET, you can connect using OLEDB data connectors.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do you remove Subversion control for a folder? I have a folder, c:\websites\test, and it contains folders and files that were checked out from a repository that no longer exists. How do I get Subversion to stop tracking that folder and any of the subfolders and files?
I know I could simply delete the .svn folder, but there are a lot of sub-folders in many layers.
A: It worked well for me:
find directory_to_delete/ -type d -name '*.svn' | xargs rm -rf
A: Use the svn export command:
cd c:\websites\test
svn export c:\websites\test_copy
All files under version control will be exported. Double check to make sure you haven't missed anything.
A: Just remove the .svn folder inside the required folder then the control will be automatically removed.
A: On Windows, you can add a quicklink for that to your explorer right click menu.
Just start this registry script:
Windows Registry Editor Version 5.00
[HKEY_CLASSES_ROOT\Folder\shell\DeleteSVN]
@="Delete SVN Folders"
[HKEY_CLASSES_ROOT\Folder\shell\DeleteSVN\command]
@="cmd.exe /c \"TITLE Removing SVN Folders in %1 && COLOR 9A && FOR /r \"%1\" %%f IN (.svn) DO RD /s /q \"%%f\" \""
This will add an item called "Delete SVN Folders" to your right click menu. This will delete all .svn folders in this folder and all subfolders.
Source (German): http://www.sjmp.de/software/alle-svn-ordner-und-dateien-loeschen/
A: If you are running Windows then you can do a search on that folder for .svn and that will list them all. Pressing Ctrl + A will select all of them and pressing delete will remove all the 'pesky' Subversion stuff.
A: Also, if you are using TortoiseSVN, just export to the current working copy location and it will remove the .svn folders and files.
http://tortoisesvn.net/docs/release/TortoiseSVN_en/tsvn-dug-export.html#tsvn-dug-export-unversion
Updated Answer for Subversion 1.7:
In Subversion 1.7 the working copy has been revised extensively. There is only one .svn folder, located in the base of the working copy. If you are using 1.7, then just deleting the .svn folder and its contents is an easy solution (regardless of using TortoiseSVN or command line tools).
A: On Linux the command is:
svn delete --keep-local file_name
A: I found that you don't even need to copy to a temporary location. You can do a
svn export --force .
and the .svn files will be removed in situ, leaving the other files as is. Very convenient and less prone to clutter.
A: You can use "svn export" for creating a copy of that folder without SVN data, or you can add that folder to the ignore list
A: For those using NetBeans with SVN, there is an option 'Subversion > Export'.
A: There's also a nice little open source tool called SVN Cleaner which adds three options to the Windows Explorer Context Menu:
*
*Remove All .svn
*Remove All But Root .svn
*Remove Local Repo Files
A: On Windows 7 one can just open the project folder and do a search for ".svn" if hidden files are enabled and delete all found .svn folders.
A: On Linux, this will work:
find . -iname ".svn" -print0 | xargs -0 rm -r
A: Try svn export.
You should be able to do something like this:
svn export /path/to/old/working/copy /path/to/plain/code
And then just delete the old working copy.
TortoiseSVN also has an export feature, which behaves the same way.
A: Without subshells in Linux to delete .svn folders:
find . -name .svn -exec rm -r -f {} +
rm = remove
-r = recursive (folders)
-f = force, avoids a lot of "are you sure you want to delete file XY" prompts.
A: None of these answers was satisfactory for my situation. I'm on subversion 1.8 and I had a working copy that only had a single .svn folder at the very first folder, root. However, I wanted to remove some branches from working copy.
No matter what I did, whenever I ran an 'update' it would restore those files and bring them all back. I didn't want to remove them from the repository, just from my computer -- but I needed to keep the rest of the working copy intact (thus couldn't just remove the .svn folder).
Solution? svn update --set-depth exclude <dir>
This is a client-side "update" that excludes a specific directory. It can be found in the manuals at svnbook.com. In short, it describes this as:
Beginning with Subversion 1.6, you can take a different approach. First, check out the directory in full. Then run svn update --set-depth exclude on the one subdirectory you don't care about.
For TortoiseSVN, you can also do the same thing by right-clicking the folder you don't want, click on Update to revision..., and then set the 'Update Depth' to Exclude, as seen in this screen shot:
A: Check this, http://www.hacktrix.com/how-to-delete-svn-folders-from-your-project-on-windows-linux-and-mac
A: Another (simpler) Linux solution:
rm -r `find /path/to/foo -name .svn`
A: The answer is surprisingly simple - export the folder to itself! TortoiseSVN detects this special case and asks if you want to make the working copy unversioned. If you answer yes the control directories will be removed and you will have a plain, unversioned directory tree.
A:
THE BEST AND EASIEST WAY
If you think that you could win with a simple magic command, you will fail! SVN is really tricky and always comes back somehow with a new error message in Xcode. Sooner or later, I promise... so you have to do it smart!
As you know, the regular best practice under Xcode is deleting a file in the project pane on the left. If you missed that and somehow deleted it in Finder, you are in trouble. Big trouble! But you can solve it and save time if you do it well.
First, you need to delete the SVN reference to the file or folder before you can actually delete it:
*
*If you could just put back the file/folder from the trash or undo the last step when you deleted it, then...
*Go to Terminal - yes, the good old terminal - and go to that location.
The best way is to just type cd and then drag the folder/file onto the Terminal. You will get something similar to
cd /Users/UserName/Documents/Apps_Developing/...
You can check where you are with the
ls
command, which lists your files.
*Then you need to delete the SVN reference with an SVN command:
svn delete --keep-local fileName_toDelete
This will delete the file from the SVN repository, BUT you have to delete it manually in Finder.
A: NetBeans IDE users can do it as below:
*
*Open the SVN project in your IDE
*Select the project
right click
Subversion
Export
*In the dialog box:
export to a folder, e.g.
/var/tmp/projectname
press Export
wait
it will show "complete"
it will ask whether you want to open it; open it on the fly
*You can now switch to Git :)
A: My idea is to remove the .svn folder and then move all other files to a new folder. It is as simple as that.
A: I use rsync:
# copy folder src to srcStripped excluding subfolders named '.svn'. retain dates, verbose output
rsync -av --exclude .svn src srcStripped
A: When you are using the Windows OS, go to your folder location and make sure hidden files are shown, and then you can see the .svn folder in there. Just remove that folder.
A: svn export works fine, but I think this is better:
svn rm --keep-local <folder/file>
It removes the item from version control while keeping your local copy.
A: As a vital point, when you use the shell to delete .svn folders, you need the -depth argument to prevent the find command entering the directory that was just deleted and showing error messages like e.g.
"find: ./.svn: No such file or directory"
As a result, you can use the find command like below:
cd [dir_to_delete_svn_folders]
find . -depth -name .svn -exec rm -fr {} \;
A: From a Windows command line:
rmdir .svn /s /q
A: Use the following:
svn rm --keep-local <folder name> to remove the folder and everything within it.
svn rm --keep-local <folder name>/* to keep the folder, but remove everything within the folder.
Here is an example of what happens:
~/code/web/sites/testapp $ svn rm --keep-local includes/data/*
D includes/data/json
D includes/data/json/index.html
D includes/data/json/oembed
D includes/data/json/oembed/1.0
D includes/data/json/oembed/1.0/embed1.json
D includes/data/json/oembed/1.0/embed2.json
D includes/data/json/oembed/1.0/embed3.json
A: On Windows 10, we need to go to Windows Explorer, and then go to View and check the checkbox for View hidden files.
Then navigate to the folder that has the SVN linked on Windows Explorer and delete the .svn folder/file.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "266"
} |
Q: Convert JavaScript String to be all lowercase How can I convert a JavaScript string value to be in all lowercase letters?
Example: "Your Name" to "your name"
A: Methods or functions: toLowerCase() and toUpperCase()
Description: These methods are used to convert a string from lowercase to uppercase or vice versa. E.g., "and" to "AND".
Converting to uppercase:
Example code:
<script language=javascript>
var ss = " testing case conversion method ";
var result = ss.toUpperCase();
document.write(result);
</script>
Result: TESTING CASE CONVERSION METHOD
Converting to lowercase:
Example Code:
<script language=javascript>
var ss = " TESTING LOWERCASE CONVERT FUNCTION ";
var result = ss.toLowerCase();
document.write(result);
</script>
Result: testing lowercase convert function
Explanation: In the above examples,
*
*toUpperCase() method converts any string to "UPPER" case letters.
*toLowerCase() method converts any string to "lower" case letters.
A: Note that the function will only work on string objects.
For instance, I was consuming a plugin, and was confused about why I was getting an "extension.toLowerCase is not a function" JavaScript error.
onChange: function(file, extension)
{
alert("extension.toLowerCase()=>" + extension.toLowerCase() + "<=");
Which produced the error "extension.toLowerCase is not a function". So I tried this piece of code, which revealed the problem!
alert("(typeof extension)=>" + (typeof extension) + "<=");;
The output was "(typeof extension)=>object<=" - so aha, I was not getting a string var for my input. The fix is straightforward though - just force the darn thing into a String!:
var extension = String(extension);
After the cast, the extension.toLowerCase() function worked fine.
A: Use either toLowerCase or toLocaleLowerCase methods of the String object. The difference is that toLocaleLowerCase will take current locale of the user/host into account. As per § 15.5.4.17 of the ECMAScript Language Specification (ECMA-262), toLocaleLowerCase…
…works exactly the same as toLowerCase except that its result is intended to yield the correct result for the host environment's current locale, rather than a locale-independent result. There will only be a difference in the few cases (such as Turkish) where the rules for that language conflict with the regular Unicode case mappings.
Example:
var lower = 'Your Name'.toLowerCase();
Also note that the toLowerCase and toLocaleLowerCase functions are implemented to work generically on any value type. Therefore you can invoke these functions even on non-String objects. Doing so will imply automatic conversion to a string value prior to changing the case of each character in the resulting string value. For example, you can apply toLowerCase directly on a date like this:
var lower = String.prototype.toLowerCase.apply(new Date());
and which is effectively equivalent to:
var lower = new Date().toString().toLowerCase();
The second form is generally preferred for its simplicity and readability. On earlier versions of IE, the first had the benefit that it could work with a null value. The result of applying toLowerCase or toLocaleLowerCase on null would yield null (and not an error condition).
A: Option 1: Using toLowerCase()
var x = 'ABC';
x = x.toLowerCase();
Option 2: Using your own function
function convertToLowerCase(str) {
var result = '';
for (var i = 0; i < str.length; i++) {
var code = str.charCodeAt(i);
if (code > 64 && code < 91) {
result += String.fromCharCode(code + 32);
} else {
result += str.charAt(i);
}
}
return result;
}
Call it as:
x = convertToLowerCase(x);
A: Yes, any string in JavaScript has a toLowerCase() method that will return a new string that is the old string in all lowercase. The old string will remain unchanged.
So, you can do something like:
"Foo".toLowerCase();
document.getElementById('myField').value.toLowerCase();
A: Simply use JS toLowerCase()
let v = "Your Name"
let u = v.toLowerCase();
or
let u = "Your Name".toLowerCase();
A: toLocaleUpperCase() or lower case functions don't behave like they should do. For example, on my system, with Safari 4, Chrome 4 Beta, and Firefox 3.5.x, it converts strings with Turkish characters incorrectly. The browsers respond to navigator.language as "en-US", "tr", "en-US" respectively.
But there isn't any way to get user's Accept-Lang setting in the browser as far as I could find.
Only Chrome gives me trouble although I have configured every browser as tr-TR locale preferred. I think these settings only affect the HTTP header, but we can't access to these settings via JavaScript.
In the Mozilla documentation it says "The characters within a string are converted to ... while respecting the current locale. For most languages, this will return the same as ...". I think it's valid for Turkish, and it doesn't differ if it's configured as en or tr.
In Turkish it should convert "DİNÇ" to "dinç" and "DINÇ" to "dınç" or vice-versa.
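For what it's worth, engines that implement the ECMA-402 locales argument let you request the Turkish mapping explicitly instead of relying on the browser's own locale (a quick sketch):
"DİNÇ".toLocaleLowerCase("tr"); // "dinç" - dotted İ maps to i
"DINÇ".toLocaleLowerCase("tr"); // "dınç" - dotless I maps to ı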
A: var lowerCaseName = "Your Name".toLowerCase();
A: Just an example for toLowerCase(), toUpperCase() and a prototype for the not yet available toTitleCase() or toProperCase():
String.prototype.toTitleCase = function() {
return this.split(' ').map(i => i[0].toUpperCase() + i.substring(1).toLowerCase()).join(' ');
}
String.prototype.toProperCase = function() {
return this.toTitleCase();
}
var OriginalCase = 'Your Name';
var lowercase = OriginalCase.toLowerCase();
var upperCase = lowercase.toUpperCase();
var titleCase = upperCase.toTitleCase();
console.log('Original: ' + OriginalCase);
console.log('toLowerCase(): ' + lowercase);
console.log('toUpperCase(): ' + upperCase);
console.log('toTitleCase(): ' + titleCase);
A: I noticed that lots of people are looking for strtolower() in JavaScript. They are expecting the same function name as in other languages, and that's why this post is here.
I would recommend using a native JavaScript function:
"SomE StriNg".toLowerCase()
Here's the function that behaves exactly the same as PHP's one (for those who are porting PHP code into JavaScript)
function strToLower (str) {
return String(str).toLowerCase();
}
A: const str = 'Your Name';
// convert string to lowercase
const lowerStr = str.toLowerCase();
// print the new string
console.log(lowerStr);
A: Try
<input type="text" style="text-transform: uppercase"> <!-- uppercase -->
<input type="text" style="text-transform: lowercase"> <!-- lowercase -->
Demo - JSFiddle
A: Try this short way:
var lower = (str+"").toLowerCase();
A: You can use the built-in .toLowerCase() method on JavaScript strings. Example:
var x = "Hello";
x.toLowerCase();
A: In case you want to build it yourself:
function toLowerCase(string) {
let lowerCaseString = "";
for (let i = 0; i < string.length; i++) {
// Find ASCII charcode
let charcode = string.charCodeAt(i);
// If uppercase
if (charcode > 64 && charcode < 91) {
// Convert to lowercase
charcode = charcode + 32
}
// Back to char
let lowercase = String.fromCharCode(charcode);
// Append
lowerCaseString = lowerCaseString.concat(lowercase);
}
return lowerCaseString
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1404"
} |
Q: Validating and repairing xml Is there a way to get more useful information on a validation error? XmlSchemaException provides the line number and position of the error, which makes little sense to me. An XML document, after all, is not about its transient textual representation. I'd like to get an enumerated error (or an error code) specifying what went wrong, plus a node name (or an XPath) to locate the source of the problem so that perhaps I can try and fix it.
Edit: I'm talking about valid xml documents - just not valid against a particular schema!
A: In my experience, you are lucky to get a line number and parse position.
A: You might consider validating via a DTD, which can sometimes give slightly more interesting errors. However, on a project I currently work on, we validate using XSLTs. The transform checks the syntax and reports errors as the transform's output text. I would consider that route if you want friendlier error checking. For us, an empty output means no errors; otherwise we get some nice detail from the XSLT processing on what the error was and where.
A: Personally I'm not sure how to get a more detailed error; typically if you open the document and go to the location mentioned you can easily find the error.
If the code isn't able to parse the file as valid XML, it is pretty hard for it to give an XPath or other named XML detail.
A: You can accomplish this, sort of, by setting up an XmlReader whose XmlReaderSettings contain the schema and then using it to read through the input stream node by node. You can keep track of the last node read and have a pretty good idea of where you are in the document when a validation error happens.
I think that if you try this exercise, you'll discover that there are a lot of validation errors (e.g. required element missing) where the concept of the error node doesn't make much sense. Yes, the parent element is clearly what's in error in that case, but what really triggered the error was the reader encountering the end tag without ever seeing the required element, which is why the error line and position point at the end tag.
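A minimal sketch of that setup (the file names and the "remember the last element read" bookkeeping are illustrative, not taken from a real project):
using System;
using System.Xml;
using System.Xml.Schema;

class ValidateWithContext
{
    static void Main()
    {
        var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
        settings.Schemas.Add(null, "order.xsd");   // hypothetical schema file

        string lastNode = "(document start)";
        settings.ValidationEventHandler += (sender, e) =>
        {
            // e.Exception is an XmlSchemaException: line/position plus a message,
            // but reporting the last node we saw is usually more useful.
            Console.WriteLine("{0} near <{1}> (line {2}, pos {3}): {4}",
                e.Severity, lastNode, e.Exception.LineNumber,
                e.Exception.LinePosition, e.Message);
        };

        using (var reader = XmlReader.Create("order.xml", settings))  // hypothetical input
        {
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element)
                    lastNode = reader.Name;   // keep track of where we are in the document
            }
        }
    }
}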
A: It seems this is no easy task. Robert Rossney's answer comes closest to programmatically solving my problem, so I'll accept that for now. I'll continue using the xsl solution. Anyone finding a better way to resolve validation errors can respond to this thread.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What do you do if you cannot resolve a bug? Have you ever had a bug in your code that you could not resolve? I hope I'm not the only one out there who has had this experience ...
There are some classes of bugs that are very hard to track down:
*
*timing-related bugs (that occur during inter-process-communication for example)
*memory-related bugs (most of you know appropriate examples, I guess !!!)
*event-related bugs (hard to debug, because every break point you run into makes your IDE the target for mouse release/focus events ...)
*OS-dependent bugs
*hardware-dependent bugs (occur on the release machine, but not on the developer machine)
*...
To be honest, from time to time I fail to fix such a bug on my own ... After debugging for hours (or sometimes even days) I feel very demoralized.
What do you do in this situation (apart from asking others for help which is not always possible)?
Do you
*
*use pencil and paper instead of a debugger
*turn to another task and return to this bug later
*...
Please let me know!
A: Create an automated way to cause the bug. The worst bug to fix is one that takes hours to reproduce.
A: Quote taken from "The Cryptonomicon":
"Intuition, like a flash of lightning, lasts only for a second. It generally comes when one is tormented by a difficult decipherment and when one reviews in his mind the fruitless experiments already tried. Suddenly the light breaks through and one finds after a few minutes what previous days of labor were unable to reveal."
A: I usually ask someone else to take a look at the code. While I'm explaining what the code is supposed to do, I sometimes see the bug just as I talk.
When a bug is a tough one, I sit and work until I figure it out and solve the problem. Interestingly enough, there are times when catching a mysterious bug is more enjoyable than everything running smoothly. And the relief and feeling when a bug is resolved, well, not many other things can beat that (except the obvious ones).
A: Some things that help:
1) Take a break, approach the bug from a different angle.
2) Get more aggressive with tracing and logging.
3) Have another pair of eyes look at it.
4) A usual last resort is to figure out a way to make the bug irrelevant by changing the fundamental conditions in which it occurs
5) Smash and break things. (Stress relief only!)
A: If all else fails, don't tackle it directly. Rewrite the problem area code in a more refactored way.
A: I have definitely had bugs which I worked on for 4-5 days continuously before finding a solution. Other bugs have sat in the bug tracker for months, as I put in a few hours spread out over a long period of time. I think this sort of bug is inevitable in any complex software project.
Some stuff that works well for me:
*
*binary search through the program flow with logging
*use Trace statements along with DbgView to search for bugs which show up in release mode
*find an alternate way to reproduce the bug without changing the code
*(works against logic, but...) change the code so that the bug is more easily reproducible (the failure condition is more readily achieved)
*sleep on it and try again tomorrow with a fresh pair of eyes :)
The worst sort of bug in my opinion is a concurrency bug which disappears when logging is inserted.
A: I once worked for a company that sold a client-server application that was basically a file transfer and synchronization tool. Both the client and the server were custom applications we had designed.
We had a persistent bug that was very hard to duplicate in the lab. Our server could only handle a certain number of incoming client connections per box, so many of our customers would "cluster" multiple servers together to handle large user populations. The back end data for the cluster was on a file server they all shared. In this cluster configuration there was a bug that would happen under load where we would get a low-level file system error code on a file sharing call involving one of the back end files. Nobody could get this to repeat reliably in the lab, and even when they could they couldn't narrow down what was happening.
(I forget the exact error, it was probably 59 ERROR_UNEXP_NET_ERR or maybe 65 ERROR_NETWORK_ACCESS_DENIED. As I recall it was not even one of the documented error codes you were supposed to be able to get from the API we were calling, which was usually a lock or unlock call on a file section).
Since it involved the communication between the server and the back-end file store, and I was the "network transport" guy, I was tasked with looking at it. Many others had looked at it with no luck.
The one solid thing I had was I knew where in the code the error was being detected, but not what to do about it. So I needed to find the root cause. So I set up an appropriate hardware environment to duplicate it, and I put a custom build of the server software that instrumented the section of code in question.
The instrumentation was as follows: I added a test for the troublesome error code, and had it call a piece of code to send a UDP packet to a predetermined network address when the error occurred. The UDP packet contained a unique string in it to key on.
I then set a packet sniffing tool on the network. (At the time I was using Microsoft Network Monitor). I positioned it where it would be able to "see" this UDP packet when it was sent as well as all the communication between the cluster servers and the file server.
Most good sniffers have a mode where you can have it capture until it sees a particular piece of traffic, then stop. I turned on that mode and set it to look for that UDP packet my code would send. The goal was to end up with a packet capture of all the file server traffic right before the bug occurred. The very last network packets to and from the system where the UDP packet originated would presumably be a big clue as to what was happening.
I set the "stress test" configuration going and went home for the weekend.
When I got back on Monday, lo and behold I had my data. The sniffer had stopped just as expected after many hours of running and contained a capture. After studying the capture, what I found was that the Server Message Block or SMB (aka CIFS aka SAMBA) connection between our server and the file server was actually timing out at the TCP level due to extreme loading on the server. Because all of Microsoft's stuff is heavily layered, it would percolate back up through the file sharing stack as an "unexpected" error instead of returning a more intelligible error code that said "hey, you lost your connection at the TCP level".
I did a little more research on the TCP settings for Windows, and lo and behold the defaults for the version of Windows we were using (probably NT 4 in that era) were none too generous. It was only allowing for a very small number of failures on the TCP connection and boom, you were dead. Once you lost your SMB connection to the file server, all your file locks were toast and there was no way to recover.
So I ended up writing an appendix to the user manual that explained how to alter the TCP settings in Windows to make your cluster server a bit more tolerant of high load situations. And that was it. The fix to the bug was zero change in code, merely some additional documentation on how to properly configure the OS for use by this product.
What have we learned?
*
*Be prepared to run altered versions of your code to investigate the problem
*Consider using non-traditional tools to solve the problem (sniffers)
*Not all bug fixes require code changes
*Sometimes you can diagnose a bug while at home having a beer
A: Lots of great answers here. One thing that's worked for me in the past is to ask "what can I do to make it totally obvious when this problem has occurred?".
For example, if the problem is a corrupted value in a data structure, try building a consistency-check routine that you can run periodically. Also consider implementing all access to the shared data through a set of functions that log each change.
Or, if the problem is a "random" memory overwrite, use a replacement malloc()/free() implementation that traps writing to "free" memory (like electric fence or dmalloc).
Someone else mentioned automating the process of triggering the bug. This is great if you can do it. Even having a routine that randomly exercises the program might help in these cases.
A: Seriously? I do things in this order.
*
*Go to bed
*Ask a colleague
*Rewrite so the area isn't affected.
*Ask SO
*Raise a support ticket with your 3rd party library vendor.
A: "What do you do in this situation (apart from asking others for help which is not always possible)?"
When is it not possible to ask for help?
There are always others you can turn to for assistance - your coworkers, your boss, friends here at Stack Overflow, etc.
Understanding when to seek help shouldn't be demoralizing!
A: There are a lot of good tips here.
One that I absolutely do not agree with is the concept of changing the code hoping that the bug will go away. First off, you are probably going to introduce new bugs. Second, you can easily change things enough to hide the bug, only to have it resurface again with the next patch.
Memory corruption bugs are especially likely to vanish as magically as they turn up. However, the memory corruption bug isn't fixed; it is only that non-fatal areas of memory are getting trashed.
1) Try a different debugger. For example, I use WinDbg more and more often. When you load a program in a debugger, the memory layout for your application will change slightly. Maybe a different debugger will cause the error to manifest slightly differently.
2) If you resort to changing code without knowing exactly what the problem is, then if the bug goes away, YOU MUST go back and understand why the change fixed the bug. Otherwise, you are probably just hiding the bug.
3) Talk to others about the bug, maybe they have seen different versions of the same problem (i.e. other ways to recreate it)
4) Logging.
A: I've had bugs that took weeks or months before a solution was found, but eventually all bugs do get fixed. Aside from the classical non-debugger bug-tracking techniques like disabling parts of the system until you get a minimal test case, I've used these techniques:
*
*Looking for better debugging tools. A new perspective goes a long way. Xdebug is something I started using in PHP only because of a performance bug that I wasn't making headway on.
*Studying the technology that the bug is located in. This has helped to debug an outlook add-in. It had random errors that made no sense and that google searches turned up zilch about. By researching outlook add-in best practices, COM and MAPI programming, we got a clearer picture of what could go wrong, and thought of new things to try to fix the bugs, which eventually did fix them.
*Trying to exacerbate the problem. If there's an issue that only happens occasionally, I'll try to find ways to make it happen constantly. This has helped to track down errors in web apps under IE and also to narrow down a crashing bug in the flash plugin.
*When all else failed, I've rewritten the subsystem that caused problems from scratch. This may take a few days, or even weeks, but if you're stuck on a bug, and can't resolve it, and customers won't take no for an answer, what else can you do? This doesn't always fix things, but if it doesn't, you usually get a clearer picture of what's going wrong.
I've noticed a few commonalities in these bugs that I get stuck on for weeks:
*
*Asking 3rd parties for help rarely helps, and it's generally not a good idea to wait for someone else to come save the day.
*Almost always the fault is inside some 3rd party closed source technology, especially when using obscure parts. IE had nasty bugs when trying to use client certificates. Flash didn't deal well with randomly generated drawing instructions (some of which were nonsensical). Outlook doesn't like it when you try to change form layout dynamically from code. These days I've learned to respect the "comfort zones" of proprietary tech.
A: I give it more time. I once had a bug (in a personal project) that I just could not figure out. I tried every debugging method I could think of, including Google, with no success. Six months later, I came back and found the bug within an hour or so. It wasn't something simple (something apparently undocumented was going on deep inside Swing), but I just looked at it in a way I hadn't before.
A: I do a number of different things:
*
*throw out all my assumptions and start from scratch. Remember, a bug exists because something which appears correct is actually wrong. Even those lines or functions or classes that you are absolutely certain are correct may be incorrect. Until you can convince yourself of the correctness you can't assume anything is right.
*keep putting in print statements and assert statements to eliminate things and allow me to reform new assumptions.
*step through code in the debugger if the problem is a control flow problem. Don't step over functions. Step in them and go through all the detail of their execution to confirm they are working right. Confirm the arguments and return values.
*If a line or function or class is suspect but I can't prove it in situ, then write a small test case that does what you think the problem construct does. This may locate the problem or give some insights as to where to look next.
*stop for the day. It's amazing what kind of offline processing your brain will do overnight. Often the answer or a key insight will appear the next day while I'm doing something mindless like showering or driving.
A: I've had this problem before; I believe everyone has. I have flat out given up before: the bug was simply impossible to find, yet it kept crashing. When there's some kind of bug in the code, what I do is just sit down and concentrate on every bit of the code, little by little, until I find it. It's hard and it takes patience, but it's all you can do in such a situation.
Hope this helps.
A: I honestly cannot recall a bug that I couldn't fix. It may cause a lot of refactoring, or may take a while, but I've never had one that I can't get rid of. If it takes me more than an hour to track it down then it's almost always something really stupid and small like looking right past that : that should've been a ;, etc.
In python, if I'm using an editor that isn't mine, or maybe it's someone else's code, I use retab! in vim, or paste into something like pastie to check indentation (if I don't have vim available).
If it's not a crasher/deal breaker, then I move on and come back with a fresh pair of eyes.
Oh, and you can never, ever have too much logging.
A: I add as much debug as possible (write to log file, message boxes, etc.), and test.
I don't think this is the worst bug you can find. The worst ones are those you can't reproduce deterministically or in the testing environment.
A: I get a bit demoralized too when unable to solve a bug. Usually when I hit a wall with a bug, I just take note of my findings and stop working on it. I jump on another bug that is easier to solve and then come back to the original one. By doing this, I have a fresh mind and attitude when tackling the bug. Sometimes you have a tendency to overcomplicate things when spending too much time on a bug. Taking a break helps in breaking the wall.
RWendi
A: First off, is it reproducible? That's a HUGE plus if it is. I want bugs to either always happen or never happen... it's the intermittent ones that are the troublesome ones.
And it is going to depend on the problem, but at my shop we'll generally tag-team such a problem figuring that 2 heads (or 3 or 4) is better than 1.
Occasionally the bug won't even be in MY code, but it generally is. There have been issues where a 3rd party library was the culprit or a particular implementation on a particular platform was the cause - those stink.
I'll use anything and everything to at least track it down: debuggers, trace output, whatever.
Typically, if I can isolate it to a class or module I'll write a test harness to duplicate the real world and try to duplicate it there. I generally write my test code first, but sometimes legacy code (or other developer's code) exists that doesn't have tests already.
I generally will talk the design and problem through, out loud with the team and whiteboard anything that isn't clear. Often the solution will bubble to the surface once we talk about it as a group.
That's what I do.
A: I usually try hard to solve it. But if that is not possible within a reasonable window of time, I leave it for a while and let the brain cells solve it while I sleep ;) Sometimes it works...
A: I've considered asking for help on this website called StackOverflow that I've been frequenting lately...
A: This is what I did today...
I debug HW/SW interaction, and it's often the case that logging (instrumentation) changes or hides the bug. Hence tests are performed "at-speed". I call these bugs "roaches" as they run away from any light I can shine on them.
So I have to:
Find the transaction that causes the bug. List the HW interaction via logging (this test passes, but it illustrates the flow).
Instrument before and after the bug to print state changes.
The bug I'm solving now of course is worst case as the HW locks up. The HW includes the CPU so its like being in a well lit room then the power fails and its pitch black.
I have a special backdoor view into memory, but of course this is locked up also. I tried power cycling in the hopes that the memory would stay non-volatile long enough to reenable the backdoor. No such luck. This is possible though.
I very very carefully wrote all the steps I went through to characterize this bug (what works, what fails etc). Sent this to developers with similar HW to verify it just wasn't me or my HW.
I took a few hours break to let this info settle and see if any lightbulbs lit elsewhere.
No replies, this bug is mine to solve...
This HW/SW interaction is a loop that does some setup, then enters a polling loop that reads when the transaction is finished. Many transactions should occur. Which transaction fails? Is it the first one (indicating I can debug the transaction and not some noise in the HW)? Is it always the Nth transaction? What makes the Nth different from the first or the (N-1)th? The SW is single threaded and built to be predictable. No preemption, no interrupts enabled.
This SW has worked before, so what's new? All the HW is new. In this case all the silicon is new as it's an ASIC. Even the embedded CPU is new and customized, so the ISA is new.
So I suspect everything and I'm blind. I'll have to sneak up on this roach.
I enabled just the log that reports how many times the SW polls the HW for completion. In this way the first transaction runs at speed, and I get an idea of how often I touch the HW in a tight polling loop. The test passes. I know it's the Nth transaction, and I recorded the peak number of polls for all transactions (perhaps meaningless data).
After modifying anything, I have to put it back the way it was to verify the bug still exists. After all, the earth has rotated and the solar winds are not as strong ;)
Looked at all the checkins, saw a contractor changed some important setup parameters with no explanation. These (outsourced) people are still under evaluation. This will not help.
Found there was no spinwait in the polling loop. Bad for the loop timeout as without it the timeout depends on CPU speed. Added spinwait, still no happiness.
Limited the number of transactions to see where it fails, somewhere before 1000.
Setup the HW to run slower, still hangs.
Hate to leave anyone reading this hanging too, but this diatribe will have to wait till tomorrow.
A: There is no bug that can't be fixed, since there is no bug that can't be fixed with a total rewrite.
An unfixable bug is just a bug you aren't willing to replace.
A: For memory-related bugs I have found that the memory profiling options of ANTS Profiler have helped me quite a bit in finding bugs.
A: Use more creative methods of tracking the bug down:
*use remote debugging on the machine where it's reproducible
*use profiling tools
*introduce more logging to the app
A: I find detailed logging to be critical for catching these types of bugs that are "intermittent" on your production machine but that you can't reproduce on your dev machine due to the differences in the environment and load on the machine.
So when I hit one of these and can't reproduce / fix it in a reasonable time, I saturate the potential areas of the code with log messages and let it run in production until it happens again - and hopefully I will have enough info in the logs to point me in the right direction.
Other techniques which may or may not be available to you in your production environment is to enable "remote debugging" so you can actually debug the code from a process running on your production box - that's very handy if you can swing it.
A: Going away for a while and then coming back to a problem is one common approach I do and have heard.
How easily the bug can be reproduced is a factor as well, since if the error only occurs in one in a zillion runs of a program, fixing it could be considered a negligible gain, not worth the risk of breaking something else.
There is also the question of nailing down where the bug is. Is it in some configuration, so that it occurs on a server but not on my local XP Pro machine which runs IIS 5.0? Other bugs may involve having to change the resolution of my machine, which makes it annoying to try to reproduce a bug that others have reported.
You left out the "occurs under another O/S" category of bugs, where a web page that is fine in IE and Firefox on a PC may look like crap in Safari on a Mac. Do I get my hands dirty trying to fix a CSS issue, using my machine as a server and the Mac that is a row or two over in the cubicles on the floor, in order to see this issue, or is it so low a priority that it gets swept under the rug? Alternatively, if a bug is on Linux and there aren't any Linux machines near me, what should I do?
I'm sorry to have left you with some questions, but these seem to be difficult questions for me at times.
A: In addition to the debugger, I've also used logging and old fashioned paper and pencil. On occasion I've found really hard bugs, like code that runs fine in debug mode, but breaks in release mode. I've even occasionally rewritten perfectly good code that for whatever reason, doesn't work reliably, figuring that it's better to be reliable than elegant.
I sometimes try to redefine what others term a bug as really being a feature, but that seldom works!
A: I have a bug that shows up every few months on a customer site. It usually happens at 3am and it's not discovered until early the next morning when the customer arrives at their site. And usually when they discover it, they want everything to get working immediately, so our support people generally just reboot the computer. It's been driving me nuts for years. It never happens on my test machine or in the QA lab, only at certain customer sites. Over time, I've
*
*refactored some of the code that I thought was causing it
*added more debugging printouts around where it appears to be crashing
*redirected stdout so that next time I see it I can "kill -3" the process
*given support some new tools to dump out the current state of database locks and the like.
*added diagnostics to make it more obvious when it does happen
It hasn't happened in a few months, and I've got my fingers crossed that I might have fixed it this time, but I'm not counting on it.
A: If it's not critical, don't fix it; you'll just spend too much time!
Keep the bug open. Comment on or work on it when you can. It might get fixed by accident (or by someone else) later on!
A: Sometimes it takes a little lateral thinking, but every bug is fixable. Sometimes you need to leave it and sleep over it, sometimes it's good to ask someone else to have a quick look (they may see something you haven't), but mostly it's about trying different things, calling up on previous experience. It can be frustrating, but the buzz you get when you do fix it, is like no other!
A: If the bug is so subtle that it takes more than three day to find out, then I usually change the design, because the main point of delivering software is not being called three years later to debug it, so the easier the interaction between components, the better.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: What does `$hash{$key} |= {}` do in Perl? I was wrestling with some Perl that uses hash references.
In the end it turned out that my problem was the line:
$myhash{$key} |= {};
That is, "assign $myhash{$key} a reference to an empty hash, unless it already has a value".
Dereferencing this and trying to use it as a hash reference, however, resulted in interpreter errors about using a string as a hash reference.
Changing it to:
if( ! exists $myhash{$key}) {
    $myhash{$key} = {};
}
... made things work.
So I don't have a problem. But I'm curious about what was going on.
Can anyone explain?
A: I think your problem was using "|=" (bitwise-or assignment) instead of "||=" (assign if false).
Note that your new code is not exactly equivalent. The difference is that "$myhash{$key} ||= {}" will replace existing-but-false values with a hash reference, but the new one won't. In practice, this is probably not relevant.
A: Try this:
my %myhash;
$myhash{$key} ||= {};
You can't declare a hash element in a my clause, as far as I know. You declare the hash first, then add the element in.
Edit: I see you've taken out the my. How about trying ||= instead of |=? The former is idiomatic for "lazy" initialisation.
A: Perl has shorthand assignment operators. The ||= operator is often used to set default values for variables due to Perl's feature of having logical operators return the last value evaluated. The problem is that you used |= which is a bitwise or instead of ||= which is a logical or.
As of Perl 5.10 it's better to use //= instead. // is the logical defined-or operator and doesn't fail in the corner case where the current value is defined but false.
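For example, with a value that is defined but false (0 or the empty string), the two behave differently:
use strict;
use warnings;

my %h = ( count => 0 );

# ||= treats 0 as "no value" and would clobber it:
# $h{count} ||= {};   # count would become a hashref

# //= (Perl 5.10+) only assigns when the value is undef:
$h{count} //= {};     # count stays 0
$h{other} //= {};     # other was undef, so it becomes an empty hashref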
A: The reason you're seeing an error about using a string as a hash reference is because you're using the wrong operator. |= means "bitwise-or-assign." In other words,
$foo |= $bar;
is the same as
$foo = $foo | $bar
What's happening in your example is that your new anonymous hash reference is getting stringified, then bitwise-ORed with the value of $myhash{$key}. To confuse matters further, if $myhash{$key} is undefined at the time, the value is the simple stringification of the hash reference, which looks like HASH(0x80fc284). So if you do a cursory inspection of the structure, it may look like a hash reference, but it's not. Here's some useful output via Data::Dumper:
perl -MData::Dumper -le '$hash{foo} |= { }; print Dumper \%hash'
$VAR1 = {
'foo' => 'HASH(0x80fc284)'
};
And here's what you get when you use the correct operator:
perl -MData::Dumper -le '$hash{foo} ||= { }; print Dumper \%hash'
$VAR1 = {
'foo' => {}
};
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Syntax error with different gcc version? I wrote a program in C with Ubuntu Linux and now I need to port it over to a UNIX machine (or what I believe to be a UNIX box). It compiles fine on my Ubuntu with GCC but when I try to compile it with GCC on the UNIX box, it gives this error:
a.c: In function `goUpDir':
a.c:44: parse error before `char'
a.c:45: `newDir' undeclared (first use in this function)
a.c:45: (Each undeclared identifier is reported only once
a.c:45: for each function it appears in.)
a.c: In function `goIntoDir':
a.c:54: parse error before `char'
a.c:57: `newDir' undeclared (first use in this function)
a.c:57: `oldDir' undeclared (first use in this function)
The main problems seem to be the parse error before char (the others are related)
44 char newDir[50] = "";
54 char* oldDir = (char*)get_current_dir_name();
These are just simple C-style strings declarations. Is there a header file that I need to include to get it to work in UNIX?
P.S. what is the command to see what version of unix and which version of gcc you are using? Knowing this will allow me to be more specific in my question.
Thanks
A: If you are compiling pure C (C89), variables must be declared at the beginning of the function, before any statements. I mention this because most people compile their C programs using C++ compilers, which offer some features not normally available in pure C compilers, the most common example being // comment lines.
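To illustrate (the function bodies below are invented; only the placement of the declaration matters, and compiling with gcc -std=c99 would also accept the original form):
#include <stdio.h>

/* Rejected by a strict C89 compiler: a declaration appears after a statement. */
void goUpDir_broken(void)
{
    printf("going up\n");
    char newDir[50] = "";   /* "parse error before 'char'" on an old gcc */
    printf("%s\n", newDir);
}

/* Accepted: all declarations moved to the top of the block. */
void goUpDir_fixed(void)
{
    char newDir[50] = "";
    printf("going up\n");
    printf("%s\n", newDir);
}

int main(void)
{
    goUpDir_fixed();
    return 0;
}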
A: If you want to make sure your code is portable always use the -pedantic or -pedantic-errors.
This will provide warnings/errors where your code strays from standards compliance.
While we are on the subject. You should probably also turn on all the warnings. There are good reasons why the compiler warns you; when moving code from one platform to another these warnings are the source of potential bugs as the new hardware/OS/Compiler may not act the same as your current one.
Also use the correct GCC frontend executable: g++ will treat *.c files like C++ files unless you explicitly tell it not to. So if you are compiling real C, use gcc, not g++.
gcc -pedantic -Wall -Werror *.c
g++ -pedantic -Wall -Werror *.cpp
To help with your specific problem it may be nice to see line 43. Though the error says line 44, a lot of problems are caused by the preceding line having a problem that is not detected by the parser until it gets to the first lexeme on the next line.
A: How did you copy the file over? Is it possible that you inserted something that shouldn't be there?
BTW: Please fix up your use of the code tag in your code - it's currently nearly impossible to read without using "view source" in my browser.
As for you end questions:
uname -a
gcc -v
A: When trying to write portable code, the following compiler flags will tell you about a lot of problems before you get as far as trying to compile the code on the next platform:
-std=c89 -pedantic -Wall
If you only need to target GCC on other platforms, not any other compilers, then you could try:
-std=gnu89 -pedantic -Wall
But I'd guess this might allow GNU extensions on a newer GCC that aren't supported on an older one. I'm not sure.
Note that although it would be nice if -pedantic was guaranteed to warn about all non-standard programs, it isn't. There are still some things it misses.
A: You will have to provide us with more context for the errors...at least one, probably several lines before lines 44 and 54. At a guess, if you give us the code from the start of the function definition before line 44 (perhaps line 40 or so) through to line 54 (or a few lines later - maybe line 60 - then we may be able to help. Something is up with the information before line 44 that is causing it to expect something other than 'char' at line 44; ditto (probably the same problem) at line 54.
A: The information is insufficient. The code above is at least as interesting, and one has to know whether we are talking about ANSI C89 or ANSI C99. The first answer is wrong in that broad sense.
Regards
Friedrich
A: Steve,
The first error message says "parse error before 'char'". What is the code that precedes char? Is it a function declaration? Does it include any user-defined types or anything of the sort?
The most likely source of this error is that something shortly above line 44 uses a type or macro that's declared in a header file... that header file may differ between your own Ubuntu system and the one you're trying to compile on.
A: What UNIX is it? AIX, Ultrix, Minix, Xenix?
GCC has a "--version" flag:
gcc --version
A: To display the GCC version:
gcc --version
It might help to show the function so we could see the surrounding code. That is often the problem with "parse error before" type errors.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I fix connection manager error that causes the package to fail in production? I have created an SSIS package and it works great on my dev machine. But, when I try to run it on the production server, it errors out on me.
Here is the error:
Error: The AcquireConnection method call to the connection manager
"DestinationConnectionOLEDB" failed with error code 0xC0202009.
I have figured out the cause, but am not sure how to get it fixed. The password isn't in the connection string. But I have set the password in the SSIS project. For some reason though, when I deploy and run this on the production server, it won't run since the password isn't part of the connection string.
Is there some setting in the SSIS project that I need to change in order to get this to work right?
Thanks.
A: Disable the password by setting the ProtectionLevel in the package properties to DontSaveSensitive.
I also recommend moving the connection string to a package variable and make an expression on the connection. Enable package configurations.
Then you are free to change the connection and to use integrated security or not without changing the package. You can either put the connection string in the configuration or then provide it on the command line.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Apache/Rails/Passenger Displaying Site Index? I have a Rails app that I have successfully tested with Mongrel and WEBrick. Now I want to test deployment. I set up a VMWare Image using Ubuntu 8.04. I have installed Rails following this method https://help.ubuntu.com/community/RubyOnRails with the exception of using Gems 1.3 instead of 1.2. I have configured and installed Passenger. However, when I visit my site's index (http://some.ip.that.i'm.testing/) I simply get the directory index of my rails site. I should note that since I'm testing I just dumped my app in /var/www.
My Apache2 error.log file shows this and this only:
[Tue Sep 30 15:10:41 2008] [notice] Apache/2.2.8 (Ubuntu) Phusion_Passenger/2.0.3 configured -- resuming normal operations
Any idea what could be causing this problem? It seems Passenger is configured properly, but I'm not sure why my rails app is not displaying and why the site's directory listing is.
Thanks.
A: Two questions:
1) Is Rails running at all on the server? Passenger should start Rails automatically on first request - if you do a ps, do you see it running?
2) Which directory are you seeing - is it your rails directory or the public/ directory? If it's the former, your symlink is likely pointing the wrong place (it should go to public/).
(I've seen this problem before and am trying to remember how I debugged it... these are my first two thoughts.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How do you avoid adding timestamp fields to your tables? I have a question regarding the two additional columns (timeCreated, timeLastUpdated) for each record that we see in many solutions. My question: Is there a better alternative?
Scenario: You have a huge DB (in terms of tables, not records), and then the customer comes and asks you to add "timestamping" to 80% of your tables.
I believe this can be accomplished by using a separate table (TIMESTAMPS). This table would have, in addition to the obvious timestamp column, the table name and the primary key for the table being updated. (I'm assuming here that you use an int as primary key for most of your tables, but the table name would most likely have to be a string).
To picture this suppose this basic scenario. We would have two tables:
PAYMENT :- (your usual records)
TIMESTAMP :- {current timestamp} + {TABLE_UPDATED, id_of_entry_updated, timestamp_type}
Note that in this design you don't need those two "extra" columns in your native payment object (which, by the way, might make it thru your ORM solution) because you are now indexing by TABLE_UPDATED and id_of_entry_updated. In addition, timestamp_type will tell you if the entry is for insertion (e.g "1"), update (e.g "2"), and anything else you may want to add, like "deletion".
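To make this concrete, the shared table could be sketched roughly like this (column types and lengths are placeholders, not a finished design):
CREATE TABLE TIMESTAMPS (
    table_updated        VARCHAR(64)  NOT NULL,  -- name of the table that changed
    id_of_entry_updated  INT          NOT NULL,  -- primary key of the changed row
    timestamp_type       TINYINT      NOT NULL,  -- 1 = insert, 2 = update, 3 = delete, ...
    stamped_at           DATETIME     NOT NULL,  -- when the change happened
    PRIMARY KEY (table_updated, id_of_entry_updated, timestamp_type, stamped_at)
);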
I would like to know what you think about this design. I'm most interested in best practices, what works and scales over time. References, links, blog entries are more than welcome. I know of at least one patent (pending) that tries to address this problem, but it seems details are not public at this time.
Cheers,
Eduardo
A: I think I prefer adding the timestamps to the individual tables. Joining on your timestamp table on a composite key -- one of which is a string -- is going to be slower and if you have a large amount of data it will eventually be a real problem.
Also, a lot of the time when you are looking at timestamps, it's when you're debugging a problem in your application and you'll want the data right there, rather than always having to join against the other table.
A: One nightmare with your design is that every single insert, update or delete would have to hit that table. This can cause major performance and locking issues. It is a bad idea to generalize a table like that (not just for timestamps). It would also be a nightmare to get the data out of.
If your code would break at the GUI level from adding fields you don't want the user to see, you are incorrectly writing the code to your GUI which should specify only the minimum number of columns you need and never select *.
A: While you're at it, also record the user who made the change.
The flaw with the separate-table design (in addition to the join performance highlighted by others) is that it makes the assumption that every table has an identity column for the key. That's not always true.
If you use SQL Server, the new 2008 version supports something they call Change Data Capture that should take away a lot of the pain you're talking about. I think Oracle may have something similar as well.
Update: Apparently Oracle calls it the same thing as SQL Server. Or rather, SQL Server calls it the same thing as Oracle, since Oracle's implementation came first ;)
http://www.oracle.com/technology/oramag/oracle/03-nov/o63tech_bi.html
A: I have used a design where each table to be audited had two tables:
create table NAME (
    name_id int,
    first_name varchar,
    last_name varchar
    -- any other table/column constraints
)
create table NAME_AUDIT (
    name_audit_id int,
    name_id int,
    first_name varchar,
    last_name varchar,
    update_type char(1), -- 'U', 'D', 'C'
    update_date datetime
    -- no table constraints really, outside of name_audit_id as PK
)
A database trigger is created that populates NAME_AUDIT everytime anything is done to NAME. This way you have a record of every single change made to the table, and when. The application has no real knowledge of this, since it is maintained by a database trigger.
It works reasonably well and doesn't require any changes to application code to implement.
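For concreteness, one of the triggers might look roughly like this (MySQL syntax is shown only as an example; the original system may have used a different database, and matching INSERT and DELETE triggers are needed as well):
DELIMITER //
CREATE TRIGGER name_audit_update
AFTER UPDATE ON NAME
FOR EACH ROW
BEGIN
    -- assumes name_audit_id is auto-generated (e.g. AUTO_INCREMENT)
    INSERT INTO NAME_AUDIT (name_id, first_name, last_name, update_type, update_date)
    VALUES (NEW.name_id, NEW.first_name, NEW.last_name, 'U', NOW());
END//
DELIMITER ;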
A: The advantage of the method you suggest is that it gives you the option of adding other fields to your TIMESTAMP table, like tracking the user who made the change. You can also track edits to sensitive fields, for example who repriced this contract?
Logging record changes in a separate file means you can show multiple changes to a record, like:
mm/dd/yy hh:mm:ss Added by XXX
mm/dd/yy hh:mm:ss Field PRICE Changed by XXX,
mm/dd/yy hh:mm:ss Record deleted by XXX
One disadvantage is the extra code that will perform inserts into your TIMESTAMPS table to reflect changes in your main tables.
A: If you set up the time-stamp stuff to run off of triggers, then any action that can set off a trigger (reads?) can be logged. Also there might be some locking advantages.
(Take all that with a grain of salt, I'm no DBA or SQL guru)
A: Yes, I like that design, and use it with some systems. Usually, some variant of:
LogID int
Action varchar(1) -- ADDED (A)/UPDATED (U)/DELETED (D)
UserID varchar(20) -- UserID of culprit :)
Timestamp datetime -- Date/Time
TableName varchar(50) -- Table Name or Stored Procedure ran
UniqueID int -- Unique ID of record acted upon
Notes varchar(1000) -- Other notes Stored Procedure or Application may provide
A: I think the extra joins you will have to perform to get the Timestamps will be a slight performance hit and a pain the neck. Other than that I see no problem.
A: We did exactly what you did. It is great for the object model and the ability to add new stamps and different types of stamps to our model with minimal code. We were also tracking the user that made the change, and a lot of our logic was heavily based on these stamps. It worked very well.
One drawback is reporting, and/or showing a lot of different stamps on screen. If you are doing it the way we did it, it caused a lot of joins. Also, back-end changes were a pain.
A: Our solution is to maintain a "Transaction" table, in addition to our "Session" table. UPDATE, INSERT and DELETE instructions are all managed through a "Transaction" object, and each of these SQL instructions is stored in the "Transaction" table once it has been successfully executed on the database. This "Transaction" table has other fields such as transactionType (I for INSERT, D for DELETE, U for UPDATE), transactionDateTime, etc, and a foreign key "sessionId", telling us finally who sent the instruction. It is even possible, through some code, to identify who did what and when (Gus created the record on Monday, Tim changed the Unit Price on Tuesday, Liz added an extra discount on Thursday, etc).
Pros for this solution are:
*
*you're able to tell "what who and when", and to show it to your users! (you'll need some code to analyse SQL statements)
*if your data is replicated, and replication fails, you can rebuild your database through this table
Cons are
*
*100 000 data updates per month mean 100 000 records in Tbl_Transaction
*Finally, this table tends to be 99% of your database volume
Our choice: all records older than 90 days are automatically deleted every morning
A: Philippe,
Don't simply delete those older than 90 days, move them first to a separate DB or write them to text file, do something to preserve them, just move them out of the main production DB.
If ever comes down to it, most often it is a case of "he with the most documentation wins"!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Reading .resx files programmatically I have an application where the contents of e-mails that get sent are stored in a .resx file.
This is an ASP.Net application, the .resx file lives in /App_GlobalResources
When I need to send an e-mail, I'm reading this using:
HttpContext.GetGlobalResourceObject("MailContents", "EmailID").ToString
Now, I need to use the same mailing method from another project (not a website). The mailing method is in a DLL that all the projects in the solution share.
In this other project, I obviously don't have an HttpContext.
How can I read these resources?
My current approach is, inside the Mailing class, check whether HttpContext.Current is null, and if so, use a separate method.
The separate method I'm looking at right now (after resigning myself to the fact that there's nothing better) is to have the path to the .resx file of the website stored in the app.config file, and somehow read that file.
I started trying with System.Resources.ResourceReader, but it looks like it wants a .resources file, not a .resx one.
A: I think I answered my own question...
There's a ResXResourceReader class. I couldn't find it because it lives in the Windows Forms assembly, which is not included in my current DLL references.
Unfortunately, this will only let me iterate through results, so I'll implement some cute caching (read: memoization) over it...
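Roughly what I have in mind (the resx path would come from app.config, and ResXResourceReader needs a reference to System.Windows.Forms; the class and method names here are just placeholders):
using System.Collections.Generic;
using System.Resources;   // ResXResourceReader (System.Windows.Forms.dll)

static class MailResources
{
    // Hypothetical cache: read the .resx once, then serve lookups from memory.
    private static Dictionary<string, string> _cache;

    public static string Get(string resxPath, string key)
    {
        if (_cache == null)
        {
            _cache = new Dictionary<string, string>();
            using (var reader = new ResXResourceReader(resxPath))
            {
                foreach (System.Collections.DictionaryEntry entry in reader)
                    _cache[(string)entry.Key] = (string)entry.Value;
            }
        }
        return _cache[key];
    }
}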
A:
The mailing method is in a DLL that all the projects in the solution share.
In this other project, I obviously don't have an HttpContext.
Yes you do. HttpContext is available in any dll called from a website, as long as the class library references System.Web.
A: Something is not right with your placement of resources.
Either your resource belongs to the website and should be sent to the mailing method by parameter.
Or, your resource belongs to the mailing API dll and should be kept there (.dll projects can have .resx files too). Then the mailing method should have no problem finding the resource, especially if you embed it in the dll itself.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Beginner Help for Developing Web Pages for Smart Phones I have just started authoring web pages for use on "smart phones". I need to target Blackberry, WinCE, iPhone, etc. What resources or books would you recommend for someone with ample web and software development experience but no experience developing UI for these devices? What emulation kits would you recommend, and how accurately do they represent the real thing?
Edit: To clarify, I have a web application built in ASP.Net. I want a limited subset of the functionality available in the app to be available to mobile devices. I am writing a separate set of pages to accomplish this. I am starting with two, simple chunks of functionality. In the future I believe I might get requirements for more functionality to be ported.
A: Check out WURFL - the Wireless Universal Resource File
The WURFL is an XML configuration file which contains information about capabilities and features of many mobile devices.
The main scope of the file is to collect as much information as we can about all the existing mobile devices that access WAP pages so that developers will be able to build better applications and better services for the users.
Also Checkout the Wireless FAQ
A: Telling us the language you are using/know would be very helpful.
From an emulator standpoint, there are good ones out there, but honestly NOTHING beats having the actual device. Yes, it is expensive, but the user experience on a mobile device is much different than any emulator can illustrate. If you are serious about this, get a device or two for testing!
A: Documentation on developing web pages for iPhone can be found at Apple's iPhone Dev Center
You can test your site with the iPhone Simulator to get an idea of how it will look on an actual iPhone. Note: You need a Mac to run the iPhone Simulator.
If you are serious, you really need to test on actual devices.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154998",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What is the easiest way to find out how much memory an object uses in .NET? What is the easiest way to find out how much memory an object uses in .NET?
Preferably without having to resort to a third party tool. Marshal.SizeOf or the sizeof operator look useful but only work with a restricted range of types.
Some related posts:
*
*Object Memory Analysis in .NET
*Does an empty array in .NET use any space?
A: Asked and answered here: Determine how much memory a class uses?
The quick summary is that if you don't want to use a tool, you need to use the .NET Profiling API
The Profiling API is amazingly powerful, but I don't think it would qualify as "easy" by any stretch of the imagination, so I would strongly recommend using a memory profiling tool - there are some free ones that are OK, and some not-too-expensive commercial ones (JetBrains dotTrace in particular) that are really good.
A: you could also do something like this:
long startMem = GC.GetTotalMemory(true);
YourClass c = new YourClass();
long endMem = GC.GetTotalMemory(true);
long usedMem = endMem - startMem;
A: Because of .NET's garbage-collected nature, it's somewhat difficult to measure how much memory is really being used. If you want to measure the size of a class instance, for example, does it include the memory used by instances that your instance points to?
If the answer is no, add up the size of all of the fields:
Using reflection, iterate through all members of the class; use Marshal.SizeOf(member.Type) for anything that typeof(ValueType).IsAssignableFrom(member.Type) - this measures primitive types and structs, all of which reside in the class's instance allocation. Every reference type (anything that isn't assignable to a valuetype) will take IntPtr.Size. There are a disgusting number of exceptions to this, but it might work for you.
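A rough sketch of that shallow approach, as a hedged illustration only; it inherits every caveat just described, ignores object headers and padding, and is an estimate rather than the true CLR size:
using System;
using System.Reflection;
using System.Runtime.InteropServices;

static class RoughSize
{
    // Shallow estimate: value-type fields by marshaled size, reference fields as pointer size.
    public static int OfInstanceFields(Type t)
    {
        int total = 0;
        foreach (FieldInfo f in t.GetFields(BindingFlags.Instance |
                                            BindingFlags.Public |
                                            BindingFlags.NonPublic))
        {
            if (f.FieldType.IsValueType)
                total += Marshal.SizeOf(f.FieldType);   // primitives and structs
                // note: Marshal.SizeOf throws for some value types (the "exceptions" mentioned above)
            else
                total += IntPtr.Size;                   // just the reference itself
        }
        return total;
    }
}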
If the answer is yes, you have a serious problem. Multiple things can reference a single instance, so if 'a' and 'b' both point to 'c', then RecursiveSizeOf(a) + RecursiveSizeOf(b) would be larger than SimpleSizeOf(a) + SimpleSizeOf(b) + SimpleSizeOf(c).
Even worse, measuring recursively can lead you down circular references, or lead you to objects you don't intend to measure - if a class is referencing a mutex, for example, that mutex may point to the thread that owns it. That thread may point to all of its local variables, which point to some C# framework structures... you may end up measuring your entire program.
It might help to understand that a garbage-collected language like C# is somewhat "fuzzy" (in a completely non-technical sense) in the way it draws distinctions between objects and units of memory. This is a lot of what Marshal mitigates - marshaling rules ensure that the struct you're marshaling (or measuring) has a well-defined memory layout, and therefore a well-defined size. In which case, you should be using well-defined, marshalable structs if you intend on using Marshal.SizeOf().
Another option is serialization. This won't tell you specifically how much memory is in use, but it will give you a relative idea of the size of the object. But again, in order to serialize things, they have to have a well-defined layout (and therefore a well-defined size) - you accomplish this by making the class appropriately serializable.
I can post implementation examples if any of the above options appeal to you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: SQLAlchemy and kinterbasdb in separate apps under mod_wsgi I'm trying to develop an app using turbogears and sqlalchemy.
There is already an existing app using kinterbasdb directly under mod_wsgi on the same server.
When both apps are used, neither seems to recognize that kinterbasdb is already initialized
Is there something non-obvious I am missing about using sqlalchemy and kinterbasdb in separate apps? In order to make sure only one instance of kinterbasdb gets initialized and both apps use that instance, does anyone have suggestions?
A: I thought I posted my solution already...
Modifying both apps to run under WSGIApplicationGroup ${GLOBAL} in their httpd conf file
and patching sqlalchemy.databases.firebird.py to check if self.dbapi.initialized is True
before calling self.dbapi.init(... was the only way I could manage to get this scenario up and running.
The SQLAlchemy 0.4.7 patch:
diff -Naur SQLAlchemy-0.4.7/lib/sqlalchemy/databases/firebird.py SQLAlchemy-0.4.7.new/lib/sqlalchemy/databases/firebird.py
--- SQLAlchemy-0.4.7/lib/sqlalchemy/databases/firebird.py 2008-07-26 12:43:52.000000000 -0400
+++ SQLAlchemy-0.4.7.new/lib/sqlalchemy/databases/firebird.py 2008-10-01 10:51:22.000000000 -0400
@@ -291,7 +291,8 @@
global _initialized_kb
if not _initialized_kb and self.dbapi is not None:
_initialized_kb = True
- self.dbapi.init(type_conv=type_conv, concurrency_level=concurrency_level)
+ if not self.dbapi.initialized:
+ self.dbapi.init(type_conv=type_conv, concurrency_level=concurrency_level)
return ([], opts)
def create_execution_context(self, *args, **kwargs):
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How can I have two different drop down menus depend on the same parent menu? I have two drop down menus that I want populated with identical data depending on what is selected on the parent drop down menu. Right now, I am using a javascript library that populates one child drop down menu based on a parent, but I need to have two drop down menus populated simultaneously.
This javascript library contains a function called PrintOptions that is supposed to populate the dropdown menu when something is selected from the parent menu. I have tried calling the same function twice one for each drop down menu, but it doesn't seem to be working.
This is where I got the library: http://www.javascripttoolbox.com/lib/dynamicoptionlist/documentation.php
A: Reading the document you list, it seems there's a section that allows you to specify multiple child components from the parent:
To create the DynamicOptionList object, pass the names of the fields that are dependent on each other, with the parent field first.
Create the object by passing field names
var dol = new DynamicOptionList("Field1","Child1","Child2");
Or create an empty object and then pass the field names
var dol = new DynamicOptionList();
dol.addDependentFields("Field1","Child1","Child2");
Instead of trying to call the function more than once, just add the 2nd child component's name to the DynamicOptionList constructor, as in the first example above. As I read the docs that means whatever happens to Child1 will also happen to Child2 when Field1 is selected.
A: In the event handler you have for your parent drop down, you are probably having some other code populate that child drop down. Simply add the code again, but instead reference the second drop down. That's the rough approach. There are some details and style guidance I'm leaving out, but that'll get the job done.
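If you go the hand-rolled route instead of the library, the sketch below shows the idea: one change handler filling both child selects (all element IDs and the data object are invented for illustration):
// Hypothetical data: parent value -> child options shared by both child menus
var childOptions = {
  fruit:  ['Apple', 'Banana'],
  veggie: ['Carrot', 'Pea']
};

function populateBothChildren() {
  var parent = document.getElementById('parentMenu');
  var children = [document.getElementById('childMenu1'),
                  document.getElementById('childMenu2')];
  var options = childOptions[parent.value] || [];

  for (var c = 0; c < children.length; c++) {
    children[c].options.length = 0;                 // clear existing options
    for (var i = 0; i < options.length; i++) {
      children[c].options[i] = new Option(options[i], options[i]);
    }
  }
}

// Wire it to the parent menu's change event
document.getElementById('parentMenu').onchange = populateBothChildren;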
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Novell client and windows SSO Does the novell gina install a specific security provider that can be used via SSPI? Does it have to be called out specifically, or is SPNEGO good enough? Will that support single sign on if the novell gina is installed on the remote server?
A: I do not know SSPI or the low-level details, but I think the way the Novell GINA works is less about single sign-on and more about passing through the credentials.
That is, when a physical user (as opposed to a programmatic user) logs in, the Novell GINA takes the credentials and passes them on to the Windows GINA, and any other GINA in line to receive them, until all are done and everything is logged in.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: mysql timestamp column Is it possible to define a timestamp column in a MySQL table that will automatically be updated every time a field in the same row is modified? Ideally this column should initially be set to the time a row was inserted.
Cheers,
Don
A: You can use the timestamp column as other posters mentioned. Here is the SQL you can use to add the column in:
ALTER TABLE `table1` ADD `lastUpdated` TIMESTAMP ON UPDATE CURRENT_TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ;
This adds a column called 'lastUpdated' with a default value of the current date/time. When that record is updated (lets say 5 minutes later) that timestamp will automatically update to the current time.
A: That is the default functionality of the timestamp column type. However, note that the format of this type is yyyymmddhhmmss (all digits, no colons or other separation).
EDIT: The above comment about the format is only true for versions of MySQL < 4.1... Later versions format it like a DateTime
A: This is what I have observed (MySql 5.7.11) -
The first TIMESTAMP column in the table gets current timestamp as the default value. So, if you do an INSERT or UPDATE without supplying a value, the column will get the current timestamp.
Any subsequent TIMESTAMP columns should have a default value explicitly defined. If you have two TIMESTAMP columns and if you don't specify a default value for the second column, you will get this error while trying to create the table -
ERROR 1067 (42000): Invalid default value for 'COLUMN_NAME'
A: A MySQL TIMESTAMP column is only set to the creation or update time if its default value is defined that way, e.g. ALTER TABLE some_table ADD when TIMESTAMP DEFAULT CURRENT_TIMESTAMP.
Otherwise it works just like a DATETIME field, except that the value is stored relative to 1970-01-01 UTC, so it represents an absolute point in time rather than depending on a specific timezone the way DATETIME does.
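As a minimal sketch (the table and column names are made up), a self-updating audit column could be declared like this; note that older MySQL versions allow only one TIMESTAMP column per table to reference CURRENT_TIMESTAMP:
-- Sketch only: names are placeholders.
CREATE TABLE widget (
    id           INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name         VARCHAR(100) NOT NULL,
    last_updated TIMESTAMP NOT NULL
                 DEFAULT CURRENT_TIMESTAMP
                 ON UPDATE CURRENT_TIMESTAMP
);

-- The column is set on INSERT and refreshed automatically on UPDATE:
INSERT INTO widget (name) VALUES ('first');
UPDATE widget SET name = 'renamed' WHERE id = 1;   -- last_updated changes too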
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: Development PC: AMD vs Intel and 32-bit vs 64-bit I am looking to purchase a new development PC. My budget is not more than $1,000 USD (including monitor). I am open to laptop (desktop replacement type) or the traditional desktop PC would do just fine.
My primary development environment will be Microsoft: Visual Studio 2008 (and support of older Visual Studio 6 code as well), SQL Server 2005 and 2008 as well as legacy support of SQL Server 2000, and Microsoft Office 2003, with the potential to install 2007 but support as far back as Office 2000. The software I will write and support will target Windows XP mostly, but some Vista. I am going to have to assume there are 64-bit implementations out there to install to.
My first confusion begins with choosing AMD or Intel. My concern is that there is a compatibility issue with building software using Visual Studio in an AMD environment. I don't have any evidence; it's just a concern that hopefully someone will clear up for me.
Last, I am confused about 32-bit and 64-bit installations. Should I stick with the least common denominator (32-bit) even though 64-bit is steadily gaining ground? I am aware that 64-bit operating systems will address over 4 GB of RAM, which I like because I would like to set up as many virtual machines for test environments as possible, and may have many active at once.
I am not looking for the dream machine, just a machine with a monitor and the best processor for about $1000 that will allow me to write software for the majority of machines out there.
A: There are some instruction level differences between AMD and Intel but nothing that Visual Studio is going to uncover. Perhaps if you were developing with Sun Studio you might run into them (I have!).
I would go for a 64 bit machine and run 32 bit VMs on it if you feel the need to do testing in that environment. The common feeling around here seems to be that the highest level of Vista you can afford is the platform on which to develop.
A: With 32-bit XP and Vista, you might not have access to much more than 3GB of RAM, possibly quite a bit less (my home machine could only access 2.25GB with 32-bit Vista). If you can afford a machine with 4GB of RAM, I would recommend using Vista 64 (Home Premium or Ultimate).
Depending on what kind of development you are doing hard drive speed can make a big difference in compile times. Get 10,000 RPM hard drives if possible for a desktop machine and 7200 RPM drives for a laptop, but they do cost more.
A: AMD smoothed out their incompatibilities long ago. Your decision on that should simply be which brand you feel has better performance/features. I would definitely go with 64 bit because you can always emulate 32 bit for VM's and apps and so on. The ability to use extra memory will pay dividends later when you're just spending $100 for another 2-4 gigs instead of another $1000 to finally buy a 64 bit machine.
A: Given you're interested in running multiple VM's RAM is going to be key, as is the CPU.
Currently Intel are ahead on performance for dollar (especially if you are interested in overclocking) however AMD's options are acceptable and the batch of phenoms seem to be better at true quad core applications than the Intel quads.
The quality and speed of the RAM is largely unimportant. Generic DDRII 800mhz will be fine, just make sure you've got 4 or 8 GB of it.
In terms of operating systems, XP 64-bit is fairly wanting on driver support even though it's been around for a while. Vista 64-bit, however, has almost all the driver support of Vista 32-bit. While this means that some of your older devices won't work, you should have much less hassle with Vista than XP. In terms of versioning, I recommend Premium, however you'd need to look into the added feature list to determine if it's worth it or not (to me, it's not worth it at all).
In terms of issues that may occur due to specific processors? I agree with stimms that while there may be slight differences, it's not something you'd encounter in VS development. However my experience in that arena is by no means extensive.
A: If you look for a not-too-expensive dev machine, AMD should be better.
AMD 780G/790G mainboards have on-board integrated video that outperforms most NVIDIA/Intel integrated-video mainboards at a reasonable price. AMD Phenom CPU performance is not as good as Intel's, but considering you can get an AMD 3-core CPU at the price for which Intel offers you only 2 cores, it's a good deal.
Intel's CPUs have great overclocking potential. However, as a developer, I suppose you'd like a solid-as-a-rock machine and would not like to risk getting a blue screen of death while compiling your code.
Hardware virtualization is important if you like to play with x64 virtual machines for testing. Most modern AMD CPUs have hardware virtualization built in, while Intel cuts this feature from its low-end CPUs.
A: Get 4 GB of RAM minimum, which means you need a system that can handle more than 3 GB (so a 64-bit OS). RAM is cheap, and the IDE along with all the other software (debugging, testing, database client, etc.) will need some of it if you want something fast.
A: For the CPU, you can get a quad core for less than $190; with a board that can handle it (about $125) you have a strong start. You do not need to have the latest video card...
A: A lot of pre-built PCs can fit under your budget (under $720). See this example:
*
*Vista Home Premium 64-bit
*320 gig hard drive
*3 gig rams
*GeForce 7100 graphics
*22" Acer LCD included
*Core 2 Duo E4700
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: brokenImageSkin dimensions in Flex I am trying to implement a custom "broken image" icon to appear if I cannot load an image. To accomplish this, I used the brokenImageSkin parameter, but it renders the image at its true resolution, which ends up cutting off the image if the size of the control is constrained.
<mx:Image brokenImageSkin="@Embed('/assets/placeholder.png')" source="http://www.example.com/bad_url.png"/>
How can I scale the brokenImageSkin to a custom width and height?
A: I see that in this example,
http://blog.flexexamples.com/2008/03/02/setting-a-custom-broken-image-skin-for-the-image-control-in-flex/#more-538, there is an IO error event where you could set the width and height of the image.
A: *
*Make a new class that extends ProgrammaticSkin. Embed your image using the [Embed] meta keyword and associate it with a variable of type Class (see the documentation for this)
*Override updateDisplayList().
*Call graphics.clear() in this function.
*Call graphics.beginBitmapFill and then apply the appropriate dimensions and scaling based on the unscaledWidth and unscaledHeight passed in.
This is way more complicated but it's the only way I know of to get more control out of a custom skinning operation like that.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How does one get started with procedural generation? Procedural generation has been brought into the spotlight recently (by Spore, MMOs, etc), and it seems like an interesting/powerful programming technique.
My questions are these:
*
*Do you know of any mid-sized projects that utilize procedural generation techniques?
*What language/class of languages is best for procedural generation?
*Can you use procedural generation for "serious" code? (i.e., not a game)
A: Procedural generation is used heavily in the demoscene to create complex graphics in a small executable. Will Wright even said that he was inspired by the demoscene while making Spore. That may be your best place to start.
http://en.wikipedia.org/wiki/Demoscene
A: There is an excellent book about the topic:
http://www.amazon.com/Texturing-Modeling-Third-Procedural-Approach/dp/1558608486
It is biased toward non-real-time visual effects and animation generation, but the theory and ideas are usable outside of these fields, I suppose.
It may also be worth mentioning that there is a professional software package that implements a complete procedural workflow, SideFX's Houdini. You can use it to invent and prototype procedural solutions to problems that you can later translate to code.
While it's a rather expensive package, it has a free evaluation licence, which can be used as a very nice educational and/or engineering tool.
A: If you want an example of a world generator simulating plate tectonics, erosion, rain shadow, etc., take a look at:
https://github.com/ftomassetti/lands
On top of that there is also a civilizations evolution simulator:
https://github.com/ftomassetti/civs
A blog full on interesting resource is:
dungeonleague.com/
It is abandoned now but you should read all its posts
A: I'm not an expert on this, but I can try and contribute a few answers:
*
*NetHack and its brethren are open source and rely heavily on procedural generation of levels (maps). Link to the downloads of it.
If you are more interested in landscape/texture/cloud generation, I'd recommend you search Gamasutra and GameDev which have quite a few articles on those subjects.
*AFAIK I don't think there is much difference between languages. Most of the code you see will be in C/CPP because it's still very much the official language of Game Developers, but you can use anything you want...
*Well it depends if you have a project that can benefit from such technology. I saw procedural generation used in simulators for the army (which can be considered a game, although they are not very playable :)).
And a small note - my definition of procedural generation is anything generating a lot of data from a small amount of rules or patterns and lots of randomness; your results may vary :)
A: (More than 10 years later ...)
Procedural generation only means that code is used to generate the data instead of it being hand made. For example if you want to generate a forest with various trees you are not going to design each tree by hand, thus coding is more efficient to generate the variations. It could be the generation of the tree graphics, size, structure, placement ...
In general there is some kind of iteration over a few rules; in addition to that you can add some randomness and logic of your own, and combine all of these techniques ...
Anything somewhat chaotic but not too chaotic can yield interesting results.
Here are a few notable techniques:
*
*https://en.wikipedia.org/wiki/Cellular_automaton
*https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life (notable cellular automaton)
*Book: https://en.wikipedia.org/wiki/A_New_Kind_of_Science
*Fractals: https://en.wikipedia.org/wiki/Fractal
*Plants: https://en.wikipedia.org/wiki/L-system
*Terrain or maps, textures, smoke, clouds, turbulence: https://en.wikipedia.org/wiki/Perlin_noise
*Terrain or maps: https://en.wikipedia.org/wiki/Voronoi_diagram
A few games famous for their procedural generation:
*
*https://en.wikipedia.org/wiki/Rogue_(video_game) and roguelikes in general
*https://en.wikipedia.org/wiki/Elite_(video_game) random seed ...
*https://en.wikipedia.org/wiki/.kkrieger The entire game uses only 97,280 bytes of disk space.
Video tutorial about Procedural Landmass Generation.
Conference on procedural content generation for games, has a lot of videos on the topic: everythingprocedural
Have fun.
A: Procedural Content Generation wiki:
http://pcg.wikidot.com/
if what you want isn't on there, then add it ;)
A: the most important thing is to analyze how roads, cities, blocks and buildings are structured. find out what all eg buildings have in common. look at photos, maps, plans and reality. if you do that you will be one step ahead of people who consider city building as a merely computer-technological matter.
next you should develop solutions on how to create that geometry in tiny, distinct steps. you have to define rules that make up a believable city. if you are into 3d modelling you have to rethink a lot of what you have learned so the computer can follow your instructions in any situation.
in order to not lose track you should set up a lot of operators that are only responsible for little parts of the whole process. that makes debugging, expanding and improving your system much easier. in the next step you should link those operators and check the results by changing parameters.
i have seen too many "city generators" that mainly consist of random-shaped boxes with some window textures on them : (
A: Procedural content generation is now all written for the GPU, so you'll need to know a shader language. That means GLSL or HLSL. These are languages tied to OpenGL and DirectX respectively.
While my personal preference is for Dx11 / HLSL due to speed, an easier learning curve and Frank D Luna, OpenGL is supported on more platforms.
You should also check out WebGL if you want to jump right into writing shaders without having to spend the (considerable) time it takes to setup an OpenGL / DirectX game engine.
Procedural content starts with noise.
So you'll need to learn about Perlin noise (and its successor Simplex noise).
Shadertoy is a superb reference for learning about shader programming. I would recommend you come to it once you've given shader coding a go yourself, as the code there is not for the mathematically squeamish, but that is how procedural content is done.
Shadertoy was created by a procedural genius, Inigo Quilez, a product of the demo scene who works at Pixar. He has some youtube videos (great example) of live coding sessions and I can also recommend these.
A: You should probably start with a little theory and simple examples such as the midpoint displacement algorithm. You should also learn a little about Perlin Noise if you are interested in generating graphics. I used this to get me started with my final year project on procedural generation.
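As a tiny, self-contained illustration of that idea (my own sketch, not from the project above), here is 1D midpoint displacement in Python: split an interval, nudge the midpoint by a random offset, and halve the roughness at each recursion level.
import random

def midpoint_displacement(left, right, roughness, depth):
    """Return a list of heights forming a 1D fractal terrain profile."""
    if depth == 0:
        return [left, right]
    mid = (left + right) / 2.0 + random.uniform(-roughness, roughness)
    # Recurse on both halves with reduced roughness, joining at the midpoint.
    first = midpoint_displacement(left, mid, roughness / 2.0, depth - 1)
    second = midpoint_displacement(mid, right, roughness / 2.0, depth - 1)
    return first[:-1] + second  # avoid duplicating the shared midpoint

if __name__ == "__main__":
    profile = midpoint_displacement(0.0, 0.0, 10.0, depth=6)
    print(" ".join("%.2f" % h for h in profile))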
Fractals are closely related to procedural generation.
Terragen and SpeedTree will show you some amazing possibilities of procedural generation.
Procedural generation is a technique that can be used in any language (it is definitely not restricted to procedural languages such as C, as it can be used in OO languages such as Java, and Logic languages such as Prolog). A good understanding of recursion in any language will strengthen your grasp of Procedural Generation.
As for 'serious' or non-game code, procedural generation techniques have been used to:
*
*simulate the growth of cities in order to plan for traffic management
*to simulate the growth of blood vessels
*SpeedTree is used in movies and architectural presentations
A: Answering "Do you know of any mid-sized projects that utilize procedural generation techniques?" - the Wilder World project is a Web3, virtual world being created that utilizes procedural generation techniques to create the NFT assets and I believe the virtual world which will be created using UnReal Engine. Visit wilderworld.com for the main site or zine.wilderworld.com for more info.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "154"
} |
Q: Simple Debounce Routine Do you have a simple debounce routine handy to deal with a single switch input?
This is a simple bare metal system without any OS.
I would like to avoid a looping construct with a specific count, as the processor speed might fluctuate.
A: There's no single simple solution that works for all types of buttons. No matter what someone here tells you to use, you'll have to try it with your hardware, and see how well it works. And look at the signals on a scope, to make sure you really know what's going on. Rich B's link to the pdf looks like a good place to start.
A: I have used a majority vote method to debounce an input. I set up a simple three state shift register type of data structure, and shift each sample and take the best two out of three as the "correct" value. This is obviously a function of either your interrupt handler, or a poller, depending on what method is used to actually read the hardware.
But, the best advice is to ask your friendly hardware designer to "latch" the value and allow you to clear this value when you get to it.
A: I think you could learn a lot about this here: http://www.ganssle.com/debouncing.pdf
Your best bet is always to do this in hardware if possible, but there are some thoughts on software in there as well.
Simple example code from TFA:
#define CHECK_MSEC 5 // Read hardware every 5 msec
#define PRESS_MSEC 10 // Stable time before registering pressed
#define RELEASE_MSEC 100 // Stable time before registering released
// This function reads the key state from the hardware.
extern bool_t RawKeyPressed();
// This holds the debounced state of the key.
bool_t DebouncedKeyPress = false;
// Service routine called every CHECK_MSEC to
// debounce both edges
void DebounceSwitch1(bool_t *Key_changed, bool_t *Key_pressed)
{
    static uint8_t Count = RELEASE_MSEC / CHECK_MSEC;
    bool_t RawState;
    *Key_changed = false;
    *Key_pressed = DebouncedKeyPress;
    RawState = RawKeyPressed();
    if (RawState == DebouncedKeyPress) {
        // Set the timer which allows a change from current state.
        if (DebouncedKeyPress) Count = RELEASE_MSEC / CHECK_MSEC;
        else                   Count = PRESS_MSEC / CHECK_MSEC;
    } else {
        // Key has changed - wait for new state to become stable.
        if (--Count == 0) {
            // Timer expired - accept the change.
            DebouncedKeyPress = RawState;
            *Key_changed = true;
            *Key_pressed = DebouncedKeyPress;
            // And reset the timer.
            if (DebouncedKeyPress) Count = RELEASE_MSEC / CHECK_MSEC;
            else                   Count = PRESS_MSEC / CHECK_MSEC;
        }
    }
}
A: Simplest solutions are often the best, and I've found that simply reading the switch state only every N milliseconds (between 10 and 50, depending on the switches) has always worked for me.
I've stripped out broken and complex debounce routines and replaced them with a simple slow poll, and the results have always been good enough that way.
To implement it, you'll need a simple periodic timer interrupt on your system (assuming no RTOS support), but if you're used to programming it at the bare metal, that shouldn't be difficult to arrange.
Note that this simple approach adds a delay to detection of the change in state. If a switch takes T ms to reach a new steady state, and it's polled every X ms, then the worst case delay for detecting the press is T+X ms. Your polling interval X must be larger than the worst-case bounce time T.
A: To debounce, you want to ignore any switch up that lasts under a certain threshold. You can set a hardware timer on switch up, or use a flag set via periodic interrupt.
A: If you can get away with it, the best solution in hardware is to have the switch have two distinct states with no state between. That is, use a SPDT switch, with each pole feeding either the R or S lines of a flip/flop. Wired that way, the output of the flip/flop should be debounced.
A: What I usually do is have three or so variables the width of the input register. Every poll, usually from an interrupt, shift the values up one to make way for the new sample. Then I have a debounced variable formed by setting the logical-and of the samples, and clearing the inverse logical-or. i.e. (untested, from memory)
input3 = input2;
input2 = input1;
input1 = (*PORTA);
debounced |= input1 & input2 & input3;
debounced &= (input1 | input2 | input3);
Here's an example:
debounced has xxxx (where 'x' is "whatever")
input1 = 0110,
input2 = 1100,
input3 = 0100
With the information above,
We need to switch only bit 2 to 1, and bit 0 to 0. The rest are still "bouncing".
debounced |= (0100); //set only bit 2
debounced &= (1110); //clear only bit 0
The result is that now debounced = x1x0
A: The algorithm from ganssle.com could have a bug in it. I have the impression the following line
static uint8_t Count = RELEASE_MSEC / CHECK_MSEC;
should read
static uint8_t Count = PRESS_MSEC / CHECK_MSEC;
in order to debounce correctly the initial press.
A: At the hardware level the basic debouncing routine has to take into account the following segments of a physical key's (or switch's) behavior:
Key sitting quietly->finger touches key and begins pushing down->key reaches bottom of travel and finger holds it there->finger begins releasing key and spring pushes key back up->finger releases key and key vibrates a bit until it quiesces
All of these stages involve 2 pieces of metal scraping and rubbing and bumping against each other, jiggling the voltage up and down from 0 to maximum over periods of milliseconds, so there is electrical noise every step of the way:
(1) Noise while the key is not being touched, caused by environmental issues like humidity, vibration, temperature changes, etc. causing voltage changes in the key contacts
(2) Noise caused as the key is being pressed down
(3) Noise as the key is being held down
(4) Noise as the key is being released
(5) Noise as the key vibrates after being released
Here's the algorithm by which we basically guess that the key is being pressed by a person:
read the state of the key, which can be "might be pressed", "definitely is pressed", "definitely is not pressed", "might not be pressed" (we're never really sure)
loop while key "might be" pressed (if dealing with hardware, this is a voltage sample greater than some threshold value), until is is "definitely not" pressed (lower than the threshold voltage)
(this is initialization, waiting for noise to quiesce, definition of "might be" and "definitely not" is dependent on specific application)
loop while key is "definitely not" pressed, until key "might be" pressed
when key "might be" pressed, begin looping and sampling the state of the key, and keep track of how long the key "might be" pressed
- if the key goes back to "might not be" or "definitely is not" pressed state before a certain amount of time, restart the procedure
- at a certain time (number of milliseconds) that you have chosen (usually through experimenting with different values) you decide that the sample value is no longer caused by noise, but is very likely caused by the key actually being held down by a human finger and you return the value "pressed"
while (keyvalue == maybepressed) {
    // loop - wait for transition to notpressed
    sample keyvalue here;
    maybe require it to be "notpressed" a number of times before you assume
    it's really notpressed;
}
while (keyvalue == notpressed) {
    // loop - wait for transition to maybepressed
    sample keyvalue;
    again, maybe require a "maybepressed" value a number of times before you
    transition;
}
while (keyvalue == maybepressed) {
    presstime += 1;
    if (presstime > required_presstime) return pressed_affirmative;
}
return pressed_negative
A: Use integration and you'll be a happy camper. It works well for all switches.
Just increment a counter when the input reads high and decrement it when it reads low, and when the integrator reaches a limit (upper or lower), call the state (high or low).
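A minimal sketch of that integrator in C, assuming it is called from a fixed-rate timer tick and that read_switch_raw() (a made-up name) returns the raw, bouncy pin level:
#include <stdbool.h>
#include <stdint.h>

#define INTEGRATOR_MAX 10          /* ticks of agreement needed to change state */

extern bool read_switch_raw(void); /* assumed: returns the raw (bouncy) pin level */

static uint8_t integrator = 0;     /* 0 .. INTEGRATOR_MAX */
static bool debounced_state = false;

/* Call this from a periodic timer interrupt (e.g. every 1-5 ms). */
void debounce_tick(void)
{
    if (read_switch_raw()) {
        if (integrator < INTEGRATOR_MAX)
            integrator++;
    } else {
        if (integrator > 0)
            integrator--;
    }

    /* Only commit a new state when the integrator hits a rail. */
    if (integrator == 0)
        debounced_state = false;
    else if (integrator == INTEGRATOR_MAX)
        debounced_state = true;
}

bool switch_is_pressed(void)
{
    return debounced_state;
}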
A: The whole concept is described well by Jack Ganssle. His solution, posted as an answer to the original question, is very good, but I find part of it not so clear as to how it works.
There are three main ways to deal with switch bouncing:
- using polling
- using interrupts
- a combination of interrupts and polling.
As I deal mostly with embedded systems that are low-power or tend to be low-power, the answer from Keith to integrate is very reasonable to me.
If you work with an SPST push-button type switch with one mechanically stable position, then I would prefer the solution which uses a combination of interrupt and polling.
Like this: use a GPIO input interrupt to detect the first edge (falling or rising, in the opposite direction of the un-actuated switch state). In the GPIO input ISR, set a flag about the detection.
Use another interrupt for measuring time (i.e. a general purpose timer or SysTick) to count milliseconds.
On every SysTick increment (1 ms):
If buttonFlag is true, then call a function to poll the state of the push button (polling).
Do this for N consecutive SysTick increments, then clear the flag.
When you poll the button state, use whatever logic you wish to decide the button state: M consecutive readings the same, an average above Z, a count of one state, the last X readings the same, etc.
I think this approach should benefit from responsiveness on interrupt and lower power usage as there will be no button polling after N SysTick increments. There are no complicated interrupt modifications between various interrupts so the program code should be fairly simple and readable.
Take into consideration things like: do you need to "release" button, do you need to detect long press and do you need action on button release. I don't like button action on button release, but some solutions work that way.
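For illustration only, here is a rough C sketch of the interrupt-plus-polling scheme described above; the ISR names, read_button_raw() and the tick counts are placeholders rather than any specific vendor API:
#include <stdbool.h>
#include <stdint.h>

#define POLL_TICKS      20          /* poll for 20 SysTick (1 ms) increments        */
#define MATCH_REQUIRED  16          /* readings that must agree to call it a press  */

extern bool read_button_raw(void);  /* placeholder: raw GPIO read                   */

static volatile bool buttonFlag = false;
static uint8_t ticksLeft = 0;
static uint8_t matches = 0;

/* GPIO edge interrupt: just note that something happened and arm the poller. */
void button_edge_isr(void)
{
    buttonFlag = true;
    ticksLeft = POLL_TICKS;
    matches = 0;
}

/* 1 ms SysTick (or general-purpose timer) interrupt. */
void systick_isr(void)
{
    if (!buttonFlag)
        return;                     /* nothing pending: no polling, low power       */

    if (read_button_raw())
        matches++;

    if (--ticksLeft == 0) {
        if (matches >= MATCH_REQUIRED) {
            /* Debounced press confirmed - signal the application here. */
        }
        buttonFlag = false;         /* disarm until the next edge interrupt         */
    }
}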
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Stored Procedure Default Value I'm a newbie when it comes to SQL. When creating a stored procedure with parameters as such:
@executed bit,
@failure bit,
@success bit,
@testID int,
@time float = 0,
@name varchar(200) = '',
@description varchar(200) = '',
@executionDateTime nvarchar(max) = '',
@message nvarchar(max) = ''
Is this the correct form for default values in T-SQL? I have tried using NULL instead of ''.
When I attempted to execute this procedure through C# I get an error referring to the fact that description is expected but not provided. When calling it like this:
cmd.Parameters["@description"].Value = result.Description;
result.Description is null. Should this not default to NULL (well '' in my case right now) in SQL?
Here's the calling command:
cmd.CommandText = "EXEC [dbo].insert_test_result @executed,
@failure, @success, @testID, @time, @name,
@description, @executionDateTime, @message;";
...
cmd.Parameters.Add("@description", SqlDbType.VarChar);
cmd.Parameters.Add("@executionDateTime", SqlDbType.VarChar);
cmd.Parameters.Add("@message", SqlDbType.VarChar);
cmd.Parameters["@name"].Value = result.Name;
cmd.Parameters["@description"].Value = result.Description;
...
try
{
connection.Open();
cmd.ExecuteNonQuery();
}
...
finally
{
connection.Close();
}
A: A better approach would be to change the CommandText to just the name of the SP, and the CommandType to StoredProcedure - then the parameters will work much more cleanly:
cmd.CommandText = "insert_test_result";
cmd.CommandType = CommandType.StoredProcedure;
This also allows simpler passing by name, rather than position.
In general, ADO.NET wants DBNull.Value, not null. I just use a handy method that loops over my args and replaces any nulls with DBNull.Value - as simple as (wrapped):
foreach (IDataParameter param in command.Parameters)
{
if (param.Value == null) param.Value = DBNull.Value;
}
However! Specifying a value with null is different to letting it assume the default value. If you want it to use the default, don't include the parameter in the command.
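For example, here is a rough sketch (names and values are placeholders; it assumes the System.Data and System.Data.SqlClient namespaces) of calling the procedure so that @description and the other optional parameters fall back to their T-SQL defaults:
// Sketch only: the connection string and values are placeholders.
using (var connection = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("insert_test_result", connection))
{
    cmd.CommandType = CommandType.StoredProcedure;

    cmd.Parameters.AddWithValue("@executed", true);
    cmd.Parameters.AddWithValue("@failure", false);
    cmd.Parameters.AddWithValue("@success", true);
    cmd.Parameters.AddWithValue("@testID", 42);
    // @time, @name, @description, @executionDateTime and @message are simply
    // not added, so SQL Server applies the defaults declared in the procedure.

    connection.Open();
    cmd.ExecuteNonQuery();
}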
A: If you aren't using named parameters, MSSQL takes the parameters in the order received (by index). I think there's an option for this on the cmd object.
so your SQL should be more like
EXEC [dbo].insert_test_result
@executed = @executed,
@failure = @failure,
@success = @success,
@testID = @testID,
@time = @time,
@name = @name,
@description = @description,
@executionDateTime = @executionDateTime,
@message = @message;
A:
cmd.CommandText = "insert_test_result";
cmd.Parameters.Add(new SQLParameter("@description", result.Description));
cmd.Parameters.Add(new SQLParameter("@message", result.Message));
try
{
connection.Open();
cmd.ExecuteNonQuery();
}
finally
{
connection.Close();
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Separator attribute not working in SolPartMenu on DotNetNuke skin.ascx I can get rootmenuitemlefthtml and rootmenuitemrighthtml to emit but not separator. Tried CDATA wrapping and setting SeparatorCssClass. I just want pipes between root menu items.
<dnn:SOLPARTMENU runat="server" id="dnnSOLPARTMENU" Separator="<![CDATA[|]]>" SeparatorCssClass="MainMenu_SeparatorCSS"
usearrows="false"
userootbreadcrumbarrow="false" usesubmenubreadcrumbarrow="false"
rootmenuitemlefthtml=" <span> " rootmenuitemrighthtml=" </span>" rootmenuitemcssclass="rootmenuitem"
rootmenuitemselectedcssclass="rootmenuitemselected" rootmenuitembreadcrumbcssclass="rootmenuitembreadcrumb"
submenucssclass="submenu" submenuitemselectedcssclass="submenuitemselected" submenuitembreadcrumbcssclass="submenuitembreadcrumb"
CSSNodeSelectedRoot="rootmenuitembreadcrumb" CSSNodeSelectedSub="submenuitembreadcrumb"
MouseOverAction="False" MouseOutHideDelay="0"
delaysubmenuload="true" level="Root" />
A: While not a direct answer - you might want to shift to the DotNetNuke menu rather than using SolPart. SolPart is no longer officially supported and development work on this menu ceased almost two years ago. Jon Henning, the author of SolPart, wrote the DotNetNuke menu from the ground up and tried to address many of the shortcomings in the original SolPart menu.
A: Check this for Solpartmenu:
<dnn:SOLPARTMENU runat="server" ID="dnnHorizontalSolpart" ProviderName="SolpartMenuNavigationProvider"
ClearDefaults="True" MenuBarCssClass="Hmain_dnnmenu_bar" MenuContainerCssClass="Hmain_dnnmenu_container"
MenuItemCssClass="Hmain_dnnmenu_rootitem" MenuItemSelCssClass="Hmain_dnnmenu_itemhoverRoot"
MenuIconCssClass="Hmain_dnnmenu_icon" MenuBreakCssClass="Hmain_dnnmenu_break"
SubMenuCssClass="Hmain_dnnmenu_submenu" SubMenuItemSelectedCssClass="Hmain_dnnmenu_subselected"
CSSNodeSelectedRoot="Hmain_dnnmenu_rootselected" MenuEffectsMouseOverDisplay="None"
Separator="|" SeparatorCssClass="Hmain_dnnmenu_separator" UseArrows="False" UseRootBreadCrumbArrow="False" />
.Hmain_dnnmenu_separator
{
background-color: Transparent;
color: #C55203;
font-family: Arial;
font-size: 11px;
}
.Hmain_dnnmenu_bar
{
cursor: pointer;
cursor: hand;
height: 30px;
background-color: Transparent;
}
.Hmain_dnnmenu_container
{
background-color: Transparent;
}
.Hmain_dnnmenu_rootitem
{
background-color: #DBDBDB;
cursor: pointer;
cursor: hand;
color: #C55203;
font-family: Arial;
font-size: 11px;
_height: 30px;
_padding: 5px;
vertical-align: middle;
text-decoration:underline;
}
.Hmain_dnnmenu_rootitem td
{
font-family: Arial;
font-size: 11px;
_height: 30px;
_padding: 5px;
vertical-align: middle;
}
.Hmain_dnnmenu_itemhoverRoot
{
background-color: #DBDBDB;
color: #C55203;
cursor: pointer;
cursor: hand;
font-family: Arial;
font-size: 11px;
_height: 30px;
_padding: 5px;
text-decoration:underline;
vertical-align: middle;
}
.Hmain_dnnmenu_icon
{
cursor: pointer;
cursor: hand;
}
.Hmain_dnnmenu_submenu
{
background-color: #DBDBDB;
border: solid 1px #B7B7B7;
cursor: pointer;
cursor: hand;
color: #C55203;
font-family: Arial;
font-size: 11px;
text-align: left;
text-decoration:none;
z-index: 1000;
}
.Hmain_dnnmenu_submenu td
{
border-bottom: solid 1px #B7B7B7;
font-family: Arial;
font-size: 11px;
text-align: left;
text-decoration:none;
}
.Hmain_dnnmenu_break
{
font-family: Arial;
font-size: 11px;
}
.Hmain_dnnmenu_rootselected
{
color: #C55203;
cursor: pointer;
cursor: hand;
font-size: 11px;
font-weight: lighter;
font-style: normal;
font-family: Arial;
white-space: nowrap;
vertical-align: middle;
text-decoration: None;
}
.Hmain_dnnmenu_submenu_itemhover
{
background-color: #C55203;
color: #FFFFFF;
font-family: Arial;
font-size: 11px;
}
.Hmain_dnnmenu_subselected
{
background-color: #C55203;
color: #FFFFFF;
font-family: Arial;
font-size: 11px;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Flash rendering: optimisation tips and tricks I'm about to push out a website soon, so I've gotten to the last stages. Time to optimize the baby! The website performs pretty well overall, with an average framerate of 32fps. But at some heavy animation parts it likes to drop a couple of frames, down to about 22fps. Which is not that horrible. But I'm tweaking it as much as possible to keep it running at the highest speed possible.
I might have overlooked some tips and tricks to make this baby run even smoother.
So hereby I open this thread to share whatever ninja tricks ever helped you in the past. A couple of mine which I can think of right now:
Sequencing the animation:
Let as few transitions as possible happen at the same time; try to make it act more like a transformer, one thing at a time. Besides gaining speed in the animation, you probably end up gaining more flow.
Keep the animating objects as small as possible:
So Flash has to calculate fewer pixels at the same time.
cacheAsBitmap = true:
Those big movieclips, vector shapes being moved around, are probably moved more quickly when they are cached as a bitmap. Might take up some space in your memory, but anything for higher framerates ;)
Destroy everything you do not use:
Set those unused movieclips to null and then remove them as children. So your garbage collector takes care of them.
A: Another consideration is what tween engine you're using. If you're using the one that comes with Flash you'll probably gain some performance by switching to something like TweenLite (there are multiple other good ones too).
Keep in mind that cacheAsBitmap can be very dangerous. If you're scaling, rotating or updating the clip itself (such as modifying the alpha of something inside it) flash will have to generate a new snapshot, which will slow everything down. As long as you're moving the clips on x and y only it's good to always have on (if you need to rotate, turn it off and then back on when you're done). Note also that if you're using filters cacheAsBitmap is always automatically on -> may be slow.
A: Keep things simple,
Flash renders graphics as vectors (and very well). The more complex an object is, the more time it will take to render.
Also try to track the graphics display tree. Every child of the stage has to be rendered separately, so if you have 1000 children this can make things really slow.
A solution is to render once in a single object, like a display handler. You can lose your 'objectyness' but you make it up in faster rendering. Keep this in mind when making tiles or many little 'additions' to a sprite.
A: Alpha transparency can be intensive to render...
From what I've heard, the glow filter will wreak havoc if you are animating it.
Use visible = false instead of alpha = 0 where possible.
A: Only use cacheAsBitmap = true: if you are not animating the transformation of the Sprite/MovieClip (e.g. scale/rotation etc), otherwise it will actually make it slower.
Where possible use PNGs instead of vector shapes.
A: You might want to make use of the scrollRect property of movieclips/sprites etc... It basically acts as a mask but with the bonus that you can scroll the masked clip by some offset.
A: Large chunks of text, if they don't change, can often be replaced with a bitmap (or transparent PNG). This makes the content a pain to maintain, but it can have significant impact on performance. (Note: this mainly applies for embedded fonts, especially curvy ones like Asian fonts, as such fonts are rendered as vector shapes. Device fonts are rendered by the OS and incur much less overhead.)
A: Profile, profile, profile.
If scripts are running slow, start tracing out timing reports to figure out which class, which function, which loop, which statement is making you slow. If graphical effects are slowing you down, trace out detailed time FPS reports and start tweaking. Does it speed up when you remove this or that layer? Or when you change that clip not to be transparent? And so on. Isolate what's slow before trying to fix it.
Just poking around and refactoring rarely gets you any real performance improvements.
A: Flash (8 - ActionScript 2 or below) will render a clip even if its visibility is set to false - to stop it being rendered you need to move it off the 'visible' screen (i.e. x = -2000, provided the clip's width is less than 2000).
A: Bitmap caching only gives you real returns when the DisplayObject you're caching has complex inner parts but tends to sit there without changing - such as a pulldown menu, which internally has all kinds of skinnable elements, but only needs to be re-rendered when it's opened or closed. Be careful of turning on caching just because objects are large.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How can you inherit from a sealed class using reflection in .Net? Before you start firing at me, I'm NOT looking to do this, but someone in another post said it was possible. How is it possible? I've never heard of inheriting from anything using reflection. But I've seen some strange things...
A: Without virtual functions to override, there's not much point in subclassing a sealed class.
If you try write a sealed class with a virtual function in it, you get the following compiler error:
// error CS0549: 'Seal.GetName()' is a new virtual member in sealed class 'Seal'
However, you can get virtual functions into sealed classes by declaring them in a base class (like this),
public abstract class Animal
{
private readonly string m_name;
public virtual string GetName() { return m_name; }
public Animal( string name )
{ m_name = name; }
}
public sealed class Seal : Animal
{
public Seal( string name ) : base(name) {}
}
The problem still remains though, I can't see how you could sneak past the compiler to let you declare a subclass. I tried using IronRuby (ruby is the hackiest of all the hackety languages) but even it wouldn't let me.
The 'sealed' part is embedded in the MSIL, so I'd guess that the CLR itself actually enforces this. You'd have to load the code, disassemble it, remove the 'sealed' bit, then reassemble it, and load the new version.
A: I'm sorry for posting incorrect assumptions in the other thread; I failed to recall correctly. The following example, using Reflection.Emit, shows how to try to derive from another class, but it fails at runtime throwing a TypeLoadException.
sealed class Sealed
{
public int x;
public int y;
}
class Program
{
static void Main(string[] args)
{
AppDomain ad = Thread.GetDomain();
AssemblyName an = new AssemblyName();
an.Name = "MyAssembly";
AssemblyBuilder ab = ad.DefineDynamicAssembly(an, AssemblyBuilderAccess.Run);
ModuleBuilder mb = ab.DefineDynamicModule("MyModule");
TypeBuilder tb = mb.DefineType("MyType", TypeAttributes.Class, typeof(Sealed));
// Following throws TypeLoadException: Could not load type 'MyType' from
// assembly 'MyAssembly' because the parent type is sealed.
Type t = tb.CreateType();
}
}
A: It MIGHT (would increase the size if I could). According to the guys on freenode, it would involve modifying the byte code, using Reflection.Emit, and handing the JIT a new set of byte code.
Not that I KNOW how ... it was just what they thought.
A: The other poster may have been thinking more along the lines of Reflection.Emit rather than the more usual read-only Reflection APIs.
However, still isn't possible (at least according to this article). But it is certainly possible to screw somethings up with Reflection.Emit that are not trapped until you try to actually execute the emitted code.
A: Create a new class called GenericKeyValueBase
put this in it
public class GenericKeyValueBase<TKey,TValue>
{
public TKey Key;
public TValue Value;
public GenericKeyValueBase(TKey ItemKey, TValue ItemValue)
{
Key = ItemKey;
Value = ItemValue;
}
}
And inherit from that plus you can add additional extension methods for Add/Remove (AddAt and RemoveAt) to your new derived class (and make it a collection/dictionary) if you are feeling really cool.
A simple implementation example where you would use a normal System.Collections.Generic.KeyValuePair as a base, but instead can use the above code:
class GenericCookieItem<TCookieKey, TCookieValue> : GenericKeyValueBase<TCookieKey,TCookieValue>
{
public GenericCookieItem(TCookieKey KeyValue, TCookieValue ItemValue) : base(KeyValue, ItemValue)
{
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Microsoft Excel mangles Diacritics in .csv files? I am programmatically exporting data (using PHP 5.2) into a .csv test file.
Example data: Numéro 1 (note the accented e).
The data is utf-8 (no prepended BOM).
When I open this file in MS Excel it displays as Numéro 1.
I am able to open this in a text editor (UltraEdit) which displays it correctly. UE reports the character is decimal 233.
How can I export text data in a .csv file so that MS Excel will correctly render it, preferably without forcing the use of the import wizard, or non-default wizard settings?
A: UTF-8 doesn't work for me in Office 2007 without any service pack, with or without BOM
(U+FEFF, i.e. 0xEF,0xBB,0xBF; neither works).
Installing SP3 makes UTF-8 work when the 0xEF,0xBB,0xBF BOM is prepended.
UTF-16 works when encoding in Python using "utf-16-le" with a 0xFF 0xFE
BOM prepended, and using tab as the separator.
I had to manually write out the BOM, and then use "utf-16-le" rather than "utf-16",
otherwise each encode() prepended the BOM to every row written out, which
appeared as garbage on the first column of the second line and after.
Can't tell whether UTF-16 would work without any SP installed, since
I can't go back now. Sigh.
This is on Windows; dunno about Office for Mac.
For both working cases, the import works when launching a download directly from the
browser and the text import wizard doesn't intervene; it works like you would expect.
A: Prepending a BOM (\uFEFF) worked for me (Excel 2007), in that Excel recognised the file as UTF-8. Otherwise, saving it and using the import wizard works, but is less ideal.
A: As Fregal said \uFEFF is the way to go.
<%@LANGUAGE="JAVASCRIPT" CODEPAGE="65001"%>
<%
Response.Clear();
Response.ContentType = "text/csv";
Response.Charset = "utf-8";
Response.AddHeader("Content-Disposition", "attachment; filename=excelTest.csv");
Response.Write("\uFEFF");
// csv text here
%>
A: Below is the PHP code I use in my project when sending Microsoft Excel to user:
/**
* Export an array as downladable Excel CSV
* @param array $header
* @param array $data
* @param string $filename
*/
function toCSV($header, $data, $filename) {
$sep = "\t";
$eol = "\n";
$csv = count($header) ? '"'. implode('"'.$sep.'"', $header).'"'.$eol : '';
foreach($data as $line) {
$csv .= '"'. implode('"'.$sep.'"', $line).'"'.$eol;
}
$encoded_csv = mb_convert_encoding($csv, 'UTF-16LE', 'UTF-8');
header('Content-Description: File Transfer');
header('Content-Type: application/vnd.ms-excel');
header('Content-Disposition: attachment; filename="'.$filename.'.csv"');
header('Content-Transfer-Encoding: binary');
header('Expires: 0');
header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
header('Pragma: public');
header('Content-Length: '. strlen($encoded_csv));
echo chr(255) . chr(254) . $encoded_csv;
exit;
}
UPDATED: Filename improvement and BUG fix correct length calculation. Thanks to TRiG and @ivanhoe011
A: You can save an html file with the extension 'xls' and accents will work (pre 2007 at least).
Example: save this (using Save As utf8 in Notepad) as test.xls:
<html>
<meta http-equiv="Content-Type" content="text/html" charset="utf-8" />
<table>
<tr>
<th>id</th>
<th>name</th>
</tr>
<tr>
<td>4</td>
<td>Hélène</td>
</tr>
</table>
</html>
A: I've also noticed that the question was "answered" some time ago but I don't understand the stories that say you can't open a utf8-encoded csv file successfully in Excel without using the text wizard.
My reproducible experience:
Type Old MacDonald had a farm,ÈÌÉÍØ into Notepad, hit Enter, then Save As (using the UTF-8 option).
Using Python to show what's actually in there:
>>> open('oldmac.csv', 'rb').read()
'\xef\xbb\xbfOld MacDonald had a farm,\xc3\x88\xc3\x8c\xc3\x89\xc3\x8d\xc3\x98\r\n'
>>> ^Z
Good. Notepad has put a BOM at the front.
Now go into Windows Explorer, double click on the file name, or right click and use "Open with ...", and up pops Excel (2003) with display as expected.
A: A correctly formatted UTF8 file can have a Byte Order Mark as its first three octets. These are the hex values 0xEF, 0xBB, 0xBF. These octets serve to mark the file as UTF8 (since they are not relevant as "byte order" information). If this BOM does not exist, the consumer/reader is left to infer the encoding type of the text. Readers that are not UTF8 capable will read the bytes as some other encoding such as Windows-1252 and display the characters ï»¿ at the start of the file.
There is a known bug where Excel, upon opening UTF8 CSV files via file association, assumes that they are in a single-byte encoding, disregarding the presence of the UTF8 BOM. This can not be fixed by any system default codepage or language setting. The BOM will not clue in Excel - it just won't work. (A minority report claims that the BOM sometimes triggers the "Import Text" wizard.) This bug appears to exist in Excel 2003 and earlier. Most reports (amidst the answers here) say that this is fixed in Excel 2007 and newer.
Note that you can always* correctly open UTF8 CSV files in Excel using the "Import Text" wizard, which allows you to specify the encoding of the file you're opening. Of course this is much less convenient.
Readers of this answer are most likely in a situation where they don't particularly support Excel < 2007, but are sending raw UTF8 text to Excel, which is misinterpreting it and sprinkling your text with Ã© and other similar Windows-1252 characters. Adding the UTF8 BOM is probably your best and quickest fix.
If you are stuck with users on older Excels, and Excel is the only consumer of your CSVs, you can work around this by exporting UTF16 instead of UTF8. Excel 2000 and 2003 will double-click-open these correctly. (Some other text editors can have issues with UTF16, so you may have to weigh your options carefully.)
* Except when you can't, (at least) Excel 2011 for Mac's Import Wizard does not actually always work with all encodings, regardless of what you tell it. </anecdotal-evidence> :)
A: The answer for all combinations of Excel versions (2003 + 2007) and file types
Most other answers here concern their Excel version only and will not necessarily help you, because their answer just might not be true for your version of Excel.
For example, adding the BOM character introduces problems with automatic column separator recognition, but not with every Excel version.
There are 3 variables that determines if it works in most Excel versions:
*
*Encoding
*BOM character presence
*Cell separator
Somebody stoic at SAP tried every combination and reported the outcome. End result? Use UTF16le with BOM and tab character as separator to have it work in most Excel versions.
You don't believe me? I wouldn't either, but read here and weep: http://wiki.sdn.sap.com/wiki/display/ABAP/CSV+tests+of+encoding+and+column+separator
A: Select UTF-8 encoding when importing. If you use Office 2007, this is where you choose it:
right after you open the file.
A: Echo the UTF-8 BOM before outputting the CSV data. This fixes all character issues on Windows but doesn't work for Mac.
echo "\xEF\xBB\xBF";
It works for me because I need to generate a file which will be used on Windows PCs only.
A: This is just of a question of character encodings. It looks like you're exporting your data as UTF-8: é in UTF-8 is the two-byte sequence 0xC3 0xA9, which when interpreted in Windows-1252 is é. When you import your data into Excel, make sure to tell it that the character encoding you're using is UTF-8.
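As a quick illustration of that mix-up, here is a small Python sketch (not part of the original answer):
# 'é' encoded as UTF-8 is the two-byte sequence 0xC3 0xA9 ...
utf8_bytes = "é".encode("utf-8")
print(utf8_bytes)                          # b'\xc3\xa9'
# ... which, when read back as Windows-1252, turns into 'é'
print(utf8_bytes.decode("windows-1252"))   # é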
A: The CSV format is implemented as ASCII, not unicode, in Excel, thus mangling the diacritics. We experienced the same issue which is how I tracked down that the official CSV standard was defined as being ASCII-based in Excel.
A: Excel 2007 properly reads UTF-8 with BOM (EF BB BF) encoded csv.
Excel 2003 (and maybe earlier) reads UTF-16LE with BOM (FF FE), but with TABs instead of commas or semicolons.
A: I can only get CSV to parse properly in Excel 2007 as tab-separated little-endian UTF-16 starting with the proper byte order mark.
A: Writing a BOM to the output CSV file actually did work for me in Django:
def handlePersoonListExport(request):
# Retrieve a query_set
...
template = loader.get_template("export.csv")
context = Context({
'data': query_set,
})
response = HttpResponse()
response['Content-Disposition'] = 'attachment; filename=export.csv'
response['Content-Type'] = 'text/csv; charset=utf-8'
response.write("\xEF\xBB\xBF")
response.write(template.render(context))
return response
For more info http://crashcoursing.blogspot.com/2011/05/exporting-csv-with-special-characters.html Thanks guys!
A: Another solution I found was just to encode the result as Windows Code Page 1252 (Windows-1252 or CP1252). This would be done, for example by setting Content-Type appropriately to something like text/csv; charset=Windows-1252 and setting the character encoding of the response stream similarly.
A: Note that including the UTF-8 BOM is not necessarily a good idea - Mac versions of Excel ignore it and will actually display the BOM as ASCII… three nasty characters at the start of the first field in your spreadsheet…
A: Check the encoding in which you are generating the file; to make Excel display the file correctly you must use the system default codepage.
Which language are you using? If it's .NET, you only need to use Encoding.Default while generating the file.
A: With Ruby 1.8.7 I encode every field to UTF-16 and discard BOM (maybe).
The following code is extracted from active_scaffold_export:
<%
require 'fastercsv'
fcsv_options = {
:row_sep => "\n",
:col_sep => params[:delimiter],
:force_quotes => @export_config.force_quotes,
:headers => @export_columns.collect { |column| format_export_column_header_name(column) }
}
data = FasterCSV.generate(fcsv_options) do |csv|
csv << fcsv_options[:headers] unless params[:skip_header] == 'true'
@records.each do |record|
csv << @export_columns.collect { |column|
# Convert to UTF-16 discarding the BOM, required for Excel (> 2003 ?)
Iconv.conv('UTF-16', 'UTF-8', get_export_column_value(record, column))[2..-1]
}
end
end
-%><%= data -%>
The important line is:
Iconv.conv('UTF-16', 'UTF-8', get_export_column_value(record, column))[2..-1]
A: I've found a way to solve the problem. This is a nasty hack but it works: open the doc with Open Office, then save it into any excel format; the resulting .xls or .xlsx will display the accentuated characters.
A: If you have legacy code in vb.net like I have, the following code worked for me:
Response.Clear()
Response.ClearHeaders()
Response.ContentType = "text/csv"
Response.Expires = 0
Response.AddHeader("Content-Disposition", "attachment; filename=export.csv;")
Using sw As StreamWriter = New StreamWriter(Context.Response.OutputStream, System.Text.Encoding.Unicode)
sw.Write(csv)
sw.Close()
End Using
Response.End()
A: open the file csv with notepad++
clic on Encode, select convert to UTF-8 (not convert to UTF-8(without BOM))
Save
open by double clic with excel
Hope that help
Christophe GRISON
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "200"
} |
Q: Make DocumentBuilder.parse ignore DTD references When I parse my xml file (variable f) in this method, I get an error
C:\Documents and Settings\joe\Desktop\aicpcudev\OnlineModule\map.dtd (The system cannot find the path specified)
I know I do not have the dtd, nor do I need it. How can I parse this File object into a Document object while ignoring DTD reference errors?
private static Document getDoc(File f, String docId) throws Exception{
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
Document doc = db.parse(f);
return doc;
}
A: A similar approach to the one suggested by @anjanb
builder.setEntityResolver(new EntityResolver() {
@Override
public InputSource resolveEntity(String publicId, String systemId)
throws SAXException, IOException {
if (systemId.contains("foo.dtd")) {
return new InputSource(new StringReader(""));
} else {
return null;
}
}
});
I found that simply returning an empty InputSource worked just as well?
A: I found an issue where the DTD file was in the jar file along with the XML. I solved the issue based on the examples here, as follows:
DocumentBuilder db = dbf.newDocumentBuilder();
db.setEntityResolver(new EntityResolver() {
public InputSource resolveEntity(String publicId, String systemId) throws SAXException, IOException {
if (systemId.contains("doc.dtd")) {
InputStream dtdStream = MyClass.class
.getResourceAsStream("/my/package/doc.dtd");
return new InputSource(dtdStream);
} else {
return null;
}
}
});
A: Source XML (With DTD)
<!DOCTYPE MYSERVICE SYSTEM "./MYSERVICE.DTD">
<MYACCSERVICE>
<REQ_PAYLOAD>
<ACCOUNT>1234567890</ACCOUNT>
<BRANCH>001</BRANCH>
<CURRENCY>USD</CURRENCY>
<TRANS_REFERENCE>201611100000777</TRANS_REFERENCE>
</REQ_PAYLOAD>
</MYACCSERVICE>
Java DOM implementation for accepting above XML as String and removing DTD declaration
public Document removeDTDFromXML(String payload) throws Exception {
System.out.println("### Payload received in XMlDTDRemover: " + payload);
Document doc = null;
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
try {
dbf.setValidating(false);
dbf.setNamespaceAware(true);
dbf.setFeature("http://xml.org/sax/features/namespaces", false);
dbf.setFeature("http://xml.org/sax/features/validation", false);
dbf.setFeature("http://apache.org/xml/features/nonvalidating/load-dtd-grammar", false);
dbf.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
DocumentBuilder db = dbf.newDocumentBuilder();
InputSource is = new InputSource();
is.setCharacterStream(new StringReader(payload));
doc = db.parse(is);
} catch (ParserConfigurationException e) {
System.out.println("Parse Error: " + e.getMessage());
return null;
} catch (SAXException e) {
System.out.println("SAX Error: " + e.getMessage());
return null;
} catch (IOException e) {
System.out.println("IO Error: " + e.getMessage());
return null;
}
return doc;
}
Destination XML (Without DTD)
<MYACCSERVICE>
<REQ_PAYLOAD>
<ACCOUNT>1234567890</ACCOUNT>
<BRANCH>001</BRANCH>
<CURRENCY>USD</CURRENCY>
<TRANS_REFERENCE>201611100000777</TRANS_REFERENCE>
</REQ_PAYLOAD>
</MYACCSERVICE>
A:
I know I do not have the dtd, nor do I need it.
I am suspicious of this statement; does your document contain any entity references? If so, you definitely need the DTD.
Anyway, the usual way of preventing this from happening is using an XML catalog to define a local path for "map.dtd".
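For illustration, a minimal OASIS catalog might look like the sketch below (the local path is an assumption); it can be wired in with the Apache XML Commons resolver or, on newer JDKs, the javax.xml.catalog API.
<?xml version="1.0"?>
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <!-- redirect the DTD reference to a local copy instead of the missing path -->
  <system systemId="map.dtd" uri="file:///C:/local/dtds/map.dtd"/>
</catalog>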
A: Here's another user who got the same issue: http://forums.sun.com/thread.jspa?threadID=284209&forumID=34
user ddssot on that post says
myDocumentBuilder.setEntityResolver(new EntityResolver() {
public InputSource resolveEntity(java.lang.String publicId, java.lang.String systemId)
throws SAXException, java.io.IOException
{
if (publicId.equals("--myDTDpublicID--"))
// this deactivates the open office DTD
return new InputSource(new ByteArrayInputStream("<?xml version='1.0' encoding='UTF-8'?>".getBytes()));
else return null;
}
});
The user further mentions "As you can see, when the parser hits the DTD, the entity resolver is called. I recognize my DTD with its specific ID and return an empty XML doc instead of the real DTD, stopping all validation..."
Hope this helps.
A: Try setting features on the DocumentBuilderFactory:
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
dbf.setValidating(false);
dbf.setNamespaceAware(true);
dbf.setFeature("http://xml.org/sax/features/namespaces", false);
dbf.setFeature("http://xml.org/sax/features/validation", false);
dbf.setFeature("http://apache.org/xml/features/nonvalidating/load-dtd-grammar", false);
dbf.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
DocumentBuilder db = dbf.newDocumentBuilder();
...
Ultimately, I think the options are specific to the parser implementation. Here is some documentation for Xerces2 if that helps.
A: I'm working with sonarqube, and sonarlint for eclipse showed me Untrusted XML should be parsed without resolving external data (squid:S2755)
I managed to solve it using:
factory = DocumentBuilderFactory.newInstance();
factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
// If you can't completely disable DTDs, then at least do the following:
// Xerces 1 - http://xerces.apache.org/xerces-j/features.html#external-general-entities
// Xerces 2 - http://xerces.apache.org/xerces2-j/features.html#external-general-entities
// JDK7+ - http://xml.org/sax/features/external-general-entities
factory.setFeature("http://xml.org/sax/features/external-general-entities", false);
// Xerces 1 - http://xerces.apache.org/xerces-j/features.html#external-parameter-entities
// Xerces 2 - http://xerces.apache.org/xerces2-j/features.html#external-parameter-entities
// JDK7+ - http://xml.org/sax/features/external-parameter-entities
factory.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
// Disable external DTDs as well
factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
// and these as well, per Timothy Morgan's 2014 paper: "XML Schema, DTD, and Entity Attacks"
factory.setXIncludeAware(false);
factory.setExpandEntityReferences(false);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "90"
} |
Q: How come classes in subfolders in my App_Code folder are not being found correctly? I am getting the following error when I put class files in subfolders of my App_Code folder:
error CS0246: The type or namespace name 'MyClassName' could not be found (are you missing a using directive or an assembly reference?)
This class is not in a namespace at all. Any ideas?
A: You need to add codeSubDirectories to your compilation element in web.config
<configuration>
<system.web>
<compilation>
<codeSubDirectories>
<add directoryName="View"/>
</codeSubDirectories>
</compilation>
</system.web>
</configuration>
A: Check the Build Action property of the file. It should be set to "Compile".
A: Is it possible that you haven't set the folder as an application in IIS (or your web server)? If not, then the App_Code that gets used is that from the parent folder (or the next application upwards).
Ensure that the folder is marked as an application, and uses the correct version of ASP.NET.
A: It might not be the correct way but I find it the easiest.
Create the class in the main Folder as usual, then move it with your mouse to your sub-folder. Re-compile and all should be fine.
A: As you add folders to your App_Code folder, the classes in them get separated into different namespaces; if I recall correctly, the default namespace is used as the root and a segment is appended for each folder.
A: In Visual Studio (2010 at least, but I recall past versions too), you can right click on the folder, within Solution Explorer, and then choose "Include in Project".
Then on the properties tab for each file (or select them all at once), you choose "Compile" for the "Build Action" property.
This worked for me.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: What's a good source to learn about QEMU? What book or website would you recommend to learn about QEMU? I'd like to see some usage examples as well as how to use the APIs.
A: Detailed technical info:
*
*http://www.csd.uoc.gr/~hy428/reading/qemu-internals-slides-may6-2014.pdf
*http://lists.gnu.org/archive/html/qemu-devel/2011-04/pdfhC5rVdz7U8.pdf
I was not able to find the other chapters
*http://connect.ed-diamond.com/GNU-Linux-Magazine/GLMF-147/Qemu-Visite-au-caeur-de-l-emulateur (French only unfortunately)
A: Best Resources:
*
*Main QEMU Usage Documentation
*Qemu Man Page - Invaluable resource when working with qemu.
*Quick Start Guide - Slightly ubuntu/debian specific. Covers KVM.
*Qemu Networking Guide - Great resource, super useful.
Have fun, QEMU's a great tool.
A: Some additional resources to understand and get started with the source code:
*
*http://vmsplice.net/~stefan/qemu-code-overview.pdf
*https://wiki.aalto.fi/download/attachments/41747647/qemu.pdf
*https://www.usenix.org/legacy/event/usenix05/tech/freenix/full_papers/bellard/bellard.pdf
*http://www.ece.cmu.edu/~ee349/f-2012/lab2/qemu.pdf
*http://events.linuxfoundation.org/sites/events/files/slides/cloudopen-liguori.pdf
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155109",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: Best methodology for developing c# long running processor apps I have several different c# worker applications that run various continuous tasks: sending emails from a queue, importing new orders from the website database to the orders database, making database backups and restores, running data processing for OLTP -> OLAP, and other related tasks. Before, I released these as Windows services, but currently I release them as regular console applications. They are all based on a common task runner framework I created, and I am happy with that; however, I am not sure what is the best way to deploy these types of applications. I like the console version because it is quick and easy, and it is possible to quickly see program activity and output. The downside is that the worker computer has several console screens running and it gets messy. On the other hand, the service method seems to take too long to deploy and I have to go through event logs to see messages. What are some experiences/comments on this?
A: I like the console app approach. I typically have things set up so I can pass a switch like -unattended that suppresses the console screen.
A: A Windows Service would be a good choice: it runs in the background regardless of whether you close the current session, and you can configure it to start automatically after a Windows restart when applying patch updates on the server. You can log important messages to the event viewer or a database table.
A: For a thing like this, the standard way of doing it is with Windows services. You want the service to run on the network account so it won't require a logged in user.
A: I worked on something a few years ago that had similar issues. Logically I needed a service, but sometimes I needed to see what was going on, and generally I wanted a history. So I developed a service which did the work; any time it wanted to log, it called its subscribers (implemented as an observer pattern).
The service registered its own data logger (writing to a database) and at run time, the user could run a GUI which connected to the service using remoting to become a live listener!
A: I'm going to vote for Windows Services. It's going to get to be a real pain managing those console applications.
Windows Service deployment is easy: after the initial install, you just turn them off and do an XCOPY. No need to run any complicated installers. It's only semi-complicated the first time, and even then it's just
installutil MyApp.exe
Configure the services to run under a domain account for the best security and easiest interop with other machines.
Use a combination of event logs (with Error, Warning, and Information) for important notifications, and just dump verbose logging to a text file.
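For illustration only (the event source name, message and path are placeholders, and registering an event source needs elevated rights, typically done at install time), that logging split might look like:
// requires using System.Diagnostics; and using System.IO;
if (!EventLog.SourceExists("MyWorker"))
    EventLog.CreateEventSource("MyWorker", "Application");
EventLog.WriteEntry("MyWorker", "Order import failed: feed unreachable", EventLogEntryType.Error);
// Verbose detail just goes to a text file.
File.AppendAllText(@"C:\Logs\myworker.log", DateTime.Now + " processed batch 42 in 1.3s" + Environment.NewLine);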
A: Why not get the best of all worlds and use something like:
http://topshelf-project.com/
It will allow you to run your program as a command-line app or a Windows service.
A: I'm not sure if this applies to your applications or not, but when I have console applications that are not dependent on user input, or are the kind of applications that just do their job and quit, I run them on a virtual server; this way I don't see a screen popping up when I'm working, and virtual servers are easy to create and restart.
A: We regularly use windows services as the background processes. I don't like command-line apps as you need to be logged into the server for them to run. Services run in the background all the time (assuming they're auto-start). They're also trivial to install w/the sc.exe command-line tool that's in windows. I like it better than the bloat-ware that is installutil.exe. Of course installutil does more, but I don't need what it does. I just want to register my service.
We've also created an infrastructure where we have a generic service .exe that loads .DLLs based on an interface definition, so adding a new "service" is as simple as dropping in a new DLL and restarting the service host.
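The sketch below is only an illustration of that pattern; the ITask interface, folder scan and loading details are assumptions, not the poster's actual code.
// requires: using System; using System.Collections.Generic; using System.IO; using System.Reflection;
public interface ITask
{
    void Run();
}
// The host service scans a folder and instantiates every concrete type that implements ITask.
static IEnumerable<ITask> LoadTasks(string pluginFolder)
{
    foreach (string file in Directory.GetFiles(pluginFolder, "*.dll"))
    {
        Assembly asm = Assembly.LoadFrom(file);
        foreach (Type t in asm.GetTypes())
        {
            if (typeof(ITask).IsAssignableFrom(t) && t.IsClass && !t.IsAbstract)
                yield return (ITask)Activator.CreateInstance(t);
        }
    }
}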
However, we started to move away from services. The problem we have with them is that they lock up the DLLs (for obvious reasons) so it's a pain to upgrade them. We need to stop, upgrade and then restart. Not hard, but additional steps. Instead we're moving to special "pages" in our asp.net apps that run the actual background jobs we need done. There's still a service, but all it does is invoke the asp.net pages so it doesn't lock up any of our DLLs. Then we can replace the DLLs in the asp.net bin directory and normal asp.net rules for app-domain restart kick in.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: IE hosted .net user control using an unmanaged dll What's a good way for an IE hosted .net user control (e.g., <object classid="myctrl.dll#init">) to pull down an unmanaged dll for it to use?
For Click-once, this is easy with a manifest, but ie hosted controls don't get installed in the click-once app cache and instead run out of the download cache. Copy the dll there? Or into the temp directory?
Added: I'm fine with full trust. The reason for using .net is the better security model over active-x (more kinds of evidence)
A: I'd be surprised if it was possible to do this, because it would be a massive security hole. Native code can only run at full-trust, so loading a new native COM object requires that the object is signed, that the CAB it is downloaded in is also signed, and that the class (once registered) registers appropriately - and the user gets appropriate warnings to ensure that they only run controls and components that they trust.
.NET code is allowed to circumvent some of these rules because it is verified, runs under a virtual machine and is sandboxed.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Filter ASP.NET web application's files when deploying I want to deploy my web application (asp.net mvc), and I need to remove all the codebehind files from my project.
Any scripts that you guys know of to do this?
I prefer using a script since I can tweak it if need be.
A: Just select the Publish Web Site option on your project and it should take care of that (if you use Visual Studio).
Menu Build/Publish Website
A: In Visual Studio, right-click your project and select Publish... In the dialog that appears, select "Only files needed to run this application".
The Publishing wizard will compile all codebehind files to your assembly and remove them for publishing.
A: I'd recommend using a web deployment project. This will compile your website and copy all of the files needed to for deployment into a new folder (without code-behind files, as they don't need to be deployed). You also get a little more control this way, as you can set up pre-build and post-build events.
For instance, I've set up a post-build event on the web deployment project to execute a batch file which copies some files into the Debug/Release folder and then zips it up, ready for FTP'ing to the production server.
A: First, codebehind files are not recommended for ASP.NET MVC. Codebehind is the controller for ASP.NET standard files - but in ASP.NET MVC you have far more powerful controllers.
Second, why do you need to remove codebehind files? IIS / MVC / Web.config should be taking care of ensuring that *.cs etc. files do not get served and result in a 404.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How can I share application configuration in a .net application? I've got a relatively large .Net system that consists of a number of different applications. Rather than having lots of different app.config files, I would like to share a single configuration file between all the apps.
I would also like to have one version when developing on my machine, one version for someone else developing on their machine, one version for a test system and one version for a live system.
Is there an easy way of doing this?
A: For large amounts of configuration which is needed by multiple applications, I would put this configuration into a central repository, e.g. a database, file in a common location.
To use different versions of a configuration file for different environments, create a build configuration for each environment and a config file named after it, e.g.:
production -> production.app.config
test -> test.app.config
You can then use a pre-build event to copy the correct config over the default app.config in your project. This will then get copied to your output directory as normal.
The pre-build event is just a copy command; use the build configuration name to pick up the file for the environment you want, as in the sketch below.
You could combine this with the shared-repository approach above to copy the build-specific config files into each project.
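A sketch of such a pre-build event (the file layout is an assumption; $(ConfigurationName) is the build-event macro corresponding to the $(Configuration) MSBuild property):
copy /Y "$(ProjectDir)$(ConfigurationName).app.config" "$(ProjectDir)app.config"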
A: You could use a Post-build event (Properties -> Build Events) on your "child" projects to copy a config file from a master project to others, like this:
copy /Y c:\path\to\master\project\app.config $(TargetPath).config
exit 0
(The "exit 0" as the last line prevents a build error).
To have separate config files for different build targets ("RELEASE", "DEBUG", etc), you can edit the .csproj file (or .vbproj) in NOTEPAD.EXE to add an AppConfig tag for each of the target groups, like this:
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
<DebugSymbols>true</DebugSymbols>
<DebugType>full</DebugType>
<Optimize>false</Optimize>
<OutputPath>.\bin\Debug\</OutputPath>
<DefineConstants>DEBUG;TRACE</DefineConstants>
<AppConfig>debug.app.config</AppConfig>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
<DebugSymbols>true</DebugSymbols>
<DebugType>full</DebugType>
<Optimize>false</Optimize>
<OutputPath>.\bin\Devel\</OutputPath>
<DefineConstants>TRACE</DefineConstants>
<AppConfig>release.app.config</AppConfig>
</PropertyGroup>
Notice the new <AppConfig> tags present in each group.
A: Instead of adding <add> elements to your <appSettings> section of your config file, you can add a file= attribute to the <appSettings> element to tell it to load that data from a different file. You could then keep your common settings in that common file.
See appSettings Element (General Settings Schema) in MSDN Library.
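For illustration, the split might look like this (the file name and keys are examples only, and the shared file must use appSettings as its root element):
<!-- App.config in each application -->
<appSettings file="shared.appSettings.config">
  <add key="LocalOnlySetting" value="123" />
</appSettings>
<!-- shared.appSettings.config -->
<appSettings>
  <add key="SmtpServer" value="mail.example.com" />
</appSettings>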
A: It is possible to use NTFS symbolic links to share a .NET .config file. I've seen this used successfully in a solution comprising an ASP.NET application, console applications, and more.
A: You can also put the configuration settings into the machine.config to share them amongst multiple applications. This makes deployment more problematic though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Nested dropdown I'm building a form with php/mysql. I've got a table with a list of locations and sublocations. Each sublocation has a parent location. A column "parentid" references another locationid in the same table. I now want to load these values into a dropdown in the following manner:
--Location 1
----Sublocation 1
----Sublocation 2
----Sublocation 3
--Location 2
----Sublocation 4
----Sublocation 5
etc. etc.
Did anyone get an elegant solution for doing this?
A: Are you looking for something like the OPTGROUP tag?
A: NOTE: This is only pseudo-code. I didn't try running it, though you should be able to adjust the concepts to what you need.
$parentsql = "SELECT parentid, parentname FROM table";
$result = mysql_query($parentsql);
print "<select>";
while($row = mysql_fetch_assoc($result)){
$childsql = "SELECT childID, childName from table where parentid=".$row["parentID"];
$result2 = mysql_query($childsql);
print "<optgroup label=\".$row["parentname"]."\">";
while($row2 = mysql_fetch_assoc($result)){
print "<option value=\"".$row["childID"]."\">".$row["childName"]."</option>\n";
}
print "</optgroup>";
}
print "</select>";
With BaileyP's valid criticism in mind, here's how to do it WITHOUT the overhead of calling multiple queries in every loop:
$sql = "SELECT childId, childName, parentId, parentName FROM child LEFT JOIN parent ON child.parentId = parent.parentId ORDER BY parentID, childName";
$result = mysql_query($sql);
$currentParent = "";
print "<select>";
while($row = mysql_fetch_assoc($result)){
if($currentParent != $row["parentID"]){
if($currentParent != ""){
print "</optgroup>";
}
print "<optgroup label=\".$row["parentName"]."\">";
$currentParent = $row["parentName"];
}
print "<option value=\"".$row["childID"]."\">".$row["childName"]."</option>\n";
}
print "</optgroup>"
print "</select>";
A: optgroup is definitely the way to go. It's actually what it's for.
For example usage, view source of http://www.grandhall.eu/tips/submit/ - the selector under "Grandhall Grill Used".
A: You can use space/dash indentation in the actual HTML. You'll need a recursive loop to build it though. Something like:
<?php
$data = array(
    'Location 1' => array(
        'Sublocation1',
        'Sublocation2',
        'Sublocation3' => array(
            'SubSublocation1',
        ),
    ),
    'Location2',
);
$output = '<select name="location">' . PHP_EOL;
function build_items($input, $output, $label = '')
{
    if(is_array($input))
    {
        if($label !== '')
        {
            $output .= '<optgroup label="' . $label . '">' . PHP_EOL;
        }
        foreach($input as $key => $value)
        {
            $output = build_items($value, $output, $key);
        }
        if($label !== '')
        {
            $output .= '</optgroup>' . PHP_EOL;
        }
    }
    else
    {
        $output .= '<option>' . $input . '</option>' . PHP_EOL;
    }
    return $output;
}
$output = build_items($data, $output);
$output .= '</select>' . PHP_EOL;
?>
Or something similar ;)
A: Ideally, you'd select all this data in the proper order right out of the database, then just loop over that for output. Here's my take on what you're asking for
<?php
/*
Assuming data that looks like this
locations
+----+-----------+-------+
| id | parent_id | descr |
+----+-----------+-------+
| 1 | null | Foo |
| 2 | null | Bar |
| 3 | 1 | Doe |
| 4 | 2 | Rae |
| 5 | 1 | Mi |
| 6 | 2 | Fa |
+----+-----------+-------+
*/
$result = mysql_query( "SELECT id, parent_id, descr FROM locations order by coalesce(parent_id, id), parent_id, descr" );
echo "<select>";
while ( $row = mysql_fetch_object( $result ) )
{
$optionName = htmlspecialchars( ( is_null( $row->parent_id ) ) ? "--{$row->descr}" : "----{$row->descr}", ENT_COMPAT, 'UTF-8' );
echo "<option value=\"{$row->id}\">$optionName</option>";
}
echo "</select>";
If you don't like the use of the coalesce() function, you can add a "display_order" column to this table that you can manually set, and then use for the ORDER BY.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Embedded Web Server in .NET I would like to embed a light weight web server in a Windows application developed in .NET. The web server has to support PHP.
I have looked at Cassini, but it seems it is ASP.NET only.
A: The .NET class HttpListener exposes the underlying http.sys upon which IIS is built. All machines since Windows XP SP2 have http.sys installed by default.
Here are some links to get you started.
XML-RPC SERVER USING HTTPLISTENER
HttpListener For Dummies
As for the PHP support, I don't know how you would enable this, but there is no technical reason you couldn't build it in.
A: I would look at the likes of XAMPP Lite which you could easily start up and shutdown with your application.
There is also AppWeb which claims to be exactly what you are looking for.
A: You can always use PHP as a CGI application. CGI is well documented, and AFAIK pretty easy to implement. Use Darrel Miller's suggestion, and couple it with some CGI magic, and you should be cooking with gas.
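Purely as an illustrative sketch of that idea (the paths, port and header handling are assumptions, and request bodies are ignored), an HttpListener host could shell out to php-cgi per request like this:
using System;
using System.Diagnostics;
using System.Net;
using System.Text;

class PhpCgiHost
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");   // assumed port
        listener.Start();

        while (true)
        {
            HttpListenerContext ctx = listener.GetContext();

            var psi = new ProcessStartInfo
            {
                FileName = @"C:\php\php-cgi.exe",           // assumed PHP location
                UseShellExecute = false,
                RedirectStandardOutput = true
            };
            // Minimal CGI meta-variables; a real host would pass many more.
            psi.EnvironmentVariables["REDIRECT_STATUS"] = "200";   // php-cgi refuses to run without this
            psi.EnvironmentVariables["REQUEST_METHOD"] = ctx.Request.HttpMethod;
            psi.EnvironmentVariables["QUERY_STRING"] = ctx.Request.Url.Query.TrimStart('?');
            psi.EnvironmentVariables["SCRIPT_FILENAME"] =
                @"C:\site" + ctx.Request.Url.AbsolutePath.Replace('/', '\\');   // assumed doc root

            using (Process php = Process.Start(psi))
            {
                string output = php.StandardOutput.ReadToEnd();
                php.WaitForExit();
                // php-cgi emits headers, a blank line, then the body; this sketch just strips the headers.
                int split = output.IndexOf("\r\n\r\n", StringComparison.Ordinal);
                byte[] body = Encoding.UTF8.GetBytes(split >= 0 ? output.Substring(split + 4) : output);
                ctx.Response.OutputStream.Write(body, 0, body.Length);
            }
            ctx.Response.Close();
        }
    }
}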
A: Mongoose embedded webserver
https://code.google.com/p/mongoose/
You can build it with VS2012/10/08 as an EXE, and you can use PHP and also websockets to push data to the client app. You can also build a DLL; do this with make, or bring the code into a VS DLL project and build out a DllMain, a DEF file, etc. Then use it directly from C# - see the mongoose.cs and example.cs files.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Connection error for sqlserver rake db:migrate I am getting the following error:
Open
OLE error code:80004005 in Microsoft OLE DB Provider for SQL Server
[DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied.
HRESULT error code: 0x80020009
Exception occurred.
I have tried following the directions here with no luck.
Any ideas?
FIXED
My specific issue, I believe, was related to having too many mixed systems installed on my laptop. I had Visual Studio 2005 and 2008 components and SQL Server Management Studio Standard loaded with SQL Server Express Edition, as well as various other components that might have affected the stability of my environment. Once I reloaded Vista and went back through the steps from the link above, it worked without issue.
I only loaded the Express Editions of SQL Server and SQL Server Management Studio.
A: Usually an authentication/permissions error.
Is the SQL Server on the same box as the web server? Review the accounts they are running under, and review the type of connection you are making (integrated or otherwise).
A: Random guess: By default SQL Server (express, at least) does NOT enable network access. The SQL admin/management tools connect to it using named pipes; however, Rails will most likely be trying to use TCP.
A: My specifc issue I believe was related to having to many mixed systems installed on my laptop. I had Visual Studio 2005 and 2008 componets and SQL Server Managment Standard loaded with SQL Server Express Edition as well as various other componets that might have effected the stability of my envirnomnet. Once I reloaded Vista and went back through the steps on http://wiki.rubyonrails.org/rails/pages/HowtoConnectToMicrosoftSQLServer it worked without issue.
I only loaded the Express Editions of SQL Server and SQL Server Management Studio
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Setting background colour of Silverlight Listbox How do I set the background colour of a listbox? I have a listbox with textblocks in it and there does not appear to be any way that actually works to set the background colour of these controls. Why is this seemingly so hard?
In the interests of full disclosure I asked a similar question earlier
A: You can do this using the ListBox.ItemContainerStyle property. Very nice explanation of this can be found here. Based on that example, we can set the ItemContainterStyle to have a transparent background color and then wrap the ListBox in a Border (the ListBox doesn't display its background color).
<Border Background="Green">
<ListBox Background="Red">
<ListBox.ItemContainerStyle>
<Style TargetType="ListBoxItem">
<Setter Property="Background" Value="Transparent"/>
</Style>
</ListBox.ItemContainerStyle>
<TextBlock Text="Hello" />
<TextBlock Text="Goodbye" />
</ListBox>
</Border>
If you just want to set the actual items you can set the Background to an actual color and then skip the border.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155168",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Handling exceptions raised during method called via NSObject's performSelectorOnMainThread:withObject:waitUntilDone: What happens to exceptions raised while in myMethod: if it is invoked via NSObject's performSelectorOnMainThread:withObject:waitUntilDone:?
In particular, can I catch them in the scope of the call to performSelectorOnMainThread like this...
@try {
[self performSelectorOnMainThread:@selector(myMethod) withObject:nil waitUntilDone:YES];
} @catch(NSException *e) {
//deal with exception raised in myMethod here??
}
I realize that the semantics of this are weird if waitUntilDone is NO.
A: You won't be able to catch them like that. Cocoa may catch and log the exceptions to the console, but it won't re-raise them in the thread that called -perform. Instead, you could catch them in -myMethod: (or a wrapper that calls -myMethod:) and have it store them somewhere that your other thread can read them.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: I need help executing a bat file from asp.net 2.0 I have a web application using ASP.NET 2.0 and VB.NET. I wrote a bat file that uses GPG to encrypt a file, and I call it within ASP.NET with Shell(pathname & filename).
When I double-click the bat file from a cmd window it works fine, but when I call it from the application, every command that I pass is executed perfectly except the gpg command. I made sure the user the application runs under has all the rights and privileges to run the commands; I imported, trusted and verified all the keys, and the bat file works fine when double-clicked, so why doesn't it successfully execute the GPG command? It does not return any error; it just does not encrypt any file.
gpg -e --always-trust -r <> Filename
any help will be appreciated.
Thank you!
A: I had a similar problem:
C#.Net: Why is my Process.Start() hanging?
It seems that Microsoft, in all their infinite wisdom, has blocked batch files from being executed by IIS in Windows Server 2003. Brenden Tompkins has a work-around here:
http://codebetter.com/blogs/brendan.tompkins/archive/2004/05/13/13484.aspx
A: Have you tried fully qualifying the path to the gpg executable in your batch file?
A: Not sure how you are using 'shell()', but Process.Start is the way to go.
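For example (C# shown; the VB.NET equivalent is direct, and the paths and recipient are placeholders), running gpg directly and capturing its stderr usually reveals why it silently fails under the ASP.NET account:
var psi = new System.Diagnostics.ProcessStartInfo
{
    FileName = @"C:\Program Files\GNU\GnuPG\gpg.exe",   // assumed install path
    Arguments = @"-e --always-trust -r someone@example.com ""C:\data\file.txt""",   // placeholder recipient/file
    WorkingDirectory = @"C:\data",
    UseShellExecute = false,
    RedirectStandardOutput = true,
    RedirectStandardError = true
};
using (var p = System.Diagnostics.Process.Start(psi))
{
    string stderr = p.StandardError.ReadToEnd();   // gpg reports problems here, not on stdout
    p.WaitForExit();
    // Log stderr and p.ExitCode; under the IIS account the usual culprits are a missing
    // keyring (no profile/GNUPGHOME for that user) or relative paths in the bat file.
}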
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Ideal user feedback for HTML input Let's face it: writing proper, standards compliant HTML is quite difficult to do. Writing semantic HTML is even more so, but I don't think it's possible for a computer to figure that out.
So my question to you is what would the "ideal" feedback for a user who entered HTML be? Would it be a W3C validator style list of errors and corresponding line numbers and columns? Would it be a annotated code display of highlighted lines, explanations of the errors, and possible fixes? A spell-check style mode where you handle each error separately? Would it be not giving them any error information at all? Also, what types of errors are a good idea to tell users? (Some broad classes of errors include parsing errors, nesting errors (i.e. putting a div in a b tag) and well-formedness errors.)
*
*Scottm: Good point; I've never liked the W3C way of listing all the errors either. However, there is still the question of then letting the user edit the offending HTML appropriately.
*onebyone: Ok, so looking at some screenshots it looks like HTML Validator has a W3C error list, but combined with the ability to go straight to the relevant source segment and expanded error information, as well as the fact that you don't have to go scrolly to jump from one section to another. Looks pretty good, but is it usable by the average Joe?
Edit 1: As a clarification, this is with regards to the interface, not necessarily the underlying implementation. However, interface needs to be feasible with plain HTML and JavaScript (double usability points if it just needs HTML, but I think you're going to get stuck with W3C in that case).
A: I always think syntax highlighting is great. In HTML this would be very useful too, as tags can be easily distinguished by the developer when he/she can see them appropriately coloured.
Personally I don't like the W3C way of giving you a big boring list of problems. Visual aids in the code itself are much better.
A: The output from the Firefox "HTML validator" add-on is pretty good. It shows you the source in a big window, and a list of errors in a small window (smallness doesn't matter, since you generally only care about the first one, since you're aiming for a total of none). Click an error to highlight, and an expanded explanation is shown in a second small window, while the offending part of the code is highlighted in the big window.
The add-on doesn't include a text editor, though, so it's not a full solution to your problem. It uses both an SGML-based validator and HTML Tidy, though, and I think for local files you can get it to make the corrections suggested by Tidy.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Trigger a button click with JavaScript on the Enter key in a text box I have one text input and one button (see below). How can I use JavaScript to trigger the button's click event when the Enter key is pressed inside the text box?
There is already a different submit button on my current page, so I can't simply make the button a submit button. And, I only want the Enter key to click this specific button if it is pressed from within this one text box, nothing else.
<input type="text" id="txtSearch" />
<input type="button" id="btnSearch" value="Search" onclick="doSomething();" />
A: This onchange attempt is close, but misbehaves with respect to browser back then forward (on Safari 4.0.5 and Firefox 3.6.3), so ultimately, I wouldn't recommend it.
<input type="text" id="txtSearch" onchange="doSomething();" />
<input type="button" id="btnSearch" value="Search" onclick="doSomething();" />
A: Make the button a submit element, so it'll be automatic.
<input type = "submit"
id = "btnSearch"
value = "Search"
onclick = "return doSomething();"
/>
Note that you'll need a <form> element containing the input fields to make this work (thanks Sergey Ilinsky).
It's not good practice to redefine standard behaviour; the Enter key should always call the submit button on a form.
A: Since no one has used addEventListener yet, here is my version. Given the elements:
<input type = "text" id = "txt" />
<input type = "button" id = "go" />
I would use the following:
var go = document.getElementById("go");
var txt = document.getElementById("txt");
txt.addEventListener("keypress", function(event) {
event.preventDefault();
if (event.keyCode == 13)
go.click();
});
This allows you to change the event type and action separately while keeping the HTML clean.
Note that it's probably worthwhile to make sure this is outside of a <form> because when I enclosed these elements in them pressing Enter submitted the form and reloaded the page. Took me a few blinks to discover.
Addendum: Thanks to a comment by @ruffin, I've added the missing event handler and a preventDefault to allow this code to (presumably) work inside a form as well. (I will get around to testing this, at which point I will remove the bracketed content.)
A: event.returnValue = false
Use it when handling the event or in the function your event handler calls.
It works in Internet Explorer and Opera at least.
A: For jQuery mobile, I had to do:
$('#id_of_textbox').live("keyup", function(event) {
if(event.keyCode == '13'){
$('#id_of_button').click();
}
});
A: To add a completely plain JavaScript solution that addressed @icedwater's issue with form submission, here's a complete solution with form.
NOTE: This is for "modern browsers", including IE9+. The IE8 version isn't much more complicated, and can be learned here.
Fiddle: https://jsfiddle.net/rufwork/gm6h25th/1/
HTML
<body>
<form>
<input type="text" id="txt" />
<input type="button" id="go" value="Click Me!" />
<div id="outige"></div>
</form>
</body>
JavaScript
// The document.addEventListener replicates $(document).ready() for
// modern browsers (including IE9+), and is slightly more robust than `onload`.
// More here: https://stackoverflow.com/a/21814964/1028230
document.addEventListener("DOMContentLoaded", function() {
var go = document.getElementById("go"),
txt = document.getElementById("txt"),
outige = document.getElementById("outige");
// Note that jQuery handles "empty" selections "for free".
// Since we're plain JavaScripting it, we need to make sure this DOM exists first.
if (txt && go) {
txt.addEventListener("keypress", function (e) {
if (e.keyCode === 13) {
go.click();
e.preventDefault(); // <<< Most important missing piece from icedwater
}
});
go.addEventListener("click", function () {
if (outige) {
outige.innerHTML += "Clicked!<br />";
}
});
}
});
A: For those who may like brevity and modern js approach.
input.addEventListener('keydown', (e) => {if (e.keyCode == 13) doSomething()});
where input is a variable containing your input element.
A: In plain JavaScript,
if (document.layers) {
document.captureEvents(Event.KEYDOWN);
}
document.onkeydown = function (evt) {
var keyCode = evt ? (evt.which ? evt.which : evt.keyCode) : event.keyCode;
if (keyCode == 13) {
// For Enter.
// Your function here.
}
if (keyCode == 27) {
// For Escape.
// Your function here.
} else {
return true;
}
};
I noticed that the reply is given in jQuery only, so I thought of giving something in plain JavaScript as well.
A: document.onkeypress = function (e) {
e = e || window.event;
var charCode = (typeof e.which == "number") ? e.which : e.keyCode;
if (charCode == 13) {
// Do something here
printResult();
}
};
Here's my two cents. I am working on an app for Windows 8 and want the button to register a click event when I press the Enter key. I am doing this in JS. I tried a couple of suggestions, but had issues. This works just fine.
A: Then just code it in!
<input type = "text"
id = "txtSearch"
onkeydown = "if (event.keyCode == 13)
document.getElementById('btnSearch').click()"
/>
<input type = "button"
id = "btnSearch"
value = "Search"
onclick = "doSomething();"
/>
A: To do it with jQuery:
$("#txtSearch").on("keyup", function (event) {
if (event.keyCode==13) {
$("#btnSearch").get(0).click();
}
});
To do it with normal JavaScript:
document.getElementById("txtSearch").addEventListener("keyup", function (event) {
if (event.keyCode==13) {
document.getElementById("#btnSearch").click();
}
});
A: Use keypress and event.key === "Enter" with modern JS!
const textbox = document.getElementById("txtSearch");
textbox.addEventListener("keypress", function onEvent(event) {
if (event.key === "Enter") {
document.getElementById("btnSearch").click();
}
});
Mozilla Docs
Supported Browsers
A: One basic trick you can use for this that I haven't seen fully mentioned. If you want to do an ajax action, or some other work on Enter but don't want to actually submit a form you can do this:
<form onsubmit="Search();" action="javascript:void(0);">
<input type="text" id="searchCriteria" placeholder="Search Criteria"/>
<input type="button" onclick="Search();" value="Search" id="searchBtn"/>
</form>
Setting action="javascript:void(0);" like this is a shortcut for preventing default behavior essentially. In this case a method is called whether you hit enter or click the button and an ajax call is made to load some data.
A: In jQuery, you can use event.which==13. If you have a form, you could use $('#formid').submit() (with the correct event listeners added to the submission of said form).
$('#textfield').keyup(function(event){
if(event.which==13){
$('#submit').click();
}
});
$('#submit').click(function(e){
if($('#textfield').val().trim().length){
alert("Submitted!");
} else {
alert("Field can not be empty!");
}
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<label for="textfield">
Enter Text:</label>
<input id="textfield" type="text">
<button id="submit">
Submit
</button>
A: These days the change event is the way!
document.getElementById("txtSearch").addEventListener('change',
() => document.getElementById("btnSearch").click()
);
A: Figured this out:
<input type="text" id="txtSearch" onkeypress="return searchKeyPress(event);" />
<input type="button" id="btnSearch" Value="Search" onclick="doSomething();" />
<script>
function searchKeyPress(e)
{
// look for window.event in case event isn't passed in
e = e || window.event;
if (e.keyCode == 13)
{
document.getElementById('btnSearch').click();
return false;
}
return true;
}
</script>
A: To trigger a search every time the enter key is pressed, use this:
$(document).keypress(function(event) {
var keycode = (event.keyCode ? event.keyCode : event.which);
if (keycode == '13') {
$('#btnSearch').click();
}
});
A: Try it:
<input type="text" id="txtSearch"/>
<input type="button" id="btnSearch" Value="Search"/>
<script>
window.onload = function() {
document.getElementById('txtSearch').onkeypress = function searchKeyPress(event) {
if (event.keyCode == 13) {
document.getElementById('btnSearch').click();
}
};
document.getElementById('btnSearch').onclick =doSomething;
}
</script>
A: In jQuery, the following would work:
$("#id_of_textbox").keyup(function(event) {
if (event.keyCode === 13) {
$("#id_of_button").click();
}
});
$("#pw").keyup(function(event) {
if (event.keyCode === 13) {
$("#myButton").click();
}
});
$("#myButton").click(function() {
alert("Button code executed.");
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
Username:<input id="username" type="text"><br>
Password: <input id="pw" type="password"><br>
<button id="myButton">Submit</button>
Or in plain JavaScript, the following would work:
document.getElementById("id_of_textbox")
.addEventListener("keyup", function(event) {
event.preventDefault();
if (event.keyCode === 13) {
document.getElementById("id_of_button").click();
}
});
document.getElementById("pw")
.addEventListener("keyup", function(event) {
event.preventDefault();
if (event.keyCode === 13) {
document.getElementById("myButton").click();
}
});
function buttonCode()
{
alert("Button code executed.");
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
Username:<input id="username" type="text"><br>
Password: <input id="pw" type="password"><br>
<button id="myButton" onclick="buttonCode()">Submit</button>
A: onkeydown="javascript:if (event.which || event.keyCode){if ((event.which == 13) || (event.keyCode == 13)) {document.getElementById('btnSearch').click();}};"
This is just something I have from a somewhat recent project... I found it on the net, and I have no idea if there's a better way or not in plain old JavaScript.
A: Although, I'm pretty sure that as long as there is only one field in the form and one submit button, hitting enter should submit the form, even if there is another form on the page.
You can then capture the form onsubmit with js and do whatever validation or callbacks you want.
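A small sketch of that idea (the form id and handler name are placeholders):
document.getElementById("searchForm").onsubmit = function () {
    doSomething();   // validation or ajax work here
    return false;    // cancel the normal postback if you handled it yourself
};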
A: This is a solution for all the YUI lovers out there:
Y.on('keydown', function() {
if(event.keyCode == 13){
Y.one("#id_of_button").simulate("click");
}
}, '#id_of_textbox');
In this special case I did have better results using YUI for triggering DOM objects that have been injected with button functionality - but this is another story...
A: In modern, undeprecated (without keyCode or onkeydown) Javascript:
<input onkeypress="if(event.key == 'Enter') {console.log('Test')}">
A: In Angular2:
(keyup.enter)="doSomething()"
If you don't want some visual feedback in the button, it's a good design to not reference the button but rather directly invoke the controller.
Also, the id isn't needed - another NG2 way of separating between the view and the model.
A: Short working pure JS
txtSearch.onkeydown= e => (e.key=="Enter") ? btnSearch.click() : 1
function doSomething() {
console.log('');
}
<input type="text" id="txtSearch" />
<input type="button" id="btnSearch" value="Search" onclick="doSomething();" />
A: This is in case you also want to disable the Enter key from posting to the server and execute the JS script instead.
<input type="text" id="txtSearch" onkeydown="if (event.keyCode == 13)
{document.getElementById('btnSearch').click(); return false;}"/>
<input type="button" id="btnSearch" value="Search" onclick="doSomething();" />
A: Nobody has noticed the HTML attribute "accesskey", which has been available for a while.
This is a no-JavaScript way to do keyboard shortcuts.
The accesskey attribute shortcuts on MDN
It is intended to be used like this. The HTML attribute itself is enough; however, we can change the placeholder or another indicator depending on the browser and OS. The script is an untested scratch approach to give an idea. You may want to use a browser detection library like the tiny bowser
let client = navigator.userAgent.toLowerCase(),
isLinux = client.indexOf("linux") > -1,
isWin = client.indexOf("windows") > -1,
isMac = client.indexOf("apple") > -1,
isFirefox = client.indexOf("firefox") > -1,
isWebkit = client.indexOf("webkit") > -1,
isOpera = client.indexOf("opera") > -1,
input = document.getElementById('guestInput');
if(isFirefox) {
input.setAttribute("placeholder", "ALT+SHIFT+Z");
} else if (isWin) {
input.setAttribute("placeholder", "ALT+Z");
} else if (isMac) {
input.setAttribute("placeholder", "CTRL+ALT+Z");
} else if (isOpera) {
input.setAttribute("placeholder", "SHIFT+ESCAPE->Z");
} else { input.setAttribute("placeholder", "Point me to operate..."); }
<input type="text" id="guestInput" accesskey="z" placeholder="Access shortcut:"></input>
A: My reusable vanilla JS solution, so you can change which button gets hit depending on which element/textbox is active.
<input type="text" id="message" onkeypress="enterKeyHandler(event,'sendmessage')" />
<input type="button" id="sendmessage" value="Send"/>
function enterKeyHandler(e,button) {
e = e || window.event;
if (e.key == 'Enter') {
document.getElementById(button).click();
}
}
A: You can try below code in jQuery.
$("#txtSearch").keyup(function(e) {
e.preventDefault();
var keycode = (e.keyCode ? e.keyCode : e.which);
if (keycode === 13 || e.key === 'Enter')
{
$("#btnSearch").click();
}
});
A: This also might help, a small JavaScript function, which works fine:
<script type="text/javascript">
function blank(a) { if(a.value == a.defaultValue) a.value = ""; }
function unblank(a) { if(a.value == "") a.value = a.defaultValue; }
</script>
<input type="text" value="email goes here" onfocus="blank(this)" onblur="unblank(this)" />
I know this question is solved, but I just found something, which can be helpful for others.
A: I have developed custom JavaScript to achieve this feature by just adding a class.
Example: <button type="button" class="ctrl-p">Custom Print</button>
Check it out here: Fiddle
// find elements
var banner = $("#banner-message")
var button = $("button")
// handle click and add class
button.on("click", function(){
if(banner.hasClass("alt"))
banner.removeClass("alt")
else
banner.addClass("alt")
})
$(document).ready(function(){
$(document).on('keydown', function (e) {
if (e.ctrlKey) {
$('[class*="ctrl-"]:not([data-ctrl])').each(function (idx, item) {
var Key = $(item).prop('class').substr($(item).prop('class').indexOf('ctrl-') + 5, 1).toUpperCase();
$(item).attr("data-ctrl", Key);
$(item).append('<div class="tooltip fade top in tooltip-ctrl alter-info" role="tooltip" style="margin-top: -61px; display: block; visibility: visible;"><div class="tooltip-arrow" style="left: 49.5935%;"></div><div class="tooltip-inner"> CTRL + ' + Key + '</div></div>')
});
}
if (e.ctrlKey && e.which != 17) {
var Key = String.fromCharCode(e.which).toLowerCase();
if( $('.ctrl-'+Key).length == 1) {
e.preventDefault();
if (!$('#divLoader').is(":visible"))
$('.ctrl-'+Key).click();
console.log("You pressed ctrl + "+Key );
}
}
});
$(document).on('keyup', function (e) {
if(!e.ctrlKey ){
$('[class*="ctrl-"]').removeAttr("data-ctrl");
$(".tooltip-ctrl").remove();
}
})
});
#banner-message {
background: #fff;
border-radius: 4px;
padding: 20px;
font-size: 25px;
text-align: center;
transition: all 0.2s;
margin: 0 auto;
width: 300px;
}
#banner-message.alt {
background: #0084ff;
color: #fff;
margin-top: 40px;
width: 200px;
}
#banner-message.alt button {
background: #fff;
color: #000;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div id="banner-message">
<p>Hello World</p>
<button class="ctrl-s" title="s">Change color</button><br/><br/>
<span>Press CTRL+S to trigger click event of button</span>
</div>
-- or --
check out running example
https://stackoverflow.com/a/58010042/6631280
Note: with the current logic, you need to press Ctrl + Enter.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1427"
} |
Q: Windows API commctrl.h using application doesn't work on machines without the Platform SDK I have written something that uses the following includes:
#include <math.h>
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>
#include <commctrl.h>
This code works fine on 2 machines with the Platform SDK installed, but doesn't run (neither debug nor release versions) on clean installs of windows (VMs of course). It dies with the quite familiar:
---------------------------
C:\Documents and Settings\Someone\Desktop\DesktopRearranger.exe
---------------------------
C:\Documents and Settings\Someone\Desktop\DesktopRearranger.exe
This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem.
---------------------------
OK
---------------------------
How can I make it run on clean installs? Which DLL is it using that it can't find? My bet is on commctrl, but can someone enlighten me on why it isn't shipped with every Windows install?
Furthermore, does anyone have tips on how to debug such a thing? My C++ is already rusty, it seems :)
Edit - What worked for me is downloading the Redistributable for Visual Studio 2008. I don't think it's a good solution - downloading a 2MB file and an install to run a simple 11K tool. I think I'll change the code to use LoadLibrary to get the 2 or 3 functions I need from comctl32.dll. Thanks everyone :)
A: Use Dependency Walker. Download and install from http://www.dependencywalker.com/ (just unzip to install). Then load up your executable. The tool will highlight which DLL is missing. Then you can find the redistributable pack which you need to ship with your executable.
If you use VS2005, most cases will be covered by http://www.microsoft.com/downloads/details.aspx?FamilyId=32BC1BEE-A3F9-4C13-9C99-220B62A191EE&displaylang=en which includes everything needed to run EXEs created with VS2005. Using depends.exe you may find a more lightweight solution, though.
A: Common controls is a red herring. Your problem is that the Visual C++ 8.0 runtime - I assume you're using Visual Studio 2005 - isn't installed. Either statically link to the C/C++ runtime library, or distribute the runtime DLL.
You will have this problem with any C or C++ program that uses the DLL. You could get away with it in VS 6.0 as msvcrt.dll came with the OS from Windows 2000 up, and in VS.NET 2003 as msvcr71.dll came with .NET Framework 1.1. No more. Visual Studio 2005 and later use side-by-side assemblies to prevent DLL Hell, but that means you can't rely even on .NET 2.0 installing the exact version of C runtime that your program's built-in manifest uses. .NET 2.0's mscorwks.dll binds to version 8.0.50608.0 in its manifest; a VS-generated application binds to 8.0.50727.762 as of VS2005 SP1. My recollection is it used some pre-release version in the original (RTM) release of VS2005, which meant you had to deploy a Publisher Policy merge module if you were using the merge modules, to redirect the binding to the version actually in the released C run-time merge module.
See also Redistributing Visual C++ Files on MSDN.
A: I suspect it is trying to find a version of common controls that isn't installed. You may need a manifest file to map the version of common controls to your target operating system. Also, you may need to make sure you have installed the same VC runtimes that you were linked to.
Chris Jackson blog
EDIT: A little searching and I've confirmed (mostly) that it is the version of your VC++ runtimes that is to blame. You need to distribute the versions that you built with. The Platform SDK usually includes a merge module of these for that purpose, but there is often a VCRedist.exe for them as well. Try looking at Microsoft's downloads.
KB94885
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: "network location cannot be reached" error in IIS6 I am troubleshooting an issue with IIS6 where all sites bound to ip addresses other than the default give an error message "network location cannot be reached" when trying to start any of these sites.
The nic has all the ip addresses configured.
When I do a httpcfg query iplisten, I see only the default ip address.
When I added them with httpcfg, all the web sites stopped working, so I figured I did something wrong and removed them.
Two questions:
1- Why are those websites refusing to start?
2- What should be in the result of httpcfg query iplisten? All ip addresses or just one?
The websites used to work fine and something has changed. I applied a few Windows updates, but I am not sure if they broke anything (I doubt it... otherwise hundreds of web hosting companies would be screaming).
A: The solution was to use httpcfg without specifying the port number.
A: Sometimes there are bugs when applying Windows updates. One thing you might try is running aspnet_regiis /i or /c. I'm not sure if that's your problem, but it's certainly worth a shot.
A: That message generally comes from Windows networking (it's one of ERROR_NETWORK_UNREACHABLE, ERROR_HOST_UNREACHABLE, ERROR_PROTOCOL_UNREACHABLE - you can search for error messages in WinError.h).
Have you set up virtual directories to point at network shares on another machine? If so, check connectivity to that machine.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Good error handling practice What is a good error handling practice for an asp.net site? Examples? Thanks!
A: As with any .NET project, I find the best approach is to only catch specific exception types if they are likely to happen on the given page.
For example, you could catch FormatException for a user's input (just in case JavaScript validation fails and you have not used TryParse), but always leave the catching of the top-level Exception to the global error handler.
try
{
//Code that could error here
}
catch (FormatException ex)
{
//Code to tell user of their error
//all other errors will be handled
//by the global error handler
}
You can use the open source elmah (Error Logging Modules and Handlers) for ASP.Net to do this top level/global error catching for you if you want.
Using ELMAH, it can create a log of errors that is viewable through a simple-to-configure web interface. You can also filter different types of errors and have custom error pages of your own for different error types.
A: One practice that I find to be especially useful is to create a generic error page, and then set your defaultRedirect on the customErrors node of the web.config to that error page.
Then set up your Global.asax to log all unhandled exceptions and put them (the unhandled exceptions) in a static property on some class (I have a class called ErrorUtil with a static LastError property). Your error page can then look at this property to determine what to display to the user.
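A minimal sketch of that handler (ErrorUtil is the class described above; how you log is up to you):
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    ErrorUtil.LastError = ex;   // static property the error page reads
    // log ex however you normally do (database, file, ELMAH, ...);
    // customErrors/defaultRedirect in web.config then sends the user to the error page
}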
More details here: http://www.codeproject.com/KB/aspnet/JcGlobalErrorHandling.aspx
A: Well, that's pretty wide open, which is completely cool. I'll refer you to a word .doc you can download from Dot Net Spider, which is actually the basis for my small company's code standard. The standard includes some very useful error handling tips.
One such example for exceptions (I don't recall if this is original to the document or if we added it to the doc):
Never do a “catch exception and do nothing.” If you hide an exception, you will never know if the exception happened. You should always try to avoid exceptions by checking all the error conditions programmatically.
Example of what not to do:
try
{
...
}
catch{}
Very naughty unless you have a good reason for it.
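A hedged example of the alternative (the names are placeholders): check the error condition up front, and when you do catch, record the failure instead of swallowing it.
int quantity;
if (!int.TryParse(quantityText, out quantity))
{
    // handle it as a normal validation failure; no exception needed
}
try
{
    order.Save();
}
catch (SqlException ex)   // a specific, expected exception type
{
    Log(ex);   // placeholder for whatever logging you use
    throw;     // rethrow so the global handler still sees it
}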
A: You should make sure that you can catch most of the errors that are generated by your application and display a friendly message to the users. But of course you cannot catch all the errors; for those you can use web.config and defaultRedirect, as mentioned by another user. Another very handy tool to log the errors is ELMAH. ELMAH will log all the errors generated by your application and show them to you in a very readable way. Plugging ELMAH into your application is as simple as adding a few lines of code in the web.config file and attaching the assembly. You should definitely give ELMAH a try; it will literally save you hours and hours of pain.
http://code.google.com/p/elmah/
A: *
*Code defensively within each page for exceptions that you expect could happen and deal with them appropriately, so not to disrupt the user every time an exception occurs.
*Log all exceptions, with a reference.
*Provide a generic error page, for any unhandled exceptions, which provides a reference to use for support (support can identify details from logs). Don't display the actual exception, as most users will not understand it, and it is a potential security risk as it exposes information about your system (potentially passwords etc).
*Don't catch all exceptions and do nothing with them (as in the above answer). There is almost never a good reason to do this, occasionally you may want to catch a specific exception and not do any deliberately but this should be used wisely.
A: It is not always a good idea to redirect the user to a standard error page. If a user is working on a form, they may not want to be redirected away from the form they are working on. I put all code that could cause an exception inside a try/catch block, and inside the catch block I spit out an alert message alerting the user that an error has occurred as well as log the exception in a database including form input, query string, etc. I am developing an internal site, however, so most users just call me if they are having a problem. For a public site, you may wish to use something like elmah.
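A minimal sketch of that pattern (SaveForm and LogError are hypothetical helpers standing in for your own save and logging code):
protected void btnSave_Click(object sender, EventArgs e)
{
    try
    {
        SaveForm(); // code that could throw
    }
    catch (Exception ex)
    {
        // record the exception plus context (form input, query string, ...) somewhere durable
        LogError(ex, Request.RawUrl, Request.Form.ToString(), Request.QueryString.ToString());

        // keep the user on the form and let them know something went wrong
        ClientScript.RegisterStartupScript(GetType(), "saveError",
            "alert('An error occurred while saving. Please try again.');", true);
    }
}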
A: public string BookLesson(Customer_Info oCustomerInfo, CustLessonBook_Info oCustLessonBookInfo)
{
    string authenticationID = string.Empty;
    int customerID = 0;
    string message = string.Empty;
    DA_Customer oDACustomer = new DA_Customer();

    using (TransactionScope scope = new TransactionScope())
    {
        if (oDACustomer.ValidateCustomerLoginName(oCustomerInfo.CustId, oCustomerInfo.CustLoginName) == "Y")
        {
            // if a new student
            if (oCustomerInfo.CustId == 0)
            {
                oCustomerInfo.CustPassword = General.GeneratePassword(6, 8);
                oCustomerInfo.CustPassword = new DA_InternalUser().GetPassword(oCustomerInfo.CustPassword, false);
                authenticationID = oDACustomer.Register(oCustomerInfo, ref customerID);
                oCustLessonBookInfo.CustId = customerID;
            }
            else // if existing student
            {
                oCustomerInfo.UpdatedByCustomer = "Y";
                authenticationID = oDACustomer.CustomerUpdateProfile(oCustomerInfo);
            }

            message = authenticationID;

            // insert lesson booking details
            new DA_Lesson().BookLesson(oCustLessonBookInfo);
        }
        else
        {
            message = "login exists";
        }

        scope.Complete();
        return message;
    }
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Another locking question I'm trying to get my multithreading understanding locked down. I'm doing my best to teach myself, but some of these issues need clarification.
I've gone through three iterations with a piece of code, experimenting with locking.
In this code, the only thing that needs locking is this.managerThreadPriority.
First, the simple, procedural approach, with minimalistic locking.
var managerThread = new Thread
(
    new ThreadStart(this.ManagerThreadEntryPoint)
);

lock (this.locker)
{
    managerThread.Priority = this.managerThreadPriority;
}

managerThread.Name = string.Format("Manager Thread ({0})", managerThread.GetHashCode());
managerThread.Start();
Next, a single statement to create and launch a new thread, but the lock appears to be scoped too large, to include the creation and launching of the thread. The compiler doesn't somehow magically know that the lock can be released after this.managerThreadPriority is used.
This kind of naive locking should be avoided, I would assume.
lock (this.locker)
{
    new Thread
    (
        new ThreadStart(this.ManagerThreadEntryPoint)
    )
    {
        Priority = this.managerThreadPriority,
        Name = string.Format("Manager Thread ({0})", GetHashCode())
    }
    .Start();
}
Last, a single statement to create and launch a new thread, with an "embedded" lock only around the shared field.
new Thread
(
    new ThreadStart(this.ManagerThreadEntryPoint)
)
{
    Priority = new Func<ThreadPriority>(() =>
    {
        lock (this.locker)
        {
            return this.managerThreadPriority;
        }
    })(),
    Name = string.Format("Manager Thread ({0})", GetHashCode())
}
.Start();
Care to comment about the scoping of lock statements? For example, if I need to use a field in an if statement and that field needs to be locked, should I avoid locking the entire if statement? E.g.
bool isDumb;
lock (this.locker) isDumb = this.FieldAccessibleByMultipleThreads;
if (isDumb) ...
Vs.
lock (this.locker)
{
if (this.FieldAccessibleByMultipleThreads) ...
}
A: 1) Before you even start the other thread, you don't have to worry about shared access to it at all.
2) Yes, you should lock all access to shared mutable data. (If it's immutable, no locking is required.)
3) Don't use GetHashCode() to indicate a thread ID. Use Thread.ManagedThreadId. I know, there are books which recommend Thread.GetHashCode() - but look at the docs.
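For example, the thread name in the snippets above could be built from the managed thread id instead of the hash code (a small illustrative tweak):
var managerThread = new Thread(new ThreadStart(this.ManagerThreadEntryPoint));
managerThread.Name = string.Format("Manager Thread ({0})", managerThread.ManagedThreadId);
managerThread.Start();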
A:
Care to comment about the scoping of lock statements? For example, if I
need to use a field in an if statement and that field needs to be locked,
should I avoid locking the entire if statement?
In general, it should be scoped for the portion of code that needs the resource being guarded, and no more than that. This is so it can be available for other threads to make use of it as soon as possible.
But it depends on whether the resource you are locking is part of a bigger picture that has to maintain consistency, or whether it is a standalone resource not related directly to any other.
If you have interrelated parts that need to all change in a synchronized manner, that whole set of parts needs to be locked for the duration of the whole process.
If you have an independent, single item uncoupled to anything else, then only that one item needs to be locked long enough for a portion of the process to access it.
Another way to say it is, are you protecting synchronous or asynchronous access to the resource?
Synchronous access needs to hold on to it longer in general because it cares about a bigger picture that the resource is a part of. It must maintain consistency with related resources. You may very well wrap an entire for-loop in such a case if you want to prevent interruptions until all are processed.
Asynchronous access should hold onto it as briefly as possible. Thus, the more appropriate place for the lock would be inside portions of code, such as inside a for-loop or if-statement so you can free up the individual elements right away even before other ones are processed.
Aside from these two considerations, I would add one more. Avoid nesting of locks involving two different locking objects. I have learned by experience that it is a likely source of deadlocks, particularly if other parts of the code use them. If the two objects are part of a group that needs to be treated as a single whole all the time, such nesting should be refactored out.
A: There is no need to lock anything before you have started any threads.
If you are only going to read a variable there's no need for locks either. It's when you mix reads and writes that you need to use mutexes and similar locking, and you need to lock in both the reading and the writing thread.
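A minimal sketch of what "lock in both the reading and the writing thread" looks like in practice (the field and lock object mirror the ones in the question; the accessor methods are illustrative and assume a using for System.Threading):
private readonly object locker = new object();
private ThreadPriority managerThreadPriority;

public ThreadPriority GetManagerThreadPriority()
{
    lock (this.locker) { return this.managerThreadPriority; }   // reader takes the lock
}

public void SetManagerThreadPriority(ThreadPriority value)
{
    lock (this.locker) { this.managerThreadPriority = value; }  // writer takes the same lock
}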
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Dynamic SQL - Search Query - Variable Number of Keywords We are trying to update our classic asp search engine to protect it from SQL injection. We have a VB 6 function which builds a query dynamically by concatenating a query together based on the various search parameters. We have converted this to a stored procedure using dynamic sql for all parameters except for the keywords.
The problem with keywords is that there is a variable number of words supplied by the user and we want to search several columns for each keyword. Since we cannot create a separate parameter for each keyword, how can we build a safe query?
Example:
@CustomerId AS INT
@Keywords AS NVARCHAR(MAX)
@sql = 'SELECT event_name FROM calendar WHERE customer_id = @CustomerId '
--(loop through each keyword passed in and concatenate)
@sql = @sql + 'AND (event_name LIKE ''%' + @Keywords + '%'' OR event_details LIKE ''%' + @Keywords + '%'')'
EXEC sp_executesql @sql, N'@CustomerId INT', @CustomerId = @CustomerId
What is the best way to handle this and maintaining protection from SQL injection?
A: You may not like to hear this, but it might be better for you to go back to dynamically constructing your SQL query in code before issuing against the database. If you use parameter placeholders in the SQL string you get the protection against SQL injection attacks.
Example:
string sql = "SELECT Name, Title FROM Staff WHERE UserName=@UserId";
using (SqlCommand cmd = new SqlCommand(sql))
{
cmd.Parameters.Add("@UserId", SqlType.VarChar).Value = "smithj";
You can build the SQL string depending on the set of columns you need to query and then add the parameter values once the string is complete. This is a bit of a pain to do, but I think it is much easier than having really complicated TSQL which unpicks lots of possible permutations of possible inputs.
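As a sketch of what that looks like for a variable number of keywords (the table and column names come from the question; keywordText, customerId and the splitting logic are illustrative assumptions, and the usual System.Data / System.Data.SqlClient / System.Text usings are assumed):
var sql = new StringBuilder(
    "SELECT event_name FROM calendar WHERE customer_id = @CustomerId");

using (SqlCommand cmd = new SqlCommand())
{
    cmd.Parameters.Add("@CustomerId", SqlDbType.Int).Value = customerId;

    string[] keywords = keywordText.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
    for (int i = 0; i < keywords.Length; i++)
    {
        string p = "@kw" + i; // one parameter per keyword - no user input is concatenated into the SQL
        sql.AppendFormat(" AND (event_name LIKE {0} OR event_details LIKE {0})", p);
        cmd.Parameters.Add(p, SqlDbType.NVarChar, 100).Value = "%" + keywords[i] + "%";
    }

    cmd.CommandText = sql.ToString();
    // assign cmd.Connection and execute as usual
}
Keep in mind that LIKE wildcard characters (% and _) inside a keyword still act as wildcards even when parameterised; escape them if that matters to you.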
A: You have 3 options here.
*
*Use a function that converts lists to tables and join onto it (a sketch of one such function follows after this list). So you will have something like this.
SELECT *
FROM calendar c
JOIN dbo.fnListToTable(@Keywords) k
ON c.keyword = k.keyword
*Have a fixed set of params, and only allow a maximum of N keywords to be searched on
CREATE PROC spTest
@Keyword1 varchar(100),
@Keyword2 varchar(100),
....
*Write an escaping string function in TSQL and escape your keywords.
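To illustrate option 1, here is a minimal comma-separated list-to-table function; the name dbo.fnListToTable matches the example above, but the body is an illustrative sketch rather than the original poster's code:
CREATE FUNCTION dbo.fnListToTable (@List NVARCHAR(MAX))
RETURNS @Items TABLE (keyword NVARCHAR(100))
AS
BEGIN
    DECLARE @Pos INT
    SET @Pos = CHARINDEX(',', @List)
    WHILE @Pos > 0
    BEGIN
        -- take everything before the next comma as one keyword
        INSERT INTO @Items (keyword) VALUES (LTRIM(RTRIM(LEFT(@List, @Pos - 1))))
        SET @List = SUBSTRING(@List, @Pos + 1, LEN(@List))
        SET @Pos = CHARINDEX(',', @List)
    END
    IF LEN(@List) > 0
        INSERT INTO @Items (keyword) VALUES (LTRIM(RTRIM(@List)))  -- the last (or only) keyword
    RETURN
END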
A: *
*Unless you need it, you could simply strip out any character that's not in [a-zA-Z ] - most of those things won't be in searches and you should not be able to be injected that way, nor do you have to worry about keywords or anything like that. If you allow quotes, however, you will need to be more careful.
*Similar to sambo99's #1, you can insert the keywords into a temporary table or table variable and join to it (even using wildcards) without danger of injection:
This isn't really dynamic:
SELECT DISTINCT event_name
FROM calendar
INNER JOIN #keywords
ON event_name LIKE '%' + #keywords.keyword + '%'
OR event_description LIKE '%' + #keywords.keyword + '%'
*
*You can actually generate an SP with a large number of parameters instead of coding it by hand (set the defaults to '' or NULL depending on your preference in coding your searches). If you found you needed more parameters, it would be simple to increase the number of parameters it generated.
*You can move the search to a full-text index outside the database like Lucene and then use the Lucene results to pull the matching database rows.
A: You can try this:
SELECT * FROM [tablename] WHERE [columnname] LIKE '%' + @keyword + '%'
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do I get the build to automatically update a web service reference in Flex? From within FlexBuilder3, I can go to "Data/Manage Web Services..." select a web service, and click "Update" to ensure that my code and the server are in sync. How do I automate this so that each time I build, the automatically generated web service code is regenerated?
If the server interface changes during development but my code doesn't, it won't work anyway - I'd rather have a compilation error than a runtime error I had to track down to a changed web service interface.
A: Unfortunately you can not update a web service at build time. The update is part of the wizard and is not implemented as a separate action to be called on demand.
One suggestion I have is to go into the project properties and, inside the Build panel, add a program as a new builder. You will have to create this program, which checks each time the project is built that the WSDL file did not change. It is a little bit complicated, but if you are working on a project that relies on a web service that is under heavy development, it might save you a lot of time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Capistrano not restarting Mongrel clusters properly I have a cluster of three mongrels running under nginx, and I deploy the app using Capistrano 2.4.3. When I "cap deploy" when there is a running system, the behavior is:
*
*The app is deployed. The code is successfully updated.
*In the cap deploy output, there is this:
*
*executing "sudo -p 'sudo password: '
mongrel_rails cluster::restart -C
/var/www/rails/myapp/current/config/mongrel_cluster.yml"
*servers: ["myip"]
*[myip] executing command
*** [out :: myip] stopping port 9096
*** [out :: myip] stopping port 9097
*** [out :: myip] stopping port 9098
*** [out :: myip] already started port 9096
*** [out :: myip] already started port 9097
*** [out :: myip] already started port 9098
*I check immediately on the server and find that Mongrel is still running, and the PID files are still present for the previous three instances.
*A short time later (less than one minute), I find that Mongrel is no longer running, the PID files are gone, and it has failed to restart.
*If I start mongrel on the server by hand, the app starts up just fine.
It seems like 'mongrel_rails cluster::restart' isn't properly waiting for a full stop
before attempting a restart of the cluster. How do I diagnose and fix this issue?
EDIT: Here's the answer:
mongrel_cluster, in the "restart" task, simply does this:
def run
  stop
  start
end
It doesn't do any waiting or checking to see that the process exited before invoking "start". This is a known bug with an outstanding patch submitted. I applied the patch to Mongrel Cluster and the problem disappeared.
A: You can explicitly tell the mongrel_cluster recipes to remove the pid files before a start by adding the following in your capistrano recipes:
# helps keep mongrel pid files clean
set :mongrel_clean, true
This causes it to pass the --clean option to mongrel_cluster_ctl.
I went back and looked at one of my deployment recipes and noticed that I had also changed the way my restart task worked. Take a look at the following message in the mongrel users group:
mongrel users discussion of restart
The following is my deploy:restart task. I admit it's a bit of a hack.
namespace :deploy do
  desc "Restart the Mongrel processes on the app server."
  task :restart, :roles => :app do
    mongrel.cluster.stop
    sleep 2.5
    mongrel.cluster.start
  end
end
A: First, narrow the scope of what your testing by only calling cap deploy:restart. You might want to pass the --debug option to prompt before remote execution or the --dry-run option just to see what's going on as you tweak your settings.
At first glance, this sounds like a permissions issue on the pid files or mongrel processes, but it's difficult to know for sure. A couple things that catch my eye are:
*
*the :runner variable is explicitly set to nil -- Was there a specific reason for this?
*Capistrano 2.4 introduced a new behavior for the :admin_runner variable. Without seeing the entire recipe, is this possibly related to your problem?
:runner vs. :admin_runner (from capistrano 2.4 release)
Some cappers have noted that having deploy:setup and deploy:cleanup run as the :runner user messed up their carefully crafted permissions. I agreed that this was a problem. With this release, deploy:start, deploy:stop, and deploy:restart all continue to use the :runner user when sudoing, but deploy:setup and deploy:cleanup will use the :admin_runner user. The :admin_runner variable is unset, by default, meaning those tasks will sudo as root, but if you want them to run as :runner, just do “set :admin_runner, runner”.
My recommendation for what to do next. Manually stop the mongrels and clean up the PIDs. Start the mongrels manually. Next, continue to run cap deploy:restart while debugging the problem. Repeat as necessary.
A: Either way, my mongrels are starting before the previous stop command has finished shutting 'em all down.
sleep 2.5 is not a good solution, if it takes longer than 2.5 seconds to halt all running mongrels.
There seems to be a need for:
stop && start
vs.
stop; start
(this is how bash works, && waits for the first command to finish w/o error, while ";" simply runs the next command).
I wonder if there is a:
wait cluster_stop
then cluster_start
A: I hate to be so basic, but it sounds like the pid files are still hanging around when it is trying to start. Make sure that mongrel is stopped by hand. Clean up the pid files by hand. Then do a cap deploy.
A: Good discussion: http://www.ruby-forum.com/topic/139734#745030
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Why is it impossible, without attempting I/O, to detect that TCP socket was gracefully closed by peer? As a follow up to a recent question, I wonder why it is impossible in Java, without attempting reading/writing on a TCP socket, to detect that the socket has been gracefully closed by the peer? This seems to be the case regardless of whether one uses the pre-NIO Socket or the NIO SocketChannel.
When a peer gracefully closes a TCP connection, the TCP stacks on both sides of the connection know about the fact. The server-side (the one that initiates the shutdown) ends up in state FIN_WAIT2, whereas the client-side (the one that does not explicitly respond to the shutdown) ends up in state CLOSE_WAIT. Why isn't there a method in Socket or SocketChannel that can query the TCP stack to see whether the underlying TCP connection has been terminated? Is it that the TCP stack doesn't provide such status information? Or is it a design decision to avoid a costly call into the kernel?
With the help of the users who have already posted some answers to this question, I think I see where the issue might be coming from. The side that doesn't explicitly close the connection ends up in TCP state CLOSE_WAIT meaning that the connection is in the process of shutting down and waits for the side to issue its own CLOSE operation. I suppose it's fair enough that isConnected returns true and isClosed returns false, but why isn't there something like isClosing?
Below are the test classes that use pre-NIO sockets. But identical results are obtained using NIO.
import java.net.ServerSocket;
import java.net.Socket;

public class MyServer {
    public static void main(String[] args) throws Exception {
        final ServerSocket ss = new ServerSocket(12345);
        final Socket cs = ss.accept();
        System.out.println("Accepted connection");
        Thread.sleep(5000);
        cs.close();
        System.out.println("Closed connection");
        ss.close();
        Thread.sleep(100000);
    }
}

import java.net.Socket;

public class MyClient {
    public static void main(String[] args) throws Exception {
        final Socket s = new Socket("localhost", 12345);
        for (int i = 0; i < 10; i++) {
            System.out.println("connected: " + s.isConnected() +
                    ", closed: " + s.isClosed());
            Thread.sleep(1000);
        }
        Thread.sleep(100000);
    }
}
When the test client connects to the test server the output remains unchanged even after the server initiates the shutdown of the connection:
connected: true, closed: false
connected: true, closed: false
...
A: Since none of the answers so far fully answer the question, I'm summarizing my current understanding of the issue.
When a TCP connection is established and one peer calls close() or shutdownOutput() on its socket, the socket on the other side of the connection transitions into CLOSE_WAIT state. In principle, it's possible to find out from the TCP stack whether a socket is in CLOSE_WAIT state without calling read/recv (e.g., getsockopt() on Linux: http://www.developerweb.net/forum/showthread.php?t=4395), but that's not portable.
Java's Socket class seems to be designed to provide an abstraction comparable to a BSD TCP socket, probably because this is the level of abstraction to which people are used to when programming TCP/IP applications. BSD sockets are a generalization supporting sockets other than just INET (e.g., TCP) ones, so they don't provide a portable way of finding out the TCP state of a socket.
There's no method like isCloseWait() because people used to programming TCP applications at the level of abstraction offered by BSD sockets don't expect Java to provide any extra methods.
A: Detecting whether the remote side of a (TCP) socket connection has closed can be done with the java.net.Socket.sendUrgentData(int) method, and catching the IOException it throws if the remote side is down. This has been tested between Java-Java, and Java-C.
This avoids the problem of designing the communication protocol to use some sort of pinging mechanism. By disabling OOBInline on a socket (setOOBInline(false)), any OOB data received is silently discarded, but OOB data can still be sent. If the remote side is closed, a connection reset is attempted, fails, and causes some IOException to be thrown.
If you actually use OOB data in your protocol, then your mileage may vary.
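A rough sketch of that probe (setOOBInline and sendUrgentData are the real java.net.Socket methods; the wrapper class and method are illustrative):
import java.io.IOException;
import java.net.Socket;

public final class PeerProbe {
    /** Returns false if sending one byte of urgent data fails, i.e. the connection looks dead. */
    public static boolean isPeerAlive(Socket socket) {
        try {
            socket.setOOBInline(false); // discard any urgent data the peer might send back
            socket.sendUrgentData(0);   // throws IOException if the connection has gone away
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}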
A: It's an interesting topic. I've dug through the Java code just now to check. From my findings, there are two distinct problems: the first is the TCP RFC itself, which allows a remotely closed socket to keep transmitting data in half-duplex, so a remotely closed socket is still half open. As per the RFC, RST doesn't close the connection; you need to send an explicit ABORT command, so Java allows sending data through a half-closed socket.
(There are two methods for reading the close status at both of the endpoint.)
The other problem is that the implementations say that this behavior is optional. As Java strives to be portable, they implemented the best common feature set. Maintaining a map of (OS, implementation of half-duplex) would have been a problem, I guess.
A: the Java IO stack definitely sends FIN when it gets destructed on an abrupt teardown. It just makes no sense that you can't detect this, b/c most clients only send the FIN if they are shutting down the connection.
...another reason i am really beginning to hate the NIO Java classes. It seems like everything is a little half-ass.
A: This is a flaw of Java's (and all others' that I've looked at) OO socket classes -- no access to the select system call.
Correct answer in C:
struct timeval tp;
fd_set in;
fd_set out;
fd_set err;

FD_ZERO(&in);
FD_ZERO(&out);
FD_ZERO(&err);
FD_SET(socket_handle, &err);

tp.tv_sec = 0;  /* or however long you want to wait */
tp.tv_usec = 0;

select(socket_handle + 1, &in, &out, &err, &tp);

if (FD_ISSET(socket_handle, &err)) {
    /* handle closed socket */
}
A: I have been using Sockets often, mostly with Selectors, and though not a Network OSI expert, from my understanding, calling shutdownOutput() on a Socket actually sends something on the network (FIN) that wakes up my Selector on the other side (same behaviour in C language). Here you have detection: actually detecting a read operation that will fail when you try it.
In the code you give, closing the socket will shut down both input and output streams, without any possibility of reading the data that might still be available, therefore losing it. The Java Socket.close() method performs a "graceful" disconnection (the opposite of what I initially thought) in that the data left in the output stream will be sent, followed by a FIN to signal its close. The FIN will be ACK'd by the other side, as any regular packet would [1].
If you need to wait for the other side to close its socket, you need to wait for its FIN. And to achieve that, you have to detect Socket.getInputStream().read() < 0, which means you should not close your socket, as it would close its InputStream.
From what I did in C, and now in Java, achieving such a synchronized close should be done like this:
*
*Shutdown socket output (sends FIN on the other end, this is the last thing that will ever be sent by this socket). Input is still open so you can read() and detect the remote close()
*Read the socket InputStream until we receive the reply-FIN from the other end (as it will detect the FIN, it will go through the same graceful disconnection process). This is important on some OSes, as they don't actually close the socket as long as one of its buffers still contains data. These are called "ghost" sockets and use up descriptor numbers in the OS (that might not be an issue anymore with modern OSes)
*Close the socket (by either calling Socket.close() or closing its InputStream or OutputStream)
As shown in the following Java snippet:
public void synchronizedClose(Socket sok) throws IOException {
    InputStream is = sok.getInputStream();
    sok.shutdownOutput();        // Sends the 'FIN' on the network
    while (is.read() != -1) ;    // read() returns -1 when the peer's 'FIN' is reached
    sok.close();                 // or is.close(); now we can close the Socket
}
Of course both sides have to use the same way of closing, or the sending part might always be sending enough data to keep the while loop busy (e.g. if the sending part is only sending data and never reading to detect connection termination. Which is clumsy, but you might not have control on that).
As @WarrenDew pointed out in his comment, discarding the data in the program (application layer) induces a non-graceful disconnection at application layer: though all data were received at TCP layer (the while loop), they are discarded.
1: From "Fundamental Networking in Java": see fig. 3.3 p.45, and the whole §3.7, pp 43-48
A: The reason for this behaviour (which is not Java specific) is the fact that you don't get any status information from the TCP stack. After all, a socket is just another file handle and you can't find out if there's actual data to read from it without actually trying to (select(2) won't help there, it only signals that you can try without blocking).
For more information see the Unix socket FAQ.
A: Here is a lame workaround. Use SSL ;) and SSL does a close handshake on teardown, so you are notified of the socket being closed (most implementations seem to do a proper handshake teardown, that is).
A: I think this is more of a socket programming question. Java is just following the socket programming tradition.
From Wikipedia:
TCP provides reliable, ordered
delivery of a stream of bytes from one
program on one computer to another
program on another computer.
Once the handshake is done, TCP does not make any distinction between two end points (client and server). The term "client" and "server" is mostly for convenience. So, the "server" could be sending data and "client" could be sending some other data simultaneously to each other.
The term "Close" is also misleading. There's only FIN declaration, which means "I am not going to send you any more stuff." But this does not mean that there are no packets in flight, or the other has no more to say. If you implement snail mail as the data link layer, or if your packet traveled different routes, it's possible that the receiver receives packets in wrong order. TCP knows how to fix this for you.
Also you, as a program, may not have time to keep checking what's in the buffer. So, at your convenience you can check what's in the buffer. All in all, the current socket implementation is not so bad. If there actually were an isPeerClosed(), that's an extra call you would have to make every time you want to call read.
A: The underlying sockets API doesn't have such a notification.
The sending TCP stack won't send the FIN bit until the last packet anyway, so there could be a lot of data buffered from when the sending application logically closed its socket before that data is even sent. Likewise, data that's buffered because the network is quicker than the receiving application (I don't know, maybe you're relaying it over a slower connection) could be significant to the receiver and you wouldn't want the receiving application to discard it just because the FIN bit has been received by the stack.
A: Only writes require that packets be exchanged, which allows the loss of the connection to be determined. A common workaround is to use the KEEP ALIVE option.
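Enabling it is a one-liner on the standard java.net.Socket API (host and port are placeholders):
Socket socket = new Socket("example.com", 1234);
socket.setKeepAlive(true); // ask the OS to send TCP keep-alive probes on this connection
Note that the probe interval is controlled by the operating system and often defaults to two hours, so this detects a dead peer eventually rather than promptly.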
A: When it comes to dealing with half-open Java sockets, one might want to have a look at
isInputShutdown() and isOutputShutdown().
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "95"
} |
Q: How do you truncate all tables in a database using TSQL? I have a test environment for a database that I want to reload with new data at the start of a testing cycle. I am not interested in rebuilding the entire database- just simply "re-setting" the data.
What is the best way to remove all the data from all the tables using TSQL? Are there system stored procedures, views, etc. that can be used? I do not want to manually create and maintain truncate table statements for each table- I would prefer it to be dynamic.
A: Don't do this! Really, not a good idea.
If you know which tables you want to truncate, create a stored procedure which truncates them. You can fix the order to avoid foreign key problems.
If you really want to truncate them all (so you can BCP load them for example) you would be just as quick to drop the database and create a new one from scratch, which would have the additional benefit that you know exactly where you are.
A: An alternative option I like to use with MSSQL Server Developer or Enterprise is to create a snapshot of the database immediately after creating the empty schema. At that point you can just keep restoring the database back to the snapshot.
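A sketch of that approach (database, logical file and path names are placeholders):
-- Create the snapshot once, right after the empty schema is in place
CREATE DATABASE MyTestDb_Empty ON
    (NAME = MyTestDb_Data, FILENAME = 'C:\Snapshots\MyTestDb_Empty.ss')
AS SNAPSHOT OF MyTestDb;

-- At the start of each test cycle, roll the database back to that state
RESTORE DATABASE MyTestDb FROM DATABASE_SNAPSHOT = 'MyTestDb_Empty';
Reverting requires that this is the only snapshot of the database and that no other connections are using it.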
A: Here's the king daddy of database wiping scripts. It will clear all tables and reseed them correctly:
SET QUOTED_IDENTIFIER ON;
EXEC sp_MSforeachtable 'SET QUOTED_IDENTIFIER ON; ALTER TABLE ? NOCHECK CONSTRAINT ALL'
EXEC sp_MSforeachtable 'SET QUOTED_IDENTIFIER ON; ALTER TABLE ? DISABLE TRIGGER ALL'
EXEC sp_MSforeachtable 'SET QUOTED_IDENTIFIER ON; DELETE FROM ?'
EXEC sp_MSforeachtable 'SET QUOTED_IDENTIFIER ON; ALTER TABLE ? CHECK CONSTRAINT ALL'
EXEC sp_MSforeachtable 'SET QUOTED_IDENTIFIER ON; ALTER TABLE ? ENABLE TRIGGER ALL'
EXEC sp_MSforeachtable '
    SET QUOTED_IDENTIFIER ON;
    IF NOT EXISTS
    (
        SELECT *
        FROM SYS.IDENTITY_COLUMNS
        JOIN SYS.TABLES ON SYS.IDENTITY_COLUMNS.Object_ID = SYS.TABLES.Object_ID
        WHERE SYS.TABLES.Object_ID = OBJECT_ID(''?'') AND SYS.IDENTITY_COLUMNS.Last_Value IS NULL
    )
    AND OBJECTPROPERTY( OBJECT_ID(''?''), ''TableHasIdentity'' ) = 1
        DBCC CHECKIDENT (''?'', RESEED, 0) WITH NO_INFOMSGS;
'
Enjoy, but be careful!
A: The simplest way of doing this is to
*
*open up SQL Management Studio
*navigate to your database
*Right-click and select Tasks->Generate Scripts (pic 1)
*On the "choose Objects" screen, select the "select specific objects" option and check "tables" (pic 2)
*on the next screen, select "advanced" and then change the "Script DROP and CREATE" option from its default of "Script CREATE" to "Script DROP and CREATE" (pic 3)
*Choose to save script to a new editor window or a file and run as necessary.
this will give you a script that drops and recreates all your tables without the need to worry about debugging or whether you've included everything. While this performs more than just a truncate, the results are the same. Just keep in mind that your auto-incrementing primary keys will start at 0, as opposed to truncated tables which will remember the last value assigned. You can also execute this from code if you don't have access to Management studio on your PreProd or Production environments.
A: When dealing with deleting data from tables which have foreign key relationships - which is basically the case with any properly designed database - we can disable all the constraints, delete all the data and then re-enable constraints
-- disable all constraints
EXEC sp_MSForEachTable "ALTER TABLE ? NOCHECK CONSTRAINT all"
-- delete data in all tables
EXEC sp_MSForEachTable "DELETE FROM ?"
-- enable all constraints
exec sp_MSForEachTable "ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all"
More on disabling constraints and triggers here
if some of the tables have identity columns we may want to reseed them
EXEC sp_MSForEachTable "DBCC CHECKIDENT ( '?', RESEED, 0)"
Note that the behaviour of RESEED differs between a brand new table and one which has had some data inserted previously. From BOL:
DBCC CHECKIDENT ('table_name', RESEED, newReseedValue)
The current identity value is set to the newReseedValue. If no rows have been inserted to the table since it was created, the first row inserted after executing DBCC CHECKIDENT will use newReseedValue as the identity. Otherwise, the next row inserted will use newReseedValue + 1. If the value of newReseedValue is less than the maximum value in the identity column, error message 2627 will be generated on subsequent references to the table.
Thanks to Robert for pointing out that disabling constraints does not allow you to use TRUNCATE; the constraints would have to be dropped and then recreated.
A: If you want to keep data in a particular table (i.e. a static lookup table) while deleting/truncating data in other tables within the same db, then you need a loop with the exceptions in it. This is what I was looking for when I stumbled onto this question.
sp_MSForEachTable seems buggy to me (i.e. inconsistent behavior with IF statements), which is probably why it's undocumented by MS.
declare @LastObjectID int = 0
declare @TableName nvarchar(100) = ''

set @LastObjectID = (select top 1 [object_id] from sys.tables where [object_id] > @LastObjectID order by [object_id])

while (@LastObjectID is not null)
begin
    set @TableName = (select top 1 [name] from sys.tables where [object_id] = @LastObjectID)

    if (@TableName not in ('Profiles', 'ClientDetails', 'Addresses', 'AgentDetails', 'ChainCodes', 'VendorDetails'))
    begin
        exec('truncate table [' + @TableName + ']')
    end

    set @LastObjectID = (select top 1 [object_id] from sys.tables where [object_id] > @LastObjectID order by [object_id])
end
A: The hardest part of truncating all tables is removing and re-adding the foreign key constraints.
The following query creates the drop & create statements for each constraint relating to each table name in @myTempTable. If you would like to generate these for all the tables, you may simply use the information schema to gather these table names instead.
DECLARE @myTempTable TABLE (tableName varchar(200))
INSERT INTO @myTempTable(tableName) VALUES
('TABLE_ONE'),
('TABLE_TWO'),
('TABLE_THREE')
-- DROP FK Contraints
SELECT 'alter table '+quotename(schema_name(ob.schema_id))+
'.'+quotename(object_name(ob.object_id))+ ' drop constraint ' + quotename(fk.name)
FROM sys.objects ob INNER JOIN sys.foreign_keys fk ON fk.parent_object_id = ob.object_id
WHERE fk.referenced_object_id IN
(
SELECT so.object_id
FROM sys.objects so JOIN sys.schemas sc
ON so.schema_id = sc.schema_id
WHERE so.name IN (SELECT * FROM @myTempTable) AND sc.name=N'dbo' AND type in (N'U'))
-- CREATE FK Contraints
SELECT 'ALTER TABLE [PIMSUser].[dbo].[' +cast(c.name as varchar(255)) + '] WITH NOCHECK ADD CONSTRAINT ['+ cast(f.name as varchar(255)) +'] FOREIGN KEY (['+ cast(fc.name as varchar(255)) +'])
REFERENCES [PIMSUser].[dbo].['+ cast(p.name as varchar(255)) +'] (['+cast(rc.name as varchar(255))+'])'
FROM sysobjects f
INNER JOIN sys.sysobjects c ON f.parent_obj = c.id
INNER JOIN sys.sysreferences r ON f.id = r.constid
INNER JOIN sys.sysobjects p ON r.rkeyid = p.id
INNER JOIN sys.syscolumns rc ON r.rkeyid = rc.id and r.rkey1 = rc.colid
INNER JOIN sys.syscolumns fc ON r.fkeyid = fc.id and r.fkey1 = fc.colid
WHERE
f.type = 'F'
AND
cast(p.name as varchar(255)) IN (SELECT * FROM @myTempTable)
I then just copy out the statements to run - but with a bit of dev effort you could use a cursor to run them dynamically.
A: It is much easier (and possibly even faster) to script out your database, then just drop and create it from the script.
A: Make an empty "template" database, take a full backup. When you need to refresh, just restore using WITH REPLACE. Fast, simple, bulletproof. And if a couple of tables here or there need some base data (e.g. config information, or just basic information that makes your app run), it handles that too.
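A sketch of that (database names and paths are placeholders):
-- Once, against the freshly created empty "template" database
BACKUP DATABASE MyTestDb TO DISK = 'C:\Backups\MyTestDb_Empty.bak' WITH INIT;

-- At the start of each test cycle
RESTORE DATABASE MyTestDb FROM DISK = 'C:\Backups\MyTestDb_Empty.bak' WITH REPLACE;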
A: For SQL 2005,
EXEC sp_MSForEachTable 'TRUNCATE TABLE ?'
Couple more links for 2000 and 2005/2008..
A: This is one way to do it... there are likely 10 others that are better/more efficient, but it sounds like this is done very infrequently, so here goes...
get a list of the tables from sysobjects, then loop over those with a cursor, executing 'truncate table ' + @table_name (e.g. via sp_executesql) for each iteration.
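A sketch of that loop (using sys.tables for readability; it assumes no foreign keys block the TRUNCATE):
DECLARE @TableName SYSNAME
DECLARE @Sql NVARCHAR(500)

DECLARE table_cursor CURSOR FOR
    SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name) FROM sys.tables

OPEN table_cursor
FETCH NEXT FROM table_cursor INTO @TableName

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @Sql = N'TRUNCATE TABLE ' + @TableName
    EXEC sp_executesql @Sql
    FETCH NEXT FROM table_cursor INTO @TableName
END

CLOSE table_cursor
DEALLOCATE table_cursor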
A: Truncating all of the tables will only work if you don't have any foreign key relationships between your tables, as SQL Server will not allow you to truncate a table with a foreign key.
An alternative to this is to determine the tables with foreign keys and delete from these first, you can then truncate the tables without foreign keys afterwards.
See http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=65341 and http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=72957 for further details.
A: Run the commented out section once, populate the _TruncateList table with the tables you want truncated, then run the rest of the script. The _ScriptLog table will need to be cleaned up over time if you do this a lot.
You can modify this if you want to do all tables, just put in SELECT name INTO #TruncateList FROM sys.tables. However, you usually don't want to do them all.
Also, this will affect all foreign keys in the database, and you can modify that as well if it's too blunt-force for your application. It's not for my purposes.
/*
CREATE TABLE _ScriptLog
(
ID Int NOT NULL Identity(1,1)
, DateAdded DateTime2 NOT NULL DEFAULT GetDate()
, Script NVarChar(4000) NOT NULL
)
CREATE UNIQUE CLUSTERED INDEX IX_ScriptLog_DateAdded_ID_U_C ON _ScriptLog
(
DateAdded
, ID
)
CREATE TABLE _TruncateList
(
TableName SysName PRIMARY KEY
)
*/
IF OBJECT_ID('TempDB..#DropFK') IS NOT NULL BEGIN
DROP TABLE #DropFK
END
IF OBJECT_ID('TempDB..#TruncateList') IS NOT NULL BEGIN
DROP TABLE #TruncateList
END
IF OBJECT_ID('TempDB..#CreateFK') IS NOT NULL BEGIN
DROP TABLE #CreateFK
END
SELECT Scripts = 'ALTER TABLE ' + '[' + OBJECT_NAME(f.parent_object_id)+ ']'+
' DROP CONSTRAINT ' + '[' + f.name + ']'
INTO #DropFK
FROM .sys.foreign_keys AS f
INNER JOIN .sys.foreign_key_columns AS fc
ON f.OBJECT_ID = fc.constraint_object_id
SELECT TableName
INTO #TruncateList
FROM _TruncateList
SELECT Scripts = 'ALTER TABLE ' + const.parent_obj + '
ADD CONSTRAINT ' + const.const_name + ' FOREIGN KEY (
' + const.parent_col_csv + '
) REFERENCES ' + const.ref_obj + '(' + const.ref_col_csv + ')
'
INTO #CreateFK
FROM (
SELECT QUOTENAME(fk.NAME) AS [const_name]
,QUOTENAME(schParent.NAME) + '.' + QUOTENAME(OBJECT_name(fkc.parent_object_id)) AS [parent_obj]
,STUFF((
SELECT ',' + QUOTENAME(COL_NAME(fcP.parent_object_id, fcp.parent_column_id))
FROM sys.foreign_key_columns AS fcP
WHERE fcp.constraint_object_id = fk.object_id
FOR XML path('')
), 1, 1, '') AS [parent_col_csv]
,QUOTENAME(schRef.NAME) + '.' + QUOTENAME(OBJECT_NAME(fkc.referenced_object_id)) AS [ref_obj]
,STUFF((
SELECT ',' + QUOTENAME(COL_NAME(fcR.referenced_object_id, fcR.referenced_column_id))
FROM sys.foreign_key_columns AS fcR
WHERE fcR.constraint_object_id = fk.object_id
FOR XML path('')
), 1, 1, '') AS [ref_col_csv]
FROM sys.foreign_key_columns AS fkc
INNER JOIN sys.foreign_keys AS fk ON fk.object_id = fkc.constraint_object_id
INNER JOIN sys.objects AS oParent ON oParent.object_id = fkc.parent_object_id
INNER JOIN sys.schemas AS schParent ON schParent.schema_id = oParent.schema_id
INNER JOIN sys.objects AS oRef ON oRef.object_id = fkc.referenced_object_id
INNER JOIN sys.schemas AS schRef ON schRef.schema_id = oRef.schema_id
GROUP BY fkc.parent_object_id
,fkc.referenced_object_id
,fk.NAME
,fk.object_id
,schParent.NAME
,schRef.NAME
) AS const
ORDER BY const.const_name
INSERT INTO _ScriptLog (Script)
SELECT Scripts
FROM #CreateFK
DECLARE @Cmd NVarChar(4000)
, @TableName SysName
WHILE 0 < (SELECT Count(1) FROM #DropFK) BEGIN
SELECT TOP 1 @Cmd = Scripts
FROM #DropFK
EXEC (@Cmd)
DELETE #DropFK WHERE Scripts = @Cmd
END
WHILE 0 < (SELECT Count(1) FROM #TruncateList) BEGIN
SELECT TOP 1 @Cmd = N'TRUNCATE TABLE ' + TableName
, @TableName = TableName
FROM #TruncateList
EXEC (@Cmd)
DELETE #TruncateList WHERE TableName = @TableName
END
WHILE 0 < (SELECT Count(1) FROM #CreateFK) BEGIN
SELECT TOP 1 @Cmd = Scripts
FROM #CreateFK
EXEC (@Cmd)
DELETE #CreateFK WHERE Scripts = @Cmd
END
A: It is a little late but it might help someone.
I created a procedure some time back which does the following using T-SQL:
*
*Store all constraints in a Temporary table
*Drop All Constraints
*Truncate all tables, with the exception of some tables which do not need truncation
*Recreate all Constraints.
I have listed it on my blog here
A: I do not see why clearing data would be better than a script to drop and re-create each table.
That, or keep a backup of your empty DB and restore it over the old one.
A: Before truncating the tables you have to remove all foreign keys. Use this script to generate final scripts to drop and recreate all foreign keys in database. Please set the @action variable to 'CREATE' or 'DROP'.
A: SELECT 'DELETE FROM ' + TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE'
Run this to generate one DELETE statement per table, then copy and paste the result into a query window and run it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "225"
} |
Q: Help me understand how QA works in Scrum Apparently we use the Scrum development methodology. Here's generally how it goes:
Developers thrash around trying to accomplish their tasks. Generally the tasks take most of the sprint to complete. QA pesters Dev to release something they can test, Dev finally throws some buggy code out to QA a day or two before the sprint ends and spends the rest of the time fixing bugs that QA is finding. QA can never complete the tasks on time, sprints are rarely releasable on time, and Dev and QA have a miserable few days at the end of the sprint.
How is scrum supposed to work when releasable Dev tasks take up most of the sprint?
Thank you everyone for your part in the discussion. As it's a pretty open-ended question, it doesn't seem like there is one "answer" - there are many good suggestions below. I'll attempt to summarize some of my "take home" points and make some clarifications.
(BTW - Is this the best place to put this or should I have put it in an 'answer'?)
Points to ponder / act on:
*
*Need to ensure that developer tasks are as small (granular) as possible.
*Sprint length should be appropriately based on average task length (e.g. sprint with 1 week tasks should be at least 4 weeks long)
*Team (including QA) needs to work on becoming more accurate at estimating.
*Consider doing a separate QA sprint in parallel but off-set if that works best for the team
*Unit testing!
A: The Scrum rules say that all Sprint items need to be "fully tested, potentially implementable features" at the end of the Sprint to be considered complete. Sprints ALWAYS end on time, and the Team doesn't get credit and isn't allowed to present anything at the Sprint review that isn't complete - and that includes QA.
Technically, that's all you should need. A Team commits to a certain amount of work, finally gets it to QA two days before the end of the Sprint and the QA isn't done in time. So the output from the Sprint is zero, they have to go in front of the Customer and admit that they have nothing to show for a month of work.
Next time round, you'll bet that they'll pick less work and figure out how to get it to QA so that it can be finished on time.
A: Speaking as a QA who has worked on Agile projects for 2.5 years this is a really difficult issue and I still don't have all the answers.
I work as part of a "triplet" (two developers who pair program + one QA) and I am involved in tasking out stories and estimating in planning meetings at the beginning of two week iterations. As adrianh mentioned above it is essential for QAs to get their voice heard in the initial sprint planning. This can be difficult especially if you are working with Developers with very strong personalities however QAs must be assertive in the true sense of the word (i.e. not aggressive or forceful but respectfully seeking to understand the Truth/PO and Developers/technical experts whilst making themselves understood). I advocate producing QA tasks first during planning to encourage a test driven mentality - the QA may have to literally put themselves forward to get this adopted. It is opposite to how many people think software development works but pays dividends for several reasons;
*
*QA is heard and not relegated to being asked "so how are you going to test that?" after Devs have said their piece (waterfall mentality).
*It allows QA to propose ideas for testing which at the same time checks the testability of the acceptance criteria while the Truth/PO is present (I did say it is essential for them to be present in the planning meeting didn't I?!) to fill in any gaps in understanding.
*It provides the basis for a test driven approach - after the test approach has been enunciated and tasked the Devs can think about how they will produce code to pass those tests.
*If steps 1 - 3 are your only TDD activity for the rest of the iteration you are still doing a million times better than the scenario postulated by Steve in the first post; "Developers thrash around trying to accomplish their tasks. Generally the tasks take most of the sprint to complete. QA pesters Dev to release something they can test, Dev finally throws some buggy code out to QA a day or two before the sprint ends and spends the rest of the time fixing bugs that QA is finding"
Needless to say this comes with some caveats for the QA;
*
*They must be prepared to have their ideas for testing challenged by Devs and Truth/PO and to reach a compromise; the "QA police" attitude won't wash in an Agile team.
*QA tasks must strike a difficult balance to be neither too detailed nor too generic (tasks can be written on a card to go on a "radiator board" and discussed at daily stand up meetings - they need to be moved from "in progress" to "completed" DURING the iteration).
*QAs need to prepare for planning/estimation meetings. Don't expect to be able to just turn up and produce a test approach off the top of your head for unseen user stories! Devs do seem to be able to do this because their tasks are often far more clear cut - e.g. "change x module to interface with z component" or "refactor y method". As a QA you need to be familiar with the functionality being introduced/changed BEFORE planning so that you know the scope of testing and what test design techniques you might apply.
*It is almost essential to automate your tests and have these written and "failing" within the first two or three days of an iteration or at least to co-incide with when the Devs have the code ready. You can then run the test/s and see if they pass as expected (proper QA TDD). This is how you avoid a mini waterfall at the end of iterations. You should really demo the test to the Devs before or as they start coding so they know what to aim for.
*I say 4 is "almost essential" because the same can sometimes be successfully achieved with manual checklists (dare I say scripts!) of expected behaviour - the key is to share this with Devs ahead of time; keep talking to them!
With regards to point 2 above on the subject of the tasks, I have tried creating tasks as granular as 1/2 hour to 2 hours in size each corresponding to a demonstrable piece of work e.g. "Add checks for incorrect password to auto test - 2 hrs". While this helps me organise my work it has been criticised by other team members for being too detailed and has the effect at stand ups of me either moving multiple tasks across to complete from the day before or not being able to move any tasks at all because I have not got onto them yet. People really want to see a sense of steady progress at daily stand ups so it is more helpful to create tasks in 1/2 day or 1 day blocks (but you might keep your own list of "micro-tasks" to do towards to completion of the bigger tasks that you use for COMMUNICATING overall progress at the stand-up).
With regards to points 4 and 5 above; the automated tests or manual checklists you prepare early should really cover just the happy paths or key acceptance criteria. Once these pass you can have planned an additional task for a final round of "Exploratory testing" towards the end of the iteration to check the edge cases. What the Devs do during that time is problematic because as far as they are concerned they are "code complete" unless and until you find a bug. Some Agile practitioners advocate going for the edge cases first although this can also be problematic because if you run out of time you may not have assured that the acceptance criteria have been delivered. This is one of those finely balanced decisions that depends on the context of the user story and your experience as a QA!
As I said at the beginning I still don't have all the answers but hope the above provide some pointers born out of hard experience!
A: We solved this problem as follows:
- Every item in the product backlog must have fit criteria or acceptance criteria,
without those, we don't start a sprint
- A tester is part of our team, for every product backlog item, he creates test tasks (1 or more, based on the acceptance criteria) together with an estimation, and a link to the item to test
- During the daily scrum, all tasks that are finished are placed in a 'To Test' column
- We never do tasks that take longer than 16 hours; tasks that are estimated longer, are split up
A: My opinion is that you have an estimation problem. It seems that the time to test each feature is missing, and only the building part is being considered when planning the sprint.
I'm not saying it is an easy problem to solve, because it is more common than anything. But things that could help are:
*
*Consider QA as members of the dev team, and include them in the sprint planning and estimating more closely.
*'Releasable Dev tasks' should not take up most of the sprint. Complete working features should. Try to gather metrics about dev time vs QA time for each kind of task and use those metrics when estimating future sprints.
*You might need to review your backlog to see if you have very coarse grained features. Try to divide them in smaller tasks that could be easily estimated and tested.
In summary, it seems that your team hasn't found what its real velocity is because there are tasks that are not being considered when doing the estimation and planning for the sprint.
But in the end, estimation inaccuracy is a tough project management issue that you find in agile-based or waterfall-based projects. Good luck.
A: Sounds like your development team might not be doing enough testing on their own, before the release to QA. If all your unit tests are passing, the QA cycle should be relatively smooth sailing, no? They'll find some integration errors, but there shouldn't be very many of those, right?
A: I think that there are several problems here. First, I think that perhaps the developer tasks aren't fine-grained enough, or perhaps not estimated well, or perhaps both. The whole purpose of the sprints in Scrum is to be able to demonstrate workable code at the end of each sprint. Both of the problems that I mentioned could lead to buggy code.
If developers are releasing buggy code towards the end of the sprint, I would also look at:
*
*Are the product owners really holding the dev members accountable for getting their tasks done? That's the job of the PO, and if that's not happening, then the developers will slack.
*Are the devs using any kind of TDD? If not, that might help matters greatly. Get the developers in the habit of testing their code. We have this problem where I work, and my team is focused on doing the TDD in the important areas so that we don't have to have someone else do it later.
*Are the task/user stories too generic? Wiggle room in the task breakdowns will cause developers to be sloppy. Again, this is somewhat of a PO problem.
One idea that I've heard batted around in the past is to use a QA person as scrummaster. They will be present for the daily standups and can get sense of where things are at with the developers. They can address issues with the PO (assuming that the PO can adequately do their job).
I can't help but feel that you need more coorporation between QA and your scrum teams. It sounds like testing only happens at the end, which is a problem. Getting QA to be a part of the team will help identify things that can be tested earlier and better.
I also feel like you have an issue with the product owner. They must be in there making sure that everyone is driving in the right direction. They should be making sure that there is good cooperation, not only between QA and devs, but between the devs themselves.
A:
"How is scrum supposed to work when releasable Dev
tasks take up most of the sprint?"
As you've found out - it doesn't work terribly well :-) The process you're describing doesn't sound much like Scrum to me - or at least not like Scrum done well.
I'm unsure from what you've described whether the QA folk are part of the team - or a separate group.
If they're a separate group then this is probably a big part of the problem. They won't be involved in the team's commitment to completion of tasks - and the associated scope negotiation with the product owner. I've never seen an agile group succeed well without there being QA skills in the team - either by having developers with a lot of testing/QA skills, or by having an embedded QA person or three on the team.
If they are on the team then they need to get their voice heard more in the initial sprint planning. By now it should be clear to the product owner and team that you're overcommitting.
I'd try a few things if it were me:
*
*Get QA/testing folk on the team if they're not there already
*Have a good long chat with the product owner & the team over what counts as "done". It sounds like some of the developers are still in the pre-scrum mindset of "handed over to QA" == done.
*Break down the stories into smaller chunks - makes it easier to spot estimation mistakes
*Consider running shorter sprints - because little and more often is easier to track and learn from.
You might also find these tips about smoothing down a scrum burndown useful.
A: A little late to the party here but here's my take based on what you wrote.
Now, Scrum is a project management methodology, not a development one. But it is key, in my opinion, to have a development process in place. Without one, you spend the majority of your time reacting rather than building.
I'm a test-first guy. In my development process I build tests first to enforce the requirements and the design decisions. How is your team enforcing those? The point I'm trying to make here is that you simply can't "throw stuff over the fence" and expect anything but failure to occur. That failure is either going to be by the test team (by not testing very well and thus letting problems slip by) or by the developers (by not building the product that solves the problem). I'm not saying you must write tests first - I'm not a militant or a test-first evangelist - but I'm saying you must have a process in place to produce quality, tested, ready-for-production code when you reach an iteration's end.
I've been right where you are in this development methodology that I call the Death Spiral Method. I built software for the government (US) for years in such a model. It doesn't work well, it costs a LOT of money, it produces late code, poor code, and does nothing for morale. You can't make any headway when you spend all your time fixing bugs you could have avoided making in the first place. I was absolutely beaten down by the affair.
You don't want QA finding your problems. You want to put them out of work, really. My goal is to make QA flabbergasted because everything just works. Granted, that is a goal. In practice, they'll find stuff. I'm not super-human. I make mistakes.
Back to scheduling...
At my current job we do Scrum, we just don't call it that. We aren't into labels here but we are into producing quality code on time. Everyone is on-board. We tell QA what we'll have ready to test and when. If they come a-knocking two weeks early for it, they can talk to the hand. Everyone knows the schedule, everyone knows what will be in the release and everyone knows that the product has to work as advertised before it goes to QA. So what does that mean? You tell QA "don't bother testing XYZ - it is broken and won't be fixed until release C" and if they go testing that, you point them back at that statement and tell them not to waste your time. Harsh, perhaps, but sometimes necessary. I'm not about being rude, but everyone needs to know "the rules" and what should be tested and what is a 'known issue'.
Your management has to be on board. If they aren't you are going to have troubles. QA can't run the show and the dev group can't completely run it either. All the groups (even if those groups are just one person per group or a guy that wears several hats) need to be on the same page: the customer, the test team, the developers, management, and anyone else. More than half the battle is communication, typically.
Perhaps you are biting off more than can be accomplished during a sprint. That might be the case. Why are you doing that? To meet a schedule? If so, that is where management needs to step in and resolve the issue. If you are giving QA buggy code, expect them to toss it back. Better to give them 3 things that work than 8 things that are unfinished. The goal is to produce some set of functionality that is completely implemented on each iteration, not to throw together a bunch of half-done stuff.
I hope this is received as it is intended to be - as an encouragement not a rant. Like I mentioned, I've been where you are and it isn't fun. But there is hope. You can get things turned around in a sprint, maybe two. Perhaps you don't add any new functionality in the next sprint and simply fix what is broken. You'll have to decide that as a team.
One more small plug for writing test code: I've found myself far more relaxed and far more confident in my product since adopting a 'write the tests first' approach. When all my tests pass, I have a level of confidence that I simply couldn't have without them.
Best of luck!
A: Split the tasks into smaller tasks.
Also, QA can create test cases for Dev to test against.
A: One idea to consider is to have QA work one iteration behind the main development. That works well in our environment.
A: It seems to me that there is a resource allocation problem in scenarios requiring QA functional testing in order for a given feature to be 'done' within a sprint. No one seems to address this in any QA-related scrum discussion I've found so far, and the original question here is almost the same (at least related), so I wanted to offer a partial answer and extend the question a bit.
As to the specific original question about development tasks taking the full sprint - it seems that the general advice of easing up on these tasks makes sense if functional testing by QA is part of your definition of 'done'. Given, let's say, a 4-week sprint, if it takes about a week to test multiple features from multiple developers, then it seems like development tasks taking about 3 weeks, followed by a lag week of testing tasks taking about 1 week, is the answer. QA would of course start as soon as possible, but we recognize that for the last set of delivered features there will be about a week's lag. I realize that we want to get features to QA asap so you don't have this waterfall-like scenario in a sprint, but the reality is that development usually can't get real, worthwhile delivered functionality to QA until 1 to 3 weeks into the sprint. Sure there are bits and pieces here and there, but the bulk of the work is 2-3 weeks development, then about a week's testing leftover.
So here is the resource allocation problem, and my extension to the question - in the above scenario QA has time to test the planned features of a sprint (3 weeks worth of development tasks, leaving the last week for testing the features delivered last). Also let's assume QA starts to get some testable features after 1 week of development - but what about week #1 for QA, and what about week #4 for development?
If QA functional testing is part of the definition of 'done' for a feature in a sprint, then it seems this inefficiency is unavoidable. QA will be largely idle during week #1 and development will be largely idle during week #4. Of course there are some things that fill in this time naturally, like bug fixing and verification, design/planning, etc., but we are essentially scheduling our resources at 75% capacity.
The obvious answer seems to be overlapping sprints for development and QA, since the reality is that QA always lags behind development to some degree. Demonstrations to product owners and others would follow the QA sprint since we want features to be tested before being shown. This seems to allow more efficient use of both development and QA since we don't have as much wasted time. Assuming we want to keep developers developing and testers testing, I can't see a better practical solution. Perhaps I have missed something, and I hope someone can shed some light on this for me - otherwise it seems this rigid approach to scrum is flawed. Thanks.
A: Hopefully, you fix this by tackling fewer dev tasks in each sprint. Which leads to the questions: Who's setting dev's goals? Why is Dev falling short of those goals consistently?
If dev isn't setting their own goals, that's why they're always late. And that isn't the ideal way to practice Scrum. That's just incremental development with big, deadline-driven deliverables and no actual stake-holder responsibility on the part of developers.
If dev can't set their own goals because they don't know enough, then they have to be more involved up front.
Scrum depends on four basic principles, outlined in the Agile Manifesto.
*
*Interactions matter -- that means dev, QA, project management, and end users need to talk more and talk with each other. Software is a process of encoding knowledge in the arcane language of computers. To encode the knowledge, the developers must have the knowledge. [Why do you think we call it "code"?] Scrum is not a "write spec - throw over transom" methodology. It's ANTI-"write spec - throw over transom"
*Working Software matters -- that means that each piece dev bites off has to lead to a working release. Not a set of bug fixes for QA to wrestle with, but working software.
*Customer Collaboration -- that means dev has to work with business analysts, end users, business owners, everyone who can help them understand what they're building. The deadlines don't matter as much as the next thing handed over to the customer. If the customer needs X, that's the highest priority thing for everyone to do. If the project plan says build Y, that's a load of malarkey.
*Responding to Change -- that means that customers can rearrange the priorities of the following sprints. They can't rearrange the sprint in process (that's crazy) but all the following sprints are candidates for changing priorities.
If the customer drives, then the deadlines become less artificial "project milestones" and more "we need X first, then Y, and this thing in section Z, we don't need that any more. Now that we have W, Z is redundant."
A: Here I would say that one size does not fit all. Every team deals with QA differently. It depends so much on the project you are working on, whether it's a small one or a big one. Does it need extensive regression, user acceptance and exploratory testing, or do you have just a few scenarios to test?
Let me restate that in Agile, generalists are preferred over specialists. Why is that? Because there are times during the project when you don't have anything to test, so at that time you might be doing something else. Also, you might be doing testing even though you are a hard-core programmer.
How do we handle it?
We have a regular 2-week sprint. Testing starts after a week, on the tasks completed by developers during that week. The testers keep adding issues to our issue tracker, and developers who are done with their sprint tasks start picking up those bugs. By the end of the sprint we are mostly done with our sprint tasks and with all critical and major bugs.
So what does the tester do in the first week of the sprint?
Well, there are always things to test. We have testing tasks in the backlog, which may include some exploratory testing. Many people don't value exploratory testing, but it is extremely important for building quality products. Good testers create tasks for themselves, find the places where things could go wrong, and test them.
Hope that helps!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "64"
} |
Q: What is the "< >" syntax within C# I have been learning about the basics of C# but haven't come across a good explanation of what this is:
var l = new List<string>();
I don't know what the <string> is doing or if it's the List that is doing the magic. I have also seen objects been thrown within the < > tags.
Can someone explain this to me with examples, please?
A: It's generics - it's a form of type parameterisation. In your example, it's making l refer to a list of strings - the list will only ever contain strings: the compiler treats it (pretty much) as if everywhere that the API docs mention "T" it actually says "string". So, you can only add strings to it, and if you use the indexer you don't need to cast to string, etc.
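To make that concrete, here is a minimal sketch (not from the original answer; the variable names are just illustrative, and it assumes using System.Collections.Generic;):
var words = new List<string>();
words.Add("hello");       // only strings are accepted
// words.Add(42);         // would not compile: cannot convert int to string
string first = words[0];  // no cast needed - the indexer already returns string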
To be honest, giving generics detailed coverage on an online forum is pretty much impossible. (In C# in Depth, I take nearly 50 pages talking about generics.) However, armed with the name of the feature, you should be in a much better position to find out more. The MSDN "Introduction to C# Generics" is probably a good starting point.
Asking specific questions about generics on SO is likely to yield good results - I just don't think it can really be covered properly in one question/answer.
A: That is the generic syntax for C#.
The basic concept is that it allows you to use a Type placeholder and substitute the actual real type in at compile time.
For example, the old way:
ArrayList foos = new ArrayList();
foos.Add("Test");
worked by making ArrayList store a list of System.Objects (The base type for all things .NET).
So, when adding or retrieving an object from the list, the CLR has to cast it to and from object; basically what really happens is this:
foos.Add("Test" as System.Object);
string s = foos[0] as String;
This causes a performance penalty from the casting, and it's also unsafe because I can do this:
ArrayList listOfStrings = new ArrayList();
listOfStrings.Add(1);
listOfStrings.Add("Test");
This will compile just fine, even though I put an integer in listOfStrings.
Generics changed all of this. Now, using generics, I can declare what type my collection expects:
List<int> listOfIntegers = new List<int>();
List<String> listOfStrings = new List<String>();
listOfIntegers.Add(1);
// Compile time error.
listOfIntegers.Add("test");
This provides compile-time type safety, as well as avoids expensive casting operations.
The way you leverage this is pretty simple, though there are some advanced edge cases. The basic concept is to make your class type agnostic by using a type placeholder, for example, if I wanted to create a generic "Add Two Things" class.
public class Adder<T>
{
public T AddTwoThings(T t1, T t2)
{
// '+' is not defined for an unconstrained T, so the operands are cast to dynamic
// here to make the sample compile; the check then happens at run time.
return (dynamic)t1 + (dynamic)t2;
}
}
Adder<String> stringAdder = new Adder<String>();
Console.WriteLine(stringAdder.AddTwoThings("Test", "123"));
Adder<int> intAdder = new Adder<int>();
Console.WriteLine(intAdder.AddTwoThings(2, 2));
For a much more detailed explanation of generics, I can't recommend enough the book CLR via C#.
A: This is .NET Generics. The type within the < > denotes the type of element contained in the list.
with ArrayList you'd have to cast the elements inside...
int x = (int)myArrayList[4];
with List you can avoid that step because the compiler already knows the type.
int x = myList[4];
Generics are available in .NET 2.0 and later.
A: Those are generics. You are making a List that only contains strings. You could also say
List<int>
and get a list that only contains ints.
Generics is a huge topic, too big for a single answer here.
A: Those are known as Generics (specifically List is a generic class).
Reading from MSDN
*
*Generics (C# Programming Guide)
*An Introduction to C# Generics
*Generics in the .NET Framework
A: This is generics in action. A non-generic list (such as ArrayList) stores items of type Object. This requires casting between types. It also allows you to store any kind of item in one instance of a list. When you are iterating through the items in that list you cannot be sure that they are all of a certain type (at least not without casting each item). For instance, let's say you create a list like this:
ArrayList listOfStrings = new ArrayList();
Nothing prevents someone from doing something like this:
listOfStrings.Add(6); //Not a string
A generic list would allow you to specify a strongly-typed list.
List<string> listOfStrings = new List<string>();
listOfStrings.add("my name"); //OK
listofStrings.add(6); //Throws a compiler error
There are more thorough examples here: Generics
A: < > is for generics. In your specific example, it means that the List is a List of strings, not say a list of ints.
Generics are used to allow a type to be, well, generic. It's used a lot in collections to allow them to take different types so that they can function much like a normal array and still catch invalid types being assigned at compile time. Basically it allows a class to say, "I need to be associated with some specific type T, but I don't want to hard-code exactly what that type is - let the user select it." A simple array, for instance, might look something like:
public class MyArray<T> {
private T[] _list;
public MyArray() : this(10) { }
public MyArray(int capacity)
{ _list = new T[capacity]; }
public T this[int index] {
get { return _list[index]; }
set { _list[index] = value; }
}
}
Here, we have a private list of type T that is accessed by using our class like a normal array. We don't care what type it is, it doesn't matter to our code. But anyone using the class could use it as, say MyArray<string> to create a list of strings, while someone else might use it as MyArray<bool> and create a list of flags.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: how to hide the text limit line in netbeans 6.5? Is there a way to hide the text limit line in netbeans 6.5?
A: Moving it to column 200 doesn't remove the line, but you can hide it by setting its color to the same as the background.
A: You can set it to 0, so it will not be visible.
A: Are you talking about the line running thru the right side, by default at the 80 column point? That is Options -> Editor -> Indentation -> Right margin. I have it set at 200 columns which pushes it off the right side of the screen.
A: In NetBeans 6.9, setting Right Margin to 0 effectively hides the text limit line.
Set the value in Preferences > Editor > Formatting > All Languages > Right Margin.
(Mac OS X 10.6.4, NetBeans 6.9)
A: Hi~ I found out how to hide "Text limit line" :)
*
*Tools -> Options -> Export(Popup Win) -> Browse.. (Select target "ccc.zip" file)
Select Options for Export : Check at "Editor" -> OK
*Edit xml file "\Editors\Preferences\org-netbeans-modules-editor-settings-CustomPreferences.xml" in "ccc.zip" file.
<entry javaType="java.lang.Boolean" name="text-limit-line-visible" xml:space="preserve">
<value><![CDATA[false]]></value></entry>
*Tools -> Options -> Import "ccc.zip" file
Done
A: As for now (October '18) in NetBeans 8.2 + 9 you can hide the text limit line or actually change its color by going to Options -> Fonts & Colors -> Highlighting -> Text Limit Line -> Foreground
A: Remember to go to Tools, Options, Fonts & Colors, Highlighting tab, with Text Limit Line selected, before exporting ccc.zip....
A: There is an easy way to disable the warning generated by NetBeans for number of lines.
Goto Tools > Options > Editor > Hints
Find the checkbox Too Many Lines > un-check the checkbox
and click Apply.
Enjoy :)
A: Have you tried to see if your project properties have formatting that overwrites the global properties?
See below:
https://bz.apache.org/netbeans/show_bug.cgi?id=223329
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How do I sort in a Flex AdvancedDataGrid - callback isn't being called I have an AdvancedDataGrid that uses custom grouping of data. Not all of the groups will be at the same level in the hierarchy, and groups can contain both groups and members. We have a sort callback, but it's not being called except for groups at the leaf-most levels. See code below for an example -- expand all of the groups, then click the sort column on "date of birth" to get a reverse sort by date of birth. (Oddly, for some unfathomable reason, the first ascending sort works.)
We're not getting called for any of the data that's grouped at the same level as a group member.
How do I fix this?
Thanks.
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
layout="vertical"
verticalAlign="middle"
backgroundColor="white" >
<mx:Script>
<![CDATA[
import mx.controls.advancedDataGridClasses.AdvancedDataGridColumn;
import mx.collections.HierarchicalData;
import mx.utils.ObjectUtil;
private var arrData : Array = [
{ name: "User A", dob: "04/14/1980" },
{ name: "User B", dob: "01/02/1975" },
{ name: "Group A", children: [
{ name: "User E", dob: "09/13/1972" },
{ name: "User F", dob: "11/22/1993" }
]
},
{ name: "Group B", children: [
{ name: "Group B1", children: [
{ name: "User I", dob: "01/23/1984" },
{ name: "User J", dob: "11/10/1948" }
]
},
{ name: "User G", dob: "04/09/1989" },
{ name: "User H", dob: "06/20/1963" }
]
},
{ name: "User C", dob: "12/30/1977" },
{ name: "User D", dob: "10/27/1968" }
];
private function date_sortCompareFunc(itemA:Object, itemB:Object):int
{
if ( itemA.hasOwnProperty("dob") && itemB.hasOwnProperty("dob"))
{
var dateA:Date = new Date(Date.parse(itemA.dob));
var dateB:Date = new Date(Date.parse(itemB.dob));
return ObjectUtil.dateCompare(dateA, dateB);
}
else if ( itemA.hasOwnProperty("dob"))
{
return 1;
}
else if (itemB.hasOwnProperty("dob"))
{
return -1;
}
return ObjectUtil.stringCompare(itemA.name, itemB.name);
}
private function date_dataTipFunc(item:Object):String
{
if (item.hasOwnProperty("dob"))
{
return dateFormatter.format(item.dob);
}
return "";
}
private function label_dob(item:Object, col:AdvancedDataGridColumn):String
{
var dob:String="";
if(item.hasOwnProperty("dob"))
{
dob=item.dob;
}
return dob;
}
]]>
</mx:Script>
<mx:DateFormatter id="dateFormatter" formatString="MMMM D, YYYY" />
<mx:AdvancedDataGrid id="adgTest" dataProvider="{new HierarchicalData(this.arrData)}" designViewDataType="tree" width="746" height="400">
<mx:columns>
<mx:AdvancedDataGridColumn headerText="Name" dataField="name"/>
<mx:AdvancedDataGridColumn dataField="dob" headerText="Date of birth"
labelFunction="label_dob"
sortCompareFunction="date_sortCompareFunc"
showDataTips="true"
dataTipFunction="date_dataTipFunc" />
</mx:columns>
</mx:AdvancedDataGrid>
</mx:Application>
A: It seems as if, when the first row contains null data or an empty string and the AdvancedDataGrid is set to use grouped data, the sort function doesn't get called.
It's a bit of a hack, yes, but if you can insert an unrealistic, constant piece of data (say 1/1/1770) at the database/file-read/data-input level, and then use the column labelFunction to render it as null when the data matches, it should work - or at least the sort function will get called.
public function dateCellLabel(item:Object, column:AdvancedDataGridColumn):String
{
var date:String = item[column.dataField];
if (date=="1/1/1770")
return null;
else
return date;
}
Sorry about answering this so late, but at least if somebody else tries to find the answer, they might see this.
A: This has something to do with the logic of the SortCompareFunction.
Put dob:"01/01/1970" for all the group nodes and the sort works as expected, is that correct?
A: I don't think the problem is to do with sorting grouped data with null or empty string values (which are perfectly valid values); the
docs clearly state the property represented by dataField must be a valid property on the dataProvider [item] i.e. it must exist, null or otherwise.
While I give RaySir my vote, I don't quite agree that it's a hack, but rather you're normalizing your data, which I think is a perfectly fine presentation-layer thing to do.
Here's is a re-worked example:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
layout="vertical"
verticalAlign="middle"
backgroundColor="white" >
<mx:Script>
<![CDATA[
import mx.collections.HierarchicalData;
import mx.controls.advancedDataGridClasses.AdvancedDataGridColumn;
import mx.utils.ObjectUtil;
private var arrData : Array = [
{ name: "User A", dob: "04/14/1980" },
{ name: "User B", dob: "01/02/1975" },
{ name: "Group A", dob: null, children: [
{ name: "User E", dob: "09/13/1972" },
{ name: "User F", dob: "11/22/1993" }
]
},
{ name: "Group B", dob: null, children: [
{ name: "Group B1", dob: null, children: [
{ name: "User I", dob: "01/23/1984" },
{ name: "User J", dob: "11/10/1948" }
]
},
{ name: "User G", dob: "04/09/1989" },
{ name: "User H", dob: "06/20/1963" }
]
},
{ name: "User C", dob: "12/30/1977" },
{ name: "User D", dob: "10/27/1968" }
];
private function dob_sort(itemA:Object, itemB:Object):int {
var dateA:Date = itemA.dob ? new Date(itemA.dob) : null;
var dateB:Date = itemB.dob ? new Date(itemB.dob) : null;
return ObjectUtil.dateCompare(dateA, dateB);
}
private function dob_dataTip(item:Object):String {
if (!item.hasOwnProperty('children') && item.hasOwnProperty("dob")) {
return dateFormatter.format(item.dob);
}
return null;
}
private function dob_label(item:Object, col:AdvancedDataGridColumn):String {
if(!item.hasOwnProperty('children') && item.hasOwnProperty("dob")) {
return item.dob;
}
return null;
}
]]>
</mx:Script>
<mx:DateFormatter id="dateFormatter" formatString="MMMM D, YYYY" />
<mx:AdvancedDataGrid id="adgTest" dataProvider="{new HierarchicalData(arrData)}" designViewDataType="tree" width="746" height="400">
<mx:columns>
<mx:AdvancedDataGridColumn headerText="Name" dataField="name"/>
<mx:AdvancedDataGridColumn headerText="Date of birth" dataField="dob"
labelFunction="dob_label"
dataTipFunction="dob_dataTip"
sortCompareFunction="dob_sort"
showDataTips="true" />
</mx:columns>
</mx:AdvancedDataGrid>
</mx:Application>
A: While it doesn't appear to be the case in this example, a missing dataField on a column will prevent sort from happening. The symptoms are precisely as described: the sortCompareFunction is never called.
If you have a custom column renderer that is pulling fields out of data on its own, it's easy to overlook filling in a dataField attribute. Everything will work fine, in that case, until you go to sort. The sortCompareFunction will not be called.
Check by debuggging HierarchicalCollectionView.as, at line 1259 or so, in sortCanBeApplied.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Can HTML checkboxes be set to readonly? I thought they could be, but as I'm not putting my money where my mouth was (so to speak) setting the readonly attribute doesn't actually seem to do anything.
I'd rather not use Disabled, since I want the checked check boxes to be submitted with the rest of the form, I just don't want the client to be able to change them under certain circumstances.
A: onclick="javascript: return false;"
A: If you want them to be submitted to the server with the form but not be interactive for the user, you can use pointer-events: none in CSS (works in all modern browsers except IE10- and Opera 12-) and set tabindex to -1 to prevent changing via keyboard. Also note that you can't use the label tag, as clicking on it will change the state anyway.
input[type="checkbox"][readonly] {
pointer-events: none !important;
}
td {
min-width: 5em;
text-align: center;
}
td:last-child {
text-align: left;
}
<table>
<tr>
<th>usual
<th>readonly
<th>disabled
</tr><tr>
<td><input type=checkbox />
<td><input type=checkbox readonly tabindex=-1 />
<td><input type=checkbox disabled />
<td>works
</tr><tr>
<td><input type=checkbox checked />
<td><input type=checkbox readonly checked tabindex=-1 />
<td><input type=checkbox disabled checked />
<td>also works
</tr><tr>
<td><label><input type=checkbox checked /></label>
<td><label><input type=checkbox readonly checked tabindex=-1 /></label>
<td><label><input type=checkbox disabled checked /></label>
<td>broken - don't use label tag
</tr>
</table>
A: <input name="isActive" id="isActive" type="checkbox" value="1" checked="checked" onclick="return false"/>
A: <input type="checkbox" onclick="this.checked=!this.checked;">
But you absolutely MUST validate the data on the server to ensure it hasn't been changed.
A: you can use this:
<input type="checkbox" onclick="return false;"/>
This works because returning false from the click event stops the chain of execution continuing.
A: <input type="checkbox" onclick="return false" /> will work for you , I am using this
A: another "simple solution":
<!-- field that holds the data -->
<input type="hidden" name="my_name" value="1" />
<!-- visual dummy for the user -->
<input type="checkbox" name="my_name_visual_dummy" value="1" checked="checked" disabled="disabled" />
disabled="disabled" / disabled=true
A: READONLY doesn't work on checkboxes as it prevents you from editing a field's value, but with a checkbox you're actually editing the field's state (on || off)
From faqs.org:
It's important to understand that READONLY merely prevents the user from changing the value of the field, not from interacting with the field. In checkboxes, for example, you can check them on or off (thus setting the CHECKED state) but you don't change the value of the field.
If you don't want to use disabled but still want to submit the value, how about submitting the value as a hidden field and just printing its contents to the user when they don't meet the edit criteria? e.g.
// user allowed change
if($user_allowed_edit)
{
echo '<input type="checkbox" name="my_check"> Check value';
}
else
{
// Not allowed change - submit value..
echo '<input type="hidden" name="my_check" value="1" />';
// .. and show user the value being submitted
echo '<input type="checkbox" disabled readonly> Check value';
}
A: This presents a bit of a usability issue.
If you want to display a checkbox, but not let it be interacted with, why even a checkbox then?
However, my approach would be to use disabled (the user expects a disabled checkbox to not be editable, rather than having JS make an enabled one not work), and add a form submit handler in JavaScript that enables the checkboxes right before the form is submitted. This way you do get your values posted.
ie something like this:
var form = document.getElementById('yourform');
form.onsubmit = function ()
{
var formElems = document.getElementsByTagName('INPUT');
for (var i = 0; i < formElems.length; i++)
{
if (formElems[i].type == 'checkbox')
{
formElems[i].disabled = false;
}
}
}
A: <input type="checkbox" readonly="readonly" name="..." />
with jquery:
$(':checkbox[readonly]').click(function(){
return false;
});
it still might be a good idea to give some visual hint (css, text,...), that the control won't accept inputs.
A: This is a checkbox you can't change:
<input type="checkbox" disabled="disabled" checked="checked">
Just add disabled="disabled" as an attribute.
Edit to address the comments:
If you want the data to be posted back, than a simple solutions is to apply the same name to a hidden input:
<input name="myvalue" type="checkbox" disabled="disabled" checked="checked"/>
<input name="myvalue" type="hidden" value="true"/>
This way, when the checkbox is set to 'disabled', it only serves the purpose of a visual representation of the data, instead of actually being 'linked' to the data. In the post back, the value of the hidden input is being sent when the checkbox is disabled.
A: Some of the answers on here seem a bit roundabout, but here's a small hack.
<form id="aform" name="aform" method="POST">
<input name="chkBox_1" type="checkbox" checked value="1" disabled="disabled" />
<input id="submitBttn" type="button" value="Submit" onClick='return submitPage();'>
</form>
then in jquery you can either choose one of two options:
$(document).ready(function(){
//first option, you don't need the disabled attribute, this will prevent
//the user from changing the checkbox values
$("input[name^='chkBox_1']").click(function(e){
e.preventDefault();
});
//second option, keep the disabled attribute, and disable it upon submit
$("#submitBttn").click(function(){
$("input[name^='chkBox_1']").attr("disabled",false);
$("#aform").submit();
});
});
demo:
http://jsfiddle.net/5WFYt/
A: Building on the above answers, if using jQuery, this may be a good solution for all inputs:
<script>
$(function () {
$('.readonly input').attr('readonly', 'readonly');
$('.readonly textarea').attr('readonly', 'readonly');
$('.readonly input:checkbox').click(function(){return false;});
$('.readonly input:checkbox').keydown(function () { return false; });
});
</script>
I'm using this with Asp.Net MVC to set some form elements read only. The above works for text and check boxes by setting any parent container as .readonly such as the following scenarios:
<div class="editor-field readonly">
<input id="Date" name="Date" type="datetime" value="11/29/2012 4:01:06 PM" />
</div>
<fieldset class="flags-editor readonly">
<input checked="checked" class="flags-editor" id="Flag1" name="Flags" type="checkbox" value="Flag1" />
</fieldset>
A: <input type="radio" name="alwaysOn" onchange="this.checked=true" checked="checked">
<input type="radio" name="alwaysOff" onchange="this.checked=false" >
A: I know that "disabled" isn't an acceptable answer, since the op wants it to post. However, you're always going to have to validate values on the server side EVEN if you have the readonly option set. This is because you can't stop a malicious user from posting values using the readonly attribute.
I suggest storing the original value (server side), and setting it to disabled. Then, when they submit the form, ignore any values posted and take the original values that you stored.
It'll look and behave like it's a readonly value. And it handles (ignores) posts from malicious users. You're killing 2 birds with one stone.
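As a rough illustration of that idea - a hypothetical ASP.NET MVC-style action; SettingsModel, _store and IsLocked are made-up names, and this is only a sketch of "ignore the posted value, keep the stored one":
[HttpPost]
public ActionResult Save(SettingsModel posted)
{
// Reload the value that was current when the page was rendered...
var original = _store.Load(posted.Id);
// ...and overwrite whatever came back for the disabled/read-only checkbox before saving.
posted.IsLocked = original.IsLocked;
_store.Save(posted);
return RedirectToAction("Index");
}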
A: No, input checkboxes can't be readonly.
But you can make them readonly with javascript!
Add this code anywhere at any time to make checkboxes readonly work as assumed, by preventing the user from modifying it in any way.
jQuery(document).on('click', function(e){
// check for type, avoid selecting the element for performance
if(e.target.type == 'checkbox') {
var el = jQuery(e.target);
if(el.prop('readonly')) {
// prevent it from changing state
e.preventDefault();
}
}
});
input[type=checkbox][readonly] {
cursor: not-allowed;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<label><input type="checkbox" checked readonly> I'm readonly!</label>
You can add this script at any time after jQuery has loaded.
It will work for dynamically added elements.
It works by picking up the click event (that happens before the change event) on any element on the page, it then checks if this element is a readonly checkbox, and if it is, then it blocks the change.
There are so many ifs to make it not affect the performance of the page.
A: readonly does not work with <input type='checkbox'>
So, if you need to submit values from disabled checkboxes in a form, you can use jQuery:
$('form').submit(function(e) {
$('input[type="checkbox"]:disabled').each(function(e) {
$(this).removeAttr('disabled');
})
});
This way the disabled attributes are removed from the elements when submitting the form.
A: I would have commented on ConroyP's answer, but that requires 50 reputation which I don't have. I do have enough reputation to post another answer. Sorry.
The problem with ConroyP's answer is that the checkbox is rendered unchangeable by not even including it on the page. Although Electrons_Ahoy does not stipulate as much, the best answer would be one in which the unchangeable checkbox would look similar, if not the same as, the changeable checkbox, as is the case when the "disabled" attribute is applied. A solution which addresses the two reasons Electrons_Ahoy gives for not wanting to use the "disabled" attribute would not necessarily be invalid because it utilized the "disabled" attribute.
Assume two boolean variables, $checked and $disabled :
if ($checked && $disabled)
echo '<input type="hidden" name="my_name" value="1" />';
echo '<input type="checkbox" name="my_name" value="1" ',
$checked ? 'checked="checked" ' : '',
$disabled ? 'disabled="disabled" ' : '', '/>';
The checkbox is displayed as checked if $checked is true. The checkbox is displayed as unchecked if $checked is false. The user can change the state of the checkbox if and only if $disabled is false. The "my_name" parameter is not posted when the checkbox is unchecked, by the user or not. The "my_name=1" parameter is posted when the checkbox is checked, by the user or not. I believe this is what Electrons_Ahoy was looking for.
A: I would use the readonly attribute
<input type="checkbox" readonly>
Then use CSS to disable interactions:
input[type='checkbox'][readonly]{
pointer-events: none;
}
Note that using the pseudo-class :read-only doesn't work here.
input[type='checkbox']:read-only{ /*not working*/
pointer-events: none;
}
A: Belated answer, but most answers seem to over complicate it.
As I understand it, the OP was basically wanting:
*
*Readonly checkbox to show status.
*Value returned with form.
It should be noted that:
*
*The OP preferred not to use the disabled attribute, because they 'want the checked check boxes to be submitted with the rest of the form'.
*Unchecked checkboxes are not submitted with the form, as the quote from the OP in 1. above indicates they knew already. Basically, the value of the checkbox only exists if it is checked.
*A disabled checkbox clearly indicates that it cannot be changed, by design, so a user is unlikely to attempt to change it.
*The value of a checkbox is not limited to indicating its status, such as yes or false, but can be any text.
Therefore, since the readonly attribute does not work, the best solution, requiring no javascript, is:
*
*A disabled checkbox, with no name or value.
*If the checkbox is to be displayed as checked, a hidden field with the name and value as stored on the server.
So for a checked checkbox:
<input type="checkbox" checked="checked" disabled="disabled" />
<input type="hidden" name="fieldname" value="fieldvalue" />
For an unchecked checkbox:
<input type="checkbox" disabled="disabled" />
The main problem with disabled inputs, especially checkboxes, is their poor contrast which may be a problem for some with certain visual disabilities. It may be better to indicate a value by plain words, such as Status: none or Status: implemented, but including the hidden input above when the latter is used, such as:
<p>Status: Implemented<input type="hidden" name="status" value="implemented" /></p>
A: I used this to achieve the results:
<input type=checkbox onclick="return false;" onkeydown="return false;" />
A: If you want ALL your checkboxes to be "locked" so user can't change the "checked" state if "readonly" attibute is present, then you can use jQuery:
$(':checkbox').click(function () {
if (typeof ($(this).attr('readonly')) != "undefined") {
return false;
}
});
Cool thing about this code is that it allows you to change the "readonly" attribute all over your code without having to rebind every checkbox.
It works for radio buttons as well.
A: This works for me on Chrome:
<input type="checkbox" onclick="return false">
A: Most of the current answers have one or more of these problems:
*
*Only check for mouse not keyboard.
*Check only on page load.
*Hook the ever-popular change or submit events which won't always work out if something else has them hooked.
*Require a hidden input or other special elements/attributes that you have to undo in order to re-enable the checkbox using javascript.
The following is simple and has none of those problems.
$('input[type="checkbox"]').on('click keyup keypress keydown', function (event) {
if($(this).is('[readonly]')) { return false; }
});
If the checkbox is readonly, it won't change. If it's not, it will. It does use jquery, but you're probably using that already...
It works.
A: I happened to notice the solution given below. I found it in my research for the same issue.
I don't know who posted it originally, but it wasn't made by me. It uses jQuery:
$(document).ready(function() {
$(":checkbox").bind("click", false);
});
This would make the checkboxes read only which would be helpful for showing readonly data to the client.
A: Very late to the party but I found an answer for MVC (5)
I disabled the CheckBox and added a HiddenFor BEFORE the checkbox, so when it posts it finds the Hidden field first and uses that value. This does work.
<div class="form-group">
@Html.LabelFor(model => model.Carrier.Exists, new { @class = "control-label col-md-2" })
<div class="col-md-10">
@Html.HiddenFor(model => model.Carrier.Exists)
@Html.CheckBoxFor(model => model.Carrier.Exists, new { @disabled = "disabled" })
@Html.ValidationMessageFor(model => model.Carrier.Exists)
</div>
</div>
A:
I just don't want the client to be able to change them under certain circumstances.
READONLY itself won't work. You may be able to do something funky w/CSS but we usually just make them disabled.
WARNING: If they're posted back then the client can change them, period. You can't rely on readonly to prevent a user from changing something. They could always use Fiddler or just change the HTML with Firebug or some such thing.
A: If you need the checkbox to be submitted with the form but effectively read-only to the user, I recommend setting them to disabled and using javascript to re-enable them when the form is submitted.
This is for two reasons. First and most important, your users benefit from seeing a visible difference between checkboxes they can change and checkboxes which are read-only. Disabled does this.
Second reason is that the disabled state is built into the browser so you need less code to execute when the user clicks on something. This is probably more of a personal preference than anything else. You'll still need some javascript to un-disable these when submitting the form.
It seems easier to me to use some javascript when the form is submitted to un-disable the checkboxes than to use a hidden input to carry the value.
A: The main reason people would like a read-only check-box and (as well) a read-only radio-group is so that information that cannot be changed can be presented back to the user in the form it was entered.
OK disabled will do this -- unfortunately disabled controls are not keyboard navigable and therefore fall foul of all accessibility legislation. This is the BIGGEST hangup in HTML that I know of.
A: Contributing very very late...but anyway. On page load, use jquery to disable all checkboxes except the currently selected one. Then set the currently selected one as read only so it has a similar look as the disabled ones. User cannot change the value, and the selected value still submits.
A: Or just:
$('your selector').click(function(evt){evt.preventDefault()});
A: Just use simple disabled tag like this below.
<input type="checkbox" name="email" disabled>
A: Extract from https://stackoverflow.com/a/71086058/18183749
If you can't use the 'disabled' attribut (as it erases the value's
input at POST), and noticed that html attribut 'readonly' works only
on textarea and some input(text, password, search, as far I've seen),
and finally, if you don't want to bother with duplicating all your
select, checkbox and radio with hidden input logics, you might find
the following function or any of his inner logics to your liking :
addReadOnlyToFormElements = function (idElement) {
// html readonly don't work on input of type checkbox and radio, neither on select. So, a safe trick is to disable the non-selected items
$('#' + idElement + ' input[type="checkbox"]:not(:checked)').prop('disabled',true);
// and, on the selected ones, to disable mouse/keyoard events and mimic readOnly appearance
$('#' + idElement + ' input[type="checkbox"]:checked').prop('tabindex','-1').css('pointer-events','none').css('opacity','0.5');
}
And there's nothing easier than to remove these readonly
removeReadOnlyFromFormElements = function (idElement) {
// Remove the disabled attribut on non-selected
$('#' + idElement + ' input[type="checkbox"]:not(:checked)').prop('disabled',false);
// Restore mouse/keyboard events and remove readOnly appearance on selected ones
$('#' + idElement + ' input[type="checkbox"]:checked').prop('tabindex','').css('pointer-events','').css('opacity','');
}
A: No, but you might be able to use javascript events to achieve something similar
A: You could always use a small image that looks like a check box.
A: Try this to make the checkbox read-only and yet disallow the user from changing it. This will let you POST the checkbox value. You need to select the default state of the checkbox as Checked in order to do so.
<input type="checkbox" readonly="readonly" onclick="this.checked =! this.checked;">
If you want the above functionality + don't want to receive the checkbox data, try the below:
<input type="checkbox" readonly="readonly" disabled="disabled" onclick="this.checked =! this.checked;">
Hope that helps.
A: What none of you are thinking about here is that you are all using JavaScript, which can be very easily bypassed by any user.
Simply disabling JavaScript bypasses all the jQuery/return-false statements.
Your only option for readonly checkboxes is server side.
Display them, let them change them but don't accept the new post data from them.
A: Why not cover the input with a blank div on top?
<div id='checkbox_wrap'>
<div class='click_block'></div>
<input type='checkbox' />
</div>
then something like this with CSS
#checkbox_wrap{
position: relative; /* needed so the absolutely positioned overlay sits on the checkbox */
height:10px;
width:10px;
}
.click_block{
height: 10px;
width: 10px;
position: absolute;
z-index: 100;
}
A: Simplest (in my view):
onclick="javascript:{this.checked = this.defaultChecked;}"
A: In old HTML you can use
<input type="checkbox" disabled checked>text
but it is actually not recommended to use just plain old HTML; now you should use XHTML.
In well formed XHTML you have to use
<input type="checkbox" disabled="disabled" checked="checked" />text <!-- if yu have a checked box-->
<input type="checkbox" disabled="disabled" />text <!-- if you have a unchecked box -->
Well-formed XHTML has to be valid XML; that's the reason to use disabled="disabled" instead of simply disabled.
A: The following works if you know that the checkbox is always checked:
<input type="checkbox" name="Name" checked onchange='this.checked = true;'>
A: When posting an HTML checkbox to the server, it has a string value of 'on' or ''.
Readonly does not stop the user editing the checkbox, and disabled stops the value being posted back.
One way around this is to have a hidden element to store the actual value and the displayed checkbox is a dummy which is disabled. This way the checkbox state is persisted between posts.
Here is a function to do this. It uses a string of 'T' or 'F' and you can change this any way you like. This has been used in an ASP page using server side VB script.
public function MakeDummyReadonlyCheckbox(i_strName, i_strChecked_TorF)
dim strThisCheckedValue
if (i_strChecked_TorF = "T") then
strThisCheckedValue = " checked "
i_strChecked_TorF = "on"
else
strThisCheckedValue = ""
i_strChecked_TorF = ""
end if
MakeDummyReadonlyCheckbox = "<input type='hidden' id='" & i_strName & "' name='" & i_strName & "' " & _
"value='" & i_strChecked_TorF & "'>" & _
"<input type='checkbox' disabled id='" & i_strName & "Dummy' name='" & i_strName & "Dummy' " & _
strThisCheckedValue & ">"
end function
public function GetCheckbox(i_objCheckbox)
select case trim(i_objCheckbox)
case ""
GetCheckbox = "F"
case else
GetCheckbox = "T"
end select
end function
At the top of an ASP page you can pickup the persisted value...
strDataValue = GetCheckbox(Request.Form("chkTest"))
and when you want to output your checkbox you can do this...
response.write MakeDummyReadonlyCheckbox("chkTest", strDataValue)
I have tested this and it works just fine. It also does not rely upon JavaScript.
A: My solution is actually the opposite of FlySwat's solution, but I'm not sure if it will work for your situation. I have a group of checkboxes, and each has an onClick handler that submits the form (they're used for changing filter settings for a table). I don't want to allow multiple clicks, since subsequent clicks after the first are ignored. So I disable all checkboxes after the first click, and after submitting the form:
onclick="document.forms['form1'].submit(); $('#filters input').each(function() {this.disabled = true});"
The checkboxes are in a span element with an ID of "filters" - the second part of the code is a jQuery statement that iterates through the checkboxes and disables each one. This way, the checkbox values are still submitted via the form (since the form was submitted before disabling them), and it prevents the user from changing them until the page reloads.
A: When submitting the form, we actually pass the value of the checkbox, not the state (checked/unchecked). The readonly attribute prevents us from editing the value, but not the state. If you want a read-only field that will represent the value you want to submit, use readonly text.
A: If anyone else is using MVC and an editor template, this is how I control displaying a read only property (I use a custom attribute to get the value in the if statement)
@if (true)
{
@Html.HiddenFor(m => m)
@(ViewData.Model ? Html.Raw("Yes") : Html.Raw("No"))
}
else
{
@Html.CheckBoxFor(m => m)
}
A: <input name="testName" type="checkbox" disabled>
A: Latest jQuery has this feature
$("#txtname").prop('readonly', false);
A: @(Model.IsEnabled)
Use this condition to dynamically check and uncheck, and to set it read-only if the check box is already checked.
<input id="abc" name="abc" type="checkbox" @(Model.IsEnabled ? "checked=checked onclick=this.checked=!this.checked;" : string.Empty) >
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "956"
} |
Q: Adding Combo Box to setup using Orca I am trying to display a combo box in a dialog during setup of a component. Currently, we have a Radio Button Group. I figured that replacing it with the combo box should be as simple as adding proper entries in the "ComboBox" table in the MSI and in the "Control" table, replacing the references to the radio button group with combobox in the appropriate dialog box. However this is not working. The setup blows up and gives an error #2885. [Windows Installer Error 2885: Failed to create the control [3] on the dialog [2]. from here.]
Any ideas on how to do this? I can only use Orca apparently (that's what has been used since anyone can remember).
A: Aha!! Figured out what it was.
I was doing everything right, except that when I replaced the radio button with the combo box, there was still one element (the Previous button) which had the Radio Button Group as its next element (sort of like a tab stop). And on form load, it tried to find the radio buttons and could not find them, thus giving the error.
Sort of like the typical new programmer mistakes when dealing with linked lists: Not updating the references to the node which you are deleting/inserting.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: .NET - How can you split a "caps" delimited string into an array? How do I go from this string: "ThisIsMyCapsDelimitedString"
...to this string: "This Is My Caps Delimited String"
Fewest lines of code in VB.net is preferred but C# is also welcome.
Cheers!
A: Regex.Replace("ThisIsMyCapsDelimitedString", "(\\B[A-Z])", " $1")
A: For more variety, using plain old C# objects, the following produces the same output as @MizardX's excellent regular expression.
public string FromCamelCase(string camel)
{ // omitted checking camel for null
StringBuilder sb = new StringBuilder();
int upperCaseRun = 0;
foreach (char c in camel)
{ // append a space only if we're not at the start
// and we're not already in an all caps string.
if (char.IsUpper(c))
{
if (upperCaseRun == 0 && sb.Length != 0)
{
sb.Append(' ');
}
upperCaseRun++;
}
else if( char.IsLower(c) )
{
if (upperCaseRun > 1) //The first new word will also be capitalized.
{
sb.Insert(sb.Length - 1, ' ');
}
upperCaseRun = 0;
}
else
{
upperCaseRun = 0;
}
sb.Append(c);
}
return sb.ToString();
}
A: Below is a prototype that converts the following to Title Case:
*
*snake_case
*camelCase
*PascalCase
*sentence case
*Title Case (keep current formatting)
Obviously you would only need the "ToTitleCase" method yourself.
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Text.RegularExpressions;
public class Program
{
public static void Main()
{
var examples = new List<string> {
"THEQuickBrownFox",
"theQUICKBrownFox",
"TheQuickBrownFOX",
"TheQuickBrownFox",
"the_quick_brown_fox",
"theFOX",
"FOX",
"QUICK"
};
foreach (var example in examples)
{
Console.WriteLine(ToTitleCase(example));
}
}
private static string ToTitleCase(string example)
{
var fromSnakeCase = example.Replace("_", " ");
var lowerToUpper = Regex.Replace(fromSnakeCase, @"(\p{Ll})(\p{Lu})", "$1 $2");
var sentenceCase = Regex.Replace(lowerToUpper, @"(\p{Lu}+)(\p{Lu}\p{Ll})", "$1 $2");
return new CultureInfo("en-US", false).TextInfo.ToTitleCase(sentenceCase);
}
}
The console output would be as follows:
THE Quick Brown Fox
The QUICK Brown Fox
The Quick Brown FOX
The Quick Brown Fox
The Quick Brown Fox
The FOX
FOX
QUICK
Blog Post Referenced
A: Great answer, MizardX! I tweaked it slightly to treat numerals as separate words, so that "AddressLine1" would become "Address Line 1" instead of "Address Line1":
Regex.Replace(s, "([a-z](?=[A-Z0-9])|[A-Z](?=[A-Z][a-z]))", "$1 ")
A: string s = "ThisIsMyCapsDelimitedString";
string t = Regex.Replace(s, "([A-Z])", " $1").Substring(1);
A: Regex is about 10-12 times slower than a simple loop:
public static string CamelCaseToSpaceSeparated(this string str)
{
if (string.IsNullOrEmpty(str))
{
return str;
}
var res = new StringBuilder();
res.Append(str[0]);
for (var i = 1; i < str.Length; i++)
{
if (char.IsUpper(str[i]))
{
res.Append(' ');
}
res.Append(str[i]);
}
return res.ToString();
}
A: I made this a while ago. It matches each component of a CamelCase name.
/([A-Z]+(?=$|[A-Z][a-z])|[A-Z]?[a-z]+)/g
For example:
"SimpleHTTPServer" => ["Simple", "HTTP", "Server"]
"camelCase" => ["camel", "Case"]
To convert that to just insert spaces between the words:
Regex.Replace(s, "([a-z](?=[A-Z])|[A-Z](?=[A-Z][a-z]))", "$1 ")
If you need to handle digits:
/([A-Z]+(?=$|[A-Z][a-z]|[0-9])|[A-Z]?[a-z]+|[0-9]+)/g
Regex.Replace(s,"([a-z](?=[A-Z]|[0-9])|[A-Z](?=[A-Z][a-z]|[0-9])|[0-9](?=[^0-9]))","$1 ")
A: Just for a little variety... Here's an extension method that doesn't use a regex.
public static class CamelSpaceExtensions
{
public static string SpaceCamelCase(this String input)
{
return new string(Enumerable.Concat(
input.Take(1), // No space before initial cap
InsertSpacesBeforeCaps(input.Skip(1))
).ToArray());
}
private static IEnumerable<char> InsertSpacesBeforeCaps(IEnumerable<char> input)
{
foreach (char c in input)
{
if (char.IsUpper(c))
{
yield return ' ';
}
yield return c;
}
}
}
A: Grant Wagner's excellent comment aside:
Dim s As String = RegularExpressions.Regex.Replace("ThisIsMyCapsDelimitedString", "([A-Z])", " $1")
A: I needed a solution that supports acronyms and numbers. This Regex-based solution treats the following patterns as individual "words":
*
*A capital letter followed by lowercase letters
*A sequence of consecutive numbers
*Consecutive capital letters (interpreted as acronyms) - a new word can begin using the last capital, e.g. HTMLGuide => "HTML Guide", "TheATeam" => "The A Team"
You could do it as a one-liner:
Regex.Replace(value, @"(?<!^)((?<!\d)\d|(?(?<=[A-Z])[A-Z](?=[a-z])|[A-Z]))", " $1")
A more readable approach might be better:
using System.Text.RegularExpressions;
namespace Demo
{
public class IntercappedStringHelper
{
private static readonly Regex SeparatorRegex;
static IntercappedStringHelper()
{
const string pattern = @"
(?<!^) # Not start
(
# Digit, not preceded by another digit
(?<!\d)\d
|
# Upper-case letter, followed by lower-case letter if
# preceded by another upper-case letter, e.g. 'G' in HTMLGuide
(?(?<=[A-Z])[A-Z](?=[a-z])|[A-Z])
)";
var options = RegexOptions.IgnorePatternWhitespace | RegexOptions.Compiled;
SeparatorRegex = new Regex(pattern, options);
}
public static string SeparateWords(string value, string separator = " ")
{
return SeparatorRegex.Replace(value, separator + "$1");
}
}
}
Here's an extract from the (XUnit) tests:
[Theory]
[InlineData("PurchaseOrders", "Purchase-Orders")]
[InlineData("purchaseOrders", "purchase-Orders")]
[InlineData("2Unlimited", "2-Unlimited")]
[InlineData("The2Unlimited", "The-2-Unlimited")]
[InlineData("Unlimited2", "Unlimited-2")]
[InlineData("222Unlimited", "222-Unlimited")]
[InlineData("The222Unlimited", "The-222-Unlimited")]
[InlineData("Unlimited222", "Unlimited-222")]
[InlineData("ATeam", "A-Team")]
[InlineData("TheATeam", "The-A-Team")]
[InlineData("TeamA", "Team-A")]
[InlineData("HTMLGuide", "HTML-Guide")]
[InlineData("TheHTMLGuide", "The-HTML-Guide")]
[InlineData("TheGuideToHTML", "The-Guide-To-HTML")]
[InlineData("HTMLGuide5", "HTML-Guide-5")]
[InlineData("TheHTML5Guide", "The-HTML-5-Guide")]
[InlineData("TheGuideToHTML5", "The-Guide-To-HTML-5")]
[InlineData("TheUKAllStars", "The-UK-All-Stars")]
[InlineData("AllStarsUK", "All-Stars-UK")]
[InlineData("UKAllStars", "UK-All-Stars")]
A: Naive regex solution. Will not handle O'Conner, and adds a space at the start of the string as well.
s = "ThisIsMyCapsDelimitedString"
split = Regex.Replace(s, "[A-Z0-9]", " $&");
A: There's probably a more elegant solution, but this is what I came up with off the top of my head:
string myString = "ThisIsMyCapsDelimitedString";
for (int i = 1; i < myString.Length; i++)
{
if (myString[i].ToString().ToUpper() == myString[i].ToString())
{
myString = myString.Insert(i, " ");
i++;
}
}
A: Try to use
"([A-Z]*[^A-Z]*)"
The result will fit for alphabet mix with numbers
Regex.Replace("AbcDefGH123Weh", "([A-Z]*[^A-Z]*)", "$1 ");
Abc Def GH123 Weh
Regex.Replace("camelCase", "([A-Z]*[^A-Z]*)", "$1 ");
camel Case
A: Implementing the pseudo code from: https://stackoverflow.com/a/5796394/4279201
private static StringBuilder camelCaseToRegular(string i_String)
{
StringBuilder output = new StringBuilder();
int i = 0;
foreach (char character in i_String)
{
if (character <= 'Z' && character >= 'A' && i > 0)
{
output.Append(" ");
}
output.Append(character);
i++;
}
return output;
}
A: To match between non-uppercase and Uppercase Letter Unicode Category : (?<=\P{Lu})(?=\p{Lu})
Dim s = Regex.Replace("CorrectHorseBatteryStaple", "(?<=\P{Lu})(?=\p{Lu})", " ")
A: Procedural and fast impl:
/// <summary>
/// Get the words in a code <paramref name="identifier"/>.
/// </summary>
/// <param name="identifier">The code <paramref name="identifier"/></param> to extract words from.
public static string[] GetWords(this string identifier) {
Contract.Ensures(Contract.Result<string[]>() != null, "returned array of string is not null but can be empty");
if (identifier == null) { return new string[0]; }
if (identifier.Length == 0) { return new string[0]; }
const int MIN_WORD_LENGTH = 2; // Ignore one letter or one digit words
var length = identifier.Length;
var list = new List<string>(1 + length/2); // Set capacity, not possible more words since we discard one char words
var sb = new StringBuilder();
CharKind cKindCurrent = GetCharKind(identifier[0]); // length is not zero here
CharKind cKindNext = length == 1 ? CharKind.End : GetCharKind(identifier[1]);
for (var i = 0; i < length; i++) {
var c = identifier[i];
CharKind cKindNextNext = (i >= length - 2) ? CharKind.End : GetCharKind(identifier[i + 2]);
// Process cKindCurrent
switch (cKindCurrent) {
case CharKind.Digit:
case CharKind.LowerCaseLetter:
sb.Append(c); // Append digit or lowerCaseLetter to sb
if (cKindNext == CharKind.UpperCaseLetter) {
goto TURN_SB_INTO_WORD; // Finish word if next char is upper
}
goto CHAR_PROCESSED;
case CharKind.Other:
goto TURN_SB_INTO_WORD;
default: // charCurrent is never Start or End
Debug.Assert(cKindCurrent == CharKind.UpperCaseLetter);
break;
}
// Here cKindCurrent is UpperCaseLetter
// Append UpperCaseLetter to sb anyway
sb.Append(c);
switch (cKindNext) {
default:
goto CHAR_PROCESSED;
case CharKind.UpperCaseLetter:
// "SimpleHTTPServer" when we are at 'P' we need to see that NextNext is 'e' to get the word!
if (cKindNextNext == CharKind.LowerCaseLetter) {
goto TURN_SB_INTO_WORD;
}
goto CHAR_PROCESSED;
case CharKind.End:
case CharKind.Other:
break; // goto TURN_SB_INTO_WORD;
}
//------------------------------------------------
TURN_SB_INTO_WORD:
string word = sb.ToString();
sb.Length = 0;
if (word.Length >= MIN_WORD_LENGTH) {
list.Add(word);
}
CHAR_PROCESSED:
// Shift left for next iteration!
cKindCurrent = cKindNext;
cKindNext = cKindNextNext;
}
string lastWord = sb.ToString();
if (lastWord.Length >= MIN_WORD_LENGTH) {
list.Add(lastWord);
}
return list.ToArray();
}
private static CharKind GetCharKind(char c) {
if (char.IsDigit(c)) { return CharKind.Digit; }
if (char.IsLetter(c)) {
if (char.IsUpper(c)) { return CharKind.UpperCaseLetter; }
Debug.Assert(char.IsLower(c));
return CharKind.LowerCaseLetter;
}
return CharKind.Other;
}
enum CharKind {
End, // For end of string
Digit,
UpperCaseLetter,
LowerCaseLetter,
Other
}
Tests:
[TestCase((string)null, "")]
[TestCase("", "")]
// Ignore one letter or one digit words
[TestCase("A", "")]
[TestCase("4", "")]
[TestCase("_", "")]
[TestCase("Word_m_Field", "Word Field")]
[TestCase("Word_4_Field", "Word Field")]
[TestCase("a4", "a4")]
[TestCase("ABC", "ABC")]
[TestCase("abc", "abc")]
[TestCase("AbCd", "Ab Cd")]
[TestCase("AbcCde", "Abc Cde")]
[TestCase("ABCCde", "ABC Cde")]
[TestCase("Abc42Cde", "Abc42 Cde")]
[TestCase("Abc42cde", "Abc42cde")]
[TestCase("ABC42Cde", "ABC42 Cde")]
[TestCase("42ABC", "42 ABC")]
[TestCase("42abc", "42abc")]
[TestCase("abc_cde", "abc cde")]
[TestCase("Abc_Cde", "Abc Cde")]
[TestCase("_Abc__Cde_", "Abc Cde")]
[TestCase("ABC_CDE_FGH", "ABC CDE FGH")]
[TestCase("ABC CDE FGH", "ABC CDE FGH")] // Should not happend (white char) anything that is not a letter/digit/'_' is considered as a separator
[TestCase("ABC,CDE;FGH", "ABC CDE FGH")] // Should not happend (,;) anything that is not a letter/digit/'_' is considered as a separator
[TestCase("abc<cde", "abc cde")]
[TestCase("abc<>cde", "abc cde")]
[TestCase("abc<D>cde", "abc cde")] // Ignore one letter or one digit words
[TestCase("abc<Da>cde", "abc Da cde")]
[TestCase("abc<cde>", "abc cde")]
[TestCase("SimpleHTTPServer", "Simple HTTP Server")]
[TestCase("SimpleHTTPS2erver", "Simple HTTPS2erver")]
[TestCase("camelCase", "camel Case")]
[TestCase("m_Field", "Field")]
[TestCase("mm_Field", "mm Field")]
public void Test_GetWords(string identifier, string expectedWordsStr) {
var expectedWords = expectedWordsStr.Split(' ');
if (identifier == null || identifier.Length <= 1) {
expectedWords = new string[0];
}
var words = identifier.GetWords();
Assert.IsTrue(words.SequenceEqual(expectedWords));
}
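A hypothetical usage sketch, assuming the static class containing the GetWords extension above is in scope (with the usual usings):
string[] words = "SimpleHTTPServer".GetWords();
Console.WriteLine(string.Join(" ", words)); // Simple HTTP Server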
A: A simple solution, which should be order(s) of magnitude faster than a regex solution (based on the tests I ran against the top solutions in this thread), especially as the size of the input string grows:
string s1 = "ThisIsATestStringAbcDefGhiJklMnoPqrStuVwxYz";
string s2;
StringBuilder sb = new StringBuilder();
foreach (char c in s1)
sb.Append(char.IsUpper(c)
? " " + c.ToString()
: c.ToString());
s2 = sb.ToString();
A: For C#, building on this awesome answer by @ZombieSheep, but now using compiled regexes for better performance:
public static class StringExtensions
{
private static readonly Regex _regex1 = new(@"(\P{Ll})(\P{Ll}\p{Ll})", RegexOptions.Compiled | RegexOptions.CultureInvariant);
private static readonly Regex _regex2 = new(@"(\p{Ll})(\P{Ll})", RegexOptions.Compiled | RegexOptions.CultureInvariant);
public static string SplitCamelCase(this string str)
{
return _regex2.Replace(_regex1.Replace(str, "$1 $2"), "$1 $2");
}
}
Sample code:
private static void Main(string[] args)
{
string str = "ThisIsAPropertyNAMEWithNumber10";
Console.WriteLine(str.SplitCamelCase());
}
Result:
This Is A Property NAME With Number 10
A plus point of this one is that it also works for strings that contain digits/numbers.
A: Regex.Replace(str, @"(\p{Ll}(?=[\p{Lu}0-9])|\p{Lu}(?=\p{Lu}\p{Ll}|[0-9])|[0-9](?=\p{L}))", "$1 ")
It deals with all Unicode characters, plus it works fine if your string is a regular sentence that contains a camel case expression (and you want to keep the sentence intact but to break the camel case into words, without duplicating spaces etc).
I took Markus Jarderot's answer which is excellent (so credits to him) and replaced [A-Z] with \p{Lu} and [a-z] with \p{Ll} and modified the last part to deal with numbers.
If you want numbers to trail after acronyms (e.g. HTML5Guide ⮕ HTML5 Guide):
Regex.Replace(str, @"(\p{Ll}(?=[\p{Lu}0-9])|\p{Lu}(?=\p{Lu}\p{Ll})|[0-9](?=\p{L}))", " $1")
Another approach
Just another approach to solve the problem:
Regex.Replace(str, @"((?<=[\p{Ll}0-9])\p{Lu}|(?<=\p{Lu})\p{Lu}(?=\p{Ll})|(?<=\p{L})[0-9]|(?<=[0-9])\p{Ll})", " $1")
More Options
If you want numbers to trail after acronyms (e.g. HTML5Guide ⮕ HTML5 Guide):
Regex.Replace(str, @"((?<=[\p{Ll}0-9])\p{Lu}|(?<=\p{Lu})\p{Lu}(?=\p{Ll})|(?<=\p{Ll})[0-9]|(?<=[0-9])\p{Ll})", " $1")
If you want numbers to trail after any word (e.g. Html5Guide ⮕ Html5 Guide):
Regex.Replace(str, @"((?<=[\p{Ll}0-9])\p{Lu}|(?<=\p{Lu})\p{Lu}(?=\p{Ll})|(?<=[0-9])\p{Ll})", " $1")
If you don't want to deal with numbers and you're sure to not have them in the string:
Regex.Replace(str, @"((?<=\p{Ll})\p{Lu}|(?<=\p{Lu})\p{Lu}(?=\p{Ll}))", " $1")
For a simpler version (ignoring special Unicode characters like é as in fiancé),
pick any of the above regexes and simply
replace \p{Lu} with [A-Z], \p{Ll} with [a-z] and \p{L} with [A-Za-z].
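As a quick sketch, applying that substitution to the last (no-digits) pattern above gives the ASCII-only version (requires System.Text.RegularExpressions):
string result = Regex.Replace(
    "TheHTMLGuide",
    @"((?<=[a-z])[A-Z]|(?<=[A-Z])[A-Z](?=[a-z]))",
    " $1");
// result == "The HTML Guide"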
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "120"
} |
Q: Cross browser "jump to"/"scroll" textarea I have a textarea with many lines of input, and a JavaScript event fires that necessitates I scroll the textarea to line 345.
scrollTop sort of does what I want, except as far as I can tell it's pixel level, and I want something that operates on a line level. What also complicates things is that, afaik once again, it's not possible to make textareas not line-wrap.
A: If you use .scrollHeight instead of .clientHeight, it will work properly for textareas that are shown with a limited height and a scrollbar:
<script type="text/javascript" language="JavaScript">
function Jump(line)
{
var ta = document.getElementById("TextArea");
var lineHeight = ta.scrollHeight / ta.rows;
var jump = (line - 1) * lineHeight;
ta.scrollTop = jump;
}
</script>
<textarea name="TextArea" id="TextArea"
rows="40" cols="80" title="Paste text here"
wrap="off"></textarea>
<input type="button" onclick="Jump(98)" title="Go!" value="Jump"/>
A: You can stop wrapping with the wrap attribute. It is not part of HTML 4, but most browsers support it.
You can compute the height of a line by dividing the height of the area by its number of rows.
<script type="text/javascript" language="JavaScript">
function Jump(line)
{
var ta = document.getElementById("TextArea");
var lineHeight = ta.clientHeight / ta.rows;
var jump = (line - 1) * lineHeight;
ta.scrollTop = jump;
}
</script>
<textarea name="TextArea" id="TextArea"
rows="40" cols="80" title="Paste text here"
wrap="off"></textarea>
<input type="button" onclick="Jump(98)" title="Go!" value="Jump"/>
Tested OK in FF3 and IE6.
A: Something to consider when referring to the accepted answer: you may not have specified the rows attribute in your textarea e.g. instead, you may have set the height of the textarea using CSS.
Therefore referring to ta.rows will not work as per above (it's 2 by default), so instead you could get the line-height of your textarea via currentStyle / getComputedStyle or even jQuery's .css(), and do something like the following:
function jump(line) {
var ta = document.getElementById("TextArea");
var jump = line * parseInt(getStyle(ta, 'line-height'), 10);
ta.scrollTop = jump;
}
function getStyle(el, styleProp) {
if (el.currentStyle) {
var y = el.currentStyle[styleProp];
} else if (window.getComputedStyle) {
var y = document.defaultView.getComputedStyle(el, null).getPropertyValue(styleProp);
}
return y;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: How do I get PHP to work with ADOdb and MySQL? I'm trying to get a PHP site working in IIS on Windows Server with MySQL.
I'm getting this error…
Fatal error: Call to undefined function mysql_connect() in C:\inetpub...\adodb\drivers\adodb-mysql.inc.php on line 363
Update…
This link outlines the steps I followed to install PHP on my server:
How do I get PHP and MySQL working on IIS 7.0 ?
(note especially steps 6 and 8 regarding php.ini and php_mysql.dll).
Only two lines in the phpinfo report reference SQL:
<?php
phpinfo();
?>
Configure Command:
cscript /nologo configure.js "--enable-snapshot-build" "--enable-mysqlnd"
sql.safe_mode:
Local Value Off, Master Value Off
PHP Configure Command http://img79.imageshack.us/img79/2373/configurecommandmw8.gif
PHP sql.safe_mode http://img49.imageshack.us/img49/3066/sqlsafemoderu6.gif
Update…
I found the solution: How do I install MySQL modules within PHP?
A: Check out phpinfo to see if the mysql functions are compiled with your PHP
<?php
phpinfo();
?>
In some versions of PHP, it's not included by default with the install.
Edit for the Update:
You should have a full MySQL category in your phpinfo();
See this for example: https://secure18.easycgi.com/phpinfo.php (googled example)
A: It sounds like the version of PHP you are using has not been compiled with MySQL support, or has not been enabled in the php.ini.
A: Looks like you haven't got the MySQL PHP extensions installed! You shouldn't have to do any configuration other than installing the correct modules (and shouldn't be doing anything with ADODB).
PHP comes in 2 versions as well - a CGI version and an ISAPI module. You're best using the ISAPI version with IIS and adding all the trimmings...
A: I found the solution: How do I install MySQL modules within PHP?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Boost: Fire and forget asynchronous function call? I would like to invoke a function call in a one-shot manner. What's the best way to do this in Boost / C++?
I would like to pass it two parameters and do not need a result.
A: Well you can spin up a thread and then never join it again. Something like:
boost::thread some_thread(&SomeFunction, param1, param2);
Once the some_thread variable goes out of scope, the thread of execution will be detached and will not be able to be joined again. It's a waste to spin up a thread unless the processing time of this function is significant though.
A: I haven't used boost::thread in awhile but I see a quick example on the documentation page for the class:
void find_the_question(int the_answer);
boost::thread deep_thought_2(find_the_question,42);
I believe as soon as it finishes the function, the thread will exit. This may not be what you want in that once the thread goes out of scope, it will be destroyed. If that's not going to work, you probably need to create a long running thread pool and then pass your functors as boost::bind compositions.
A: Depending on how often you are doing this, you might be best off creating a pool of threads, along with a work queue. Creating a thread can create a lot of overhead if you are trying to do it dozens of times a second. If you don't care about the return value, that makes it really easy.
Spin up a thread or two (or ten); have a thread-safe queue of functors to call (bind the parameters to the function and put that on the queue); the threads wait on the queue for something to show up, the first thread to wake up gets to process the work. When a thread is done running a job, it waits on the queue again.
Take a look at this project for an idea of one way to do it.
Of course if you are only making asynchronous calls every couple of seconds to improve a UI's responsiveness, it'd be easier to just start up a new thread every time.
A: Perhaps you want to emit a signal?
I really liked Qt's signals and slots functionality, and I know Boost has signals/slots as well. I've never used signals/slots in Boost, though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do I get a Video Thumbnail in .Net? I'm looking to implement a function that retrieves a single frame from an input video, so I can use it as a thumbnail.
Something along these lines should work:
// filename examples: "test.avi", "test.dvr-ms"
// position is from 0 to 100 percent (0.0 to 1.0)
// returns a bitmap
byte[] GetVideoThumbnail(string filename, float position)
{
}
Does anyone know how to do this in .Net 3.0?
The correct solution will be the "best" implementation of this function.
Bonus points for avoiding selection of blank frames.
A: This project will do the trick for AVIs: http://www.codeproject.com/KB/audio-video/avifilewrapper.aspx
For any other formats, you might look into DirectShow. There are a few projects that might help:
http://sourceforge.net/projects/directshownet/
http://code.google.com/p/slimdx/
A: 1- Get latest version of ffmpeg.exe from : http://ffmpeg.arrozcru.org/builds/
2- Extract the file and copy ffmpeg.exe to your website
3- Use this Code:
Process ffmpeg;
string video;
string thumb;
video = Server.MapPath("first.avi");
thumb = Server.MapPath("frame.jpg");
ffmpeg = new Process();
ffmpeg.StartInfo.Arguments = " -i "+video+" -ss 00:00:07 -vframes 1 -f image2 -vcodec mjpeg "+thumb;
ffmpeg.StartInfo.FileName = Server.MapPath("ffmpeg.exe");
ffmpeg.Start();
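For illustration, a rough sketch that wraps the call above in the signature from the question (needs System.Diagnostics and System.IO). The -ss timestamp is hard-coded here; mapping the 0.0-1.0 position onto a real timestamp would require the clip duration (e.g. parsed from ffmpeg/ffprobe output), which is left out of this sketch:
byte[] GetVideoThumbnail(string filename, float position)
{
    string thumb = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName() + ".jpg");
    Process ffmpeg = new Process();
    ffmpeg.StartInfo.FileName = "ffmpeg.exe"; // assumed to be reachable on the PATH
    ffmpeg.StartInfo.Arguments = " -i \"" + filename + "\" -ss 00:00:07 -vframes 1 -f image2 -vcodec mjpeg \"" + thumb + "\"";
    ffmpeg.StartInfo.UseShellExecute = false;
    ffmpeg.Start();
    ffmpeg.WaitForExit();
    byte[] frame = File.ReadAllBytes(thumb);
    File.Delete(thumb);
    return frame;
}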
A: I ended up rolling my own stand alone class (with the single method I described), the source can be viewed here. Media browser is GPL but I am happy for the code I wrote for that file to be Public Domain. Keep in mind it uses interop from the directshow.net project so you will have to clear that portion of the code with them.
This class will not work for DVR-MS files; you need to inject a DirectShow filter for those.
A: There are some libraries at www.mitov.com that may help. It's a generic wrapper for Directshow functionality, and I think one of the demos shows how to take a frame from a video file.
A: This is also worth seeing:
http://www.codeproject.com/Articles/13237/Extract-Frames-from-Video-Files
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: What columns can be used in OUTPUT INTO clause? I'm trying to build a mapping table to associate the IDs of new rows in a table with those that they're copied from. The OUTPUT INTO clause seems perfect for that, but it doesn't seem to behave according to the documentation.
My code:
DECLARE @Missing TABLE (SrcContentID INT PRIMARY KEY )
INSERT INTO @Missing
( SrcContentID )
SELECT cshadow.ContentID
FROM Private.Content AS cshadow
LEFT JOIN Private.Content AS cglobal ON cshadow.Tag = cglobal.Tag
WHERE cglobal.ContentID IS NULL
PRINT 'Adding new content headers'
DECLARE @Inserted TABLE (SrcContentID INT PRIMARY KEY, TgtContentID INT )
INSERT INTO Private.Content
( Tag, Description, ContentDate, DateActivate, DateDeactivate, SortOrder, CreatedOn, IsDeleted, ContentClassCode, ContentGroupID, OrgUnitID )
OUTPUT cglobal.ContentID, INSERTED.ContentID INTO @Inserted (SrcContentID, TgtContentID)
SELECT Tag, Description, ContentDate, DateActivate, DateDeactivate, SortOrder, CreatedOn, IsDeleted, ContentClassCode, ContentGroupID, NULL
FROM Private.Content AS cglobal
INNER JOIN @Missing AS m ON cglobal.ContentID = m.SrcContentID
Results in the error message:
Msg 207, Level 16, State 1, Line 34
Invalid column name 'SrcContentID'.
(line 34 being the one with the OUTPUT INTO)
Experimentation suggests that only rows that are actually present in the target of the INSERT can be selected in the OUTPUT INTO. But this contradicts the docs in the books online. The article on OUTPUT Clause has example E that describes a similar usage:
The OUTPUT INTO clause returns values
from the table being updated
(WorkOrder) and also from the Product
table. The Product table is used in
the FROM clause to specify the rows to
update.
Has anyone worked with this feature?
(In the meantime I've rewritten my code to do the job using a cursor loop, but that's ugly and I'm still curious)
A: I'm running into EXACTLY the same problem as you are, I feel your pain...
As far as I've been able to find out there's no way to use the from_table_name prefix with an INSERT statement.
I'm sure there's a viable technical reason for this, and I'd love to know exactly what it is.
Ok, found it, here's a forum post on why it doesn't work:
MSDN forums
A: You can do this with a MERGE in Sql Server 2008. Example code below:
--drop table A
create table A (a int primary key identity(1, 1))
insert into A default values
insert into A default values
delete from A where a>=3
-- insert two values into A and get the new primary keys
MERGE a USING (SELECT a FROM A) AS B(a)
ON (1 = 0) -- ignore the values, NOT MATCHED will always be true
WHEN NOT MATCHED THEN INSERT DEFAULT VALUES -- always insert here for this example
OUTPUT $action, inserted.*, deleted.*, B.a; -- show the new primary key and source data
Result is
INSERT, 3, NULL, 1
INSERT, 4, NULL, 2
i.e. for each row the new primary key (3, 4) and the old one (1, 2). Creating a table called e.g. #OUTPUT and adding " INTO #OUTPUT;" at the end of the OUTPUT clause would save the records.
A: I've verified that the problem is that you can only use INSERTED columns. The documentation seems to indicate that you can use from_table_name, but I can't seem to get it to work (The multi-part identifier "m.ContentID" could not be bound.):
TRUNCATE TABLE main
SELECT *
FROM incoming
SELECT *
FROM main
DECLARE @Missing TABLE (ContentID INT PRIMARY KEY)
INSERT INTO @Missing(ContentID)
SELECT incoming.ContentID
FROM incoming
LEFT JOIN main
ON main.ContentID = incoming.ContentID
WHERE main.ContentID IS NULL
SELECT *
FROM @Missing
DECLARE @Inserted TABLE (ContentID INT PRIMARY KEY, [Content] varchar(50))
INSERT INTO main(ContentID, [Content])
OUTPUT INSERTED.ContentID /* incoming doesn't work, m doesn't work */, INSERTED.[Content] INTO @Inserted (ContentID, [Content])
SELECT incoming.ContentID, incoming.[Content]
FROM incoming
INNER JOIN @Missing AS m
ON m.ContentID = incoming.ContentID
SELECT *
FROM @Inserted
SELECT *
FROM incoming
SELECT *
FROM main
Apparently the from_table_name prefix is only allowed on DELETE or UPDATE (or MERGE in 2008) - I'm not sure why:
*
*from_table_name
Is a column prefix that specifies a table included in the FROM clause of a DELETE or UPDATE statement that is used to specify the rows to update or delete.
If the table being modified is also specified in the FROM clause, any reference to columns in that table must be qualified with the INSERTED or DELETED prefix.
A: I think I found a solution to this problem, it sadly involves a temporary table, but at least it'll prevent the creation of a dreaded cursor :)
What you need to do is add an extra column to the table you're duplicating records from and give it a 'uniqueidentifier' type.
then declare a temporary table:
DECLARE @tmptable TABLE (uniqueid uniqueidentifier, original_id int, new_id int)
insert the data into your temp table like this:
insert into @tmptable
(uniqueid,original_id,new_id)
select NewId(),id,0 from OriginalTable
then go ahead and do the real insert into the original table:
insert into OriginalTable
(uniqueid)
select uniqueid from @tmptable
Now to add the newly created identity values to your temp table:
update @tmptable
set new_id = o.id
from OriginalTable o inner join @tmptable tmp on tmp.uniqueid = o.uniqueid
Now you have a lookup table that holds the new id and original id in one record, for your using pleasure :)
I hope this helps somebody...
A: (MS) If the table being modified is also specified in the FROM clause, any reference to columns in that table must be qualified with the INSERTED or DELETED prefix.
In your example, you can't use cglobal table in the OUTPUT unless it's INSERTED.column_name or DELETED.column_name:
INSERT INTO Private.Content
(Tag)
OUTPUT cglobal.ContentID, INSERTED.ContentID
INTO @Inserted (SrcContentID, TgtContentID)
SELECT Tag
FROM Private.Content AS cglobal
INNER JOIN @Missing AS m ON cglobal.ContentID = m.SrcContentID
What worked for me was a simple alias table, like this:
INSERT INTO con1
(Tag)
OUTPUT con2.ContentID, INSERTED.ContentID
INTO @Inserted (SrcContentID, TgtContentID)
SELECT Tag
FROM Private.Content con1
INNER JOIN Private.Content con2 ON con1.id=con2.id
INNER JOIN @Missing AS m ON con1.ContentID = m.SrcContentID
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: bandwidth and traffic simulator for web apps? Can you suggest how to create a test environment to simulate various types of bandwidths and traffic in a web app?
Or maybe an open source program which does this against localhost?
I think this is a very important subject when programming web apps, but it is not a commonly discussed topic. The only way I can imagine creating such an environment is to use some kind of proxy in a local network, but before I start looking into the Squid documentation I would like to hear your suggestions.
A: If you're using Apache, you may want to take a look at Apache ab.
A: There are two approaches to shape network traffic to simulate a network link:
*
*Run some software on the client or server that sits somewhere in the networking stack and shapes the traffic between the app and the network interface
*Run the traffic shaping software on a dedicated machine with 2 network interfaces through which your traffic is routed
(2) is a better solution if you don't want to install software on the client or server (and possibly impact performance), but requires more hardware fiddling.
Some other features you might want to think about are what shaping parameters can be simulated. Most do delay and packet loss, some do jitter and bandwidth limiting as well. Some solutions can selectively filter traffic (for instance by port number, TCP or UDP etc).
Here is a list of some of the systems I've found:
Open Source or Freeware
DummyNet is an open source BSD Unix-based system for dedicated devices. It is not clear if the software is being actively maintained.
NistNet is an open source Linux-based system for dedicated devices. The software has not been actively maintained for several years.
Commercial
Apposite Technologies sell dedicated hardware solutions for simulating WAN links, with a web-based GUI for configuring the settings and collecting traffic measurements
East Coast DataCom sell hardware dedicated simulators for simulating routers and modems
Itrinegy offer both dedicated device solutions, and solutions for running on clients or servers.
Network FX offer several dedicated device products for simulating network impairments between the client & server
NetLimiter is a client side system that allows throttling of individual applications, and includes a firewall.
Shunra Software offer a range of products, from high end enterprise WAN simulation and testing, to a simple client-resident emulator.
A: The closest I can think of is doing something similar with VE Desktop from Shunra...
Simulating High Latency and Low Bandwidth in Testing of Database Applications
Shunra VE Desktop Standard is a Windows-based client software solution that simulates a wide area network link so that you can test applications under a variety of current and potential network conditions – directly from your desktop.
A: I wrote a php script awhile back which used CURL to run a sequence of page requests against my server which represented a typical use scenario. I had it output the times that it took for the server to respond to each of the requests. I then had another script which spawned a bunch of these test case scripts simultaneously for a sustained period and correlated the results into a file which I could then look at in a spreadsheet to see average times. This way I could simulate the number of users hitting the site that I wanted. The limitations are that you need to run the test script on a different server to the web server and that the client machine can become too loaded to give meaningful results past a certain point. I've since left the job otherwise I would paste the scripts here.
A: If you are running a Linux box as your server, Linux box as your client, or have the capability to put (perhaps a VM) a Linux router between your client and server, you can use NetEm.
NetEm is a Linux TC (Traffic Control) discipline which can delay (i.e. add latency) packets leaving a host. Although it's tricky to set up clever rules (e.g. add latency to some traffic, not to others), it's easy to add a simple "delay everything leaving the interface by 50ms" type rules and some recipes are provided.
By sticking a Linux VM between your client and server, you can simulate as much latency as you like. And you can turn it on and off dynamically. Linux has other TC disciplines which can be combined with NetEm to restrict bandwidth (but the script to set this up can be somewhat complicated). NetEm can also randomly drop packets.
I use it and it works a treat :)
A: As other people have mentioned, Apache's ab (comes with Apache, so you probably have it already) is good.
Other good options are:
*
*HP's LoadRunner
*Apache Jakarta's JMeter
*Tsung (if you want to get your erlang on)
I personally like ab and JMeter the best.
A: Web Application Stress Tool (WAST) from Microsoft is what you need.
http://www.microsoft.com/downloads/details.aspx?familyid=e2c0585a-062a-439e-a67d-75a89aa36495&displaylang=en
A: I haven't used it for years (lack of need, not because I'd found anything else), but xat webspeed would be the first thing I would point toward
A: We use LoadRunner to do bandwidth and traffic simulation in our app. LoadRunner can start agents on various machines, and you can simulate one machine as running on a dialup modem vs. another on DSL vs. another on cable internet.
We also use LoadRunner to simulate various kinds of traffic conditions, from a 10-user run to a 500-user run. We can also insert think times in the script and simulate a real user executing the HTTP requests. The best part is that it comes with a recording studio that plugs into Internet Explorer, so you can record the whole scenario/use case, which can be as simple as hitting one page or as large as a full-blown 50-60 page script or more.
A: I found this little Java program that works great: Sloppy.
It's not a professional solution, but it works for simple tests. I guess it uses Java streams and buffers to slow down the connection.
A: Have you looked at Tsung? It's a great utility for seeing if your website will scale in event of attack, I mean massive popularity. We use it for our web frontend, and our internal systems too.
A: If you're interested in performing your tests out of your browser, there is also a really great Firefox plug-in.
A: Do not forget about Wanulator (http://www.wanulator.de/).
The name Wanulator comes from "WAN" and "simulator. This pretty much describes what the software does: It simulates different Internet conditions such as delay or packet loss. Furthermore it simulates user access line speeds e.g. modem, ISDN or ADSL.
Wanulator is currently packaged as a Linux boot CD based on SLAX. This will give you a full out of the box experience. You can turn any PC into a test-system within a blink - just by booting the Wanulator CD. The package already includes useful client SW such as web-browser and network sniffer (Wireshark). Nevertheless if the PC has 2 network interfaces the system can run as an intermediate system between your server and your client - as a switch - without any configuration hassles.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: ExecuteScalar not functioning on server The application uses Oracle DataAccess ver. 1.1. , VS 2008, .Net Framework 3.5 w/SP1
OracleConnection connection = new OracleConnection(ConnectionStringLogon);
connection.Open();
OracleParameter selectParam = new OracleParameter(":applicationName", OracleDbType.Varchar2, 256);
selectParam.Value = applicationName;
if (connection.State != ConnectionState.Open)
connection.Open();
OracleCommand cmd = new OracleCommand();
cmd.Connection = connection;
cmd.CommandText = "Select ApplicationId from Applications where AppName = 'appName'";
cmd.CommandType = CommandType.Text;
if (selectParam != null)
{
cmd.Parameters.Add(selectParam);
}
object lookupResult = cmd.ExecuteScalar();
cmd.Parameters.Clear();
if (lookupResult != null)
The procedure fails on object lookupResult = cmd.ExecuteScalar(); with this error:
Event Type: Error
Event Source: App Log
Event Category: None
Event ID: 9961
Date: 9/30/2008
Time: 4:42:11 PM
User: N/A
Computer: Server15
Description:
System.NullReferenceException: Object reference not set to an instance of an object.
at Oracle.DataAccess.Client.OracleCommand.ExecuteReader(Boolean requery, Boolean fillRequest, CommandBehavior behavior)
at Oracle.DataAccess.Client.OracleCommand.ExecuteReader()
at Oracle.DataAccess.Client.OracleCommand.ExecuteScalar()
at Membership.OracleMembershipProvider.GetApplicationId(String applicationName, Boolean createIfNeeded) in OracleMembershipProvider.cs:line 1626
I've looked at this from every angle that I can conceive of... basically, no matter how I wrap it, the Execute fails.
A: I notice your CommandText doesn't contain the specified parameter ":applicationName"
A: It's not an error in "ExecuteReader". It's an error in the execution of the query... is applicationName null?
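For illustration, a minimal sketch of what the first answer is pointing at - the CommandText actually referencing the bind variable instead of the hard-coded literal 'appName' (same usings and parameter setup as the question's code, untested):
using (var connection = new OracleConnection(ConnectionStringLogon))
using (var cmd = new OracleCommand())
{
    connection.Open();
    cmd.Connection = connection;
    cmd.CommandType = CommandType.Text;
    cmd.CommandText = "Select ApplicationId from Applications where AppName = :applicationName";

    var selectParam = new OracleParameter(":applicationName", OracleDbType.Varchar2, 256);
    selectParam.Value = applicationName;
    cmd.Parameters.Add(selectParam);

    object lookupResult = cmd.ExecuteScalar();
    // lookupResult is null when no row matched
}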
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What are the best resources for learning CIL (MSIL) I'm an expert C# 3 / .NET 3.5 programmer looking to start doing some runtime codegen using System.Reflection.Emit.DynamicMethod. I'd love to move up to the next level by becoming intimately familiar with IL.
Any pointers (pun intended)?
A: In addition to Darren's answer, I'd suggest picking or inventing a toy language, and writing a simple compiler for it. Pick something that requires little parsing, like BF or a stack-based language, and you'll find that writing a compiler is actually simpler than it seems.
A: The best way to learn it is to write something you understand, then look at the IL it created. Also, depending on what you are doing, you can use expression trees instead of emitting IL, and then when you compile the expression trees, those smart guys at Microsoft create the IL for you.
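Since the question mentions System.Reflection.Emit.DynamicMethod, here is a minimal, self-contained sketch of emitting a trivial add method by hand, which you can compare against the IL the C# compiler produces for the same code:
using System;
using System.Reflection.Emit;

class EmitDemo
{
    static void Main()
    {
        DynamicMethod add = new DynamicMethod("Add", typeof(int), new[] { typeof(int), typeof(int) });
        ILGenerator il = add.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0); // push the first argument
        il.Emit(OpCodes.Ldarg_1); // push the second argument
        il.Emit(OpCodes.Add);     // add the two values on the evaluation stack
        il.Emit(OpCodes.Ret);     // return the top of the stack

        var addDelegate = (Func<int, int, int>)add.CreateDelegate(typeof(Func<int, int, int>));
        Console.WriteLine(addDelegate(2, 3)); // 5
    }
}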
A: .NET Reflector is great for examining the IL produced by C#/VB.NET.
It's a wonderful learning tool.
A: The ECMA 335 specification is freely available for download here:
http://www.ecma-international.org/publications/standards/Ecma-335.htm
Partition III is the most relevant for dealing with MSIL, but I'd strongly recommend partition I as well for any .NET developer as it will greatly solidify understanding of the platform.
A: Expert .NET 2.0 IL Assembler. Although this book may be a little dated now, it was still a great overview for me.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Control.Enter event doesn't fire when switching tasks. Is there an alternative that does? Repro steps:
*
*create example .NET form application
*put a TextBox on the form
*wire a function up to the TextBox's Enter event
When you run this application, the Control.Enter event fires when focus first goes to the TextBox. However, if you click away into another application and then click back into the test application, the event will not fire again.
So moving between applications does not trigger Enter/Leave.
Is there another alternative Control-level event that I can use, which will fire in this scenario?
Ordinarily, I would use Form.Activated. Unfortunately, that is troublesome here because my component is hosted by a docking system that can undock my component into a new Form without notifying me.
A: What are you trying to do in the Enter event?
I can't find another control-level event that fires in your example program but when my test app does regain focus, the control that last had focus still has it.
Interesting question but it needs a little more context.
A: If I try your example and click outside the control on another window, the desktop, etc., I can get the GotFocus and LostFocus events to fire, but if you're only clicking within a form that has a single control, these events will never fire because that control is the only thing to focus on. Neither will Enter or Leave; unless you change the dynamics or overload the controls, you cannot get this to happen.
A: In your example, I think you need another control. The reason is that the first control (tabIndex 0) is the one with focus. With no other control to switch focus to, this control will always be focused, and therefore can never be entered. Switching to another application or form will not change the focus or active control in this form, so when you return you will still not get the event fired.
With added controls, Control.Enter should work fine. If this is your only control, why not call the event handler on form Load, or on TextChanged, when the form gets focus?
A: Thanks, I'll give some background.
My control is a UserControl that contains a grid and a toolbar. A user will typically launch several of these controls to view different slices of the system's data.
There are several keyboard shortcuts that can launch actions from the selected row in the current grid. However, it is a requirement that these keyboard shortcuts work even when the grid is not the currently focused control. If the user is currently focused on one of the many other areas of the application, then the keyboard shortcut should still work, and it should be routed to the last focused grid.
So I wired a function to the Control.Enter event of my UserControl to basically say LastFocusedGrid = this.
And it would work, except for the docking and undocking...
See, these controls are hosted inside an application with docking features, somewhat similar to visual studio.
By default, the control launches as a tab within the main working area of the application, similar to the way a source file opens in visual studio.
However, the user can "rip out" a tab by grabbing the tab header and dragging it out of the main application. At this point, the application creates a new "float form" to host the control. Switching between the main application and this float form is the same as switching between apps, for the purposes of the Control.Enter and Form.Activated events.
At that point we have the "one control within a form" scenario simulated with the example application described in the original post.
Now, there are some ways around this. I could leverage the Form.Activated event, which DOES fire when switching between forms. If you add an event in the test application to the Form's Activated event, you will see that it works great.
The problem is that my UserControl's relationship with its parent Form is fluid, making the solution somewhat complicated. I tried wiring up to "this.ParentForm.Activated" which worked okay. The problem is when do you call this? What happens when you are undocked/redocked? I ended up with a nasty bunch of code with things like "previousParentForm" so that I could unhook from the old form, and then I was still facing the problem that the docking system doesn't notify me when my parent Form is being changed, so I was going to have to make a bunch of changes there, too.
These problems are not unsolvable, but if there is a simpler control-level "parent form was activated" event, then that would be a lot more elegant.
That's rather long, but I hope it clarifies the situation.
A: So when creating your grid, can you not set the KeyPressed, or KeyUp, etc. event? If so, all the grids can make use of the same event handler. Just make sure that when you get into the event handler to do something like:
Grid currentGrid = (Grid)sender;
Then you should be able to apply that block of code to any grid that gets sent in without having to worry about keeping track.
Since it's all just an event handler, its location is really a moot point as long as everything you need to execute it is accessible.
A: Frye, the problem is that the keyboard shortcuts should work no matter where the user is in the application. They are gloabl commands, handled at the top level, and then routed to the "last focused grid."
So handling the keystrokes at the grid level will not help.
To be more specific, assume user launches grids A, B, and C. But he also launches other controls X, Y, and Z that have nothing to do with my code.
User clicks on A, then on C. Then he clicks on Y, then on Z. With focus on Z, he hits my keyboard shortcut. In this case, grid C should respond since it was the last grid the user was focused in.
A: It sounds like the issue you're having is not directly related to the Enter event. More to the point, if you have controls "that have nothing to do with your code", then you really aren't looking at a control-level event.
A: Guess I wasn't clear.
My control lives in a container application. So do other unrelated controls by other teams. Think of it like visual studio -- my control is the code editing tab, but there is also the pending changes list and the properties window, which cohabitate with the source files but aren't directly related.
The keyboard shortcut is handled by the container application. Then it should be routed to the last one of my controls that the user was focused on.
Maintaing this "LastFocusedGrid" reference is what I do in the Enter event.
If you want to see similar functionality at work in visual studio, try this:
*
*open a few source files
*navigate to the "Start Page" tab.
*Hit Ctrl-F and search "current document" for some string
*Notice that the search feature auto-navigates to the LAST FOCUSED source file to perform the search.
So even though you weren't focused in the source file, the ctrl-F command was processed by visual studio and routed to the last focused source file tab.
Now try the same thing with Ctrl-G. It doesn't work unless you are focused directly in the source file.
My keyboard commands need to work like Ctrl-F here, not like Ctrl-G. That is why I don't just capture the keyboard events directly in my control.
Does that clarify or make things worse?
A: Have you tried just a simple Control.GotFocus?
A: In this example, if you toggle between clicking the textboxes, neither Enter nor GotFocus will behave as expected; however, if you click the child forms instead, both will behave as expected.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows.Forms;
namespace EnterBrokenExample
{
static class Program
{
/// <summary>
/// The main entry point for the application.
/// </summary>
[STAThread]
static void Main()
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Form Form1 = new Form();
Form c1 = new Form();
Form c2 = new Form();
Form1.IsMdiContainer = true;
c1.MdiParent = Form1;
c2.MdiParent = Form1;
c1.Show();
c2.Show();
TextBox tb1 = new TextBox();
c1.Controls.Add(tb1);
tb1.Enter += ontbenter;
tb1.Text = "Some Text";
tb1.GotFocus += ongotfocus;
TextBox tb2 = new TextBox();
c2.Controls.Add(tb2);
tb2.Enter += ontbenter;
tb2.Text = "some other text";
tb2.GotFocus += ongotfocus;
Application.Run(Form1);
}
static void ontbenter(object sender, EventArgs args)
{
if (!(sender is TextBox))
return;
TextBox s = (TextBox)sender;
s.SelectAll();
}
static void ongotfocus(object sender, EventArgs args)
{
if (!(sender is TextBox))
return;
TextBox s = (TextBox)sender;
s.SelectAll();
}
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Does Oracle have something like Change Data Capture in SQL Server 2008? Change Data Capture is a new feature in SQL Server 2008. From MSDN:
Change data capture provides
historical change information for a
user table by capturing both the fact
that DML changes were made and the
actual data that was changed. Changes
are captured by using an asynchronous
process that reads the transaction log
and has a low impact on the system
This is highly sweet - no more adding CreatedDate and LastModifiedBy columns manually.
Does Oracle have anything like this?
A: Sure. Oracle actually has a number of technologies for this sort of thing depending on the business requirements.
*
*Oracle has had something called Workspace Manager for a long time (8i days) that allows you to version-enable a table and track changes over time. This can be a bit heavyweight, though, because it is based on views with instead-of triggers.
*Starting in 11.1 (as an extra cost option to the Enterprise Edition), Oracle has an option called Total Recall that asynchronously mines the redo logs for data changes; the changes get logged to a separate table, which can then be queried using flashback query syntax on the main table. Total Recall automatically partitions and compresses the historical data and automatically takes care of purging the data after a specified retention period.
*Oracle has a LogMiner technology that mines the redo logs and presents transactions to consumers. There are a number of technologies that are then built on top of LogMiner including Change Data Capture and Streams.
*You can also use materialized views and materialized view logs if the goal is to replicate changes.
A: Oracle has Change Data Notification where you register a query with the system and the resources accessed in that query are tagged to be watched. Changes to those resources are queued by the system allowing you to run procs against the data.
This is managed using the DBMS_CHANGE_NOTIFICATION package.
Here's an infodoc about it:
http://www.oracle-base.com/articles/10g/dbms_change_notification_10gR2.php
If you are connecting to Oracle from a C# app, ODP.NET (Oracle's .NET client library) can interact with Change Data Notification to alert your C# app when Oracle changes are made - pretty cool. Goodbye to polling repeatedly for data changes - just register the table, set up change data notification through ODP.NET, and voila, your C# methods get called only when necessary.
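A rough, unverified sketch of what the ODP.NET side of this can look like, using the OracleDependency class; the connection string, table, and column names are placeholders:
using System;
using Oracle.DataAccess.Client;

class ChangeNotificationDemo
{
    static void Main()
    {
        using (OracleConnection con = new OracleConnection("User Id=scott;Password=tiger;Data Source=orcl"))
        using (OracleCommand cmd = new OracleCommand("SELECT ContentID FROM Content", con))
        {
            con.Open();

            // Registering the dependency before executing the command asks the
            // server to notify us when the query's underlying data changes.
            OracleDependency dependency = new OracleDependency(cmd);
            dependency.OnChange += (sender, args) =>
                Console.WriteLine("Change notification received.");

            cmd.ExecuteNonQuery();

            Console.WriteLine("Waiting for changes; press Enter to quit.");
            Console.ReadLine();
        }
    }
}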
A: "no more adding CreatedDate and LastModifiedBy columns manually" ... as long as you can afford to keep complete history of your database online in the redo logs and never want to move the data to a different database.
I would keep adding them and avoid relying on built-in database techniques like that. If you have a need to keep historical status of records then use an audit table or ship everything off to a data warehouse that handles slowly changing dimensions properly.
Having said that, I'll add that Oracle 10g+ can mine the log files simply by using flashback query syntax. Examples here: http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_10002.htm#i2112847
This technology is also used in Oracle's Datapump export utility to provide consistent data for multiple tables.
A: I believe Oracle has provided auditing features since 8i, however the tables used to capture the data are rather complex and there is a significant performance impact when this is turned on.
In Oracle 8i you could only enable this for an entire database and not a table at a time, however 9i introduced Fine Grained Auditing which provides far more flexibility. This has been expanded upon in 10/11g.
For more information see http://www.oracle.com/technology/deploy/security/database-security/fine-grained-auditing/index.html.
Also in 11g Oracle introduced the Audit Vault, which provides secure storage for audit information, even DBA's cannot change this data (according to Oracle's documentation, I haven't used this feature yet). More info can be found at http://www.oracle.com/technology/deploy/security/database-security/fine-grained-auditing/index.html.
A: Oracle has a mechanism called Flashback Data Archive. From A Fresh Look at Auditing Row Changes:
Oracle Flashback Query retrieves data as it existed at some time in the past.
Flashback Data Archive provides the ability to track and store all transactional changes to a table over its lifetime. It is no longer necessary to build this intelligence into your application. A Flashback Data Archive is useful for compliance with record retention policies and audit reports.
CREATE TABLESPACE SPACE_FOR_ARCHIVE
datafile 'C:\ORACLE DB12\ARCH_SPACE.DBF' size 50G;
CREATE FLASHBACK ARCHIVE longterm
TABLESPACE space_for_archive
RETENTION 1 YEAR;
ALTER TABLE EMPLOYEES FLASHBACK ARCHIVE LONGTERM;
select EMPLOYEE_ID, FIRST_NAME, JOB_ID, VACATION_BALANCE,
VERSIONS_STARTTIME TS,
nvl(VERSIONS_OPERATION,'I') OP
from EMPLOYEES
versions between timestamp timestamp '2016-01-11 08:20:00' and systimestamp
where EMPLOYEE_ID = 100
order by EMPLOYEE_ID, ts;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: XMLHttpRequest POST multipart/form-data I want to use XMLHttpRequest in JavaScript to POST a form that includes a file type input element so that I can avoid page refresh and get useful XML back.
I can submit the form without page refresh, using JavaScript to set the target attribute on the form to an iframe for MSIE or an object for Mozilla, but this has two problems. The minor problem is that target is not W3C compliant (which is why I set it in JavaScript, not in XHTML). The major problem is that the onload event doesn't fire, at least not on Mozilla on OS X Leopard. Besides, XMLHttpRequest would make for prettier response code because the returned data could be XML, not confined to XHTML as is the case with iframe.
Submitting the form results in HTTP that looks like:
Content-Type: multipart/form-data;boundary=<boundary string>
Content-Length: <length>
--<boundary string>
Content-Disposition: form-data, name="<input element name>"
<input element value>
--<boundary string>
Content-Disposition: form-data, name=<input element name>"; filename="<input element value>"
Content-Type: application/octet-stream
<element body>
How do I get the XMLHttpRequest object's send method to duplicate the above HTTP stream?
A: There isn't any way to access a file input field from JavaScript, so there isn't a JavaScript-only solution for AJAX file uploads.
There are workarounds like using an iframe.
The other option would be to use something like SWFUpload or Google Gears
A: You can construct the 'multipart/form-data' request yourself (read more about it at http://www.faqs.org/rfcs/rfc2388.html) and then use the send method (ie. xhr.send(your-multipart-form-data)). Similarly, but easier, in Firefox 4+ (also in Chrome 5+ and Safari 5+) you can use the FormData interface that helps to construct such requests. The send method is good for text content but if you want to send binary data such as images, you can do it with the help of the sendAsBinary method that has been around starting with Firefox 3.0. For details on how to send files via XMLHttpRequest, please refer to http://blog.igstan.ro/2009/01/pure-javascript-file-upload.html.
A: I don't see why iframe (an invisible one) implies XHTML and not ANY content. If you use an iframe you can set the onreadystatechange event and wait for 'complete'. Next you could use frame.window.document.innerHTML (please someone correct me) to get the string result.
var lFrame = document.getElementById('myframe');
lFrame.onreadystatechange = function()
{
if (lFrame.readyState == 'complete')
{
// your frame is done, get the content...
}
};
A: You will need to POST to an IFrame to get this to work. Simply add a target attribute to your form, where you specify the IFrame ID. Something like this:
<form method="post" target="myiframe" action="handler.php">
...
</form>
<iframe id="myiframe" style="display:none" />
A: I am confused about the onload event that you specified - is it on the page or on the iframe?
The first answer is correct: there's no way to do this using purely XMLHttpRequest. If what you want to achieve is triggering some method once the response exists in the iframe, simply check whether it already has content using DOM scripting, then fire the method.
To attach an onload event to the iframe:
if(window.attachEvent){
document.getElementById(iframe).attachEvent('onload', some_method);
}else{
document.getElementById(iframe).addEventListener('load', some_method, false);
}
A:
Content-Disposition: form-data, name
You should use semicolon, like this: Content-Disposition: form-data; name
A: Here is an up to date way using FormData (full doc @MDN)
Script:
var form = document.querySelector('#myForm');
form.addEventListener("submit", function(e) {
var xhr = new XMLHttpRequest();
xhr.open("POST", this.action);
xhr.addEventListener("load", function(e) {
// Your callback
});
xhr.send(new FormData(this));
e.preventDefault();
});
(from this basic form)
<form id="myForm" action="..." method="POST" enctype="multipart/form-data">
<input type="file" name="file0">
<input type="text" name="some-text">
...
</form>
Thanks again to Alex Polo for his answer
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Why do I need OleDbCommand.Prepare()? I'm working with a datagrid and adapter that correspond with an MS Access table through a stored query (named "UpdatePaid", 3 parameters as shown below) like so:
OleDbCommand odc = new OleDbCommand("UpdatePaid", connection);
OleDbParameter param;
odc.CommandType = CommandType.StoredProcedure;
param = odc.Parameters.Add("v_iid", OleDbType.Double);
param.SourceColumn = "I";
param.SourceVersion = DataRowVersion.Original;
param = odc.Parameters.Add("v_pd", OleDbType.Boolean);
param.SourceColumn = "Paid";
param.SourceVersion = DataRowVersion.Current;
param = odc.Parameters.Add("v_Projected", OleDbType.Currency);
param.SourceColumn = "ProjectedCost";
param.SourceVersion = DataRowVersion.Current;
odc.Prepare();
myAdapter.UpdateCommand = odc;
...
myAdapter.Update();
It works fine... but the really weird thing is that it didn't until I put in the odc.Prepare() call. My question is thus: Do I need to do that all the time when working with OleDb stored procs/queries? Why? I also have another project coming up where I'll have to do the same thing with a SqlDbCommand... do I have to do it with those, too?
A: This is called, oddly enough, a prepared statement, and they're actually really nice. Basically what happens is you either create or get a SQL statement (insert, delete, update) and instead of passing actual values, you pass "?" as a placeholder. This is all well and good, except what we want is our values to get passed in instead of the "?".
So we prepare the statement so instead of "?", we pass in parameters as you have above that are going to be the values that go in in place of the place holders.
Preparing parses the string to find where parameters can replace the question marks so all you have to do is enter the parameter data and execute the command.
Within oleDB, stored queries are prepared statements, so a prepare is required. I've not used stored queries with SqlDB, so I'd have to defer to the 2 answers previous.
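As a sketch of the "?" placeholder style described above (needs System.Data.OleDb; the table and column names mirror the question but are otherwise illustrative, as is the connectionString):
using (OleDbConnection conn = new OleDbConnection(connectionString))
using (OleDbCommand cmd = new OleDbCommand("UPDATE Invoices SET Paid = ? WHERE I = ?", conn))
{
    // OleDb parameters are positional: they bind to the "?" placeholders in order.
    cmd.Parameters.Add("p_paid", OleDbType.Boolean).Value = true;
    cmd.Parameters.Add("p_id", OleDbType.Double).Value = 42d;

    conn.Open();
    cmd.Prepare();         // parse the statement and bind the placeholders once
    cmd.ExecuteNonQuery(); // then execute, potentially many times with new values
}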
A: I don't use it with SqlDbCommand. It seems like a bug to me that it's required. It should only be a nice-to-have if you're going to call a procedure multiple times in a row. Maybe I'm wrong and there's a note in the documentation about providers that love this call too much.
A: Are you using the JET OLEDB Provider? or MSDASQL + JET ODBC?
You should not need to call Prepare(), but I believe that's driver/provider dependent.
You definitely don't need to use Prepare() for System.Data.SqlClient.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to alter a float by its smallest increment (or close to it)? I have a double value f and would like a way to nudge it very slightly larger (or smaller) to get a new value that will be as close as possible to the original but still strictly greater than (or less than) the original.
It doesn't have to be close down to the last bit—it's more important that whatever change I make is guaranteed to produce a different value and not round back to the original.
A: Check your math.h file. If you're lucky you have the nextafter and nextafterf functions defined. They do exactly what you want in a portable and platform independent way and are part of the C99 standard.
Another way to do it (could be a fallback solution) is to decompose your float into the mantissa and exponent part. Incrementing is easy: Just add one to the mantissa. If you get an overflow you have to handle this by incrementing your exponent. Decrementing works the same way.
EDIT: As pointed out in the comments it is sufficient to just increment the float in its binary representation. The mantissa-overflow will increment the exponent, and that's exactly what we want.
That's in a nutshell the same thing that nextafter does.
This won't be completely portable though. You would have to deal with endianess and the fact that not all machines do have IEEE floats (ok - the last reason is more academic).
Also handling NAN's and infinites can be a bit tricky. You cannot simply increment them as they are by definition not numbers.
A: In absolute terms, the smallest amount you can add to a floating point value to make a new distinct value will depend on the current magnitude of the value; it will be the type's machine epsilon multiplied by the current exponent.
Check out the IEEE spec for floating point representation. The simplest way would be to reinterpret the value as an integer type, add 1, then check (if you care) that you haven't flipped the sign or generated a NaN by examining the sign and exponent bits.
Alternatively, you could use frexp to obtain the current mantissa and exponent, and hence calculate a value to add.
A: u64 &x = *(u64*)(&f);
x++;
Yes, seriously.
Edit: As someone pointed out, this does not deal with negative numbers, Inf, NaN or overflow properly. A safer version of the above is
u64 &x = *(u64*)(&f);
if( ((x>>52) & 2047) != 2047 ) //if exponent is all 1's then f is a nan or inf.
{
x += f>0 ? 1 : -1;
}
A: I needed to do the exact same thing and came up with this code:
#include <cfloat> // DBL_EPSILON, DBL_MIN
#include <cmath>  // frexp, ldexp
double DoubleIncrement(double value)
{
int exponent;
double mantissa = frexp(value, &exponent);
if(mantissa == 0)
return DBL_MIN;
mantissa += DBL_EPSILON/2.0f;
value = ldexp(mantissa, exponent);
return value;
}
A: For what it's worth, the value for which standard ++ incrementing ceases to function is 9,007,199,254,740,992.
A: This may not be exactly what you want, but you still might find std::numeric_limits (in <limits>) of use. Particularly the members min() and epsilon().
I don't believe that something like mydouble + numeric_limits<double>::epsilon() will do what you want, unless mydouble is already close to epsilon. If it is, then you're in luck.
A: I found this code a while back; maybe it will help you determine the smallest amount you can push the value up by - then just increment it by that value. Unfortunately I can't remember the reference for this code:
#include <stdio.h>
int main()
{
/* two numbers to work with */
double number1, number2;
double result; // result of calculation
int counter; // loop counter and accuracy check
number1 = 1.0;
number2 = 1.0;
counter = 0;
while (number1 + number2 != number1) {
++counter;
number2 = number2 / 10;
}
printf("%2d digits accuracy in calculations\n", counter);
number2 = 1.0;
counter = 0;
while (1) {
result = number1 + number2;
if (result == number1)
break;
++counter;
number2 = number2 / 10.0;
}
printf("%2d digits accuracy in storage\n", counter );
return (0);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50"
} |
Q: Interface questions Suppose that I have interface MyInterface and 2 classes A, B which implement MyInterface.
I declared 2 objects: MyInterface a = new A(), and MyInterface b = new B().
When I try to pass one of them to a function declared as doSomething(A a) {}, I get an error.
This is my code:
public interface MyInterface {}
public class A implements MyInterface{}
public class B implements MyInterface{}
public class Tester {
    public static void main(String[] args) {
        MyInterface a = new A();
        MyInterface b = new B();
        test(b);
    }

    public static void test(A a) {
        System.out.println("A");
    }

    public static void test(B b) {
        System.out.println("B");
    }
}
My problem is that I am getting from some component interface which can be all sorts of classes and I need to write function for each class.
So one way is to get interface and to check which type is it. (instance of A)
I would like to know how others deal with this problem.
Thanks
A: Can you not just have a method on the interface which each class implements? Or do you not have control of the interface?
This would provide both polymorphism and avoid the need to define any external methods. I believe this is the intention of an interface, it allows a client to treat all classes implementing it in a non type specific manner.
If you cannot add to the interface then you would be best off introducing a second interface with the appropriate method. If you cannot edit either the interface or the classes then you need a method which takes the interface as a parameter and then checks for the concrete class. However, this should be a last resort, and it rather subverts the use of the interface and ties the method to all the implementations.
A: It sounds like you are after something like this:
public static void test(MyInterface obj) {
    if (obj instanceof A) {
        A tmp = (A)obj;
    } else if (obj instanceof B) {
        B tmp = (B)obj;
    } else {
        // handle error condition
    }
}
But please note this is very bad form and indicates something has gone seriously wrong in your design. If you don't have control of the interface then, as suggested by marcj, adding a second interface might be the way to go. Note you can do this whilst preserving binary compatibility.
A: I'm unclear on what you're actually asking, but the problem is that you don't have a method that takes a parameter of type MyInterface. I don't know what the exact syntax is in Java, but you could do something like if (b is B) { test(b as B) } but I wouldn't. If you need it to be generic, then use the MyInterface type as the variable type, otherwise use B as the variable type. You're defeating the purpose of using the interface.
A: I think visitor design pattern will help you out here. The basic idea is to have your classes (A and B) call the appropriate method themselves instead of you trying to decide which method to call. Being a C# guy I hope my Java works:
public interface Visitable {
    void accept(Tester tester);
}

public interface MyInterface extends Visitable {
}

public class A implements MyInterface {
    public void accept(Tester tester) {
        tester.test(this);
    }
}

public class B implements MyInterface {
    public void accept(Tester tester) {
        tester.test(this);
    }
}

public class Tester {
    public static void main(String[] args) {
        Tester tester = new Tester();
        MyInterface a = new A();
        MyInterface b = new B();
        a.accept(tester);
        b.accept(tester);
    }

    public void test(A a) {
        System.out.println("A");
    }

    public void test(B b) {
        System.out.println("B");
    }
}
A: I'm not sure if I fully understand the issue, but it seems like one way might be to move the test() methods into the child classes:
public interface MyInterface {
    public void test();
}

public class A implements MyInterface {
    public void test() {
        System.out.println("A");
    }
}

public class B implements MyInterface {
    public void test() {
        System.out.println("B");
    }
}

public class Tester {
    public static void main(String[] args) {
        MyInterface a = new A();
        MyInterface b = new B();
        b.test();
    }
}
You could similarly use a toString() method and print the result of that. I can't quite tell from the question, though, if your requirements make this impossible.
A: Use only one public class or interface per .java file, otherwise the compiler will complain. And call the method on the object through the object's name. You declared both test methods in the Tester class only, so what is the purpose of declaring classes A and B?
A: I usually use an abstract class to get around this problem, like so:
public abstract class Parent {}
public class A extends Parent {...}
public class B extends Parent {...}
That allows you to pass Parent objects to functions that take A or B.
A: You have 3 options:
*
*Visitor pattern; you'll need to be able to change the MyInterface type to include a method visit(Visitor) where the Visitor class contains lots of methods for visiting each subclass.
*Use if-else inside your method test(MyInterface) to check between them
*Use chaining. That is, declare handlers ATester, BTester etc., all of which implement the interface ITester, which has the method test(MyInterface). Then in the ATester, check that the type is equal to A before doing stuff. Then your main Tester class can have a chain of these testers and pass each MyInterface instance down the chain, until it reaches an ITester which can handle it. This is basically turning the if-else block from 2 into separate classes (see the sketch below).
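A rough sketch of option 3 follows; the ITester/ATester/BTester names are only illustrative and are not defined in the question:
import java.util.Arrays;
import java.util.List;

interface ITester {
    boolean test(MyInterface obj);   // return true if this handler dealt with the object
}

class ATester implements ITester {
    public boolean test(MyInterface obj) {
        if (!(obj instanceof A)) return false;
        System.out.println("A");
        return true;
    }
}

class BTester implements ITester {
    public boolean test(MyInterface obj) {
        if (!(obj instanceof B)) return false;
        System.out.println("B");
        return true;
    }
}

class TesterChain {
    private final List<ITester> chain = Arrays.<ITester>asList(new ATester(), new BTester());

    void test(MyInterface obj) {
        for (ITester t : chain) {
            if (t.test(obj)) return;   // first handler that accepts the instance wins
        }
        throw new IllegalArgumentException("No handler for " + obj.getClass());
    }
}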
Personally I would go for 2 in most situations. Java lacks true object-orientation. Deal with it! Coming up with various ways around it usually just makes for difficult-to-follow code.
A: Sounds like you need either a) to leverage polymorphism by putting method on MyInterface and implementing in A and B or b) some combination of Composite and Visitor design pattern. I'd start with a) and head towards b) when things get unwieldy.
My extensive thoughts on Visitor:
http://tech.puredanger.com/2007/07/16/visitor/
A: public interface MyInterface {}

public class A implements MyInterface {}

public class B implements MyInterface {}

public class Tester {
    public static void main(String[] args) {
        MyInterface a = new A();
        MyInterface b = new B();
        test(b); // this is wrong
    }

    public static void test(A a) {
        System.out.println("A");
    }

    public static void test(B b) {
        System.out.println("B");
    }
}
You are trying to pass an object referenced by a MyInterface reference variable to a method defined with an argument of its subtype, like test(B b). The compiler complains here because the MyInterface reference variable can reference any object which is a subtype of MyInterface, but not necessarily an object of B. There could be runtime errors if this were allowed in Java. Take an example which will make the concept clearer for you. I have modified your code for class B and added a method.
public class B implements MyInterface {
    public void onlyBCanInvokeThis() {}
}
Now just alter the test(B b) method like below :
public static void test(B b) {
    b.onlyBCanInvokeThis();
    System.out.println("B");
}
This code will blow up at runtime if allowed by compiler:
MyInterface a = new A();
// since a is of type A. invoking onlyBCanInvokeThis()
// inside test() method on a will throw exception.
test(a);
To prevent this, the compiler disallows such method invocations through a superclass reference.
I'm not sure what you are trying to achieve, but it seems like you want runtime polymorphism. To achieve that you need to declare a method in your MyInterface and implement it in each of the subclasses. This way the call to the method will be resolved at run time based on the object type and not on the reference type.
public interface MyInterface {
    public void test();
}

public class A implements MyInterface {
    public void test() {
        System.out.println("A");
    }
}

public class B implements MyInterface {
    public void test() {
        System.out.println("B");
    }
}

public class Tester {
    public static void main(String[] args) {
        MyInterface a = new A();
        MyInterface b = new B();
        b.test(); // calls B's implementation of test()
    }
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Difference between a BitmapFrame and BitmapImage in WPF What is the difference between a BitmapFrame and BitmapImage in WPF? Where would you use each (ie. why would you use a BitmapFrame rather than a BitmapImage?)
A: The accepted answer is incomplete (not to suggest that my answer is complete either) and my addition may help someone somewhere.
The reason (albeit the only reason) I use a BitmapFrame is when I access the individual frames of a multiple-framed TIFF image using the TiffBitmapDecoder class. For example,
TiffBitmapDecoder decoder = new TiffBitmapDecoder(
    new Uri(filename),
    BitmapCreateOptions.None,
    BitmapCacheOption.None);

for (int frameIndex = 0; frameIndex < decoder.Frames.Count; frameIndex++)
{
    BitmapFrame frame = decoder.Frames[frameIndex];
    // Do something with the frame
    // (it inherits from BitmapSource, so the options are wide open)
}
A: You should stick to using the abstract class BitmapSource if you need to get at the bits, or even ImageSource if you just want to draw it.
The implementation BitmapFrame is just the object oriented nature of the implementation showing through. You shouldn't really have any need to distinguish between the implementations. BitmapFrames may contain a little extra information (metadata), but usually nothing but an imaging app would care about.
You'll notice these other classes that inherit from BitmapSource:
*
*BitmapFrame
*BitmapImage
*CachedBitmap
*ColorConvertedBitmap
*CroppedBitmap
*FormatConvertedBitmap
*RenderTargetBitmap
*TransformedBitmap
*WriteableBitmap
You can get a BitmapSource from a URI by constructing a BitmapImage object:
Uri uri = ...;
BitmapSource bmp = new BitmapImage(uri);
Console.WriteLine("{0}x{1}", bmp.PixelWidth, bmp.PixelHeight);
The BitmapSource could also come from a decoder. In this case you are indirectly using BitmapFrames.
Uri uri = ...;
BitmapDecoder dec = BitmapDecoder.Create(uri, BitmapCreateOptions.None, BitmapCacheOption.Default);
BitmapSource bmp = dec.Frames[0];
Console.WriteLine("{0}x{1}", bmp.PixelWidth, bmp.PixelHeight);
A: BitmapFrame is a low level primitive for image manipulation. It is usually used when you want to encode/decode some image from one format to another.
BitmapImage is more high level abstraction that has some neat data-binding properties (UriSource, etc).
If you are just displaying images and want some fine tuning BitmapImage is what you need.
If you are doing low level image manipulation then you will need BitmapFrame.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Is there any one website which contains many good C# screencasts? Is there any one website which contains many good C# screencasts?
A: Dimecasts.net is coming out with lots of good, short screencasts on various .NET topics. Some in the ALT.NET space. Most of the example code they write is in C#.
A: dnrTV
dnrTV is a fusion of a training video and an interview show. Training videos are typically sterile and one-way. Let's face it, you can only take so much. But you need to see the code! In this format, you get the spontaneity of an interview talk show, and the detail of a webcast or training video.
Carl Franklin is the host of the wildly popular mp3 talk show .NET Rocks!, which he started recording in August, 2002. dnrTV launched on January 12th, 2006, the same week as .NET Rocks! show number 159!
We see dnrTV as a natural adjunct to .NET Rocks!, allowing more technical topics to be explored in detail. As always, Carl keeps the atmosphere light and conversational, which makes for a nice way to spend your lunch hour!
A: Sorry for the delay but I think the TekPub Screencast is just great.
A: *
*WindowsClient.net WPF Videos
*Visual C# Developer Center
*YouTube search for 'C# programming'
A: You should check out Pluralsight. They have fantastic training videos, and their C# Fundamentals video is their most popular.
A: I like the Channel 9 screencasts tagged C#.
A: http://www.asp.net/learn/
http://www.learnvisualstudio.net
A: Ok...these are not about C# as such, but if you fancy learning about NHibernate, the Summer of NHibernate vids are probably the best I've ever watched. Decent sized captures and even though they're about NHibernate I learned a thing or two about refactoring unit tests as well. I even donated because I thought they were that good. 10/10
A: Learn C# on YouTube: 89 C# tutorials collection. Mainly examples of how to do something with C#. Another big collection of 98 C# videos. It covers a lot of C# fundamentals.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: FIPS compliant password encryption for .NET I'm working on a WinForms application in VB.NET (3.5) that requires the user to enter domain administrator credentials. To make things easier on the user, they should only have to enter the user name and password once, and then just rely on my app to save these credentials. I'd like to save these credentials with the other user settings, but for security reasons, the password needs to be encrypted.
What's an easy way to encrypt and decrypt this password? I want the encryption method to be FIPS compliant. The methods I've tried so far result in this exception:
System.InvalidOperationException: This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.
A: Look into the Data Protection API (DPAPI), which is FIPS compliant (as far as I can tell; you can review the evaluation here).
DPAPI is exposed in .NET 2.0 and greater with the System.Security.Cryptography.ProtectedData class. It uses the user's current credentials as the encryption key. See my more complete answer here.
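A minimal C# sketch of the ProtectedData approach (the entropy value and encoding are illustrative assumptions, and a VB.NET version is a direct translation):
using System;
using System.Security.Cryptography;
using System.Text;

static class CredentialStore
{
    // Optional extra entropy mixed into the key; any constant byte array will do.
    private static readonly byte[] Entropy = Encoding.UTF8.GetBytes("my-app-entropy");

    public static byte[] Protect(string password)
    {
        // The key is derived from the current user's Windows credentials.
        return ProtectedData.Protect(
            Encoding.UTF8.GetBytes(password), Entropy, DataProtectionScope.CurrentUser);
    }

    public static string Unprotect(byte[] cipher)
    {
        return Encoding.UTF8.GetString(
            ProtectedData.Unprotect(cipher, Entropy, DataProtectionScope.CurrentUser));
    }
}
The protected bytes can then be Base64-encoded and stored alongside the other user settings.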
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: The best way to assert pre-condition and post-condition of arguments and values in .NET? I have been thinking about design by contract lately and I was wondering what people think is the best way to assert pre-condition and post-condition of values in .NET?
i.e. validating argument values to a method.
Some people recommend Debug.Assert while others talk about using an if statement plus throwing an exception. What are the pros and cons of each?
What frameworks are available that you recommend?
A: Another option is Spec#.
Spec# is an extension of the object-oriented language C#. It extends the type system to include non-null types and checked exceptions. It provides method contracts in the form of pre- and postconditions as well as object invariants.
A: We'll eventually use Code Contracts when .NET 4.0 ships. However, in our production code right now we have had great success with a "Guard" class along with a common way to generate exceptions.
For more details, see my post about this.
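The post doesn't show the Guard class itself; a typical minimal sketch (names and signatures are illustrative, not the author's actual implementation) looks like this:
using System;

public static class Guard
{
    // Throws if a required argument is null.
    public static void AgainstNull(object argument, string parameterName)
    {
        if (argument == null)
            throw new ArgumentNullException(parameterName);
    }

    // Throws if an arbitrary precondition is violated.
    public static void Against(bool condition, string message)
    {
        if (condition)
            throw new ArgumentException(message);
    }
}

// Usage at the top of a method:
//   Guard.AgainstNull(customer, "customer");
//   Guard.Against(amount <= 0, "amount must be positive");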
A: I prefer exceptions over asserts because if it's supposed to be that way and isn't, I want to know about it so I can fix it, and the coverage we get in debug mode is nowhere near real-life usage or coverage, so just using Debug.Assert doesn't do enough.
Using asserts means that you won't add bloat to your release code, but it means you only get to see when and why these contracts get broken if you catch them at it in a debug build.
Using exceptions means you get to see the contract breaking whenever it happens, debug or release, but it also means your release build contains more checks and code.
You could go with an in-between approach and use Trace to trace out your pre- and post-conditions to some kind of application log, which you could use to debug problems. However, you'd need a way of harvesting these logs to learn what issues your users are encountering. There is also the possibility of combining this with exceptions so you get exceptions for the more severe problems.
The way I see it though, is that if the contract is worth enforcing then its worth throwing an exception when it breaks. I think that's somewhat down to opinion and target application though. If you do throw exceptions, you probably want some form of incident reporting system that provides crash reports when raised exceptions are left unhandled.
A: You could have a look at the fluent framework at http://conditions.codeplex.com/
Its open source and free.
A: Spec# is the way to do it, which is a superset of C#. Now you have "Code Contracts", which is the language-agnostic version of Spec#, so now you can have code contracts in VB.NET, for example.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do I resequence dropdown list controls like the NetFlix queue from the client side (Javascript) We have a series of drop down controls that determine the sort order of columns. The problem we are having is when the user selects a column as the 2nd column the other dropdown lists need to have their values changed so that there is only one "2nd".
*
*Column A [1]
*Column B [2]
*Column C [3]
*Column D [4]
*Column E [5]
In the list above, when you change Column D to [2], Column B becomes [3], C becomes [4], etc. I can manage it on the server side but I was wondering if anybody had some clues how to do this on the client side with javascript.
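One way to do the renumbering in plain JavaScript is sketched below (this is illustrative, not from the original post: it assumes each dropdown carries the class name "order", lists the positions 1..N as its options, and that the browser supports querySelectorAll and dataset):
// Keep N <select class="order"> elements holding a permutation of 1..N.
function onOrderChange(changed, oldValue) {
    var selects = document.querySelectorAll('select.order');
    var newValue = parseInt(changed.value, 10);
    for (var i = 0; i < selects.length; i++) {
        var s = selects[i];
        if (s === changed) continue;
        var v = parseInt(s.value, 10);
        // changed item moved to a later slot: items in between move one slot earlier
        if (newValue > oldValue && v > oldValue && v <= newValue) s.value = v - 1;
        // changed item moved to an earlier slot: items in between move one slot later
        if (newValue < oldValue && v >= newValue && v < oldValue) s.value = v + 1;
    }
}

// Wire it up, remembering each select's previous value so we know the old position.
var orderSelects = document.querySelectorAll('select.order');
for (var i = 0; i < orderSelects.length; i++) {
    orderSelects[i].addEventListener('focus', function () { this.dataset.prev = this.value; });
    orderSelects[i].addEventListener('change', function () {
        onOrderChange(this, parseInt(this.dataset.prev, 10));
        this.dataset.prev = this.value;
    });
}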
A: Look at Javascript toolkits like Scriptaculous for client side reordering.
You add your elements as "Sortables" and code your own callbacks to execute when the items are dragged, then dropped -- such as sending an asynchronous request to the server to persist the new order.
Here is a full tutorial on creating sortable lists with Scriptaculous and PHP. For ASP, the client side code will be slightly different, but the process will be similar.
A: On a note on JavaScript frameworks; I highly recommend jQuery.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Why does IE give unexpected errors when setting innerHTML I tried to set innerHTML on an element in firefox and it worked fine, tried it in IE and got unexpected errors with no obvious reason why.
For example, if you try to set the innerHTML of a table to " hi from stu " it will fail, because the <table> must be followed by a <tr><td> sequence.
A: Don't know why you're being down-modded for the question Stu, as this is something I solved quite recently. The trick is to 'squirt' the HTML into a DOM element that is not currently attached to the document tree. Here's the code snippet that does it:
// removing the scripts to avoid any 'Permission Denied' errors in IE
var cleaned = html.replace(/<script(.|\s)*?\/script>/g, "");
// IE is stricter on malformed HTML injecting direct into DOM. By injecting into
// an element that's not yet part of DOM it's more lenient and will clean it up.
if (jQuery.browser.msie)
{
var tempElement = document.createElement("DIV");
tempElement.innerHTML = cleaned;
cleaned = tempElement.innerHTML;
tempElement = null;
}
// now 'cleaned' is ready to use...
Note we're using only using jQuery in this snippet here to test for whether the browser is IE, there's no hard dependency on jQuery.
A: You're seeing that behaviour because innerHTML is read-only for table elements in IE. From MSDN's innerHTML Property documentation:
The property is read/write for all objects except the following, for which it is read-only: COL, COLGROUP, FRAMESET, HEAD, HTML, STYLE, TABLE, TBODY, TFOOT, THEAD, TITLE, TR.
A: check the scope of the element you are trying to set the innerHTML. since FF and IE handle this in a different way
A: http://www.ericvasilik.com/2006/07/code-karma.html
A: I have been battling with the problem of replacing a list of links in a table with a different list of links. As above, the problem comes with IE and its readonly property of table elements.
Append for me wasn't an option so I have (finally) worked out this (which works for Ch, FF and IE 8.0 (yet to try others - but I am hopeful)).
replaceInReadOnly(document.getElementById("links"), "<a href>........etc</a>");
function replaceInReadOnly(element, content) {
    var newNode = document.createElement("div");  // any suitable container element
    newNode.innerHTML = content;
    var oldNode = element.firstChild;
    var output = element.replaceChild(newNode, oldNode);
}
Works for me - I hope it works for you
A: "Apparently firefox isn't this picky" ==> Apparently FireFox is so buggy, that it doesn't register this obvious violation of basic html-nesting rules ...
As someone pointed out in another forum, FireFox will accept, that you append an entire html-document as a child of a form-field or even an image ,-(
A: Have you tried setting innerText and/or textContent? Some nodes (like SCRIPT tags) won't behave as expected when you try to change their innerHTML in IE. More here about innerText versus textContent:
http://blog.coderlab.us/2006/04/18/the-textcontent-and-innertext-properties/
A: Are you setting a completely different innerHTML or replacing a pattern in the innerHTML? I ask because if you're trying to do a trivial search/replace via the 'power' of innerHTML, you will find some types of element not playing in IE.
This can be cautiously remedied by surrounding your attempt in a try/catch and bubbling up the DOM via parentNode until you successfully manage to do it.
But this is not going to be suitable if you're inserting brand-new content.
A: You can modify the behavior. Here is some code that prevents garbage collection of otherwise-referenced elements in IE:
if (/(msie|trident)/i.test(navigator.userAgent)) {
    var innerhtml_get = Object.getOwnPropertyDescriptor(HTMLElement.prototype, "innerHTML").get
    var innerhtml_set = Object.getOwnPropertyDescriptor(HTMLElement.prototype, "innerHTML").set

    Object.defineProperty(HTMLElement.prototype, "innerHTML", {
        get: function () { return innerhtml_get.call(this) },
        set: function (new_html) {
            var childNodes = this.childNodes
            for (var curlen = childNodes.length, i = curlen; i > 0; i--) {
                this.removeChild(childNodes[0])
            }
            innerhtml_set.call(this, new_html)
        }
    })
}
var mydiv = document.createElement ('div')
mydiv.innerHTML = "test"
document.body.appendChild (mydiv)
document.body.innerHTML = ""
console.log (mydiv.innerHTML)
http://jsfiddle.net/DLLbc/9/
A: I just figured out that if you try to set an element's innerHTML in IE to markup that isn't logically correct, it will throw this error.
For example, if you try to set the innerHTML of a table to " hi from stu " it will fail, because the <table> must be followed by a <tr><td> sequence.
Apparently firefox isn't this picky.
Hope it helps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How can I escape a DOM name in javascript? I have a form element that I want to address via javascript, but it doesn't like the syntax.
<form name="mycache">
<input type="hidden" name="cache[m][2]">
<!-- ... -->
</form>
I want to be able to say:
document.mycache.cache[m][2]
but obviously I need to indicate that cache[m][2] is the whole name, and not an array reference to cache. Can it be done?
A: UPDATE: Actually, I was wrong: you can use [ or ] characters as part of a form element's id and/or name attribute.
Here's some code that proves it:
<html>
  <body>
    <form id="form1">
      <input type='text' id='field[m][2]' name='field[m][2]' value='Chris'/>
      <input type='button' value='Test' onclick='showtest();'/>
      <script type="text/javascript">
        function showtest() {
          var value = document.getElementById("field[m][2]").value;
          alert(value);
        }
      </script>
    </form>
  </body>
</html>
Update: You can also use the following to get the value from the form element:
var value = document.forms.form1["field[m][2]"].value;
A: Use document.getElementsByName("input_name") instead. Cross platform too. Win.
A: Is it possible to add an id reference to the form element and use document.getElementById?
A: -- and in the old days (in HTML3.2/4.01 transitional/XHTML1.0 transitional DOM-binding) you could use:
form.elements["cache[m][2]"]
-- but the elements-stuff is, as Chris Pietschmann showed, not necessary as these binding-schemes also allow direct access (though I personally would prefer the extra readability !-)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: ColdFusion: pick first non null value from a list In JavaScript, you can do this:
var a = null;
var b = "I'm a value";
var c = null;
var result = a || b || c;
And 'result' will get the value of 'b' because JavaScript short-circuits the 'or' operator.
I want a one-line idiom to do this in ColdFusion and the best I can come up with is:
<cfif LEN(c) GT 0><cfset result=c></cfif>
<cfif LEN(b) GT 0><cfset result=b></cfif>
<cfif LEN(a) GT 0><cfset result=a></cfif>
Can anyone do any better than this?
A: ColdFusion doesn't have nulls.
Your example is basing the choice on which item is an empty string.
If that is what you're after, and all your other values are simple values, you can do this:
<cfset result = ListFirst( "#a#,#b#,#c#" )/>
(Which works because the standard list functions ignore empty elements.)
A: Note: other CFML engines do support nulls.
If we really are dealing with nulls (and not empty strings), here is a function that will work for Railo and OpenBlueDragon:
<cffunction name="FirstNotNull" returntype="any" output="false">
    <cfset var i = 0/>
    <cfloop index="i" from="1" to="#ArrayLen(Arguments)#">
        <cfif NOT isNull(Arguments[i])>
            <cfreturn Arguments[i] />
        </cfif>
    </cfloop>
</cffunction>
Then to use the function is as simple as:
<cfset result = FirstNotNull( a , b , c ) />
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Unit test naming best practices What are the best practices for naming unit test classes and test methods?
This was discussed on SO before, at What are some popular naming conventions for Unit Tests?
I don't know if this is a very good approach, but currently in my testing projects, I have one-to-one mappings between each production class and a test class, e.g. Product and ProductTest.
In my test classes I then have methods with the names of the methods I am testing, an underscore, and then the situation and what I expect to happen, e.g. Save_ShouldThrowExceptionWithNullName().
A: See:
http://googletesting.blogspot.com/2007/02/tott-naming-unit-tests-responsibly.html
For test method names, I personally find using verbose and self-documented names very useful (alongside Javadoc comments that further explain what the test is doing).
A: I like this naming style:
OrdersShouldBeCreated();
OrdersWithNoProductsShouldFail();
and so on.
It makes really clear to a non-tester what the problem is.
A: I think one of the most important things is to be consistent in your naming convention (and agree on it with other members of your team). Too many times I see loads of different conventions used in the same project.
A: Update (Jul 2021)
It's been quite a while since my original answer (almost 12 years) and best practices have been changing a lot during this time. So I feel inclined to update my own answer and offer different naming strategies to the readers.
Many comments and answers point out that the naming strategy I propose in my original answer is not resistant to refactorings and ends up with difficult to understand names, and I fully agree.
In the last years, I ended up using a more human readable naming schema where the test name describes what we want to test, in the line described by Vladimir Khorikov.
Some examples would be:
*
*Add_credit_updates_customer_balance
*Purchase_without_funds_is_not_possible
*Add_affiliate_discount
But as you can see it's quite a flexible schema but the most important thing is that reading the name you know what the test is about without including technical details that may change over time.
To name the projects and test classes I still adhere to the original answer schema.
Original answer (Oct 2009)
I like Roy Osherove's naming strategy. It's the following:
[UnitOfWork_StateUnderTest_ExpectedBehavior]
It has every information needed on the method name and in a structured manner.
The unit of work can be as small as a single method, a class, or as large as multiple classes. It should represent all the things that are to be tested in this test case and are under control.
For assemblies, I use the typical .Tests ending, which I think is quite widespread and the same for classes (ending with Tests):
[NameOfTheClassUnderTestTests]
Previously, I used Fixture as suffix instead of Tests, but I think the latter is more common, then I changed the naming strategy.
A: Kent Beck suggests:
*
*One test fixture per 'unit' (class of your program). Test fixtures are classes themselves. The test fixture name should be:
[name of your 'unit']Tests
*Test cases (the test fixture methods) have names like:
test[feature being tested]
For example, having the following class:
class Person {
    int calculateAge() { ... }
    // other methods and properties
}
A test fixture would be:
class PersonTests {
    testAgeCalculationWithNoBirthDate() { ... }
    // or
    testCalculateAge() { ... }
}
A: In VS + NUnit I usually create folders in my project to group functional tests together. Then I create unit test fixture classes and name them after the type of functionality I'm testing. The [Test] methods are named along the lines of Can_add_user_to_domain:
- MyUnitTestProject
+ FTPServerTests <- Folder
+ UserManagerTests <- Test Fixture Class
- Can_add_user_to_domain <- Test methods
- Can_delete_user_from_domain
- Can_reset_password
A: I should add that keeping your tests in the same package, but in a parallel directory to the source being tested, eliminates code bloat once you're ready to deploy, without having to do a bunch of exclude patterns.
I personally like the best practices described in "JUnit Pocket Guide" ... it's hard to beat a book written by the co-author of JUnit!
A: Class Names. For test fixture names, I find that "Test" is quite common in the ubiquitous language of many domains. For example, in an engineering domain: StressTest, and in a cosmetics domain: SkinTest. Sorry to disagree with Kent, but using "Test" in my test fixtures (StressTestTest?) is confusing.
"Unit" is also used a lot in domains. E.g. MeasurementUnit. Is a class called MeasurementUnitTest a test of "Measurement" or "MeasurementUnit"?
Therefore I like to use the "Qa" prefix for all my test classes. E.g. QaSkinTest and QaMeasurementUnit. It is never confused with domain objects, and using a prefix rather than a suffix means that all the test fixtures live together visually (useful if you have fakes or other support classes in your test project)
Namespaces. I work in C# and I keep my test classes in the same namespace as the class they are testing. It is more convenient than having separate test namespaces. Of course, the test classes are in a different project.
Test method names. I like to name my methods WhenXXX_ExpectYYY. It makes the precondition clear, and helps with automated documentation (a la TestDox). This is similar to the advice on the Google testing blog, but with more separation of preconditions and expectations. For example:
WhenDivisorIsNonZero_ExpectDivisionResult
WhenDivisorIsZero_ExpectError
WhenInventoryIsBelowOrderQty_ExpectBackOrder
WhenInventoryIsAboveOrderQty_ExpectReducedInventory
A: I like to follow the "Should" naming standard for tests while naming the test fixture after the unit under test (i.e. the class).
To illustrate (using C# and NUnit):
[TestFixture]
public class BankAccountTests
{
    [Test]
    public void Should_Increase_Balance_When_Deposit_Is_Made()
    {
        var bankAccount = new BankAccount();
        bankAccount.Deposit(100);

        Assert.That(bankAccount.Balance, Is.EqualTo(100));
    }
}
Why "Should"?
I find that it forces the test writers to name the test with a sentence along the lines of "Should [be in some state] [after/before/when] [action takes place]"
Yes, writing "Should" everywhere does get a bit repetitive, but as I said it forces writers to think in the correct way (so can be good for novices). Plus it generally results in a readable English test name.
Update:
I've noticed that Jimmy Bogard is also a fan of 'should' and even has a unit test library called Should.
Update (4 years later...)
For those interested, my approach to naming tests has evolved over the years. One of the issues with the Should pattern I describe above is that it's not easy to know at a glance which method is under test. For OOP I think it makes more sense to start the test name with the method under test. For a well designed class this should result in readable test method names. I now use a format similar to <method>_Should<expected>_When<condition>. Obviously depending on the context you may want to substitute the Should/When verbs for something more appropriate. Example:
Deposit_ShouldIncreaseBalance_WhenGivenPositiveValue()
A: I use Given-When-Then concept.
Take a look at this short article http://cakebaker.42dh.com/2009/05/28/given-when-then/. Article describes this concept in terms of BDD, but you can use it in TDD as well without any changes.
A: I recently came up with the following convention for naming my tests, their classes and containing projects in order to maximize their descriptiveness:
Let's say I am testing the Settings class in a project in the MyApp.Serialization namespace.
First I will create a test project with the MyApp.Serialization.Tests namespace.
Within this project, and of course the namespace, I will create a class called IfSettings (saved as IfSettings.cs).
Let's say I am testing the SaveStrings() method. -> I will name the test CanSaveStrings().
When I run this test it will show the following heading:
MyApp.Serialization.Tests.IfSettings.CanSaveStrings
I think this tells me very well, what it is testing.
Of course it is useful that in English the noun "Tests" is the same as the verb "tests".
There is no limit to your creativity in naming the tests, so that we get full sentence headings for them.
Usually the Test names will have to start with a verb.
Examples include:
*
*Detects (e.g. DetectsInvalidUserInput)
*Throws (e.g. ThrowsOnNotFound)
*Will (e.g. WillCloseTheDatabaseAfterTheTransaction)
etc.
Another option is to use "that" instead of "if".
The latter saves me keystrokes though, and describes more exactly what I am doing, since I don't know that the tested behavior is present, but am testing whether it is.
[Edit]
After using the above naming convention for a little while longer now, I have found that the If prefix can be confusing when working with interfaces. It just so happens that the testing class IfSerializer.cs looks very similar to the interface ISerializer.cs in the "Open Files" tab.
This can get very annoying when switching back and forth between the tests, the class being tested and its interface. As a result I would now choose That over If as a prefix.
Additionally, I now use "_" to separate the words in my test method names - but only for methods in my test classes, as it is not considered best practice anywhere else - as in:
[Test] public void detects_invalid_User_Input()
I find this to be easier to read.
[End Edit]
I hope this spawns some more ideas, since I consider naming tests of great importance as it can save you a lot of time that would otherwise have been spent trying to understand what the tests are doing (e.g. after resuming a project after an extended hiatus).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "647"
} |
Q: Automatic Chart Pagination with Report Parameters Based on several report parameters in SQL Server 2005 reporting services, I would like to automatically generate one or several chart(s) for each row in the return result and paginate or space them out. How do I go about that?
A: If the number of charts will vary for each row, but the variations are known (e.g. it's either just chart 1, or chart 1 and 3, or charts 1 2 and 3) then it's simple enough using a table.
In the default detail row add any normal fields you need. Now insert a new detail row for each chart you might need. Lastly set the visibility of each chart row based on your rules, noting that the rule will hide the row if your expression evaluates to true. Make sure you select the row using the area to the left of the left-most cell, if you got it right you'll see that it's a row in the properties grid.
To get the layout you want you can merge cells for the charts to go in, or use a single cell and put a Rectangle in it, then in the Rectangle lay out your other controls.
Any rows that are hidden will be collapsed, so you wont get big empty sections like you can if you simply toggle the visibility of the charts themselves.
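For example, a row that should be hidden when the data says a second chart doesn't apply might use a Hidden (visibility) expression like the following - the field name here is purely illustrative:
=IIF(Fields!HasSecondChart.Value = False, True, False)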
A: What you can do is place a List control on the page, set List grouping by record unique key (ID, or several fields if composite), and place a charts on the List. Next, set items visibility expressions to control it with report parameters.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I have YAHOO.util.KeyListener disabled when an input element is focused? I have a MenuBar setup with YUI's MenuBar widget, and I have a YAHOO.util.KeyListener attached to document to get quick keyboard access to the menus and sub-menu items (e.g. 's' to open the Setup menu). The problem is that the keylistener will still fire when a user is in an input element. For example, a user might be typing soup into a text field, and the 's' character will cause the Setup menu to pop open.
One solution would be to disable the keylistener when focus is on an input element, and enable it on blur. How would I go about doing this? Is there a better solution?
A: I commend you for trying to provide keyboard shortcuts, but be aware that this will be a bit of a pain to implement cross-platform. If it's feasible, I strongly recommend using access keys on <a> tags.
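For instance (illustrative markup, not from the original answer), an access key on a link looks like the line below; which modifier key activates it varies by browser and platform:
<a href="#setup" accesskey="s">Setup</a>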
If you're still going, I guess accesskey won't work for you. I'll assume you've read the relevant YUI tutorial.
If blur and focus are really the right way to go, I'd use something like
YAHOO.util.Event.onDOMReady(init);

function init() {
    // set up the keyboard listeners
    setUpExceptionsToKeyboardShortcuts();
}

function disableShortcuts() {
    // Do what you've got to do
}

function enableShortcuts() {
    // Do what you've got to do
}

function setUpExceptionsToKeyboardShortcuts() {
    // getElementsByTagName returns a NodeList, not an array, so copy the
    // elements into a plain array before handing them to addListener
    var focusable = [];
    var tags = ['input', 'select', 'textarea'];
    for (var i = 0; i < tags.length; i++) {
        var els = document.getElementsByTagName(tags[i]);
        for (var j = 0; j < els.length; j++) {
            focusable.push(els[j]);
        }
    }
    YAHOO.util.Event.addListener(focusable, 'focus', disableShortcuts);
    YAHOO.util.Event.addListener(focusable, 'blur', enableShortcuts);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/155445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |