Q: Display a tooltip over a button using Windows Forms How can I display a tooltip over a button using Windows Forms?
A: For default tooltip this can be used -
System.Windows.Forms.ToolTip ToolTip1 = new System.Windows.Forms.ToolTip();
ToolTip1.SetToolTip(this.textBox1, "Hello world");
A customized tooltip can also be used if formatting is required for the tooltip message. You can create one by building a custom form and showing it as a tooltip dialog in the MouseHover event of the control. Please check the following link for more details -
http://newapputil.blogspot.in/2015/08/create-custom-tooltip-dialog-from-form.html
A: The .NET framework provides a ToolTip class. Add one of those to your form and then on the MouseHover event for each item you would like a tooltip for, do something like the following:
private void checkBox1_MouseHover(object sender, EventArgs e)
{
toolTip1.Show("text", checkBox1);
}
A: Lazy and compact: storing the text in the Tag property
If you are a bit lazy and do not use the Tag property of the controls for anything else you can use it to store the tooltip text and assign MouseHover event handlers to all such controls in one go like this:
private System.Windows.Forms.ToolTip ToolTip1;
private void PrepareTooltips()
{
ToolTip1 = new System.Windows.Forms.ToolTip();
foreach(Control ctrl in this.Controls)
{
if (ctrl is Button && ctrl.Tag is string)
{
ctrl.MouseHover += new EventHandler(delegate(Object o, EventArgs a)
{
var btn = (Control)o;
ToolTip1.SetToolTip(btn, btn.Tag.ToString());
});
}
}
}
In this case every button that has a string in its Tag property is assigned a MouseHover handler. To keep it compact, the handler is defined inline as an anonymous delegate. When any of those buttons is hovered, its Tag text is assigned to the ToolTip and shown.
A: You can use the ToolTip class:
Creating a ToolTip for a Control
Example:
private void Form1_Load(object sender, System.EventArgs e)
{
System.Windows.Forms.ToolTip ToolTip1 = new System.Windows.Forms.ToolTip();
ToolTip1.SetToolTip(this.Button1, "Hello");
}
A: private void Form1_Load(object sender, System.EventArgs e)
{
ToolTip toolTip1 = new ToolTip();
toolTip1.AutoPopDelay = 5000;
toolTip1.InitialDelay = 1000;
toolTip1.ReshowDelay = 500;
toolTip1.ShowAlways = true;
toolTip1.SetToolTip(this.button1, "My button1");
toolTip1.SetToolTip(this.checkBox1, "My checkBox1");
}
A: Based on DaveK's answer, I created a control extension:
public static void SetToolTip(this Control control, string txt)
{
new ToolTip().SetToolTip(control, txt);
}
Then you can set the tooltip for any control with a single line:
this.MyButton.SetToolTip("Hello world");
A: The ToolTip is a single WinForms control that handles displaying tool tips for multiple elements on a single form.
Say your button is called MyButton.
*
*Add a ToolTip control (under Common Controls in the Windows Forms toolbox) to your form.
*Give it a name - say MyToolTip.
*Set the "ToolTip on MyToolTip" property of MyButton (under Misc in the button property grid) to the text that should appear when you hover over it.
The tooltip will automatically appear when the cursor hovers over the button, but if you need to display it programmatically, call
MyToolTip.Show("Tooltip text goes here", MyButton);
in your code to show the tooltip, and
MyToolTip.Hide(MyButton);
to make it disappear again.
A: Using the form designer:
*
*Drag the ToolTip control from the Toolbox, onto the form.
*Select the properties of the control you want the tool tip to appear on.
*Find the property 'ToolTip on toolTip1' (the name may differ if you changed the control's default name).
*Set the text of the property to the tool tip text you would like to display.
You can also set the tool tip programmatically using the following call:
this.toolTip1.SetToolTip(this.targetControl, "My Tool Tip");
A: I have done a simple tooltip.
The code is:
1. Initialize the ToolTip object.
2. Call SetToolTip when and where you want the message to appear.
Example:
ToolTip t = new ToolTip();
t.SetToolTip(textBoxName, "the message you want to show");
A: Sure, just handle the MouseHover event and tell it to display a tool tip.
t is a ToolTip defined either as a field or in the constructor using:
ToolTip t = new ToolTip();
then the event handler:
private void control_MouseHover(object sender, EventArgs e)
{
t.Show("Text", (Control)sender);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "244"
} |
Q: RESTifying URLs At work here, we have a box serving XML feeds to business partners. Requests for our feeds are customized by specifying query string parameters and values. Some of these parameters are required, but many are not.
For example, we require all requests to specify a GUID to identify the partner, and a request can be either a "get latest" or a "search" action:
For a search: http://services.null.ext/?id=[GUID]&q=[Search Keywords]
Latest data in category: http://services.null.ext/?id=[GUID]&category=[ID]
Structuring a RESTful URL scheme for these parameters is easy:
Search: http://services.null.ext/[GUID]/search/[Keywords]
Latest: http://services.null.ext/[GUID]/latest/category/[ID]
But how should we handle the dozen or so optional parameters we have? Many of these are mutually exclusive, and many are required in combinations. Very quickly, the number of possible paths becomes overwhelmingly complex.
What are some recommended practices for how to map URLs with complex query strings to friendlier /REST/ful/paths?
(I'm interested in conventions, schemes, patterns, etc. Not specific technologies to implement URL-rewriting on a web server or in a framework.)
A: You should leave optional query parameters in the Query string. There is no "rule" in REST that says there cannot be a query string. Actually, it's quite the opposite. The query string should be used to alter the view of the representation you are transferring back to the client.
Stick to "Entities with Representable State" for your URL path components. Category seems OK, but what exactly is it that you are feeding over XML? Posts? Catalog Items? Parts?
I think a much better REST taxonomy would look like this (assuming the content of your XML feed is an "article"):
*
*http://myhost.com/PARTNERGUID/articles/latest?criteria1=value1&criteria2=value2
*http://myhost.com/PARTNERGUID/articles/search?criteria1=value1&criteria2=value2
If you're not thinking about the entities you are representing while building your REST structure, you're not doing REST. You're doing something else.
Take a look at this article on REST best practices. It's old, but it may help.
A: Parameters with values? One option is the query string. Using it is not inherently non-restful. Another option is to use the semi-colon, Tim Berners-Lee talks about them and they might just fit the bill, allowing the URL to make sense, without having massively long paths.
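To make the semicolon idea concrete: matrix-style parameters attach name=value pairs to a single path segment, e.g. /latest;category=12;sort=date. A minimal Python sketch of parsing such a segment (the segment format here is an illustration of the idea, not a standard library feature):

```python
def parse_matrix_segment(segment):
    """Split 'latest;category=12;sort=date' into the segment name and its parameters."""
    name, *pairs = segment.split(";")
    params = {}
    for pair in pairs:
        key, _, value = pair.partition("=")
        params[key] = value
    return name, params

name, params = parse_matrix_segment("latest;category=12;sort=date")
print(name, params)  # latest {'category': '12', 'sort': 'date'}
```

A segment with no parameters simply yields an empty dict, so required and optional parameters can share one path shape.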
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Python - How do I convert "an OS-level handle to an open file" to a file object? tempfile.mkstemp() returns:
a tuple containing an OS-level handle to an open file (as would be returned by os.open()) and the absolute pathname of that file, in that order.
How do I convert that OS-level handle to a file object?
The documentation for os.open() states:
To wrap a file descriptor in a "file
object", use fdopen().
So I tried:
>>> import tempfile
>>> tup = tempfile.mkstemp()
>>> import os
>>> f = os.fdopen(tup[0])
>>> f.write('foo\n')
Traceback (most recent call last):
File "<stdin>", line 1, in ?
IOError: [Errno 9] Bad file descriptor
A: You forgot to specify the open mode ('w') in fdopen(). The default is 'r', causing the write() call to fail.
mkstemp() actually opens the file read-write at the OS level; it is the file object returned by fdopen() that defaults to read-only, so passing 'w' (or 'w+') fixes it.
A: You can use
os.write(tup[0], "foo\n")
to write to the handle.
If you want to open the handle for writing you need to add the "w" mode
f = os.fdopen(tup[0], "w")
f.write("foo")
A: temp = tempfile.NamedTemporaryFile(delete=False)
temp.file.write('foo\n')
temp.close()
A: What's your goal, here? Is tempfile.TemporaryFile inappropriate for your purposes?
A: Here's how to do it using a with statement:
from __future__ import with_statement
from contextlib import closing
fd, filepath = tempfile.mkstemp()
with closing(os.fdopen(fd, 'w')) as tf:
tf.write('foo\n')
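In later Python versions the closing() wrapper is unnecessary, since the file object returned by os.fdopen is itself a context manager; a minimal sketch:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as tf:  # closing the file object also closes the descriptor
    tf.write("foo\n")

with open(path) as f:
    print(f.read())  # foo
os.remove(path)  # mkstemp files are not deleted automatically
```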
A: I can't comment on the answers, so I will post my comment here:
Note that the last parameter of tempfile.mkstemp() is text (a boolean selecting text mode), not a mode string, and the descriptor it returns is already open for both reading and writing:
f = tempfile.mkstemp(text=True) # the first three params are 'suffix', 'prefix', 'dir'
os.write(f[0], "write something")
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "61"
} |
Q: Best Practices for Performing Sequential Operations What is the best way to perform a couple of tasks together and if one task fails then the next tasks should not be completed? I know if it were the database operations then I should have used Transactions but I am talking about different types of operations like the following:
All tasks must pass:
SendEmail
ArchiveReportsInDatabase
CreateAFile
In the above scenario all the tasks must pass or else the whole batch operation must be rollback.
A: Rollbacks are tough - AFAIK, there's really only 2 ways to go about it. Either a 2 phase commit protocol, or compensating transactions. You really have to find a way to structure your tasks in one of these fashions.
Usually, the better idea is to take advantage of other folks' hard work and use technologies that already have 2PC or compensation built in. That's one reason that RDBMS are so popular.
So, the specifics are task dependent...but the pattern is fairly easy:
class Compensator {
    public Action Action { get; set; }
    public Action Compensate { get; set; }
    public Compensator(Action action, Action compensate) {
        Action = action;
        Compensate = compensate;
    }
}

Queue<Compensator> actions = new Queue<Compensator>(new[] {
    new Compensator(SendEmail, UndoSendEmail),
    new Compensator(ArchiveReportsInDatabase, UndoArchiveReportsInDatabase),
    new Compensator(CreateAFile, UndoCreateAFile)
});
Stack<Compensator> doneActions = new Stack<Compensator>();

while (actions.Count > 0) {
    var c = actions.Dequeue();
    try {
        c.Action();
        doneActions.Push(c);
    } catch {
        try {
            foreach (var d in doneActions) d.Compensate(); // undo in reverse order
        } catch (Exception ex) {
            throw new OhCrapException("Couldn't rollback", doneActions, ex);
        }
        throw;
    }
}
Of course, for your specific tasks - you may be in luck.
*
*Obviously, the RDBMS work can already be wrapped in a transaction.
*If you're on Vista or Server 2008, then you get Transactional NTFS to cover your CreateFile scenario.
*Email is a bit trickier - I don't know of any 2PC or Compensators around it (I'd only be slightly surprised if someone pointed out that Exchange has one, though) so I'd probably use MSMQ to write a notification and let a subscriber pick it up and eventually email it. At that point, your transaction really covers just sending the message to the queue, but that's probably good enough.
All of these can participate in a System.Transactions Transaction, so you should be in pretty good shape.
A: in C#
return SendEmail() && ArchiveResportsInDatabase() && CreateAFile();
A: Another idea:
try {
task1();
task2();
task3();
...
taskN();
}
catch (TaskFailureException e) {
dealWith(e);
}
A: A couple of suggestions:
In a distributed scenario, some sort of two-phase commit protocol may be needed. Essentially, you send all participants a message saying "Prepare to do X". Each participant must then send a response saying "OK, I guarantee I can do X" or "No, can't do it." If all participants guarantee they can complete, then send the message telling them to do it. The "guarantees" can be as strict as needed.
Another approach is to provide some sort of undo mechanism for each operation, then have logic like this:
try:
SendEmail()
try:
ArchiveReportsInDatabase()
try:
CreateAFile()
except:
UndoArchiveReportsInDatabase()
raise
except:
UndoSendEmail()
raise
except:
// handle failure
(You wouldn't want your code to look like that; this is just an illustration of how the logic should flow.)
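The same compensation pattern can be flattened into a loop; a Python sketch with placeholder tasks standing in for the real operations:

```python
def run_with_compensation(steps):
    """Run (action, undo) pairs in order; on failure, undo completed steps in reverse."""
    done = []
    try:
        for action, undo in steps:
            action()
            done.append(undo)
    except Exception:
        for undo in reversed(done):
            undo()
        raise

log = []

def fail():
    raise RuntimeError("disk full")  # simulated failure in the third task

steps = [
    (lambda: log.append("send email"), lambda: log.append("undo send email")),
    (lambda: log.append("archive reports"), lambda: log.append("undo archive reports")),
    (fail, lambda: log.append("undo create file")),
]
try:
    run_with_compensation(steps)
except RuntimeError:
    print(log)  # ['send email', 'archive reports', 'undo archive reports', 'undo send email']
```

Note the undo functions run in reverse order of completion, and the original exception is re-raised so the caller still sees the failure.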
A: If your language allows it, this is very tidy:
*
*Put your tasks in an array of code blocks or function pointers.
*Iterate over the array.
*Break if any block returns failure.
A: You didn't mention what programming language/environment you're using. If it's the .NET Framework, you might want to take a look at this article. It describes the Concurrency and Control Runtime from Microsoft's Robotics Studio, which allows you to apply all sorts of rules on a set of (asynchronous) events: for example, you can wait for any number of them to complete, cancel if one event fails, etc. It can run things in multiple threads as well, so you get a very powerful method of doing stuff.
A: You don't specify your environment. In Unix shell scripting, the && operator does just this.
SendEmail () {
# ...
}
ArchiveReportsInDatabase () {
# ...
}
CreateAFile () {
# ...
}
SendEmail && ArchiveReportsInDatabase && CreateAFile
A: If you're using a language which uses short-circuit evaluation (Java and C# do), you can simply do:
return SendEmail() && ArchiveResportsInDatabase() && CreateAFile();
This will return true if all the functions return true, and stop as soon as the first one returns false.
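The same short-circuit chain sketched in Python, where and also stops at the first falsy result (the task bodies are placeholders):

```python
calls = []

def send_email():
    calls.append("email")
    return True

def archive_reports_in_database():
    calls.append("archive")
    return False  # simulate a failure partway through

def create_a_file():
    calls.append("create")
    return True

ok = send_email() and archive_reports_in_database() and create_a_file()
print(ok, calls)  # False ['email', 'archive'] - create_a_file never ran
```

This is the simplest form of the pattern, but note it offers no rollback of the steps that already succeeded.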
A: Exceptions are generally good for this sort of thing. Pseudo-Java/JavaScript/C++ code:
try {
if (!SendEmail()) {
throw "Could not send e-mail";
}
if (!ArchiveReportsInDatabase()) {
throw "Could not archive reports in database";
}
if (!CreateAFile()) {
throw "Could not create file";
}
...
} catch (Exception) {
LogError(Exception);
...
}
Better still if your methods throw exceptions themselves:
try {
SendEmail();
ArchiveReportsInDatabase();
CreateAFile();
...
} catch (Exception) {
LogError(Exception);
...
}
A very nice outcome of this style is that your code doesn't get increasingly indented as you move down the task chain; all your method calls remain at the same indentation level. Too much indentation makes the code harder to read.
Moreover, you have a single point in the code for error handling, logging, rollback etc.
A: To really do it right you should use an asynchronous messaging pattern. I just finished a project where I did this using NServiceBus and MSMQ.
Basically, each step happens by sending a message to a queue. When nServiceBus finds messages waiting in the queue it calls your Handle method corresponding to that message type. This way each individual step is independently failable and retryable. If one step fails the message ends up in an error queue so you can easily retry it later.
These pure-code solutions being suggested aren't as robust since if a step fails you would have no way to retry only that one step in the future and you'd have to implement rollback code which isn't even possible in some cases.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168560",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Call Web Services Using Ajax or Silverlight? Which performs best? I'm building an ASP.NET AJAX application that uses JavaScript to call web services to get its data, and also uses Silverlights Isolated Storage to cache the data on the client machine. Ultimately once the data is downloaded it is passed to JavaScript which displays in on the page using the HTML DOM.
What I'm trying to figure out is, does it make sense for me to make these Web Service calls in Silverlight then pass the data to JavaScript once it's loaded? Also, Silverlight will be saving the data to disk using Isolated Storage whether I call the Web Services with JavaScript or Silverlight. If I call the Web Services with JavaScript, the data will be passed to Silverlight to cache.
I've done some prototyping both ways, and I'm finding the performance to be pretty much the same either way. Also, one of the kickers pointing me towards using Silverlight for the whole client-side data access layer is that I need timers to periodically check for updated data and download it to the cache, so the JavaScript can load it when it needs to.
Has anyone done anything similar to this? If so, what are your experiences relating to performance with either the JavaScript or Silverlight method described?
A: Since Silverlight can handle JSON and XML based services, the format of the response is totally irrelevant. What you must consider, however, is the following:
1) Silverlight is approximately 1000 times faster than JavaScript
2) If your web service is natively SOAP based, Visual Studio can generate a proxy for you, so that you don't need to parse the SOAP message.
3) Silverlight has LINQ to XML and LINQ to JSON, which makes parsing both POX and JSON a breeze.
In a perfect world, I would go with Silverlight for the "engine", and fall back to JavaScript in case Silverlight is not available.
Greetings,
Laurent
A: Another thing to consider - getting your data in JSON format will be faster than XML and Web Services. JSON becomes a JavaScript object pretty quickly and doesn't have to be parsed like XML does. Personally, I'd go with JavaScript.
article: Speeding Up AJAX with JSON
A: Since JavaScript isn't multithreaded, I'm finding that using Silverlight to access/cache the data then pass it to JavaScript for display produces much better performance, while refraining from locking/freezing the browser so the user can keep doing stuff while the data loads.
A: Passing JSON-formatted data is in part faster because unlike an XML SOAP message, it doesn't require a SOAP header or any other miscellaneous info - it's just pure data. Thus, making the total size of the message smaller.
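To make the size difference concrete, here is a rough sketch comparing the same payload as bare JSON versus wrapped in a hand-written SOAP envelope - the envelope shape below is an illustrative assumption, not any real service's output:

```python
import json

data = {"items": [{"id": i, "name": f"item{i}"} for i in range(3)]}
as_json = json.dumps(data)

# A minimal, hand-rolled SOAP-style wrapper around the same three items.
as_soap = (
    "<?xml version='1.0'?>"
    "<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/'>"
    "<soap:Body><GetItemsResponse>"
    + "".join(
        f"<Item><Id>{item['id']}</Id><Name>{item['name']}</Name></Item>"
        for item in data["items"]
    )
    + "</GetItemsResponse></soap:Body></soap:Envelope>"
)
print(len(as_json), len(as_soap))  # the envelope and element tags add substantial overhead
```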
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I concatenate files in a subdirectory with Unix find execute and cat into a single file? I can do this:
$ find .
.
./b
./b/foo
./c
./c/foo
And this:
$ find . -type f -exec cat {} \;
This is in b.
This is in c.
But not this:
$ find . -type f -exec cat > out.txt {} \;
Why not?
A: Hmm... find will pick up out.txt itself, since you are creating it in the directory being searched.
Try something like
find . -type f -exec cat {} \; > ../out.txt
A: You could do something like this :
$ cat `find . -type f` > out.txt
A: find's -exec argument runs the command you specify once for each file it finds. Try:
$ find . -type f -exec cat {} \; > out.txt
or:
$ find . -type f | xargs cat > out.txt
xargs converts its standard input into command-line arguments for the command you specify. If you're worried about embedded spaces in filenames, try:
$ find . -type f -print0 | xargs -0 cat > out.txt
A: How about just redirecting the output of find into a file, since all you're wanting to do is cat all the files into one large file:
find . -type f -exec cat {} \; > /tmp/out.txt
A: Maybe you've inferred from the other responses that the > symbol is interpreted by the shell before find gets it as an argument. But to answer your "why not", let's look at your command, which is:
$ find . -type f -exec cat > out.txt {} \;
So the shell takes "> out.txt" as a redirection of find's output and gives find the arguments ".", "-type", "f", "-exec", "cat", "{}", ";". Each cat then inherits the redirected stdout, but out.txt is created inside the very directory being searched, so find can pick it up as one of the files to concatenate.
Looking at the other suggestions you should really avoid creating the output in the same directory you're finding in. But they'd work with that in mind. And the -print0 | xargs -0 combination is greatly useful. What you wanted to type was probably more like:
$ find . -type f -exec cat \{} \; > /tmp/out.txt
Now if you really only have one level of sub directories and only normal files, you can do something silly and simple like this:
cat `ls -p|sed 's/\/$/\/*/'` > /tmp/out.txt
This gets ls to list all your files and directories, appending '/' to the directories, while sed appends a '*' after each trailing '/'. The shell then interprets this list and expands the globs. Assuming that doesn't result in too many files for the shell to handle, they will all be passed as arguments to cat, and the output will be written to out.txt.
A: Or just leave out the find which is useless if you use the really great Z shell (zsh), and you can do this:
setopt extendedglob
(this should be in your .zshrc)
Then:
cat **/*(.) > outfile
just works :-)
A: Try this:
(find . -type f -exec cat {} \;) > out.txt
A: In bash you could do
cat $(find . -type f) > out.txt
with $( ) you can get the output from a command and pass it to another
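Putting the safer pieces from the answers above together, a self-contained sketch (the directory layout mirrors the question's example; the paths are illustrative):

```shell
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/b" "$tmp/c"
printf 'This is in b.\n' > "$tmp/b/foo"
printf 'This is in c.\n' > "$tmp/c/foo"
# Redirect OUTSIDE the tree being searched, so find never picks up the output file,
# and use -print0 | xargs -0 so filenames with spaces survive intact.
find "$tmp" -type f -print0 | sort -z | xargs -0 cat > "$tmp.out"
cat "$tmp.out"
```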
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Framework to host web services on a basic hosted web server Can anyone recommend a framework or basic technology to use to host a web service on a basic hosted web server?
I have data in a mySQL database that will be accessed by the web service.
A: You can create them very easily in C#
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Bad habits of your Scrum Master Scrum is quite a popular dev process these days, and often a Project Manager suddenly gets a new title (Scrum Master). However, it should be not just a new title, but new habits and a new paradigm. What are the bad habits of your Scrum Master?
A: *
*Micromanaging the team with hyper activity
*Overriding senior developers on technical decisions because "Scrum says" and "the team must vote". Totally dis-empowering senior technical people.
*Trying to squeeze blood from a stone at retrospectives on issues that aren't actually issues.
*Telling me the points don't matter, but then dissecting and analysing the points at every two-week review. Furthermore, basing our annual bonus on our points performance.
Scrum is good but it can disregard good engineering practice and technical processes that have worked like a charm for ages.
A: Constantly swapping new bugs in and out of the Sprint.
A: There are two kinds of scrum masters:
*
*A project manager whose title has changed because of adoption of Agile.
*An exclusive scrum master who only facilitates the scrum and reports to the project manager (shusa).
The second point is preached and practiced in 'truly' Agile organizations. It is expensive but it has some merits.
Also,
*
*A scrum master is expected to be present with the sprint team all the time (not literally). If a project manager does that, s/he would be micro-managing.
*Scrum masters' role is not to manage budgets, but to put predictability around the sprint team, in terms of amount of work that the team can do.
*Scrum masters should know strengths and weaknesses of team members, and facilitate inter-scrum best practice sharing.
So, my point is, if these roles are confused, the team may not do very well.
A: Not helping with the push-back part of the process, e.g. 'these are all the stories the customer wants in this iteration, so that's what we have to do'.
A: Constantly trying to tie actual hours worked back to story point estimates.
A: Not keeping scrums on track - letting them descend into technical discussions and a much longer meeting.
A: Assigning work and asking for daily status reports instead of letting the team learn how to manage its own work.
A: The big bad habit our Scrum Master had at first was thinking we would take care of our own impediments. That's one of the things the Scrum Master is supposed to do but she left it to us until it got unmanageable.
The other thing we've dealt with is the Scrum Master thinking they were in charge of riding the developers' backs until tasks were taken care of. This creates a bad atmosphere on the team since they're supposed to be self-managing.
To me and our team, the Scrum Master's job is to be a shield and assistant for the team, blocking impediments and doing what they can to help expedite things. Ken Schwaber's Agile Software Development with Scrum is an excellent intro to Scrum, it's what our team used and we've been pretty successful with it. There's also Agile Project Management with Scrum, which is more for the Scrum Master and Product Owner roles specifically.
A: *
*Micromanaging
*Exercising old-style command and control instead of facilitating a self-directed team
*Focusing more on the numbers/burn-ups/backlog than on the people who make up the team
*Not protecting the team from outside interference
A: When I was involved in a Scrum, the Scrum master quickly developed the habit of just letting us do our own thing, and the Scrum fell back into our normal development routine.
A: *
*Not being able to slot tasks within cycle appropriately (too many usually)
*Not dealing well with external customers (if a certain task is too large for a single cycle, whining to the team instead of pushing back on the customer)
*Making daily scrums too large of a process -- not sticking to a certain time limit (we prefer 15 min max).
A: I really dislike it when ex-PMs turned Scrum Master consider Scrum a way to cut back on their original (individual) obligations without investing that time back into team-work, and active stress-reduction (planning for setbacks). They just lay back and start praising themselves for the great results, whereas everyone can see the team would perform even better without their presence, at all.
In my opinion, our best Scrum masters have been developers with a large sense of responsibility, or non-PMs.
Then again, I have worked (before the world knew about Scrum) for PMs that seriously rocked. They would make great Scrum masters today, I'm sure.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: What is the command line syntax to delete files in Perforce? I am creating some build scripts that interact with Perforce and I would like to mark for delete a few files. What exactly is the P4 syntax using the command line?
A: p4 delete filename
(output of p4 help delete)
delete -- Open an existing file to delete it from the depot
p4 delete [ -c changelist# ] [ -n ] file ...
Opens a file that currently exists in the depot for deletion.
If the file is present on the client it is removed. If a pending
changelist number is given with the -c flag the opened file is
associated with that changelist, otherwise it is associated with
the 'default' pending changelist.
Files that are deleted generally do not appear on the have list.
The -n flag displays what would be opened for delete without actually
changing any files or metadata.
A: Teach a man to fish:
*
*p4 help - gets you general command syntax
*p4 help commands - lists the commands
*p4 help <command name> - provides detailed help for a specific command
A: http://www.perforce.com/perforce/doc.062/manuals/boilerplates/quickstart.html
Deleting files
To delete files from both the Perforce server and your workspace, issue the p4 delete command. For example:
p4 delete demo.txt readme.txt
The specified files are removed from your workspace and marked for deletion from the server. If you decide you don't want to delete the files after all, issue the p4 revert command. When you revert files opened for delete, Perforce restores them to your workspace.
A: Admitted - it takes a (small) number of steps to find the (excellent!) Perforce user guide online in the version that matches your installation and get to the chapter with the information you need.
Whenever I find myself in need of anything about the p4 command line client, I rely on the help Perforce have built into it. Accessing it could not be easier:
*
*on the command line, enter p4
This gets you to the information Michael Burr has shown in his answer (and some more).
If you do not get a help screen right away, something is wrong with your client configuration, e.g. P4PORT is not set properly. You obviously need to fix that first.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Programmatically triggering events in Javascript for IE using jQuery When an Event is triggered by a user in IE, it is set to the window.event object. The only way to see what triggered the event is by accessing the window.event object (as far as I know)
This causes a problem in ASP.NET validators if an event is triggered programmatically, like when triggering an event through jQuery. In this case, the window.event object stores the last user-triggered event.
When the onchange event is fired programmatically for a text box that has an ASP.NET validator attached to it, the validation breaks because it is looking at the element that fired the last event, which is not the element the validator is for.
Does anyone know a way around this? It seems like a problem that is solvable, but from looking online, most people just find ways to ignore the problem instead of solving it.
To explain what I'm doing specifically:
I'm using a jQuery time picker plugin on a text box that also has 2 ASP.NET validators associated with it. When the time is changed, I'm using an update panel to post back to the server to do some things dynamically, so I need the onchange event to fire in order to trigger the postback for that text box.
The jQuery time picker operates by creating a hidden unordered list that is made visible when the text box is clicked. When one of the list items is clicked, the "change" event is fired programmatically for the text box through jQuery's change() method.
Because the trigger for the event was a list item, IE sees the list item as the source of the event, not the text box, like it should.
I'm not too concerned with this ASP.NET validator working as soon as the text box is changed, I just need the "change" event to be processed so my postback event is called for the text box. The problem is that the validator throws an exception in IE which stops any event from being triggered.
Firefox (and I assume other browsers) don't have this issue. Only IE due to the different event model. Has anyone encountered this and seen how to fix it?
I've found this problem reported several other places, but they offer no solutions:
*
*jQuery's forum, with the jQuery UI Datepicker and an ASP.NET Validator
*ASP.NET forums, bug with ValidatorOnChange() function
A: I had the same problem. Solved by using this function:
jQuery.fn.extend({
    fire: function(evttype){
        var el = this.get(0);
        if (document.createEvent) {
            var evt = document.createEvent('HTMLEvents');
            evt.initEvent(evttype, false, false);
            el.dispatchEvent(evt);
        } else if (document.createEventObject) {
            el.fireEvent('on' + evttype);
        }
        return this;
    }
});
So my "onSelect" event handler to datepicker looks like:
if ($.browser.msie) {
datepickerOptions = $.extend(datepickerOptions, {
onSelect: function(){
$(this).fire("change").blur();
}
});
}
A: I solved the issue with a patch:
window.ValidatorHookupEvent = function(control, eventType, body) {
$(control).bind(eventType.slice(2), new Function("event", body));
};
Update: I've submitted the issue to MS (link).
A: From what you're describing, this problem is likely a result of the unique event bubbling model that IE uses for JS.
My only real answer is to ditch the ASP.NET validators and use a jQuery form validation plugin instead. Then your textbox can just be a regular ASP Webforms control and when the contents change and a postback occures all is good. In addition you keep more client-side concerns seperated from the server code.
I've never had much luck mixing Webform Client controls (like the Form Validation controls) with external JS libraries like jQuery. I've found the better route is just to go with one or the other, but not to mix and match.
Not the answer you're probably looking for.
If you want to go with a jQuery form validation plugin, consider this one: jQuery Form Validation
A: Consider setting the hidden field __EVENTTARGET value before initiating the event with javascript. You'll need to set it to the server-side id (replace underscores with $ in the client id) for the server to understand it. I do this on button clicks that I simulate, so that the server side can determine which OnClick method to fire when the result gets posted back. Ajax or not, doesn't really matter.
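A minimal sketch of that id conversion (the helper name is mine, and the DOM usage in the comment is hypothetical):

```javascript
// Convert a Webforms client id (underscores) to the server-side
// UniqueID form (dollar signs) that __EVENTTARGET expects.
// Note: this naive mapping breaks if a control's own id contains
// a literal underscore.
function toUniqueId(clientId) {
    return clientId.replace(/_/g, '$');
}

// Hypothetical usage before firing the simulated event:
//   document.getElementById('__EVENTTARGET').value = toUniqueId('ctl00_Main_txtDate');
```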
A: This is an endemic problem with jQuery datepickers and ASP validation controls.
As you are saying, the wrong element cross-triggers an ASP NET javascript validation routine, and then the M$ code throws an error because the triggering element in the routine is undefined.
I solved this one differently from anyone else I have seen - by deciding that M$ should have written their code more robustly, and hence redeclaring some of the M$ validator code to cope with the undefined element. Everything else I have seen is essentially a workaround on the jQuery side, and cuts possible functionality out (eg. using the click event instead of change).
The bit that fails is
for (i = 0; i < vals.length; i++) {
ValidatorValidate(vals[i], null, event);
}
which throws an error when it tries to get a length for the undefined 'vals'.
I just added
if (vals) {
for (i = 0; i < vals.length; i++) {
ValidatorValidate(vals[i], null, event);
}
}
and she's good to go. Final code, which redeclares the entire offending function, is below. I put it as a script include at the bottom of my master page or page.
Yes, this does break upwards compatibility if M$ decide to change their validator code in the future. But one would hope they'll fix it and then we can get rid of this patch altogether.
// Fix issue with datepicker and ASPNET validators: redeclare MS validator code with fix
function ValidatorOnChange(event) {
if (!event) {
event = window.event;
}
Page_InvalidControlToBeFocused = null;
var targetedControl;
if ((typeof (event.srcElement) != "undefined") && (event.srcElement != null)) {
targetedControl = event.srcElement;
}
else {
targetedControl = event.target;
}
var vals;
if (typeof (targetedControl.Validators) != "undefined") {
vals = targetedControl.Validators;
}
else {
if (targetedControl.tagName.toLowerCase() == "label") {
targetedControl = document.getElementById(targetedControl.htmlFor);
vals = targetedControl.Validators;
}
}
var i;
if (vals) {
for (i = 0; i < vals.length; i++) {
ValidatorValidate(vals[i], null, event);
}
}
ValidatorUpdateIsValid();
}
A: This is how I solved a similar issue.
Wrote an onSelect() handler for the datepicker.
In that function, called __doPostBack('textboxcontrolid','').
This triggered a partial postback for the textbox to the server, which called the validators in turn.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Can you get a Windows (AD) username in PHP? I have a PHP web application on an intranet that can extract the IP and host name of the current user on that page, but I was wondering if there is a way to get/extract their Active Directory/Windows username as well. Is this possible?
A: Check the AUTH_USER request variable. This will be empty if your web app allows anonymous access, but if your server's using basic or Windows integrated authentication, it will contain the username of the authenticated user.
In an Active Directory domain, if your clients are running Internet Explorer and your web server/filesystem permissions are configured properly, IE will silently submit their domain credentials to your server and AUTH_USER will be MYDOMAIN\user.name without the users having to explicitly log in to your web app.
A: Look at the PHP LDAP library functions: http://us.php.net/ldap.
Active Directory [mostly] conforms to the LDAP standard.
A: We have multiple domains in our environment, so I use preg_replace with a regex to get just the username without the DOMAIN\ prefix.
preg_replace("/^.+\\\\/", "", $_SERVER["AUTH_USER"]);
A: If you're using Apache on Windows, you can install the mod_auth_sspi from
https://sourceforge.net/projects/mod-auth-sspi/
Instructions are in the INSTALL file, and there is a whoami.php example. (It's just a case of copying the mod_auth_sspi.so file into a folder and adding a line into httpd.conf.)
Once it's installed and the necessary settings are made in httpd.conf to protect the directories you wish, PHP will populate the $_SERVER['REMOTE_USER'] with the user and domain ('USER\DOMAIN') of the authenticated user in IE -- or prompt and authenticate in Firefox before passing it in.
Info is session-based, so single(ish) signon is possible even in Firefox...
-Craig
A: You could probably authenticate the user in Apache with mod_auth_kerb by requiring authenticated access to some files … I think that way, the username should also be available in PHP environment variables somewhere … probably best to check with <?php phpinfo(); ?> once you get it runnning.
A: If you are looking to retrieve the remote user's IDSID/username, use:
echo gethostbyaddr($_SERVER['REMOTE_ADDR']);
You will get something like
iamuser1-mys.corp.company.com
Filter out the rest of the domain, and you can get the IDSID only.
For more information visit http://lostwithin.net/how-to-get-users-ip-and-computer-name-using-php/
A: Use this code:
shell_exec("wmic computersystem get username")
A: I've got PHP and MySQL running on IIS. I can use $_SERVER["AUTH_USER"] if I turn on Windows Authentication in IIS -> Authentication and turn off Anonymous authentication (important).
I've used this to get my user and domain:
$user = $_SERVER['AUTH_USER'];
$user will return a value like: DOMAIN\username on our network, and then it's just a case of removing the DOMAIN\ from the string.
This has worked in IE, FF, Chrome, Safari (tested) so far.
A: You can say getenv('USERNAME')
A: Check out patched NTLM authentication module for Apache
https://github.com/rsim/mod_ntlm
Based on NTLM auth module for Apache/Unix
http://modntlm.sourceforge.net/
Read more at http://blog.rayapps.com/
Source: http://imthi.com/blog/programming/leopard-apache2-ntlm-php-integrated-windows-authentication.php
A: No. But what you can do is have your Active Directory admin enable LDAP, so that users can maintain one set of credentials.
http://us2.php.net/ldap
A: get_user_name works the same way as getenv('USERNAME');
I had encoding problems (with Cyrillic) using getenv('USERNAME')
A: If you are also trying to figure out whether AUTH_USER is part of a particular domain group, a clever way to do this is to create a locked-down folder containing text files (they can be blank). Set security so that only the security/distro group you want to validate has access. Then run @file_get_contents (the @ suppresses the warning); if the user does not have group access, they will not be able to get the file contents, and hence do not have that particular AD group membership. This is simple and works wonderfully.
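A sketch of what that might look like (the share path and group file name are hypothetical):

```php
<?php
// Returns true if the current (impersonated) user can read the file,
// i.e. is in the AD group that the file's ACL is restricted to.
// The @ suppresses the access-denied warning; a read failure means "not in group".
function userIsInGroup($groupFile)
{
    return @file_get_contents($groupFile) !== false;
}

if (userIsInGroup('\\\\server\\groupcheck\\Accounting.txt')) {
    echo "User is in the Accounting group";
}
```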
A: This is a simple NTLM AD integration example; it allows single sign-on with Internet Explorer, and requires login/configuration in other browsers.
PHP Example
<?php
$user = $_SERVER['REMOTE_USER'];
$domain = getenv('USERDOMAIN');
?>
In your apache httpd.conf file
LoadModule authnz_sspi_module modules/mod_authnz_sspi.so
<Directory "/path/to/folder">
AllowOverride All
Options ExecCGI
AuthName "SSPI Authentication"
AuthType SSPI
SSPIAuth On
SSPIAuthoritative On
SSPIOmitDomain On
Require valid-user
Require user "NT AUTHORITY\ANONYMOUS LOGON" denied
</Directory>
And if you need the module, this link is useful:
https://www.apachehaus.net/modules/mod_authnz_sspi/
A: I tried almost all of these suggestions, but they were all returning empty values. If anyone else has this issue, I found this handy function on php.net (http://php.net/manual/en/function.get-current-user.php):
$username = get_current_user();
echo $username;
This was the only way I was finally able to get the user's active directory username. If none of the above answers has worked, give this a try.
A: Try this code:
$user = shell_exec("echo %username%");
echo "user : $user";
This gets your Windows (AD) username in PHP.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50"
} |
Q: PHP mode for Emacs I'm having trouble with my php code not indenting correctly...
I would like my code to look like this
if (foo)
{
    print "i am indented";
}
but it always looks like this:
if (foo)
  {
    print "i am not indented correctly";
  }
I tried googling for similar things and tried adding the following to my .emacs, but it didn't work at all.
Any thoughts?
(add-hook 'php-mode-hook
          (function (lambda ()
                      ;; GNU style
                      (setq php-indent-level 4
                            php-continued-statement-offset 4
                            php-continued-brace-offset 0
                            php-brace-offset 0
                            php-brace-imaginary-offset 0
                            php-label-offset -4))))
A: Customize c-default-style variable. Add this to your .emacs file:
(setq c-default-style "bsd"
      c-basic-offset 4)
Description of bsd style.
A: Customize the variable c-default-style. You either want your "Other" mode (or "php" if it's available) set to "bsd", or you can set the style in all modes to bsd.
From what I understand, PHP mode is built on top of c mode, so it inherits its customizations.
A: Try with this:
(defun my-build-tab-stop-list (width)
  (let ((num-tab-stops (/ 80 width))
        (counter 1)
        (ls nil))
    (while (<= counter num-tab-stops)
      (setq ls (cons (* width counter) ls))
      (setq counter (1+ counter)))
    (nreverse ls)))

(add-hook 'c-mode-common-hook
          #'(lambda ()
              ;; You can remove this if you don't want fixed tab-stop widths
              (set (make-local-variable 'tab-stop-list)
                   (my-build-tab-stop-list tab-width))
              (setq c-basic-offset tab-width)
              (c-set-offset 'defun-block-intro tab-width)
              (c-set-offset 'arglist-intro tab-width)
              (c-set-offset 'arglist-close 0)
              (c-set-offset 'defun-close 0)
              (setq abbrev-mode nil)))
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Escaping a String from getting regex parsed in Java In Java, suppose I have a String variable S, and I want to search for it inside of another String T, like so:
if (T.matches(S)) ...
(note: the above line was T.contains() until a few posts pointed out that that method does not use regexes. My bad.)
But now suppose S may have unsavory characters in it. For instance, let S = "[hi". The left square bracket is going to cause the regex to fail. Is there a function I can call to escape S so that this doesn't happen? In this particular case, I would like it to be transformed to "\[hi".
A: Try Pattern.quote(String). It will fix up anything that has special meaning in the string.
A: String.contains does not use regex, so there isn't a problem in this case.
Where a regex is required, rather rejecting strings with regex special characters, use java.util.regex.Pattern.quote to escape them.
A: Any particular reason not to use String.indexOf() instead? That way it will always be interpreted as a regular string rather than a regex.
A: As Tom Hawtin said, you need to quote the pattern. You can do this in two ways (edit: actually three ways, as pointed out by @diastrophism):
*
*Surround the string with "\Q" and "\E", like:
if (T.matches("\\Q" + S + "\\E"))
*Use Pattern instead. The code would be something like this:
Pattern sPattern = Pattern.compile(S, Pattern.LITERAL);
if (sPattern.matcher(T).matches()) { /* do something */ }
This way, you can cache the compiled Pattern and reuse it. If you are using the same regex more than once, you almost certainly want to do it this way.
Note that if you are using regular expressions to test whether a string is inside a larger string, you should put .* at the start and end of the expression. But this will not work if you are quoting the pattern, since it will then be looking for actual dots. So, are you absolutely certain you want to be using regular expressions?
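Putting those pieces together, a sketch of a quote-safe "contains" check (the class and method names are mine):

```java
import java.util.regex.Pattern;

public class LiteralSearch {
    // True if 'literal' occurs anywhere in 'text'; Pattern.quote makes the
    // regex safe even when 'literal' contains metacharacters such as '['.
    static boolean containsLiteral(String text, String literal) {
        // (?s) lets the .* wildcards match across newlines.
        return text.matches("(?s).*" + Pattern.quote(literal) + ".*");
    }

    public static void main(String[] args) {
        System.out.println(containsLiteral("say [hi to T", "[hi")); // true
        System.out.println(containsLiteral("say hi to T", "[hi"));  // false
    }
}
```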
A: Regex uses the backslash character '\' to escape a literal. Given that Java also uses the backslash character, you would need to use a double backslash like:
String S = "\\[hi";
That will become the String:
\[hi
which will be passed to the regex.
Or if you only care about a literal String and don't need a regex you could do the following:
if (T.indexOf("[hi") != -1) {
A: T.contains() (according to the javadoc: http://java.sun.com/javase/6/docs/api/java/lang/String.html) does not use regexes. contains() delegates to indexOf() only.
So, there are NO regexes used here. Were you thinking of some other String method?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: What are the best steps to start programming with TDD with C#? I want to start working with TDD, but I don't really know where to start. We code with .NET (C#/ASP.NET).
A: See the questions Why should I practice Test Driven Development and how should I start?, Moving existing code to Test Driven Development, What is unit testing? and What is TDD?
A: I would start by reading up on TDD and why it's a good practice. As you read that, try to think about how the concepts apply to your own projects.
When I was learning TDD, it seemed simple at first, but it's such a paradigm shift that it forced me to change the way I thought about how my program would work. And I guess that's kind of the point. :)
A: There is also lots of good information on the Google Testing Blog
A: There's a good book called Test Driven Development in Microsoft .NET that you might check out. It is essentially the same as the classic Test Driven Development by Example, but with the Microsoft platform in mind.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Getting the currently logged-in windows user I found this via google: http://www.mvps.org/access/api/api0008.htm
'******************** Code Start **************************
' This code was originally written by Dev Ashish.
' It is not to be altered or distributed,
' except as part of an application.
' You are free to use it in any application,
' provided the copyright notice is left unchanged.
'
' Code Courtesy of
' Dev Ashish
'
Private Declare Function apiGetUserName Lib "advapi32.dll" Alias _
    "GetUserNameA" (ByVal lpBuffer As String, nSize As Long) As Long

Function fOSUserName() As String
    ' Returns the network login name
    Dim lngLen As Long, lngX As Long
    Dim strUserName As String
    strUserName = String$(254, 0)
    lngLen = 255
    lngX = apiGetUserName(strUserName, lngLen)
    If (lngX > 0) Then
        fOSUserName = Left$(strUserName, lngLen - 1)
    Else
        fOSUserName = vbNullString
    End If
End Function
'******************** Code End **************************
Is this the best way to do it?
A: You could also use Environ$, but the method specified in the question is better: users and applications can change environment variables.
A: I generally use an environ from within VBA as in the following. I haven't had the problems that Ken mentions as possibilities.
Function UserNameWindows() As String
UserNameWindows = VBA.Environ("USERNAME") & "@" & VBA.Environ("USERDOMAIN")
End Function
A: You could also do this:
Set WshNetwork = CreateObject("WScript.Network")
Print WshNetwork.UserName
It also has a UserDomain property and a bunch of other things:
http://msdn.microsoft.com/en-us/library/907chf30(VS.85).aspx
A: An alternative way to do it - though the API you mention is probably a better way to get the username.
For Each strComputer In arrComputers
    Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
    Set colItems = objWMIService.ExecQuery("Select * from Win32_ComputerSystem",,48)
    For Each objItem in colItems
        Wscript.Echo "UserName: " & objItem.UserName & " is logged in at computer " & strComputer
    Next
Next
A: Lots of alternative methods in other posts, but to answer the question: yes that is the best way to do it. Faster than creating a COM object or WMI if all you want is the username, and available in all versions of Windows from Win95 up.
A: There are lots of ways to get the current logged-in user name via WMI.
My way is to get it from the owner of the 'explorer.exe' process, because when a user logs into Windows, that process runs under the current user's account.
The WMI script would look like this:
Set objWMIService = GetObject("winmgmts:" & "{impersonationLevel=impersonate}!\\" & strIP & "\root\cimv2")
Set colProcessList = objWMIService.ExecQuery("Select * from Win32_Process")
For Each objprocess In colProcessList
    colProperties = objprocess.GetOwner(strNameOfUser, strUserDomain)
    If objprocess.Name = "explorer.exe" Then
        UsrName = strNameOfUser
        DmnName = strUserDomain
    End If
Next
For more detail, check:
http://msdn.microsoft.com/en-us/library/aa394599%28v=vs.85%29.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: JavaScript Table Manipulation I have a table with two columns and about ten rows. The first column has rows with text as row headers, "header 1", "header 2". The second column contains fields for the user to type data (textboxes and checkboxes).
I want to have a button at the top labelled "Add New...", and have it create a third column with the same fields as the second column. If the user clicks it again, it will create another blank column with fields (as in the second column).
Does anyone know of an effective way to manipulate the DOM to achieve this?
I'm experimenting with divs and tables, but I'm on my third day of doing this, and it feels harder than it should be.
A: Amusing exercise. Thanks to AviewAnew's hint, I could do it.
function AddColumn(tableId)
{
var table = document.getElementById(tableId);
if (table == undefined) return;
var rowNb = table.rows.length;
// Take care of header
var bAddNames = (table.tHead.rows[0].cells.length % 2 == 1);
var newcell = table.rows[0].cells[bAddNames ? 1 : 0].cloneNode(true);
table.rows[0].appendChild(newcell);
// Add the remainder of the column
for(var i = 1; i < rowNb; i++)
{
newcell = table.rows[i].cells[0].cloneNode(bAddNames);
table.rows[i].appendChild(newcell);
}
}
with following HTML:
<input type="button" id="BSO" value="Add" onclick="javascript:AddColumn('TSO')"/>
<table border="1" id="TSO">
<thead>
<tr><th>Fields</th><th>Data</th></tr>
</thead>
<tbody>
<tr><td>Doh</td><td>10</td></tr>
<tr><td>Toh</td><td>20</td></tr>
<tr><td>Foo</td><td>30</td></tr>
<tr><td>Bar</td><td>42</td></tr>
<tr><td>Ga</td><td>50</td></tr>
<tr><td>Bu</td><td>666</td></tr>
<tr><td>Zo</td><td>70</td></tr>
<tr><td>Meu</td><td>80</td></tr>
</tbody>
</table>
A: Something along the lines of
function(table)
{
for(var i=0;i<table.rows.length;i++)
{
newcell = table.rows[i].cells[0].cloneNode(true);
table.rows[i].appendChild(newcell);
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Is there a way to script a table's data (temp tables too) in MS SQL 2000? Given a table or a temp table, I'd like to run a procedure that will output a SQL script (i.e. a bunch of INSERT statements) that would populate the table. Is this possible in MS SQL Server 2000?
A: You can create a script to do it using a cursor. I just did one yesterday. You can get the idea from this.
DECLARE MY_CURSOR Cursor
FOR
Select Year, HolidayId, Date, EffBegDate, isnull(EffEndDate,'') AS EffEndDate, ChangedUser From HolidayDate

Open My_Cursor

DECLARE @Year varchar(50), @HolidayId varchar(50), @Date varchar(50), @EffBegDate varchar(50), @EffEndDate varchar(50), @ChangedUser varchar(50)

Fetch NEXT FROM MY_Cursor INTO @Year, @HolidayId, @Date, @EffBegDate, @EffEndDate, @ChangedUser
While (@@FETCH_STATUS <> -1)
BEGIN
    IF (@@FETCH_STATUS <> -2)
    BEGIN
        -- both PRINTs must run only for a successful fetch
        print 'INSERT INTO [Employee3].[dbo].[HolidayDate]([Year],[HolidayId],[Date],[EffBegDate],[EffEndDate],[ChangedUser])'
        print 'VALUES ('''+@Year+''','''+@HolidayId+''','''+@Date+''','''+@EffBegDate+''','''+@EffEndDate+''','''+@ChangedUser+''')'
    END
    FETCH NEXT FROM MY_Cursor INTO @Year, @HolidayId, @Date, @EffBegDate, @EffEndDate, @ChangedUser
END
CLOSE MY_CURSOR
DEALLOCATE MY_CURSOR
GO
A: A simple approach:
SELECT 'INSERT INTO table (col1, col2, col3) VALUES (' +
       '''' + col1 + ''', ' +
       '''' + col2 + ''', ' +
       '''' + col3 + ''')'
FROM table
A more elaborate approach would be to write a procedure that builds the INSERT statement by checking the table's schema, but I've never found a real need to do that in practice.
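If you did want to go that route, a starting point might be INFORMATION_SCHEMA.COLUMNS (a sketch only; the table name is hypothetical, and quoting/NULL handling are omitted):

```sql
-- Build a comma-separated column list for one table (works on SQL 2000).
DECLARE @cols varchar(8000)
SELECT @cols = ISNULL(@cols + ', ', '') + COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable'
PRINT 'INSERT INTO MyTable (' + @cols + ') VALUES (...)'
```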
A: Somebody else tried it here. Please have a look.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to write acceptance tests How would you go about introducing acceptance tests into a team using the .NET framework? What tools are available for this purpose?
Thanks!
A: You might want to take a look at FitNesse, which is meant to be a way for Acceptance tests to look like a wiki document (so that they can be read and written by QA or project managers)
http://fitnesse.org/
Here's a good intro
http://ablog.apress.com/?p=735
A: See this post; I think it's a good idea to first understand what an acceptance test is. Isn't your question really how to introduce unit testing, which is the sister of acceptance testing?
For the .NET Framework, they should use NMock and NUnit.
Also worthwhile: checking out BDD.
How detailed should a customer acceptance test be?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do I get back a 2 digit representation of a number in SQL 2000 I have a table on SQL2000 with a numeric column and I need the select to return a 01, 02, 03...
It currently returns 1,2,3,...10,11...
Thanks.
A: Does this work?
SELECT REPLACE(STR(mycolumn, 2), ' ', '0')
From http://foxtricks.blogspot.com/2007/07/zero-padding-numeric-value-in-transact.html
A: This sort of question is about the interface to the database. Really the database should return the data and your application can reformat it if it wants the data in a particular format. You shouldn't do this in the database, but out in the presentation layer.
A: John's answer works and is generalizable to any number of digits, but I would be more comfortable with
select case when mycolumn between -9 and 9 then '0' + ltrim(str(mycolumn)) else ltrim(str(mycolumn)) end
A: where n is a positive integer between 0 and 99:
select right('0'+ltrim(str(n)),2)
or
select right(str(100+n),2)
but I like John's answer best. Single point of specification for target width, but I posted these because they are also common idioms that might work better in other situations or languages.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Recommended books on Desktop Application development using MVC I'm looking for recommendations on books about MVC on the desktop. If they use Java, that is a bonus.
Some background:
I'm writing a desktop application in Java. It's an audio application that has a number of views and a central model called a Library with playlists, effects lists and a folder structure to organize them. In this application I'd like to have menus, context-menus and drag and drop support for various user actions. I've been struggling with how to achieve this using MVC.
I started with all the logic/controllers in the main class but have started to separate them out into their own classes. Now I need to start using listeners and observers to handle messages between the views and the controller. This led to me creating a number of interfaces and looping through my listeners in several places to fire off various messages. But that loop code keeps getting repeated (not DRY), so I'm assuming that now I should create different types of Event classes, create those events in my views and use a single method within the view to fire it off to the various listeners.
Update: Arguably it shouldn't matter much, but I'm using SWT, not Swing.
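To make the question concrete, here is the kind of refactoring I'm considering: a small typed event class plus a single fire method, so the listener loop lives in one place (all names are mine, and the sketch uses no toolkit types so it applies to SWT or Swing):

```java
import java.util.ArrayList;
import java.util.List;

// A typed event describing what happened in a view.
class LibraryEvent {
    final String action;   // e.g. "playlist-added"
    final Object payload;
    LibraryEvent(String action, Object payload) {
        this.action = action;
        this.payload = payload;
    }
}

interface LibraryListener {
    void onLibraryEvent(LibraryEvent e);
}

// Each view holds its listeners and fires through one method,
// so the notification loop is written exactly once.
class ViewEventSupport {
    private final List<LibraryListener> listeners = new ArrayList<LibraryListener>();

    void addListener(LibraryListener l) { listeners.add(l); }

    void fire(LibraryEvent e) {
        for (LibraryListener l : listeners) {
            l.onLibraryEvent(e);
        }
    }
}
```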
A: I've had the same problem: it really takes a lot of discipline to write a (non trivial) swing app, because all the listeners and events and asynchronous processing make up really fast for a big pile of unmaintainable code.
I found that classic MVC isn't enough, you have to look into more specific patterns like Presentation Model and such. The only book I found covering this patterns when applied to desktop applications is Desktop Java Live, by Scott Delap. While the majority of swing books deal with techniques to solve specific problems (how to make a gridless jtable, how to implement a round button, ...), Delap's book will help you architect a medium-sized swing application, best practices, etc.
A: Pretty much any Java, Eclipse, or NetBeans Swing book should do the trick.
1) FREE --- Thinking in Java (http://mindview.net/Books/TIJ/DownloadSites)
2) CORE java , vol 1 and 2
3) Swing hacks : http://www.amazon.com/Swing-Hacks-Tips-Tools-Killer/dp/0596009070
4) netbeans RCP : http://www.amazon.com/Rich-Client-Programming-Plugging-NetBeans/dp/B00132S6UU/ref=dp_kinw_strp_1
5) eclipse Rich client programming -- http://www.amazon.com/Eclipse-Rich-Client-Platform-Applications/dp/0321334612
Hope this helps.
BR,
~A
A: In C# rather then Java, but Jeremy Miller has a bunch of posts regarding desktop apps and MVP/MVC (and a whole bunch of other related stuff).
A: Just to throw in my 2 cents, I recommend the book Head First Design Patterns. It has a very good explanation of the MVC pattern (in Java). It builds on other design patterns also discussed in the book such as Observer, Strategy and Composite that are used in MVC.
Best MVC tutorial I've read. Highly recommended.
A: Don't forget the Swing Tutorials; for instance the Swing Events tutorial.
And please bear in mind the SwingWorker, for handling work in a separate worker thread. I'm no expert on Swing by any means, but I do know that a lot of the perceived slowness of Java desktop applications is due to work done in the event thread. If such work takes some time, the entire GUI is unresponsive. Hard to fix afterward; not all that hard to do right if you keep it in mind.
As for books, I found the Core Java series by Cay Horstmann and Gary Cornell very nice to read. It is however about Java (including Swing) and not about MVC.
A: I need to add to my entry above that the free book Thinking in Java talks about OOP, MVC and also about Swing. Not sure if it discusses the various implementations of MVC, though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Add/Remove for my application showing up in roaming user account I've built an installer for my application by hand (don't ask why). And I set up the registry keys for its entry in the add/remove control panel under HKCU\Software\Microsoft\Windows\CurrentVersion\Uninstall. And it works fine. I need it to be under HKCU so my installer will run on Vista without asking to be elevated.
The issue I have is that if a user installs using a domain account with a roaming profile, and then goes to a different machine, there's an entry for my software in the add/remove control panel with no information in it. I don't want it to appear there for roaming users, my app does not get installed in such a way that it will work in that circumstance anyway. Is there anyway I can setup that entry so my app won't appear in the add/remove? Or have I doomed myself to it by making the entry under HKCU? Thanks!
A: FWIW: Google Chrome installs the way you did, but also suffers from the same problem, since it installs in the profile's "local settings\app data" directory, which doesn't roam [1].
Rather than fix the install\uninstall problem, would it be reasonable to have your app roam with the user? Is it small and xcopy installable such that you could install it under Doc & settings\Application Data some place, which does roam?
[1] http://www.microsoft.com/technet/prodtechnol/windows2000serv/reskit/distrib/dseb_ovr_wpeu.mspx?mfr=true
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Time Travel functions in postgresql Can anyone recommend for/against the time-travel functions in postgresql's contrib/spi module? Is there an example available anywhere?
Tnx
A: The argument for time-travel would be being able to look at tables that are updated often at an earlier insertion/deletion point. Say a table of stock prices for a firms investment portfolio.
The argument against would be the extra storage space it eats up.
Here is an Example of use.
A: See This discussion for an alternative approach to historical reporting.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Populating a PHP array: check for index first? If I'm deep in a nest of loops I'm wondering which of these is more efficient:
if (!isset($array[$key])) $array[$key] = $val;
or
$array[$key] = $val;
The second form is much more desirable as far as readable code goes. In reality the names are longer and the array is multidimensional. So the first form ends up looking pretty gnarly in my program.
But I'm wondering if the second form might be slower. Since the code is in one of the most frequently-executed functions in the program, I'd like to use the faster form.
Generally speaking this code will execute many times with the same value of "$key". So in most cases $array[$key] will already be set, and the !isset() test will return FALSE.
To clarify for those who fear that I'm treating non-identical code as if it were identical: as far as this part of the program is concerned, $val is a constant. It isn't known until run-time, but it's set earlier in the program and doesn't change here. So both forms produce the same result. And this is the most convenient place to get at $val.
A: isset() is very fast with ordinary variables, but you have an array here. The hash-map algorithm for arrays is quick, but it still takes more time than doing nothing.
Now, first form can be faster if you have more values that are set, than those that are not, simply because it just looks up for hash without fetching or setting the value. So, that could be a point of difference: pick the first form if you have more 'hits' at keys that are set, and pick the second one if you have more 'misses'.
Please note that those two pieces of code are not identical. The first form will not set the value for some key when it's already set - it prevents 'overwriting'.
A: Have you measured how often you run into the situation that $array[$key] is set before you try to set it? I think one cannot give a general advice on this, because if there are actually a lot of those cases, the isset check could possibly save some time by avoiding unnessecary sets on the array. However, if this is just rarely the case, the overhead could slow you down …. The best thing would be to do a benchmark on your actual code.
However, be aware that both codes can lead to different results! If $val is not always the same for a $array[$key] combination, the former code would always set the value to the first $val for that $array[$key] where the latter code would always set it to the last value of that combination.
(I guess you are aware of that and $val is always the same for $array[$key], but some reader stopping by might not.)
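A quick way to get a feel for the numbers is a micro-benchmark. The sketch below is a hypothetical Python analog of the two forms (a dict standing in for the PHP array; all names here are invented for illustration), timing the 'mostly hits' case the asker describes:

```python
import timeit

# Illustrative analog of the two PHP forms, using a Python dict.
def guarded(d, key, val):
    # analog of: if (!isset($array[$key])) { $array[$key] = $val; }
    if key not in d:
        d[key] = val

def unconditional(d, key, val):
    # analog of: $array[$key] = $val;
    d[key] = val

# The key is already present, so every call is a 'hit'.
d1 = {"k": 1}
d2 = {"k": 1}
t_guarded = timeit.timeit(lambda: guarded(d1, "k", 1), number=100000)
t_uncond = timeit.timeit(lambda: unconditional(d2, "k", 1), number=100000)
print("guarded:", round(t_guarded, 4), "unconditional:", round(t_uncond, 4))
```

The absolute numbers are meaningless across languages, but the shape of the experiment (same key, mostly hits) is the one worth reproducing in the actual PHP code.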
A: For an array you actually want: array_key_exists($key, $array) instead of isset($array[$key]).
A: The overhead of a comparison which may or may not be true seems like it should take longer.
What does running the script in both configurations show for performance time?
A: You should check the array upto but not including the level you are going to set.
If you're going to set
$anArray[ 'level1' ][ 'level2' ][ 'level3' ] = ...
You should make sure that the path up to level2 actually exists prior to setting level3.
$anArray[ 'level1' ][ 'level2' ]
No puppies will actually be killed if you don't, but they might be annoyed depending on your particular environment.
You don't have to check the index you are actually setting, because setting it automatically means it is declared, but in the interest of good practice you should make sure nothing is magically created.
There is an easy way to do this:
<?php
function create_array_path( $path, & $inArray )
{
    if ( ! is_array( $inArray ) )
    {
        throw new Exception( 'The second argument is not an array!' );
    }
    $traversed = array();
    $current = &$inArray;
    foreach( $path as $subpath )
    {
        $traversed[] = $subpath;
        if ( ! is_array( $current ) )
        {
            $current = array();
        }
        if ( ! array_key_exists( $subpath, $current ) )
        {
            $current[ $subpath ] = '';
        }
        $current = &$current[ $subpath ];
    }
}

$myArray = array();
create_array_path( array( 'level1', 'level2', 'level3' ), $myArray );
print_r( $myArray );
?>
This will output:
Array
(
    [level1] => Array
        (
            [level2] => Array
                (
                    [level3] =>
                )
        )
)
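For readers comparing approaches, the same path-creation idea can be sketched in a few lines in any language with nested hash maps. Here is a hypothetical Python analog (the function name is invented) that produces the same nested structure:

```python
# Hypothetical Python analog of create_array_path: walk the key path,
# creating nested dicts as needed, and seed the leaf with an empty string.
def create_dict_path(path, d):
    current = d
    for key in path[:-1]:
        current = current.setdefault(key, {})
    current.setdefault(path[-1], "")
    return d

my_array = {}
create_dict_path(["level1", "level2", "level3"], my_array)
print(my_array)  # {'level1': {'level2': {'level3': ''}}}
```

Like the PHP version, it only creates levels that don't already exist, so calling it twice with overlapping paths is harmless.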
A: The extra function call to isset() is almost guaranteed to have more overhead than any assignment. I would be extremely surprised if the second form is not faster.
A: Do you need an actual check to see if the key is there? With an assignment to a blank array the isset() will just slow the loop down. And unless you do a second pass with data manipulation I strongly advise against the isset check. This is population, not manipulation.
A: I am a newbie to PHP, but a combination of both could be written with the ternary operator:
$array[$key] = !isset($array[$key]) ? $val : $array[$key];
That's one way to go with it.
A: You can take a look at the PHP source code to see the difference. Didn't check whether this would be different in later versions of PHP, but it would seem in PHP3 the associative array functionality is in php3/php3_hash.c.
In the function _php3_hash_exists, the following things are done:
* key is hashed
* correct bucket found
* bucket walked, until correct item found or not
Function _php3_hash_add_or_update:
* hashed
* bucket found
* walked, existing overridden if existed
* if didn't exist, new one added
Therefore it would seem just setting it is faster, because there is just one function call and this hashing and bucket finding business will only get done once.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do you measure all the queries that flow through your software? In one of his blog articles, the proprietor of this web site posed this question to the reader, "You're automatically measuring all the queries that flow through your software, right?"
How do you do this? Every line of code that makes a query against the database is followed by a line of code that increments a counter? Or, are there tools that sit between your app and the DB and do this for you?
A: SQL Server Profiler is my tool of choice, but only for the DB end obviously.
It should be noted, this is for optimizing queries and performance, and debugging. This is not a tool to be left running all the time, as it can be resource intensive.
A: I don't know exactly what Jeff was trying to say, but I would guess that he expects you to use whatever query performance monitoring facility you have for your database.
Another approach is to use wrappers for database connections in your code. For example, in Java, assuming you have a DataSource that all of your classes use, you can write your own implementation of DataSource that uses an underlying DataSource to create Connection objects. Your DataSource should wrap those connections in your own Connection objects, which can keep track of the data that flows though them.
A: I have a C++ wrapper that I use for all my database work. That wrapper (in debug mode) basically does an EXPLAIN QUERY PLAN on every statement that it runs. If it gets back a response that an index is not being used, it ASSERTS. Great way to make sure indexes are used (but only for debug mode)
A: We just bought a software product called dynaTrace to do this. It uses byte code instrumentation (MSIL in our case since we use .Net but it does Java as well) to do this. It basically instruments around the methods we choose and around various framework methods to capture the time it takes for each method to execute.
In regards to database calls, it keeps track of each call made (through ADO.Net) and the parameters in the call, along with the execution time. You can then go from the DB call and walk through the execution path the program took to get there. It will show every method call (that you have instrumented) in the path. It is quite badass.
You might use this in a number of different ways but typically this would be used in some kind of load testing scenario with some other product providing load through the various paths of your system. Then you get a list of your DB calls under load and can look at them.
You can also evaluate not just the execution of one call but the count of them to prevent the death of a thousand cuts.
A: For Perl, DBI::Profile.
A: If your architecture is well designed, it should be fairly easy to intercept all data access calls and measure query execution time. A fairly easy way of doing this is by using an aspect around DB calls (if your language/framework supports aspect programming). Another way is to use a special driver that intercepts all calls, redirect to a real driver and measure query time execution.
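As a concrete sketch of the interception idea, here is a hypothetical Python wrapper around an SQLite cursor that counts every query and records its execution time. The class name and the stats list are invented for illustration; a real system would wrap whatever driver, DataSource, or connection pool it actually uses:

```python
import sqlite3
import time

# Hypothetical sketch: intercept every execute() call, count it, and
# record how long it took, while delegating everything else untouched.
class CountingCursor:
    def __init__(self, cursor, stats):
        self._cursor = cursor
        self._stats = stats

    def execute(self, sql, params=()):
        start = time.perf_counter()
        result = self._cursor.execute(sql, params)
        self._stats.append((sql, time.perf_counter() - start))
        return result

    def __getattr__(self, name):
        # delegate everything else (fetchone, fetchall, ...) unchanged
        return getattr(self._cursor, name)

stats = []
conn = sqlite3.connect(":memory:")
cur = CountingCursor(conn.cursor(), stats)
cur.execute("CREATE TABLE t (x INTEGER)")
cur.execute("INSERT INTO t VALUES (?)", (42,))
cur.execute("SELECT x FROM t")
print(len(stats), "queries measured")  # 3 queries measured
```

Because the application code only ever sees the wrapper, no call sites need to change - which is the same property the Java DataSource-wrapping approach above relies on.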
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do you specify a particular JRE for a Browser applet? I have a third-party applet that requires JRE v1.5_12 to work correctly. The user is installing JRE v1.6.07 or better. It used to be, with 1.5 and below, that I could have multiple JREs on the machine and specify which one to use - but with 1.6 that appears to be broken. How do I tell the browser I want to use v1.5_12 instead of the latest one installed?
A: For security reasons, you can no longer force it to use older JRE's. Say release 12 has a huge security hole, and everyone installs release 13 to patch it. Evil java applets could just say "run with release 12 please" and then carry out their exploits, rendering the patches useless.
Most likely you have some code with security holes that the newer JRE is blocking because it would pose a security risk. Fix your code - it should be pretty minor changes - and then you won't have to worry.
See this page for more info on the change.
A: The new applet engine (that will be shipped with 1.6u10 when Sun gets around to officially shipping it) gives you a tremendous amount of control in this area. It's going to take a while to get enough systems on 6u10 to where you can actually rely on the functionality (unless you are corporate) - but it is coming (seems like it's about 5 years too late).
Here's a JavaWorld article describing this at a very high level: article text
6u10 also has a deployment toolkit that provides super easy to use javascript snippets that you can include in your applet deployment pages. These snippets handle JRE version checking, user notification, JRE downloading on demand, and a number of other things that are otherwise a hassle (not impossible, just a pain). The deployment kit has been designed to fail gracefully, so it does amazing things if 6u10 or above is installed, and drops back to decent behavior for older JREs.
One really, really nice thing about the new applet engine is that it runs in a separate process space from the browser. This has a couple of very big advantages, including the ability to have multiple applets running in different versions of the JRE (yes, you can specify different required JREs, including restrictions on how old and how new of JRE you support - the applet engine will re-use JREs if it can, but it has the ability to start up a different one if it needs).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168706",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Questions when moving from MbUnit to MsTest Our team is looking to switch from using mbunit to mstest, but there appears to be some disconnect between the two.
Does anyone know how to simulate the CombinatorialTest and Factory attributes from mbunit when using mstest?
I think that is our only stumbling block before doing the switch.
A: As far as I'm aware, you basically need to write a test method that generates all of the combinations (or calls the factory and iterates through the items) and calls your original test (now no longer a test method) a bunch of times.
Unfortunately, these do not show up as individual tests in the results - they show up as just one test - so you have to be pretty explicit in your error output. This means that with this approach, as soon as one case fails it stops the rest (you can get around this by keeping a big list of results, but that's yet more overhead).
I'd think twice before going to mstest right now unless you have to - the lack of a test runner on a clean machine is killer, and it's neither extensible nor frequently updated.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Generate table relationship diagram from existing schema (SQL Server) Is there a way to produce a diagram showing existing tables and their relationships given a connection to a database?
This is for SQL Server 2008 Express Edition.
A: For SQL statements you can try Reverse Snowflake Joins. You can join the project on SourceForge or use the demo site at http://snowflakejoins.com/.
A: Try DBVis - download at https://www.dbvis.com/download - there is a pro version (not needed) and an open version that should suffice.
All you have to do is get the right JDBC database driver for SQL Server; the tool shows tables and references orthogonally, hierarchically, in a circle ;-) etc., just by pressing a single button. I have used the free version for years now.
A: Why don't you just use the database diagram functionality built into SQL Server?
A: Visio Professional has a database reverse-engineering feature if you create a database diagram. It's not free but is fairly ubiquitous in most companies and should be fairly easy to get.
Note that Visio 2003 does not play nicely with SQL2005 or SQL2008 for reverse engineering - you will need to get 2007.
A: DeZign for Databases should be able to do this just fine.
A: Yes, you can use SQL Server 2008 itself, but you need to install SQL Server Management Studio Express (if not installed). Just right-click on Database Diagrams and create a new diagram. Select the existing tables, and if you have specified the references in your tables properly, you will be able to see the complete diagram of the selected tables.
For further reference see Getting started with SQL Server database diagrams
A: SQLDeveloper can do this.
http://sqldeveloper.solyp.com/
A: SchemaCrawler for SQL Server can generate database diagrams, with the help of GraphViz. Foreign key relationships are displayed (and can even be inferred, using naming conventions), and tables and columns can be excluded using regular expressions.
A: MySQL WorkBench is free software and is developed by Oracle, you can import an SQL File or specify a database and it will generate an SQL Diagram which you can move around to make it more visually appealing.
It runs on GNU/Linux and Windows, and it's free and has a professional look.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "203"
} |
Q: What are some string encapsulation classes which specify both meaning and behavior for their contents? .NET has System.Uri for Uris and System.IO.FileInfo for file paths. I am looking for classes which are traditionally object oriented in that they specify both meaning and behavior for the string which is used in the object's construction. What other useful string encapsulation classes exist?
Things such as regular expressions and StringBuilders are useful for the gross manipulation of strings but they aren't what I'm looking for.
A: Maybe System.Security.SecureString for strings which you do not want to be available in public memory.
using (System.Security.SecureString password = new System.Security.SecureString())
{
password.AppendChar('s');
password.AppendChar('e');
password.AppendChar('c');
password.AppendChar('r');
password.AppendChar('e');
password.AppendChar('t');
password.MakeReadOnly();
}
A: System.Net.Mail.MailAddress someMailAddress = new System.Net.Mail.MailAddress("me@example.org", "John Doe");
System.Console.WriteLine(someMailAddress.Address); // me@example.org
System.Console.WriteLine(someMailAddress.User); // me
System.Console.WriteLine(someMailAddress.Host); // example.org
System.Console.WriteLine(someMailAddress.DisplayName); // John Doe
System.Console.WriteLine(someMailAddress); // "John Doe" <me@example.org>
Does not change too much in the behaviour of the string, but provides a quite nice way of saving a mail address in a type-safe way. Also, this object can be added to a mail message object. :)
A: Probably trivial, but there are also System.IO.DirectoryInfo and System.IO.Path
A: I've seen several projects storing Guids either as their string representation or as byte[] instead of using the native Guid class.
Guid id = Guid.NewGuid();
Console.WriteLine(id);
A: System.Text.StringBuilder
and
System.Text.RegularExpressions.Regex
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do you set a default value for a MySQL Datetime column? How do you set a default value for a MySQL Datetime column?
In SQL Server it's getdate(). What is the equivalent for MySQL? I'm using MySQL 5.x if that is a factor.
A: IMPORTANT EDIT:
It is now possible to achieve this with DATETIME fields since MySQL 5.6.5, take a look at the other post below...
Previous versions can't do that with DATETIME...
But you can do it with TIMESTAMP:
mysql> create table test (str varchar(32), ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP);
Query OK, 0 rows affected (0.00 sec)
mysql> desc test;
+-------+-------------+------+-----+-------------------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+-------------------+-------+
| str | varchar(32) | YES | | NULL | |
| ts | timestamp | NO | | CURRENT_TIMESTAMP | |
+-------+-------------+------+-----+-------------------+-------+
2 rows in set (0.00 sec)
mysql> insert into test (str) values ("demo");
Query OK, 1 row affected (0.00 sec)
mysql> select * from test;
+------+---------------------+
| str | ts |
+------+---------------------+
| demo | 2008-10-03 22:59:52 |
+------+---------------------+
1 row in set (0.00 sec)
mysql>
CAVEAT: If you define a column with DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, you will need to ALWAYS specify a value for this column or the value will automatically reset itself to "now()" on update. This means that if you do not want the value to change, your UPDATE statement must contain "[your column name] = [your column name]" (or some other value) or the value will become "now()". Weird, but true. I am using 5.5.56-MariaDB
A: For all who use the TIMESTAMP column as a solution i want to second the following limitation from the manual:
http://dev.mysql.com/doc/refman/5.0/en/datetime.html
"The TIMESTAMP data type has a range of '1970-01-01 00:00:01' UTC to '2038-01-19 03:14:07' UTC. It has varying properties, depending on the MySQL version and the SQL mode the server is running in. These properties are described later in this section. "
So this will obviously break your software in about 28 years.
I believe the only solution on the database side is to use triggers like mentioned in other answers.
A: Working fine with MySQL 8.x
CREATE TABLE `users` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`dateCreated` datetime DEFAULT CURRENT_TIMESTAMP,
`dateUpdated` datetime DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
A: While defining multi-line triggers one has to change the delimiter as semicolon will be taken by MySQL compiler as end of trigger and generate error.
e.g.
DELIMITER //
CREATE TRIGGER `MyTable_UPDATE` BEFORE UPDATE ON `MyTable`
FOR EACH ROW BEGIN
-- Set the update date
SET new.UpdateDate = now();
END//
DELIMITER ;
A: In version 5.6.5, it is possible to set a default value on a datetime column, and even make a column that will update when the row is updated. The type definition:
CREATE TABLE foo (
`creation_time` DATETIME DEFAULT CURRENT_TIMESTAMP,
`modification_time` DATETIME ON UPDATE CURRENT_TIMESTAMP
)
Reference:
http://optimize-this.blogspot.com/2012/04/datetime-default-now-finally-available.html
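Outside of MySQL itself, the default-to-now behavior is easy to experiment with. As a hypothetical illustration, SQLite accepts an almost identical DDL clause (this is SQLite syntax, shown only to demonstrate the concept, not MySQL itself):

```python
import sqlite3

# Demonstration of DEFAULT CURRENT_TIMESTAMP using SQLite, whose DDL
# here resembles MySQL 5.6.5+. Table and column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE foo ("
    " id INTEGER PRIMARY KEY,"
    " creation_time TEXT DEFAULT CURRENT_TIMESTAMP)"
)
conn.execute("INSERT INTO foo (id) VALUES (1)")
row = conn.execute("SELECT creation_time FROM foo").fetchone()
print(row[0])  # 'YYYY-MM-DD HH:MM:SS' - filled in automatically
```

The insert never mentions creation_time, yet the row comes back with a timestamp - the same behavior the MySQL 5.6.5+ DATETIME default provides.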
A: While you can't do this with DATETIME in the default definition, you can simply incorporate a select statement in your insert statement like this:
INSERT INTO Yourtable (Field1, YourDateField) VALUES('val1', (select now()))
Note the lack of quotes around the subquery - quoting it would insert a literal string instead of the current time.
For MySQL 5.5
A: Here is how to do it on MySQL 5.1:
ALTER TABLE `table_name` CHANGE `column_name` `column_name`
TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP;
I have no clue why you have to enter the column name twice. (The CHANGE syntax expects the old column name followed by the new one, which is what lets it rename columns.)
A: If you are trying to set default value as NOW(), I don't think MySQL supports that. In MySQL, you cannot use a function or an expression as the default value for any type of column, except for the TIMESTAMP data type column, for which you can specify the CURRENT_TIMESTAMP as the default.
A: I was able to solve this using this alter statement on my table that had two datetime fields.
ALTER TABLE `test_table`
CHANGE COLUMN `created_dt` `created_dt` TIMESTAMP NOT NULL DEFAULT '0000-00-00 00:00:00',
CHANGE COLUMN `updated_dt` `updated_dt` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;
This works as you would expect the now() function to work. Inserting nulls or omitting the created_dt and updated_dt fields results in a perfect timestamp value in both fields. Any update to the row changes the updated_dt. If you insert records via the MySQL Query Browser you need one more step: a trigger to handle the created_dt with a new timestamp.
CREATE TRIGGER trig_test_table_insert BEFORE INSERT ON `test_table`
FOR EACH ROW SET NEW.created_dt = NOW();
The trigger can be whatever you want I just like the naming convention [trig]_[my_table_name]_[insert]
A: I think it is simple in MySQL, since MySQL has an inbuilt function called now() which gives the current time (the time of the insert).
So your query should look similar to this:
CREATE TABLE defaultforTime(
`creation_time` DATETIME DEFAULT CURRENT_TIMESTAMP,
`modification_time` DATETIME default now()
);
Thank you.
A: If you set ON UPDATE CURRENT_TIMESTAMP, the column will take the current time whenever the row is updated.
CREATE TABLE bar(
`create_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
`update_time` TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
)
A: You can use triggers to do this type of stuff.
CREATE TABLE `MyTable` (
`MyTable_ID` int UNSIGNED NOT NULL AUTO_INCREMENT ,
`MyData` varchar(10) NOT NULL ,
`CreationDate` datetime NULL ,
`UpdateDate` datetime NULL ,
PRIMARY KEY (`MyTable_ID`)
)
;
CREATE TRIGGER `MyTable_INSERT` BEFORE INSERT ON `MyTable`
FOR EACH ROW BEGIN
-- Set the creation date
SET new.CreationDate = now();
-- Set the update date
SET new.UpdateDate = now();
END;
CREATE TRIGGER `MyTable_UPDATE` BEFORE UPDATE ON `MyTable`
FOR EACH ROW BEGIN
-- Set the update date
SET new.UpdateDate = now();
END;
A: For all those who lost heart trying to set a default DATETIME value in MySQL, I know exactly how you feel/felt. So here is is:
ALTER TABLE `table_name` CHANGE `column_name` `column_name` DATETIME NOT NULL DEFAULT 0
Carefully observe that I haven't added single quotes/double quotes around the 0
I'm literally jumping after solving this one :D
A: If you have already created the table then you can use
To change default value to current date time
ALTER TABLE <TABLE_NAME>
CHANGE COLUMN <COLUMN_NAME> <COLUMN_NAME> DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP;
To change default value to '2015-05-11 13:01:01'
ALTER TABLE <TABLE_NAME>
CHANGE COLUMN <COLUMN_NAME> <COLUMN_NAME> DATETIME NOT NULL DEFAULT '2015-05-11 13:01:01';
A: CREATE TABLE `testtable` (
`id` INT(10) NULL DEFAULT NULL,
`colname` DATETIME NULL DEFAULT '1999-12-12 12:12:12'
)
In the above query to create 'testtable', I used '1999-12-12 12:12:12' as the default value for the DATETIME column colname
A: MySQL 5.6 has fixed this problem.
ALTER TABLE mytable CHANGE mydate mydate datetime NOT NULL DEFAULT CURRENT_TIMESTAMP
A: MySQL (before version 5.6.5) does not allow functions to be used for default DateTime values. TIMESTAMP is not suitable due to its odd behavior and is not recommended for use as input data. (See MySQL Data Type Defaults.)
That said, you can accomplish this by creating a Trigger.
I have a table with a DateCreated field of type DateTime. I created a trigger on that table "Before Insert" and "SET NEW.DateCreated=NOW()" and it works great.
A: this is indeed terrible news.here is a long pending bug/feature request for this. that discussion also talks about the limitations of timestamp data type.
I am seriously wondering what is the issue with getting this thing implemented.
A: For me the trigger approach has worked the best, but I found a snag with the approach. Consider the basic trigger to set a date field to the current time on insert:
CREATE TRIGGER myTable_OnInsert BEFORE INSERT ON `tblMyTable`
FOR EACH ROW SET NEW.dateAdded = NOW();
This is usually great, but say you want to set the field manually via INSERT statement, like so:
INSERT INTO tblMyTable(name, dateAdded) VALUES('Alice', '2010-01-03 04:30:43');
What happens is that the trigger immediately overwrites your provided value for the field, and so the only way to set a non-current time is a follow up UPDATE statement--yuck! To override this behavior when a value is provided, try this slightly modified trigger with the IFNULL operator:
CREATE TRIGGER myTable_OnInsert BEFORE INSERT ON `tblMyTable`
FOR EACH ROW SET NEW.dateAdded = IFNULL(NEW.dateAdded, NOW());
This gives the best of both worlds: you can provide a value for your date column and it will take, and otherwise it'll default to the current time. It's still ghetto relative to something clean like DEFAULT GETDATE() in the table definition, but we're getting closer!
A: You can use now() to set the value of a datetime column, but keep in mind that you can't use that as a default value.
A: I'm running MySql Server 5.7.11 and this sentence:
ALTER TABLE table_name CHANGE date_column datetime NOT NULL DEFAULT '0000-00-00 00:00:00'
is not working. But the following:
ALTER TABLE table_name CHANGE date_column datetime NOT NULL DEFAULT '1000-01-01 00:00:00'
just works.
As a sidenote, it is mentioned in the mysql docs:
The DATE type is used for values with a date part but no time part. MySQL retrieves and displays DATE values in 'YYYY-MM-DD' format. The supported range is '1000-01-01' to '9999-12-31'.
even if they also say:
Invalid DATE, DATETIME, or TIMESTAMP values are converted to the “zero” value of the appropriate type ('0000-00-00' or '0000-00-00 00:00:00').
A: Use the following code
DELIMITER $$
CREATE TRIGGER bu_table1_each BEFORE UPDATE ON table1 FOR EACH ROW
BEGIN
SET new.datefield = NOW();
END $$
DELIMITER ;
A: If you are trying to set the default value to NOW(), MySQL supports that only if you change the type of the column to TIMESTAMP instead of DATETIME. TIMESTAMP can have the current date and time as its default. I think that will resolve your problem.
A: For instance, if I had a table named 'site' with a created_at and an updated_at column that were both DATETIME and needed a default value of now, I could execute the following SQL to achieve this.
ALTER TABLE `site` CHANGE `created_at` `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP;
ALTER TABLE `site` CHANGE `created_at` `created_at` DATETIME NULL DEFAULT NULL;
ALTER TABLE `site` CHANGE `updated_at` `updated_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP;
ALTER TABLE `site` CHANGE `updated_at` `updated_at` DATETIME NULL DEFAULT NULL;
The sequence of statements is important because a table cannot have two columns of type TIMESTAMP with default values of CURRENT_TIMESTAMP.
A: This is my trigger example:
/************ ROLE ************/
drop table if exists `role`;
create table `role` (
`id_role` bigint(20) unsigned not null auto_increment,
`date_created` datetime,
`date_deleted` datetime,
`name` varchar(35) not null,
`description` text,
primary key (`id_role`)
) comment='';
drop trigger if exists `role_date_created`;
create trigger `role_date_created` before insert
on `role`
for each row
set new.`date_created` = now();
A: You can resolve the default timestamp issue, but first consider which character set you are using. For example, if you choose utf8, that character set supports all languages, whereas latin1 supports only English. Next, if you are working on a project for a client you should know the client's time zone and select that zone. These steps are mandatory.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1019"
} |
Q: Rendering order in Firefox I am building the diagram component in JavaScript. It has two layers rendered separately: foreground and background.
To determine the required size of the background:
*
*render the foreground
*measure the height of the result
*render the foreground and the
background together
In code it looks like this:
var foreground = renderForegroundIntoString();
parentDiv.innerHTML = foreground;
var height = parentDiv.children[0].clientHeight;
var background = renderBackgroundIntoString(height);
parentDiv.innerHTML = foreground + background;
Using IE7, this is a piece of cake. However, Firefox2 is not really willing to render the parentDiv.innerHTML right away, therefore I cannot read out the foreground height.
When does Firefox execute the rendering and how can I delay my background generation till foreground rendering is completed, or is there any alternative way to determine the height of my foreground elements?
[Appended after testing Dan's answer (thanx Dan)]
Within the body of the callback method (called back by setTimeout(...)) I can see, the rendering of the innerHTML is still not complete.
A: You should never, ever rely on something you just inserted into the DOM being rendered by the next line of code. All browsers will group these changes together to some degree, and it can be tricky to work out when and why.
The best way to deal with it is to execute the second part in response to some kind of event. Though it doesn't look like there's a good one you can use in that situation, so failing that, you can trigger the second part with:
setTimeout(renderBackground, 0)
That will ensure the current thread is completed before the second part of the code is executed.
A: I don't think you want parentDiv.children[0] (children is not a valid property in FF3 anyway); instead you want parentDiv.childNodes[0], but note that this includes text nodes that may have no height. You could try looping, waiting for parentDiv's descendants to be rendered, like so:
function getRenderedHeight(parentDiv) {
    if (parentDiv.childNodes && parentDiv.childNodes.length > 0) {
        var i = 0;
        while (parentDiv.childNodes[i].nodeType == 3) { i++; }
        // Now parentDiv.childNodes[i] is your first non-text child
        // ... your other code here ...
        return parentDiv.childNodes[i].clientHeight;
    } else {
        // Not rendered yet - try again shortly
        setTimeout(function () { getRenderedHeight(parentDiv); }, 200);
    }
}
and then invoke by: getRenderedHeight(parentDiv) after setting the innerHTML.
Hope that gives some ideas, anyway.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I create a ColdFusion web service client that uses WS-Security? I've exposed several web services in our product using Java and WS-Security. One of our customers wants to consume the web service using ColdFusion. Does ColdFusion support WS-Security? Can I get around it by writing a Java client and using that in ColdFusion?
(I don't know much about ColdFusion).
A: I'm assuming you mean you need to pass the security in as part of the SOAP header. Here's a sample on how to connect to a .Net service. Same approach should apply w/ Java, just the url's would be different.
<cfset local.soapHeader = xmlNew()>
<cfset local.soapHeader.TheSoapHeader = xmlElemNew(local.soapHeader, "http://someurl.com/", "TheSoapHeader")>
<cfset local.soapHeader.TheSoapHeader.UserName.XmlText = "foo">
<cfset local.soapHeader.TheSoapHeader.UserName.XmlAttributes["xsi:type"] = "xsd:string">
<cfset local.soapHeader.TheSoapHeader.Password.XmlText = "bar">
<cfset local.soapHeader.TheSoapHeader.Password.XmlAttributes["xsi:type"] = "xsd:string">
<cfset theWebService = createObject("webservice","http://webserviceUrl.com/Webservice.asmx?WSDL")>
<cfset addSOAPRequestHeader(theWebService, "ignoredNameSpace", "ignoredName", local.soapHeader, false)>
<cfset aResponse = theWebService.SomeMethod(arg1)>
Hope this is what you needed.
A: This is probably more accurate to produce the 'simple' xml soap header. The example above is missing a few lines.
Local['soapHeader'] = xmlNew();
Local['soapHeader']['UsernameToken'] = xmlElemNew(local.soapHeader, "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd", "UsernameToken");
Local['soapHeader']['UsernameToken']['username'] = xmlElemNew(local.soapHeader, "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd", "username");
Local['soapHeader']['UsernameToken']['username'].XmlText = Arguments.szUserName;
Local['soapHeader']['UsernameToken']['username'].XmlAttributes["xsi:type"] = "xsd:string";
Local['soapHeader']['UsernameToken']['password'] = xmlElemNew(local.soapHeader, "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd", "password");
Local['soapHeader']['UsernameToken']['password'].XmlText = Arguments.szPassword;
Local['soapHeader']['UsernameToken']['password'].XmlAttributes["xsi:type"] = "xsd:string";
addSOAPRequestHeader(ws, "ignoredNameSpace", "ignoredName", Local.soapHeader, false);
A: I've never done any ws-security, and don't know if ColdFusion can consume it or not, but to answer your secondary question:
Can I get around it by writing a java client and using that in coldfusion?
Yes, absolutely. ColdFusion can easily use Java objects and methods.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Where can I find a tutorial to get started learning jQuery? Where is a good place to get started learning how to use jQuery? It seems to be all the rage nowadays. I know some basics of JavaScript but I'm by no means an expert.
A: Here's a 4 part (so far) series on jQuery Basics.
A: The jQuery web site has some nice tutorials itself.
A: As well as the jQuery in Action book that's already been mentioned, there's Learning jQuery and jQuery Reference Guide from Packt. They work well as a pair: the Learning book has plenty of examples that they walk through in some detail - once you know what's possible, the Reference Guide helps you find the right method from the fairly comprehensive options available.
A: If you're into video tutorials, check out the jQuery for Absolute Beginners Series. It really helped to make jQuery click with me, more than traditional tutorials.
A: Have you looked through jQuery.com? Their documentation is really excellent.
A: Officially from jQuery
http://docs.jquery.com/Tutorials
Or try anything on this site that compiles a bunch of jQuery learning material:
http://www.noupe.com/tutorial/51-best-of-jquery-tutorials-and-examples.html
A: I can wholeheartedly recommend Manning's 'jQuery in Action'. VERY readable, and does a great job of explaining the framework-specific abstractions.
A: There are a lot of them out there, google it, and the jQuery official site itself has a huge list of tutorials and excellent documentation with working examples. If that's not enough, try http://jqueryfordesigner.com, http://bassistance.de. I have also written some at my blog, http://www.chazzuka.com/blog.
A: I ran across a site with JQuery videos recently.
There was also a Hanselminutes podcast on the subject of jQuery.
A: I've found the links from "The Complete jQuery Resource List for You to Become an Almighty Developer" to be very helpful.
A: There are lots of articles on jQuery on the web. Here's one for a beginner:
http://www.dotnetcube.com/post/Getting-started-with-JQuery-in-ASPNET.aspx
A: I started with 15 days of jquery
A: I've just run into this one. I recommend it because it explains the basics and gives a solid foundation that you can build on.
http://blog.mikecouturier.com/2010/02/beginning-with-jquery-solid-foundation_22.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: How do you access ViewState collection from PreviousPage on cross-page postback? In ASP.net 2.0, the PreviousPage property of a web page does not have a ViewState collection. I want to use this collection to transfer information between pages.
A: Use HttpContext.Current.Items instead...ViewState is only good for the page it is on.
A: View State is exclusive to the page.
If you want to transfer items,
*
*you can persist the data in a database, file, forms auth ticket or other cookie (Dont use Session or HttpContext.Current.Cache if you can help it)
*do a cross page post - from your first page, post back to the second page (and get the details from HttpContext.Current.Request.Form[] collection)
*put the values in a query string
A: You can avoid using the PreviousPageType directive by using some base page class that can hold your object.
For example you have class
public class BaseCrossPage:System.Web.UI.Page
{
public List<Guid> Invitees = new List<Guid>();
}
So if the first page derives from this class
public partial class Default : BaseCrossPage
{
protected void Page_Load(object sender, EventArgs e)
{
this.Invitees = LoadInvitees();
}
}
Then the page that you have posted to can access that object, assuming that the previous page derives from BaseCrossPage:
public partial class secondPage : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
BaseCrossPage p = (BaseCrossPage)PreviousPage;
List<Guid> Invitees = p.Invitees;
}
}
kind of "viewstate" between pages...
A: You can't directly. (See http://msdn2.microsoft.com/en-us/library/ms178139(vs.80).aspx)
Here's what you can do -
Create public properties on the first page exposing the information you want to share. On the second page, set the PreviousPageType to the first page in the header of aspx file:
<%@ PreviousPageType VirtualPath="~/firstpage.aspx" %>
Then, get the values of these properties in the Load event of the second page:
If (Not MyBase.IsPostBack) Then
_someValue = Me.PreviousPage.SomeValue
End If
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What to use for membership in ASP.NET I'm not very experienced at using ASP.NET, but I've used the built-in membership providers for a simple WebForms application, and I found them a PITA when trying to extend the way they work (adding/removing a few fields and redoing controls accordingly).
Now I'm preparing for an MVC (ASP.NET MVC or Monorail based) project, and I'm thinking - is there a better way to handle users? Have them log in/log out, keep certain parts of the site available to certain users (like logged-in users), or something similar to the "share this with friends" feature of many social networking sites, where you can designate users that have access to certain things.
How best to achieve this in a way that will scale well?
I guess, I wasn't clear on that. To rephrase my question:
Would you use standard ASP.NET membership provider for a web-facing app, or something else (what)?
A: The Membership Provider in ASP.NET is very handy and extensible. It's simple to use the "off the shelf" features like Active Directory, SQL Server, and OpenLDAP. The main advantage is the ability to not reinvent the wheel. If your needs are more nuanced than that, you can build your own provider by extending it and overriding the methods that the ASP.NET controls use.
I am building my own Custom Membership Provider for an e-commerce website. Below are some resources for more information on Membership Providers. I asked myself the same questions when I started that project.
These resources were useful to me for my decision:
*
*Writing a Custom Membership Provider - DevX
*How do I create a Custom Membership Provider - ASP.NET, Microsoft
*Implementing a Membership Provider - MSDN
*Examining ASP.NET 2.0's Membership, Roles, and Profile - 4GuysFromRolla
*Create Custom Membership Provider for ASP.NET Website Security - David Hayden
*Setting up a Custom Membership Provider - Channel 9
I personally don't think there is a need to use something other than the built-in stuff unless you either want to abuse yourself or your needs are impossible to satisfy with the built-in functionality.
A: Have you considered using ActiveDirectory for this? Or, perhaps, OpenLDAP? You can manage each user's groups, permissions, 'authority', and so on.
A: It depends.
If it's an internal application, Active Directory or OpenLDAP might be the way to go.
If it's a public application, I suggest to look at aspnet_regsql. You will be able to set up a database with authentication in no time.
A:
keep certain parts of the site available to certain users (like logged
in users, or something similar to "share this with friends" feature of
many social networking sites
I guess you must custom-code that part yourself.
I also do not like the ASP.NET Membership and custom-code my membership needs...
A nice membership provider is really a missing piece on the ASP.NET side...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168827",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Using Database entries to dynamically create a visio diagram Is this possible?
We have a Configuration Management Database that stores information such as our servers, what datacentre they're stored in, applications that reside on them, as well as interfaces that send data from one application to another.
We would like to use Visio to connect to our SQL 2005 database, and automatically generate a flow diagram that details these dependancies and relationships.
So again - is this possible? If so, does anyone know of some documentation that details how to do this?
A: Is this what you are looking out for?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Database Design: A proper table design for large number of column values I wish to perform an experiment many different times. After every trial, I am left with a "large" set of output statistics -- let's say, 1000. I would like to store the outputs of my experiments in a table, but what's the best way...?
Option 1
Have a table with 1000 columns. Seems like a bad idea. What if the number of statistics one day exceeds the maximum number of columns?
Option 2
Have a table with three columns. Let's say, ID, StatisticType, and StatisticValue. That way, you can have as many statistics as you want. However, reading a single experiment's statistics becomes more complicated. Moreover, what if different statistics are different data types?
Any suggestions?
A: Option 2, with ID, TrialID, StatisticID, StatisticValue
With proper indexing, it will perform fairly well (you can use PIVOT to get the values out on columns fairly easily in SQL Server 2005).
When the statistics are different datatypes, the problem becomes more interesting, but in many cases, I just up-size the datatype (sometimes ints just end up in the money field). For other non-compatible types, the best design in my mind is really separate tables for each type, but I've also seen multiple columns or a free-form text column.
A: I second Cody's answer (here), with some additional thoughts and explanation.
The key of the table will be trialID, statisticType. There will be one row for each statistic for each trial, and 1000 rows for each trial. To get the values for a single experiment, select the rows for the specific trialID (as shown by matli).
You could add a "Trial Master" table that has a single row for each trial (trialID as key) with relevant information (date, time, comments, person ...) about that particular trial. This will allow grouping and analysis based on trial attributes: for instance, did morning trials perform differently than afternoon trials, or did trials by Tarzan perform differently than trials by Jane?
You might also add a "Stat Master" table that has a row for each statisticType and that contains attributes about the statistic. This could be valuable if the various stats have different attributes, or if you want to group certain stats.
Have fun!
A: Columns in relational databases are a good place to store data that is referenced in searches, ordering and other information processing. If you're just going to store a large amount of values, you can use some other format, like XML, and store them all in a single column. XML will give you both readability, maintainability, flexibility and maybe even some searchability (SQL Server 2005+) in this case.
A: You can have one table for statistic types, including their datatype and then a separate table for every datatype, e.g., NumericStats, TextStats, DateTimeStats, which all have a foreign key to the StatisticTypes table.
A: If your DBMS offers an XML datatype, you may want to consider it.
Pros:
*
*Fetch all output statistics from a trial from one row
*With the right schema, the number of statistics can differ from trial to trial
*Most DBMSs with XML compress your data nicely
Cons:
*
*Ties your implementation to a particular DBMS
*Not as easy to query your results
Cheers.
A: It doesn't matter. Since you haven't mentioned what you plan to use the data for, how you store it is pretty much meaningless. You could store it in CSV, and meet your requirements (which were, basically, how do I store 1000 values).
The queries you wish to run against this data, and the domain that you are modeling makes all the difference in the world.
A: Three columns: ID, Experiment and Value. It's not that complicated to get the result from one experiment, for example: SELECT * FROM table WHERE Experiment = 5;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How should I convert Java code to C# code? I'm porting a Java library to C#. I'm using Visual Studio 2008, so I don't have the discontinued Microsoft Java Language Conversion Assistant program (JLCA).
My approach is to create a new solution with a similar project structure to the Java library, and to then copy the java code into a c# file and convert it to valid c# line-by-line. Considering that I find Java easy to read, the subtle differences in the two languages have surprised me.
Some things are easy to port (namespaces, inheritance etc.) but some things have been unexpectedly different, such as visibility of private members in nested classes, overriding virtual methods and the behaviour of built-in types. I don't fully understand these things and I'm sure there are lots of other differences I haven't seen yet.
I've got a long way to go on this project. What rules of thumb can I apply during this conversion to manage the language differences correctly?
A: You're doing it in the only sane way you can... the biggest help will be this document from Dare Obasanjo that lists the differences between the two languages:
http://www.25hoursaday.com/CsharpVsJava.html
BTW, change all getter and setter methods into properties... No need to have the C# library function just the same as the Java library unless you are going for perfect interface compatibility.
A: Couple other options worth noting:
*
*J# is Microsoft's Java language
implementation on .NET. You can
access Java libraries (up to version
1.4*, anyways).
*actually Java 1.1.4 for java.io/lang,
and 1.2 for java.util + keep in mind that J# end of
life is ~ 2015-2017 for J# 2.0 redist
*Mono's IKVM also runs Java on
the CLR, with access to other .NET
programs.
*Microsoft Visual Studio 2005 comes
with a "Java language conversion
assistant" that converts Java
programs to C# programs
automatically for you.
A: One more quick-and-dirty idea: you could use IKVM to convert the Java jar to a .NET assembly, then use Reflector--combined with the FileDisassembler Add-in--to disassemble it into a Visual C# project.
(By the way, I haven't actually used IKVM--anyone care to vouch that this process would work?)
A: If you have a small amount of code then a line by line conversion is probably the most efficient.
If you have a large amount of code I would consider:
*
*Looking for a product that does the conversation for you.
*Writing a script (Ruby or Perl might be a good candidate) to do the conversion for you - at least the monotonous stuff! It could be a simple search/replace for keyword differences and renaming of files. Gives you more time/fingers to concentrate on the harder stuff.
A: I'm not sure a line-by-line conversion is really the best way, especially if the obstacles become overwhelming. Of course the Java code gives you a guideline and the basic structure, but I think in the end the most important thing is that the library provides the same functionality as it does in Java.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Color scaling function I am trying to visualize some values on a form. They range from 0 to 200 and I would like the ones around 0 be green and turn bright red as they go to 200.
Basically the function should return color based on the value inputted. Any ideas ?
A: int red = (int)((float)val / 200 * 255);
int green = (int)((float)(200 - val) / 200 * 255);
int blue = 0;
return (red << 16) + (green << 8) + blue; // parentheses needed: + binds tighter than <<
A: You don't say in what environment you're doing this. If you can work with HSV colors, this would be pretty easy to do by setting S = 100 and V = 100, and determining H by:
H = 120 - 0.6 * value
Converting from HSV to RGB is also reasonably easy.
[EDIT] Note: in contrast to some other proposed solutions, this will change color green -> yellow -> orange -> red.
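As a sketch of this approach in Python (the standard colorsys module handles the HSV-to-RGB conversion; the 0-200 input range comes from the question, the function name is mine):

```python
import colorsys

def value_to_rgb(value, max_value=200):
    """Map 0..max_value onto a green -> yellow -> orange -> red ramp
    by sliding the hue from 120 degrees (green) down to 0 (red)."""
    hue = (120.0 - 120.0 * value / max_value) / 360.0  # colorsys wants 0..1
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)       # S = V = 100%
    return (int(r * 255), int(g * 255), int(b * 255))

print(value_to_rgb(0))    # green: (0, 255, 0)
print(value_to_rgb(100))  # yellow midpoint: (255, 255, 0)
print(value_to_rgb(200))  # red: (255, 0, 0)
```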
A: Pick a green that you like (RGB1 = #00FF00, e.g.) and a Red that you like (RGB2 = #FF0000, e.g.) and then calculate the color like this
R = R1 * (200-i)/200 + R2 * i/200
G = G1 * (200-i)/200 + G2 * i/200
B = B1 * (200-i)/200 + B2 * i/200
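The same per-channel blend in Python (a sketch; the 0-200 range and the green/red endpoints are the question's, the helper name is mine):

```python
def blend(i, maximum, start, end):
    """Blend each RGB channel linearly: start weighted by (maximum - i),
    end weighted by i, matching the per-channel formula above."""
    return tuple(
        round(s * (maximum - i) / maximum + e * i / maximum)
        for s, e in zip(start, end)
    )

GREEN, RED = (0, 255, 0), (255, 0, 0)
print(blend(0, 200, GREEN, RED))    # (0, 255, 0)
print(blend(50, 200, GREEN, RED))   # (64, 191, 0)
print(blend(200, 200, GREEN, RED))  # (255, 0, 0)
```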
A: For the most controllable and accurate effect, you should use the HSV color space. With HSV, you can easily scale Hue, Saturation and/or Brightness separately from each other. Then, you do the transformation to RGB.
A: extending upon @tzot's code... you can also set up a mid-point in between the start and end points, which can be useful if you want a "transition color"!
//comment: s = start_triplet, m = mid_triplet, e = end_triplet
function transition3midpoint(value, maximum, s, m, e):
mid = maximum / 2
if value < mid
return transition3(value, mid, s, m)
else
return transition3(value - mid, mid, m, e)
A: Basically, the general method for smooth transition between two values is the following function:
function transition(value, maximum, start_point, end_point):
return start_point + (end_point - start_point)*value/maximum
That given, you define a function that does the transition for triplets (RGB, HSV etc).
function transition3(value, maximum, (s1, s2, s3), (e1, e2, e3)):
r1= transition(value, maximum, s1, e1)
r2= transition(value, maximum, s2, e2)
r3= transition(value, maximum, s3, e3)
return (r1, r2, r3)
Assuming you have RGB colours for the s and e triplets, you can use the transition3 function as-is. However, going through the HSV colour space produces more "natural" transitions. So, given the conversion functions (stolen shamelessly from the Python colorsys module and converted to pseudocode :):
function rgb_to_hsv(r, g, b):
maxc= max(r, g, b)
minc= min(r, g, b)
v= maxc
if minc == maxc then return (0, 0, v)
diff= maxc - minc
s= diff / maxc
rc= (maxc - r) / diff
gc= (maxc - g) / diff
bc= (maxc - b) / diff
if r == maxc then
h= bc - gc
else if g == maxc then
h= 2.0 + rc - bc
else
h = 4.0 + gc - rc
h = (h / 6.0) % 1.0 //comment: this calculates only the fractional part of h/6
return (h, s, v)
function hsv_to_rgb(h, s, v):
if s == 0.0 then return (v, v, v)
i= int(floor(h*6.0)) //comment: floor() should drop the fractional part
f= (h*6.0) - i
p= v*(1.0 - s)
q= v*(1.0 - s*f)
t= v*(1.0 - s*(1.0 - f))
if i mod 6 == 0 then return v, t, p
if i == 1 then return q, v, p
if i == 2 then return p, v, t
if i == 3 then return p, q, v
if i == 4 then return t, p, v
if i == 5 then return v, p, q
//comment: 0 <= i <= 6, so we never come here
, you can have code like the following:
start_triplet= rgb_to_hsv(0, 255, 0) //comment: green converted to HSV
end_triplet= rgb_to_hsv(255, 0, 0) //comment: accordingly for red
maximum= 200
… //comment: value is defined somewhere here
rgb_triplet_to_display= hsv_to_rgb(transition3(value, maximum, start_triplet, end_triplet))
A: Looking through this wikipedia article I personally would pick a path through a color space, and map the values onto that path.
But that's a straight function. I think you might be better suited to a javascript color chooser you can find with a quick color that will give you the Hex, and you can store the Hex.
A: If you use linear ramps for Red and Green values as Peter Parker suggested, the color for value 100 will basically be puke green (127, 127, 0). You ideally want it to be a bright orange or yellow at that midpoint. For that, you can use:
Red = min(value / 100, 1) * 255
Green = (1 - max((value - 100) / 100, 0)) * 255
Blue = 0
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Is there a universally accepted value for no value? I've seen N/A (always caps) used most often but I don't know if there is a standard. My data will be seen on Google so I would like to use the most recognized value.
A: It all depends on what you are trying to tell the user when there is no value.
For example, does no value mean that the data isn't available yet, but will be later? Or is the data point not applicable to the current record?
I would choose a value that imparts the most information.
A: N/A means "Not Available" or "Not Applicable". I guess it's the most universally accepted term. But it really depends on the context if that's the best term.
A: 'null' is a common term I've seen quite often, especially when it involves programming.
A: N/A means Not Available. "None" can work as well.
Also you really shouldn't rate someone down unless you know what you are talking about.
http://en.wikipedia.org/wiki/N/A
(Or just google N/A for 100 other references)
A: Actually, I believe that N/A means Not Applicable and is often used when filling out forms.
I don't think there is a generally accepted standard for no value. It depends on the problem domain.
For instance, null is common in the database and programming language arena.
In other cases, you might use "No value", "Empty", or "Blank".
A: N/A is used for when there can't be a value, as the field/property does not apply in the context of the data item.
Usually, when the field/property applies in the context of the data item, but it does not have a value, people either leave it blank. (Or put (empty), though that's certainly less often)
A: N/A also means Not Available or Not Applicable, so it is not a good idea. NULL is only used in databases, so regular users would not understand that. Best idea is to leave field blank or use a dash: -
It is also a good idea to mark the space where value should be shown with a different color. If you decide to leave it blank, you use a different background color. If you use - (or ---) you can then just use a different color for that character and have the same background. It all depends on your use case: where exactly do you need to display it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How to get 'Next Page' link with Scrubyt I'm trying to use Scrubyt to get the details from this page http://www.nuffieldtheatre.co.uk/cn/events/event_listings.php?section=events. I've managed to get the titles and detail URLs from the list, but I can't use next_page to get the scraper to go to the next page. I assume that's because I'm not using the correct pattern for the next page link. I tried the string "Next Page", and I've also tried the XPath. Any other ideas?
The code is below:
require 'rubygems'
require 'scrubyt'
nuffield_data = Scrubyt::Extractor.define do
fetch 'http://www.nuffieldtheatre.co.uk/cn/events/event_listings.php?section=events'
event do
title 'The Coast of Mayo'
#url "href", :type => :attribute
link_url
end
next_page "Next Page", :limit => 2
end
nuffield_data.to_xml.write($stdout,1)
A: Try this with a slightly different URL:
fetch 'http://www.nuffieldtheatre.co.uk/cn/events/event_listings.php'
Scrubyt seems to be having issues with the "?section=events" query string on the end of the URL.
When it looks for the next page it is trying to return this URL:
http://www.nuffieldtheatre.co.uk/cn/events/?pageNum_rsSearch=1&totalRows_rsSearch=39&section=events
instead of:
http://www.nuffieldtheatre.co.uk/cn/events/event_listings.php?pageNum_rsSearch=1&totalRows_rsSearch=39&section=events
Removing the query string on the end of the URL seems to fix this - you might want to file this as a bug.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Obfuscate / Mask / Scramble personal information I'm looking for a homegrown way to scramble production data for use in development and test. I've built a couple of scripts that make random social security numbers, shift birth dates, scramble emails, etc. But I've come up against a wall trying to scramble customer names. I want to keep real names so we can still use or searches so random letter generation is out. What I have tried so far is building a temp table of all last names in the table then updating the customer table with a random selection from the temp table. Like this:
DECLARE @Names TABLE (Id int IDENTITY(1,1),[Name] varchar(100))
/* Scramble the last names (randomly pick another last name) */
INSERT @Names SELECT LastName FROM Customer ORDER BY NEWID();
WITH [Customer ORDERED BY ROWID] AS
(SELECT ROW_NUMBER() OVER (ORDER BY NEWID()) AS ROWID, LastName FROM Customer)
UPDATE [Customer ORDERED BY ROWID] SET LastName=(SELECT [Name] FROM @Names WHERE ROWID=Id)
This worked well in test, but completely bogs down dealing with larger amounts of data (>20 minutes for 40K rows)
All of that to ask, how would you scramble customer names while keeping real names and the weight of the production data?
UPDATE: Never fails, you try to put all the information in the post, and you forget something important. This data will also be used in our sales & demo environments which are publicly available. Some of the answers are what I am attempting to do, to 'switch' the names, but my question is literally, how to code in T-SQL?
A: I use generatedata. It is an open source php script which can generate all sorts of dummy data.
A: A very simple solution would be to ROT13 the text.
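For illustration, ROT13 in Python (the standard codecs module ships a rot_13 codec; note that ROT13 is trivially reversible, so it only hides data from casual eyes):

```python
import codecs

def rot13(text):
    # rotate each letter 13 places; applying it twice restores the input
    return codecs.encode(text, "rot_13")

print(rot13("Smith"))         # Fzvgu
print(rot13(rot13("Smith")))  # Smith
```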
A better question may be why you feel the need to scramble the data? If you have an encryption key, you could also consider running the text through DES or AES or similar. Thos would have potential performance issues, however.
A: When doing something like that I usually write a small program that first loads a lot of names and surnames in two arrays, and then just updates the database using a random name/surname from the arrays. It works really fast even for very big datasets (200,000+ records)
A: I use a method that changes characters in the name to other characters that are in the same "range" of usage frequency in English names. Apparently, the distribution of characters in names is different than it is for normal conversational English. For example, "x" and "z" occur 0.245% of the time, so they get swapped. At the other extreme, "w" is used 5.5% of the time, "s" 6.86% and "t", 15.978%. I change "s" to "w", "t" to "s" and "w" to "t".
I keep the vowels "aeio" in a separate group so that a vowel is only replaced by another vowel. Similarly, "q", "u" and "y" are not replaced at all. My grouping and decisions are totally subjective.
I ended up with 7 different "groups" of 2-5 characters, based mostly on frequency. Characters within each group are swapped with other chars in that same group.
The net result is names that kinda look like they might be names, but from "not around here".
Original name Morphed name
Loren Nimag
Juanita Kuogewso
Tennyson Saggywig
David Mijsm
Julie Kunewa
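The swapping idea can be sketched in Python like this (the groups below are illustrative stand-ins, not the author's exact groupings, so the morphed output differs from the table above):

```python
# Sketch of the frequency-group substitution: each letter maps to the
# next letter in its group (wrapping), so vowels stay vowels and rare
# letters swap with other rare letters. "q", "u" and "y" are untouched.
GROUPS = ["xz", "kjv", "gnl", "rdm", "fhpbc", "wst", "eioa"]
TABLE = {ch: g[(i + 1) % len(g)] for g in GROUPS for i, ch in enumerate(g)}

def morph(name):
    out = []
    for c in name:
        repl = TABLE.get(c.lower(), c.lower())
        out.append(repl.upper() if c.isupper() else repl)
    return "".join(out)

print(morph("David"))  # Mekom
```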
Here's the SQL I use, which includes a "TitleCase" function. There are 2 different versions of the "morphed" name based on different frequencies of letters I found on the web.
-- from https://stackoverflow.com/a/28712621
-- Convert and return param as Title Case
CREATE FUNCTION [dbo].[fnConvert_TitleCase] (@InputString VARCHAR(4000) )
RETURNS VARCHAR(4000)AS
BEGIN
DECLARE @Index INT
DECLARE @Char CHAR(1)
DECLARE @OutputString VARCHAR(255)
SET @OutputString = LOWER(@InputString)
SET @Index = 2
SET @OutputString = STUFF(@OutputString, 1, 1,UPPER(SUBSTRING(@InputString,1,1)))
WHILE @Index <= LEN(@InputString)
BEGIN
SET @Char = SUBSTRING(@InputString, @Index, 1)
IF @Char IN (' ', ';', ':', '!', '?', ',', '.', '_', '-', '/', '&','''','(','{','[','@')
IF @Index + 1 <= LEN(@InputString)
BEGIN
IF @Char != '''' OR UPPER(SUBSTRING(@InputString, @Index + 1, 1)) != 'S'
SET @OutputString = STUFF(@OutputString, @Index + 1, 1,UPPER(SUBSTRING(@InputString, @Index + 1, 1)))
END
SET @Index = @Index + 1
END
RETURN ISNULL(@OutputString,'')
END
Go
-- 00.045 x 0.045%
-- 00.045 z 0.045%
--
-- Replace(Replace(Replace(TS_NAME,'x','#'),'z','x'),'#','z')
--
-- 00.456 k 0.456%
-- 00.511 j 0.511%
-- 00.824 v 0.824%
-- kjv
-- Replace(Replace(Replace(Replace(TS_NAME,'k','#'),'j','k'),'v','j'),'#','v')
--
-- 01.642 g 1.642%
-- 02.284 n 2.284%
-- 02.415 l 2.415%
-- gnl
-- Replace(Replace(Replace(Replace(TS_NAME,'g','#'),'n','g'),'l','n'),'#','l')
--
-- 02.826 r 2.826%
-- 03.174 d 3.174%
-- 03.826 m 3.826%
-- rdm
-- Replace(Replace(Replace(Replace(TS_NAME,'r','#'),'d','r'),'m','d'),'#','m')
--
-- 04.027 f 4.027%
-- 04.200 h 4.200%
-- 04.319 p 4.319%
-- 04.434 b 4.434%
-- 05.238 c 5.238%
-- fhpbc
-- Replace(Replace(Replace(Replace(Replace(Replace(TS_NAME,'f','#'),'h','f'),'p','h'),'b','p'),'c','b'),'#','c')
--
-- 05.497 w 5.497%
-- 06.686 s 6.686%
-- 15.978 t 15.978%
-- wst
-- Replace(Replace(Replace(Replace(TS_NAME,'w','#'),'s','w'),'t','s'),'#','t')
--
--
-- 02.799 e 2.799%
-- 07.294 i 7.294%
-- 07.631 o 7.631%
-- 11.682 a 11.682%
-- eioa
-- Replace(Replace(Replace(Replace(Replace(TS_NAME,'e','#'),'i','ew'),'o','i'),'a','o'),'#','a')
--
-- -- dont replace
-- 00.222 q 0.222%
-- 00.763 y 0.763%
-- 01.183 u 1.183%
-- Obfuscate a name
Select
ts_id,
Cast(ts_name as varchar(42)) as [Original Name],
Cast(dbo.fnConvert_TitleCase(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(TS_NAME,'x','#'),'z','x'),'#','z'),'k','#'),'j','k'),'v','j'),'#','v'),'g','#'),'n','g'),'l','n'),'#','l'),'r','#'),'d','r'),'m','d'),'#','m'),'f','#'),'h','f'),'p','h'),'b','p'),'c','b'),'#','c'),'w','#'),'s','w'),'t','s'),'#','t'),'e','#'),'i','ew'),'o','i'),'a','o'),'#','a')) as VarChar(42)) As [morphed name] ,
Cast(dbo.fnConvert_TitleCase(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(TS_NAME,'e','t'),'~','e'),'t','~'),'a','o'),'~','a'),'o','~'),'i','n'),'~','i'),'n','~'),'s','h'),'~','s'),'h','r'),'r','~'),'d','l'),'~','d'),'l','~'),'m','w'),'~','m'),'w','f'),'f','~'),'g','y'),'~','g'),'y','p'),'p','~'),'b','v'),'~','b'),'v','k'),'k','~'),'x','~'),'j','x'),'~','j')) as VarChar(42)) As [morphed name2]
From
ts_users
;
A: Why not just use some sort of Random Name Generator?
A: Use a temporary table instead and the query is very fast. I just ran it on 60K rows in 4 seconds. I'll be using this one going forward.
CREATE TABLE #Names
(Id int IDENTITY(1,1),[Name] varchar(100))
/* Scramble the last names (randomly pick another last name) */
INSERT #Names
SELECT LastName
FROM Customer
ORDER BY NEWID();
WITH [Customer ORDERED BY ROWID] AS
(SELECT ROW_NUMBER() OVER (ORDER BY NEWID()) AS ROWID, LastName FROM Customer)
UPDATE [Customer ORDERED BY ROWID]
SET LastName=(SELECT [Name] FROM #Names WHERE ROWID=Id)
DROP TABLE #Names
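The idea behind both versions of this script, stated language-neutrally: permute the column so every real value survives (preserving the weight of the production data) but is detached from its original row. A Python sketch of that core operation:

```python
import random

def scramble_column(values, seed=None):
    """Return the same values in random order: the frequency
    distribution survives, but the row association does not."""
    rng = random.Random(seed)       # seed only to make runs repeatable
    shuffled = list(values)         # copy; leave the source untouched
    rng.shuffle(shuffled)
    return shuffled

names = ["Smith", "Jones", "Smith", "Garcia"]
scrambled = scramble_column(names, seed=42)
print(scrambled)                    # a permutation of names
assert sorted(scrambled) == sorted(names)  # same multiset of names
```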
A: The following approach worked for us; let's say we have 2 tables, Customers and Products:
CREATE FUNCTION [dbo].[GenerateDummyValues]
(
@dataType varchar(100),
@currentValue varchar(4000)=NULL
)
RETURNS varchar(4000)
AS
BEGIN
IF @dataType = 'int'
BEGIN
Return '0'
END
ELSE IF @dataType = 'varchar' OR @dataType = 'nvarchar' OR @dataType = 'char' OR @dataType = 'nchar'
BEGIN
Return 'AAAA'
END
ELSE IF @dataType = 'datetime'
BEGIN
Return Convert(varchar(2000),GetDate())
END
-- you can add more checks, add complicated logic etc
Return 'XXX'
END
The above function will help in generating different data based on the data type coming in.
Now, for each column of each table which does not have the word "id" in it, use the following query to generate further queries to manipulate the data:
select 'select ''update '' + TABLE_NAME + '' set '' + COLUMN_NAME + '' = '' + '''''''' + dbo.GenerateDummyValues( Data_type,'''') + '''''' where id = '' + Convert(varchar(10),Id) from INFORMATION_SCHEMA.COLUMNS, ' + table_name + ' where RIGHT(LOWER(COLUMN_NAME),2) <> ''id'' and TABLE_NAME = '''+ table_name + '''' + ';' from INFORMATION_SCHEMA.TABLES;
When you execute the above query, it will generate update queries for each table and for each column of that table, for example:
select 'update ' + TABLE_NAME + ' set ' + COLUMN_NAME + ' = ' + '''' + dbo.GenerateDummyValues( Data_type,'') + ''' where id = ' + Convert(varchar(10),Id) from INFORMATION_SCHEMA.COLUMNS, Customers where RIGHT(LOWER(COLUMN_NAME),2) <> 'id' and TABLE_NAME = 'Customers';
select 'update ' + TABLE_NAME + ' set ' + COLUMN_NAME + ' = ' + '''' + dbo.GenerateDummyValues( Data_type,'') + ''' where id = ' + Convert(varchar(10),Id) from INFORMATION_SCHEMA.COLUMNS, Products where RIGHT(LOWER(COLUMN_NAME),2) <> 'id' and TABLE_NAME = 'Products';
Now, when you execute the queries above, you will get the final update queries that will update the data in your tables.
You can run this on any SQL Server database, no matter how many tables you have; it will generate the queries for you, which can then be executed.
Hope this helps.
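The same generate-the-updates idea can be sketched outside T-SQL as well. Below is a minimal Python illustration (the table and column names are hypothetical, and the type handling mirrors the GenerateDummyValues function above):

```python
def dummy_value(data_type):
    # Mirrors the branching in GenerateDummyValues above.
    if data_type == "int":
        return "0"
    if data_type in ("varchar", "nvarchar", "char", "nchar"):
        return "'AAAA'"
    return "'XXX'"

def masking_updates(table, columns):
    # columns: list of (name, data_type); skip anything ending in "id",
    # matching the RIGHT(LOWER(COLUMN_NAME), 2) <> 'id' filter above.
    return ["UPDATE {0} SET {1} = {2};".format(table, name, dummy_value(dtype))
            for name, dtype in columns
            if not name.lower().endswith("id")]
```

For example, `masking_updates("Customers", [("Name", "varchar"), ("Age", "int"), ("CustomerId", "int")])` produces update statements for Name and Age but skips the CustomerId column.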
A: Another site to generate shaped fake data sets, with an option for T-SQL output:
https://mockaroo.com/
A: Here's a way using ROT47, which is reversible, and another which is random. You can add a PK to either to link back to the "unscrambled" versions.
declare @table table (ID int, PLAIN_TEXT nvarchar(4000))
insert into @table
values
(1,N'Some Dudes name'),
(2,N'Another Person Name'),
(3,N'Yet Another Name')
--split your string into a column, and compute the decimal value (N)
if object_id('tempdb..#staging') is not null drop table #staging
select
substring(a.b, v.number+1, 1) as Val
,ascii(substring(a.b, v.number+1, 1)) as N
--,dense_rank() over (order by b) as RN
,a.ID
into #staging
from (select PLAIN_TEXT b, ID FROM @table) a
inner join
master..spt_values v on v.number < len(a.b)
where v.type = 'P'
--select * from #staging
--create a fast tally table of numbers to be used to build the ROT-47 table.
;WITH
E1(N) AS (select 1 from (values (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))dt(n)),
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS
(
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
)
--Here we put it all together with stuff and FOR XML
select
PLAIN_TEXT
,ENCRYPTED_TEXT =
stuff((
select
--s.Val
--,s.N
e.ENCRYPTED_TEXT
from #staging s
left join(
select
N as DECIMAL_VALUE
,char(N) as ASCII_VALUE
,case
when 47 + N <= 126 then char(47 + N)
when 47 + N > 126 then char(N-47)
end as ENCRYPTED_TEXT
from cteTally
where N between 33 and 126) e on e.DECIMAL_VALUE = s.N
where s.ID = t.ID
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 0, '')
from @table t
--or if you want really random
select
PLAIN_TEXT
,ENCRYPTED_TEXT =
stuff((
select
--s.Val
--,s.N
e.ENCRYPTED_TEXT
from #staging s
left join(
select
N as DECIMAL_VALUE
,char(N) as ASCII_VALUE
,char((select ROUND(((122 - N -1) * RAND() + N), 0))) as ENCRYPTED_TEXT
from cteTally
where (N between 65 and 122) and N not in (91,92,93,94,95,96)) e on e.DECIMAL_VALUE = s.N
where s.ID = t.ID
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 0, '')
from @table t
A: Encountered the same problem myself and figured out an alternative solution that may work for others.
The idea is to use MD5 on the name and then take the last 3 hex digits of it to map into a table of names. You can do this separately for first name and last name.
3 hex digits represent decimals from 0 to 4095, so we need a list of 4096 first names and 4096 last names.
So conv(substr(md5(first_name), -3), 16, 10) (in MySQL syntax) would be an index from 0 to 4095 that could be joined with a table that holds 4096 first names. The same concept could be applied to last names.
Using MD5 (as opposed to a random number) guarantees a name in the original data will always be mapped to the same name in the test data.
You can get a list of names here:
https://gist.github.com/elifiner/cc90fdd387449158829515782936a9a4
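A minimal sketch of that hashing idea in Python (the name pool here is a hypothetical stand-in for the 4096-name list linked above):

```python
import hashlib

def pseudonym(name, name_pool):
    # The last 3 hex digits of the MD5 digest give an index 0..4095;
    # the same input name always maps to the same fake name.
    idx = int(hashlib.md5(name.lower().encode("utf-8")).hexdigest()[-3:], 16)
    return name_pool[idx % len(name_pool)]
```

Because the index is derived from a hash rather than a random number, re-running the scramble over a refreshed copy of production data yields the same pseudonyms.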
A: I am working on this at my company right now -- and it turns out to be a very tricky thing. You want to have names that are realistic, but must not reveal any real personal info.
My approach has been to first create a randomized "mapping" of last names to other last names, then use that mapping to change all last names. This is good if you have duplicate name records. Suppose you have 2 "John Smith" records that both represent the same real person. If you changed one record to "John Adams" and the other to "John Best", then your one "person" now has 2 different names! With a mapping, all occurrences of "Smith" get changed to "Jones", and so duplicates ( or even family members ) still end up with the same last name, keeping the data more "realistic".
I will also have to scramble the addresses, phone numbers, bank account numbers, etc...and I am not sure how I will approach those. Keeping the data "realistic" while scrambling is certainly a deep topic. This must have been done many times by many companies -- who has done this before? What did you learn?
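The mapping idea can be sketched in a few lines of Python (a rough illustration, not the poster's actual implementation): build one random permutation of the distinct last names, then apply it everywhere, so duplicates stay consistent.

```python
import random

def build_name_mapping(last_names):
    # One shuffled permutation of the distinct names: every "Smith"
    # maps to the same replacement, so duplicate records stay linked.
    distinct = sorted(set(last_names))
    shuffled = list(distinct)
    random.shuffle(shuffled)
    return dict(zip(distinct, shuffled))
```

Applying the resulting dict to every row keeps family members and duplicate records sharing a surname, while no surname is left attached to its real owner's other data.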
A: Frankly, I'm not sure why this is needed. Your dev/test environments should be private, behind your firewall, and not accessible from the web.
Your developers should be trusted, and you have legal recourse against them if they fail to live up to your trust.
I think the real question should be "Should I scramble the data?", and the answer is (in my mind) 'no'.
If you're sending it offsite for some reason, or you have to have your environments web-accessible, or if you're paranoid, I would implement a random swap. Rather than build a temp table, run swaps between each row and a random row in the table, swapping one piece of data at a time.
The end result will be a table with all the same data, but with it randomly reorganized. It should also be faster than your temp table, I believe.
It should be simple enough to implement the Fisher-Yates Shuffle in SQL...or at least in a console app that reads the db and writes to the target.
Edit (2): Off-the-cuff answer in T-SQL:
declare @name varchar(50)
set @name = (SELECT lastName FROM person WHERE personID = (random id number))
Update person
set lastname = @name
WHERE personID = (person id of current row)
Wrap this in a loop, and follow the guidelines of Fisher-Yates for modifying the random value constraints, and you'll be set.
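The console-app route can be sketched as follows (Python used purely for illustration; the row structure is hypothetical): a Fisher-Yates shuffle applied to a single column.

```python
import random

def shuffle_column(rows, key):
    # Fisher-Yates over one column's values; other columns stay put,
    # so the table ends up with the same data, randomly reorganized.
    values = [row[key] for row in rows]
    for i in range(len(values) - 1, 0, -1):
        j = random.randint(0, i)
        values[i], values[j] = values[j], values[i]
    for row, value in zip(rows, values):
        row[key] = value
```

Run it once per sensitive column (last name, phone, account number) and each column is permuted independently, breaking the links between real people and their data.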
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Is it faster to sort a list after inserting items or adding them to a sorted list If I have a sorted list (say quicksort to sort), if I have a lot of values to add, is it better to suspend sorting, and add them to the end, then sort, or use binary chop to place the items correctly while adding them. Does it make a difference if the items are random, or already more or less in order?
A: If you add enough items that you're effectively building the list from scratch, you should be able to get better performance by sorting the list afterwards.
If items are mostly in order, you can tweak both incremental update and regular sorting to take advantage of that, but frankly, it usually isn't worth the trouble. (You also need to be careful of things like making sure some unexpected ordering can't make your algorithm take much longer, q.v. naive quicksort)
Both incremental update and regular list sort are O(N log N) but you can get a better constant factor sorting everything afterward (I'm assuming here that you've got some auxiliary datastructure so your incremental update can access list items faster than O(N)...). Generally speaking, sorting all at once has a lot more design freedom than maintaining the ordering incrementally, since incremental update has to maintain a complete order at all times, but an all-at-once bulk sort does not.
If nothing else, remember that there are lots of highly-optimized bulk sorts available.
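The two strategies can be sketched side by side (Python used as a neutral illustration; bisect.insort does the binary search but still pays an O(n) shift per insert):

```python
import bisect

def incremental_update(sorted_list, new_items):
    # Insert each new item at its correct position as it arrives.
    result = list(sorted_list)
    for item in new_items:
        bisect.insort(result, item)  # O(log n) search + O(n) shift
    return result

def bulk_sort(sorted_list, new_items):
    # Append everything, then sort once at the end.
    result = list(sorted_list) + list(new_items)
    result.sort()  # a good bulk sort can exploit the existing sorted run
    return result
```

Both produce the same list; which is faster depends on how many items you add relative to the list size, exactly as discussed above.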
A: I'd say, let's test it! :)
I tried with quicksort, but sorting an almost sorted array with quicksort is... well, not really a good idea. I tried a modified one, cutting off at 7 elements and using insertion sort below that. Still, horrible performance. I switched to merge sort. It might need quite a lot of memory for sorting (it's not in-place), but the performance is much better on sorted arrays and almost identical on random ones (the initial sort took almost the same time for both; quicksort was only slightly faster).
This already shows one thing: the answer to your question depends strongly on the sorting algorithm you use. If it has poor performance on almost-sorted lists, inserting at the right position will be much faster than adding at the end and then re-sorting; and merge sort might be no option for you, as it might need way too much external memory if the list is huge. BTW, I used a custom merge sort implementation that only uses half the external storage of the naive implementation (which needs as much external storage as the array itself).
If merge sort is no option and quicksort is no option for sure, the best alternative is probably heap sort.
My results are: adding the new elements simply at the end and then re-sorting the array was several orders of magnitude faster than inserting them in the right position. However, my initial array had 10 million elements (sorted) and I was adding another million (unsorted). So if you add 10 elements to an array of 10 million, inserting them correctly is much faster than re-sorting everything. The answer to your question therefore also depends on how big the initial (sorted) array is and how many new elements you want to add to it.
A: In principle, it's faster to create a tree than to sort a list. The tree inserts are O(log(n)) each, leading to O(n log(n)) overall. Sorting is also O(n log(n)).
That's why Java has TreeMap and TreeSet, in addition to the ArrayList and LinkedList implementations of List.
*
*A TreeSet keeps things in object comparison order. The key is defined by the Comparable interface.
*A LinkedList keeps things in the insertion order.
*An ArrayList uses more memory, is faster for some operations.
*A TreeMap, similarly, removes the need to sort by a key. The map is built in key order during the inserts and maintained in sorted order at all times.
However, for some reason, the Java implementation of TreeSet is quite a bit slower than using an ArrayList and a sort.
[It's hard to speculate as to why it would be dramatically slower, but it is. It should be slightly faster by one pass through the data. This kind of thing is often the cost of memory management trumping the algorithmic analysis.]
A: It's about the same. Inserting an item into a sorted list is O(log N), and doing this for every element in the list, N, (thus building the list) would be O(N log N) which is the speed of quicksort (or merge sort which is closer to this approach).
If you instead inserted them onto the front it would be O(1), but doing a quicksort after, it would still be O(N log N).
I would go with the first approach, because it has the potential to be slightly faster. If the initial size of your list, N, is much greater than the number of elements to insert, X, then the insert approach is O(X log N). Sorting after inserting to the head of the list is O(N log N). If N=0 (IE: your list is initially empty), the speed of inserting in sorted order, or sorting afterwards are the same.
A: Inserting an item into a sorted list takes O(n) time, not O(log n) time. You have to find the place to put it, taking O(log n) time. But then you have to shift over all the elements - taking O(n) time. So inserting while maintaining sorted-ness is O(n ^ 2), where as inserting them all and then sorting is O(n log n).
Depending on your sort implementation, you can get even better than O(n log n) if the number of inserts is much smaller than the list size. But if that is the case, it doesn't matter either way.
So do the insert all and sort solution if the number of inserts is large, otherwise it probably won't matter.
A: Usually it's far better to use a heap. in short, it splits the cost of maintaining order between the pusher and the picker. Both operations are O(log n), instead of O(n log n), like most other solutions.
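A minimal sketch of the heap approach (Python's heapq used here; any binary-heap implementation behaves the same way): both push and pop cost O(log n), so the ordering work is split between producer and consumer.

```python
import heapq

def drain_in_order(items):
    # Pusher pays O(log n) per insert; picker pays O(log n) per removal.
    heap = []
    for item in items:
        heapq.heappush(heap, item)
    return [heapq.heappop(heap) for _ in range(len(heap))]
```

This is the right structure when you only ever need the smallest (or largest) remaining item, rather than random access into a fully sorted list.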
A: If you're adding in bunches, you can use a merge sort. Sort the list of items to be added, then copy from both lists, comparing items to determine which one gets copied next. You could even copy in-place if resize your destination array and work from the end backwards.
The efficiency of this solution is O(n+m) + O(m log m) where n is the size of the original list, and m is the number of items being inserted.
Edit: Since this answer isn't getting any love, I thought I'd flesh it out with some C++ sample code. I assume that the sorted list is kept in a linked list rather than an array. This changes the algorithm to look more like an insertion than a merge, but the principle is the same.
// Note that itemstoadd is modified as a side effect of this function
template<typename T>
void AddToSortedList(std::list<T> & sortedlist, std::vector<T> & itemstoadd)
{
std::sort(itemstoadd.begin(), itemstoadd.end());
typename std::list<T>::iterator listposition = sortedlist.begin();
typename std::vector<T>::iterator nextnewitem = itemstoadd.begin();
while ((listposition != sortedlist.end()) || (nextnewitem != itemstoadd.end()))
{
if ((listposition == sortedlist.end()) || (*nextnewitem < *listposition))
sortedlist.insert(listposition, *nextnewitem++);
else
++listposition;
}
}
A: If the list is a) already sorted, and b) dynamic in nature, then inserting into a sorted list should always be faster (find the right place (O(n)) and insert (O(1))).
However, if the list is static, then a shuffle of the remainder of the list has to occur (O(n) to find the right place and O(n) to slide things down).
Either way, inserting into a sorted list (or something like a Binary Search Tree) should be faster.
O(n) + O(n) should always be faster than O(N log n).
A: At a high level, it's a pretty simple problem, because you can think of sorting as just iterated searching. When you want to insert an element into an ordered array, list, or tree, you have to search for the point at which to insert it. Then you put it in, at hopefully low cost. So you could think of a sort algorithm as just taking a bunch of things and, one by one, searching for the proper position and inserting them. Thus, an insertion sort (O(n* n)) is an iterated linear search (O(n)). Tree, heap, merge, radix, and quick sort (O(n*log(n))) can be thought of as iterated binary search (O(log(n))). It is possible to have an O(n) sort, if the underlying search is O(1) as in an ordered hash table. (An example of this is sorting 52 cards by flinging them into 52 bins.)
So the answer to your question is, inserting things one at a time, versus saving them up and then sorting them should not make much difference, in a big-O sense. You could of course have constant factors to deal with, and those might be significant.
Of course, if n is small, like 10, the whole discussion is silly.
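The 52-card example can be made concrete (a sketch assuming each card value 0..51 appears at most once): with an O(1) "search" into a bin, the whole sort is O(n).

```python
def sort_cards(cards):
    # One bin per possible card value; placing each card is O(1),
    # so the whole "sort" is O(n) for n cards.
    bins = [False] * 52
    for card in cards:
        bins[card] = True
    return [value for value in range(52) if bins[value]]
```

This is the degenerate case of the "iterated search" view above, where the underlying search costs O(1) because the key space is small and known in advance.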
A: You should add them all first and then use a radix sort; that should be optimal:
http://en.wikipedia.org/wiki/Radix_sort#Efficiency
A: (If the list you're talking about is like C#'s List<T>.) Adding a few values at the right positions in a sorted list with many values requires fewer operations. But if the number of values being added becomes large, it will require more.
I would suggest using not a list but some more suitable data structure in your case. Like a binary tree, for example. A sorted data structure with minimal insertion time.
A: Inserting an item into a sorted list is O(log n), while sorting a list is O(n log n).
Which would suggest that it's always better to sort first and then insert.
But remember big-O only concerns how the speed scales with the number of items; it might be that for your application an insert in the middle is expensive (e.g. if it's a vector), so appending and sorting afterward might be better.
A: If this is .NET and the items are integers, it's quicker to add them to a Dictionary (or, if you're on .NET 3.0 or above, use a HashSet if you don't mind losing duplicates). This gives you automagic sorting.
I think strings would work the same way as well. The beauty is you get O(1) insertion and sorting this way.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "96"
} |
Q: How can I add SqlParameters without knowing the name / type? I am creating a DB wrapper and am in the need of adding SQL paramters to my stament however I do not know the parameter names or type, how can this be done? I have seen many other libraries do this...
I just want the order of values to be mapped to the stored procedure...I thought the following code would work:
public DataTable ExecuteDataTable(string storedProcName, params object[] args)
{
SqlCommand cmd = new SqlCommand(storedProcName, conn);
cmd.CommandType = CommandType.StoredProcedure;
// inserting params like this does not work...
for (int i = 0; i < args.Length; i++)
{
cmd.Parameters.Insert(i, args[0]);
}
DataTable dt = new DataTable();
dt.Load(cmd.ExecuteReader());
return dt;
}
Any ideas of how to accomplish this? Note: I know there are other libraries such as the Enterprise Library that already does this, but I'm in a situation where that won't help...
Thanks.
A: try here
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What's better in regards to performance? type[,] or type[][]? Is it more performant to have a bidimensional array (type[,]) or an array of arrays (type[][]) in C#?
Particularly for initial allocation and item access
A: Of course, if all else fails... test it! Following gives (in "Release", at the console):
Size 1000, Repeat 1000
int[,] set: 3460
int[,] get: 4036 (chk=1304808064)
int[][] set: 2441
int[][] get: 1283 (chk=1304808064)
So a jagged array is quicker, at least in this test. Interesting! However, it is a relatively small factor, so I would still stick with whichever describes my requirement better. Except for some specific (high CPU/processing) scenarios, readability / maintainability should trump a small performance gain. Up to you, though.
Note that this test assumes you access the array much more often than you create it, so I have not included timings for creation, where I would expect rectangular to be slightly quicker unless memory is highly fragmented.
using System;
using System.Diagnostics;
static class Program
{
static void Main()
{
Console.WriteLine("First is just for JIT...");
Test(10,10);
Console.WriteLine("Real numbers...");
Test(1000,1000);
Console.ReadLine();
}
static void Test(int size, int repeat)
{
Console.WriteLine("Size {0}, Repeat {1}", size, repeat);
int[,] rect = new int[size, size];
int[][] jagged = new int[size][];
for (int i = 0; i < size; i++)
{ // don't count this in the metrics...
jagged[i] = new int[size];
}
Stopwatch watch = Stopwatch.StartNew();
for (int cycle = 0; cycle < repeat; cycle++)
{
for (int i = 0; i < size; i++)
{
for (int j = 0; j < size; j++)
{
rect[i, j] = i * j;
}
}
}
watch.Stop();
Console.WriteLine("\tint[,] set: " + watch.ElapsedMilliseconds);
int sum = 0;
watch = Stopwatch.StartNew();
for (int cycle = 0; cycle < repeat; cycle++)
{
for (int i = 0; i < size; i++)
{
for (int j = 0; j < size; j++)
{
sum += rect[i, j];
}
}
}
watch.Stop();
Console.WriteLine("\tint[,] get: {0} (chk={1})", watch.ElapsedMilliseconds, sum);
watch = Stopwatch.StartNew();
for (int cycle = 0; cycle < repeat; cycle++)
{
for (int i = 0; i < size; i++)
{
for (int j = 0; j < size; j++)
{
jagged[i][j] = i * j;
}
}
}
watch.Stop();
Console.WriteLine("\tint[][] set: " + watch.ElapsedMilliseconds);
sum = 0;
watch = Stopwatch.StartNew();
for (int cycle = 0; cycle < repeat; cycle++)
{
for (int i = 0; i < size; i++)
{
for (int j = 0; j < size; j++)
{
sum += jagged[i][j];
}
}
}
watch.Stop();
Console.WriteLine("\tint[][] get: {0} (chk={1})", watch.ElapsedMilliseconds, sum);
}
}
A: I believe that [,] can allocate one contiguous chunk of memory, while [][] is N+1 chunk allocations where N is the size of the first dimension. So I would guess that [,] is faster on initial allocation.
Access is probably about the same, except that [][] would involve one extra dereference. Unless you're in an exceptionally tight loop it's probably a wash. Now, if you're doing something like image processing where you are referencing between rows rather than traversing row by row, locality of reference will play a big factor and [,] will probably edge out [][] depending on your cache size.
As Marc Gravell mentioned, usage is key to evaluating the performance...
A: It really depends. The MSDN Magazine article, Harness the Features of C# to Power Your Scientific Computing Projects, says this:
Although rectangular arrays are generally superior to jagged arrays in terms of structure and performance, there might be some cases where jagged arrays provide an optimal solution. If your application does not require arrays to be sorted, rearranged, partitioned, sparse, or large, then you might find jagged arrays to perform quite well.
A: type[,] will work faster. Not only because of less offset calculations. Mainly because of less constraint checking, less memory allocation and greater localization in memory. type[][] is not a single object -- it's 1 + N objects that must be allocated and can be away from each other.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Count the items from a IEnumerable without iterating? private IEnumerable<string> Tables
{
get
{
yield return "Foo";
yield return "Bar";
}
}
Let's say I want iterate on those and write something like processing #n of #m.
Is there a way I can find out the value of m without iterating before my main iteration?
I hope I made myself clear.
A: Just adding extra some info:
The Count() extension doesn't always iterate. Consider LINQ to SQL, where the count goes to the database; but instead of bringing back all the rows, it issues the SQL Count() command and returns that result instead.
Additionally, the implementation is smart enough that it will use the object's Count property if it has one. So it's not, as other responders say, completely ignorant, always iterating in order to count elements.
In many cases where the programmer is just checking if( enumerable.Count != 0 ), using the Any() extension method, as in if( enumerable.Any() ), is far more efficient with LINQ's lazy evaluation, as it can short-circuit once it determines there are any elements. It's also more readable.
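The short-circuit point isn't C#-specific. A rough Python analogue (the generator here is a hypothetical stand-in for a lazy query) shows that an "any"-style check pulls at most one element:

```python
def lazy_rows():
    # Hypothetical lazy source: producing a second row would be costly.
    yield "first row"
    raise RuntimeError("second row should never be requested")

def has_any(iterable):
    # Pull at most one element and stop -- analogous to Enumerable.Any().
    return next(iter(iterable), None) is not None
```

A full count would have to exhaust the source; the "any" check succeeds after one element and never triggers the expensive continuation.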
A: No, not in general. One point in using enumerables is that the actual set of objects in the enumeration is not known (in advance, or even at all).
A: You can use System.Linq.
using System;
using System.Collections.Generic;
using System.Linq;
public class Test
{
private IEnumerable<string> Tables
{
get {
yield return "Foo";
yield return "Bar";
}
}
static void Main()
{
var x = new Test();
Console.WriteLine(x.Tables.Count());
}
}
You'll get the result '2'.
A: I think the easiest way to do this is
Enumerable.Count<TSource>(IEnumerable<TSource> source)
Reference: system.linq.enumerable
A: Going beyond your immediate question (which has been thoroughly answered in the negative), if you're looking to report progress whilst processing an enumerable, you might want to look at my blog post Reporting Progress During Linq Queries.
It lets you do this:
BackgroundWorker worker = new BackgroundWorker();
worker.WorkerReportsProgress = true;
worker.DoWork += (sender, e) =>
{
// pretend we have a collection of
// items to process
var items = 1.To(1000);
items
.WithProgressReporting(progress => worker.ReportProgress(progress))
.ForEach(item => Thread.Sleep(10)); // simulate some real work
};
A: I used this approach inside a method to check the content of the IEnumerable passed in:
if( iEnum.Cast<Object>().Count() > 0)
{
}
Inside a method like this:
GetDataTable(IEnumerable iEnum)
{
if (iEnum != null && iEnum.Cast<Object>().Count() > 0) //--- proceed further
}
A: It depends on which version of .NET you're on and the implementation of your IEnumerable object.
Microsoft has fixed the IEnumerable.Count method to check for the implementation, and it uses ICollection.Count or ICollection<TSource>.Count; see details here https://connect.microsoft.com/VisualStudio/feedback/details/454130
And below is the MSIL from Ildasm for System.Core, in which the System.Linq resides.
.method public hidebysig static int32 Count<TSource>(class
[mscorlib]System.Collections.Generic.IEnumerable`1<!!TSource> source) cil managed
{
.custom instance void System.Runtime.CompilerServices.ExtensionAttribute::.ctor() = ( 01 00 00 00 )
// Code size 85 (0x55)
.maxstack 2
.locals init (class [mscorlib]System.Collections.Generic.ICollection`1<!!TSource> V_0,
class [mscorlib]System.Collections.ICollection V_1,
int32 V_2,
class [mscorlib]System.Collections.Generic.IEnumerator`1<!!TSource> V_3)
IL_0000: ldarg.0
IL_0001: brtrue.s IL_000e
IL_0003: ldstr "source"
IL_0008: call class [mscorlib]System.Exception System.Linq.Error::ArgumentNull(string)
IL_000d: throw
IL_000e: ldarg.0
IL_000f: isinst class [mscorlib]System.Collections.Generic.ICollection`1<!!TSource>
IL_0014: stloc.0
IL_0015: ldloc.0
IL_0016: brfalse.s IL_001f
IL_0018: ldloc.0
IL_0019: callvirt instance int32 class [mscorlib]System.Collections.Generic.ICollection`1<!!TSource>::get_Count()
IL_001e: ret
IL_001f: ldarg.0
IL_0020: isinst [mscorlib]System.Collections.ICollection
IL_0025: stloc.1
IL_0026: ldloc.1
IL_0027: brfalse.s IL_0030
IL_0029: ldloc.1
IL_002a: callvirt instance int32 [mscorlib]System.Collections.ICollection::get_Count()
IL_002f: ret
IL_0030: ldc.i4.0
IL_0031: stloc.2
IL_0032: ldarg.0
IL_0033: callvirt instance class [mscorlib]System.Collections.Generic.IEnumerator`1<!0> class [mscorlib]System.Collections.Generic.IEnumerable`1<!!TSource>::GetEnumerator()
IL_0038: stloc.3
.try
{
IL_0039: br.s IL_003f
IL_003b: ldloc.2
IL_003c: ldc.i4.1
IL_003d: add.ovf
IL_003e: stloc.2
IL_003f: ldloc.3
IL_0040: callvirt instance bool [mscorlib]System.Collections.IEnumerator::MoveNext()
IL_0045: brtrue.s IL_003b
IL_0047: leave.s IL_0053
} // end .try
finally
{
IL_0049: ldloc.3
IL_004a: brfalse.s IL_0052
IL_004c: ldloc.3
IL_004d: callvirt instance void [mscorlib]System.IDisposable::Dispose()
IL_0052: endfinally
} // end handler
IL_0053: ldloc.2
IL_0054: ret
} // end of method Enumerable::Count
A: There is a new method in LINQ for .NET 6
Watch https://www.youtube.com/watch?v=sIXKpyhxHR8
Tables.TryGetNonEnumeratedCount(out var count)
A: IEnumerable doesn't support this. This is by design. IEnumerable uses lazy evaluation to get the elements you ask for just before you need them.
If you want to know the number of items without iterating over them, you can use ICollection<T>; it has a Count property.
A: The result of the IEnumerable.Count() function may be wrong. Here is a very simple sample to test:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Collections;
namespace Test
{
class Program
{
static void Main(string[] args)
{
var test = new[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17 };
var result = test.Split(7);
int cnt = 0;
foreach (IEnumerable<int> chunk in result)
{
cnt = chunk.Count();
Console.WriteLine(cnt);
}
cnt = result.Count();
Console.WriteLine(cnt);
Console.ReadLine();
}
}
static class LinqExt
{
public static IEnumerable<IEnumerable<T>> Split<T>(this IEnumerable<T> source, int chunkLength)
{
if (chunkLength <= 0)
throw new ArgumentOutOfRangeException("chunkLength", "chunkLength must be greater than 0");
IEnumerable<T> result = null;
using (IEnumerator<T> enumerator = source.GetEnumerator())
{
while (enumerator.MoveNext())
{
result = GetChunk(enumerator, chunkLength);
yield return result;
}
}
}
static IEnumerable<T> GetChunk<T>(IEnumerator<T> source, int chunkLength)
{
int x = chunkLength;
do
yield return source.Current;
while (--x > 0 && source.MoveNext());
}
}
}
The result should be (7, 7, 3, 3) but the actual result is (7, 7, 3, 17).
A: The System.Linq.Enumerable.Count extension method on IEnumerable<T> has the following implementation:
ICollection<TSource> c = source as ICollection<TSource>;
if (c != null)
return c.Count;
int result = 0;
using (IEnumerator<TSource> enumerator = source.GetEnumerator())
{
while (enumerator.MoveNext())
result++;
}
return result;
So it tries to cast to ICollection<T>, which has a Count property, and uses that if possible. Otherwise it iterates.
So your best bet is to use the Count() extension method on your IEnumerable<T> object, as you will get the best performance possible that way.
A: Here is a great discussion about lazy evaluation and deferred execution. Basically you have to materialize the list to get that value.
A: The best way I found is to count by converting it to a list.
IEnumerable<T> enumList = ReturnFromSomeFunction();
int count = new List<T>(enumList).Count;
A: Simplifying all the answers:
IEnumerable has no Count function or property. To get a count, you can keep a count variable (with foreach, for example) or use LINQ.
If you have:
IEnumerable<> products
Then:
Declare: "using System.Linq;"
To Count:
products.ToList().Count
A: A friend of mine has a series of blog posts that illustrate why you can't do this. He creates a function that returns an IEnumerable where each iteration yields the next prime number, all the way to ulong.MaxValue, and the next item isn't calculated until you ask for it. Quick pop quiz: how many items are returned?
Here are the posts, but they're kind of long:
*
*Beyond Loops (provides an initial EnumerableUtility class used in the other posts)
*Applications of Iterate (Initial implementation)
*Crazy Extention Methods: ToLazyList (Performance optimizations)
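The point generalizes to any language with lazy sequences. A tiny Python sketch of such an unbounded generator (naive trial division, for illustration only) makes it obvious that no consumer can know the count without iterating:

```python
from itertools import islice

def primes():
    # Yields primes indefinitely; the next one isn't computed
    # until a consumer asks for it.
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1
```

`list(islice(primes(), 5))` returns the first five primes without ever "counting" the stream; asking for the stream's length would never terminate.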
A: Alternatively you can do the following:
Tables.ToList<string>().Count;
A: IEnumerable cannot count without iterating.
Under "normal" circumstances, it would be possible for classes implementing IEnumerable or IEnumerable<T>, such as List<T>, to implement the Count method by returning the List<T>.Count property. However, the Count method is not actually defined on the IEnumerable<T> or IEnumerable interface. (The only method that is, in fact, is GetEnumerator.) This means that a class-specific implementation cannot be provided for it.
Rather, Count is an extension method, defined on the static class Enumerable. This means it can be called on any instance of an IEnumerable<T>-derived class, regardless of that class's implementation. But it also means it is implemented in a single place, external to any of those classes, which of course means that it must be implemented in a way that is completely independent of those classes' internals. The only such way to do counting is via iteration.
A: I use code like this if I have a list of strings:
((IList<string>)Table).Count
A: I use IEnum<string>.ToArray<string>().Length and it works fine.
A: I would suggest calling ToList. Yes you are doing the enumeration early, but you still have access to your list of items.
A: It may not yield the best performance, but you can use LINQ to count the elements in an IEnumerable:
public int GetEnumerableCount(IEnumerable Enumerable)
{
return (from object Item in Enumerable
select Item).Count();
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "361"
} |
Q: Show Only One Element with jQuery I need to show only one element at a time when a link is clicked on. Right now I'm cheating by hiding everything again and then toggling the element clicked on. This works, unless I want EVERYTHING to disappear again. Short of adding a "Hide All" button/link, what can I do? I would like to be able to click on the link again and hide its content.
EDIT: Pseudo's code would have worked, but the html here mistakenly led you to believe that all the links were in one div. instead of tracking down where they all were, it is easier to call them by their ID.
Here's what I have so far:
$(document).ready(function(){
//hides everything
$("#infocontent *").hide();
//now we show them by which they click on
$("#linkjoedhit").click(function(event){
$("#infocontent *").hide();
$("#infojoedhit").toggle();
return false;
});
$("#linkgarykhit").click(function(event){
$("#infocontent *").hide();
$("#infogarykhit").toggle();
return false;
});
});
and the html looks like:
<div id="theircrappycode">
<a id="linkjoedhit" href="">Joe D</a><br/>
<a id="linkgarykhit" href="">Gary K</a>
</div>
<div id="infocontent">
<p id="infojoedhit">Information about Joe D Hitting.</p>
<p id="infogarykhit">Information about Gary K Hitting.</p>
</div>
there are about 20 links like this. Because I am not coding the actual html, I have no control over the actual layout, which is horrendous. Suffice to say, this is the only way to organize the links/info.
A: $("#linkgarykhit").click(function(){
if($("#infogarykhit").css('display') != 'none'){
$("#infogarykhit").hide();
}else{
$("#infocontent *").hide();
$("#infogarykhit").show();
}
return false;
});
We could also DRY this up a bit:
function toggleInfoContent(id){
if($('#' + id).css('display') != 'none'){
$('#' + id).hide();
}else{
$("#infocontent *").hide();
$('#' + id).show();
}
}
$("#linkgarykhit").click(function(){
toggleInfoContent('infogarykhit');
return false;
});
$("#linkbobkhit").click(function(){
toggleInfoContent('infobobkhit');
return false;
});
A: If your markup "naming scheme" is accurate, you can avoid a lot of repetitious code by using a RegEx for your selector, and judicious use of jQuery's "not".
You can attach a click event one time to a jQuery collection that should do what you want, so you don't need to add any JavaScript as you add more Jims or Johns, like so:
$(document).ready( function () {
$("#infocontent *").hide();
$("div#theircrappycode > a").click(function(event){
var toggleId = "#" + this.id.replace(/^link/,"info");
$("#infocontent *").not(toggleId).hide();
$(toggleId).toggle();
return false;
});
});
A: Here is a slightly different approach; it has some similarities to Pseudo Masochist's code.
$(document).ready(function(){
$("#infocontent *").hide();
$("#theircrappycode > a").click(statlink.togvis);
});
var statlink = {
visId: "",
togvis: function(){
if (statlink.visId) { // guard: nothing is shown yet on the first click
$("#" + statlink.visId).toggle();
}
statlink.visId = $(this).attr("id").replace(/link/, "info");
$("#" + statlink.visId).toggle();
}
};
Hope you find this useful also.
A: I just started with jQuery, so I don't know if this is dumb or not.
function DoToggleMagic(strParagraphID) {
var strDisplayed = $(strParagraphID).css("display");
$("#infocontent *").hide();
if (strDisplayed == "none")
$(strParagraphID).toggle();
}
$(document).ready(function(){
//hides everything
$("#infocontent *").hide();
//now we show them by which they click on
$("#linkjoedhit").click(function(event){
DoToggleMagic("#infojoedhit");
return false;
});
$("#linkgarykhit").click(function(event){
DoToggleMagic("#infogarykhit");
return false;
});
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to render contextual difference between two timestamps in JavaScript? Let's say I've got two strings in JavaScript:
var date1 = '2008-10-03T20:24Z'
var date2 = '2008-10-04T12:24Z'
How would I come to a result like so:
'4 weeks ago'
or
'in about 15 minutes'
(should support past and future).
There are solutions out there for the past diffs, but I've yet to find one with support for future time diffs as well.
These are the solutions I tried:
John Resig's Pretty Date and Zach Leatherman's modification
Bonus points for a jQuery solution.
A: Looking at the solutions you linked... it is actually as simple as my frivolous comment!
Here's a version of the Zach Leatherman code that prepends "In " for future dates for you. As you can see, the changes are very minor.
function humane_date(date_str){
var time_formats = [
[60, 'Just Now'],
[90, '1 Minute'], // 60*1.5
[3600, 'Minutes', 60], // 60*60, 60
[5400, '1 Hour'], // 60*60*1.5
[86400, 'Hours', 3600], // 60*60*24, 60*60
[129600, '1 Day'], // 60*60*24*1.5
[604800, 'Days', 86400], // 60*60*24*7, 60*60*24
[907200, '1 Week'], // 60*60*24*7*1.5
[2628000, 'Weeks', 604800], // 60*60*24*(365/12), 60*60*24*7
[3942000, '1 Month'], // 60*60*24*(365/12)*1.5
[31536000, 'Months', 2628000], // 60*60*24*365, 60*60*24*(365/12)
[47304000, '1 Year'], // 60*60*24*365*1.5
[3153600000, 'Years', 31536000], // 60*60*24*365*100, 60*60*24*365
[4730400000, '1 Century'], // 60*60*24*365*100*1.5
];
var time = ('' + date_str).replace(/-/g,"/").replace(/[TZ]/g," "),
dt = new Date,
seconds = ((dt - new Date(time) + (dt.getTimezoneOffset() * 60000)) / 1000),
token = ' Ago',
prepend = '',
i = 0,
format;
if (seconds < 0) {
seconds = Math.abs(seconds);
token = '';
prepend = 'In ';
}
while (format = time_formats[i++]) {
if (seconds < format[0]) {
if (format.length == 2) {
return (i>1?prepend:'') + format[1] + (i > 1 ? token : ''); // Conditional so we don't return Just Now Ago
} else {
return prepend + Math.round(seconds / format[2]) + ' ' + format[1] + (i > 1 ? token : '');
}
}
}
// overflow for centuries
if(seconds > 4730400000)
return Math.round(seconds / 4730400000) + ' Centuries' + token;
return date_str;
};
A: Heh - I actually wrote a function to do this exact thing yesterday (and it's not on this computer so I'll just have to try to remember it)
I extended the Date prototype class, but this could quite easily just be put into a regular function.
Date.prototype.toRelativeTime = function(otherTime) {
// if no parameter is passed, use the current date.
if (otherTime == undefined) otherTime = new Date();
var diff = Math.abs(this.getTime() - otherTime.getTime()) / 1000;
var MIN = 60, // some "constants" just
HOUR = 3600, // for legibility
DAY = 86400
;
var out, temp;
if (diff < MIN) {
out = "Less than a minute";
} else if (diff < 15 * MIN) {
// less than fifteen minutes, show how many minutes
temp = Math.round(diff / MIN);
out = temp + " minute" + (temp == 1 ? "" : "s");
// eg: 12 minutes
} else if (diff < HOUR) {
// less than an hour, round down to the nearest 5 minutes
out = (Math.floor(diff / (5 * MIN)) * 5) + " minutes";
} else if (diff < DAY) {
// less than a day, just show hours
temp = Math.round(diff / HOUR);
out = temp + " hour" + (temp == 1 ? "" : "s");
} else if (diff < 30 * DAY) {
// show how many days ago
temp = Math.round(diff / DAY);
out = temp + " day" + (temp == 1 ? "" : "s");
} else if (diff < 90 * DAY) {
// more than 30 days, but less than 3 months, show the day and month
return this.getDate() + " " + this.getShortMonth(); // see below
} else {
// more than three months difference, better show the year too
return this.getDate() + " " + this.getShortMonth() + " " + this.getFullYear();
}
return out + (this.getTime() > otherTime.getTime() ? " from now" : " ago");
};
Date.prototype.getShortMonth = function() {
return ["Jan", "Feb", "Mar",
"Apr", "May", "Jun",
"Jul", "Aug", "Sep",
"Oct", "Nov", "Dec"][this.getMonth()];
};
// sample usage:
var x = new Date(2008, 9, 4, 17, 0, 0);
alert(x.toRelativeTime()); // 9 minutes from now
x = new Date(2008, 9, 4, 16, 45, 0, 0);
alert(x.toRelativeTime()); // 6 minutes ago
x = new Date(2008, 11, 1); // 1 Dec
x = new Date(2009, 11, 1); // 1 Dec 2009
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Does anyone know of a php framework that would handle progressive enhancement for Flash/Flex content? Ok, I'm using the term "Progressive Enhancement" kind of loosely here but basically I have a Flash-based website that supports deep linking and loads content dynamically - what I'd like to do is provide alternate content (text) for those either not having Flash and for search engine bots. So, for a user with flash they would navigate to:
http://www.samplesite.com/#specific_page
and they would see a flash site that would navigate to the "specific_page." Those without flash would see the "specific_page" rendered in text in the alternative content section.
Basically, I would use php/mysql to create a backend to handle all of this since the swf is also using dynamic data. The question is, does something out there that does this already exist?
A: Well, according to OSFlash (the open source flash people) both CakePHP and PHPWCMS can do what you need, although from a first glance at their sites' feature list, it is not entirely obvious.
Let us know if they do work!
A: There's an inherent problem with what you're trying to achieve.
The URL hash (or anchor) is client-side only - that token is not sent to the server. This means the only way (that I know of) to load the content you need for example.com/#some_page is to use AJAX, which can read the hash and then request the page-specific data from the server.
Done? No. Because this will kill search engine bots. A possible solution is to have example.com/some_page serve the same content (in fact, that could easily be a REST service that you've already made to return the AJAX or Flash-requested content), and provide a sitemap.xml which indexes those URIs to help out the search engines.
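A minimal sketch of the client-side half of that idea, assuming a hypothetical /content/ endpoint that serves the same page-specific data to AJAX callers, to example.com/some_page visitors, and to search bots (modern fetch/hashchange APIs are used here for brevity):

```javascript
// Maps a fragment like "#specific_page" to the server path the same
// content is also served from directly (for bots and no-Flash
// visitors). The "/content/" prefix is an assumed convention.
function hashToContentUrl(hash) {
  const page = hash.replace(/^#/, "") || "home";
  return "/content/" + encodeURIComponent(page);
}

// In the browser this would be wired up roughly like so:
// window.addEventListener("hashchange", function () {
//   fetch(hashToContentUrl(location.hash))
//     .then(function (r) { return r.text(); })
//     .then(function (html) {
//       document.getElementById("alternate").innerHTML = html;
//     });
// });

console.log(hashToContentUrl("#specific_page")); // "/content/specific_page"
console.log(hashToContentUrl(""));               // "/content/home"
```

The same mapping function doubles as the source of the URIs you would list in sitemap.xml.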
I know of no existing framework that does specifically these tasks, although it certainly seems like one could be made w/o too much trouble.
A: if you're using SWFAddress with Flash/Flex then you can read in the URL and then split that into an array and do as you wish:
SWFAddress.addEventListener ( SWFAddressEvent.CHANGE, onChange );
private function onChange ( e : SWFAddressEvent ) : void
{
var ar : Array = SWFAddress.getValue ().split ( '/' );
trace ( 'Array : ', ar );
}
For your non-flash stuff if you're using code igniter you'd be able to pull the url and convert that into an array as well.
Another alternative is to use FAUST. What you can do with FAUST is have PHP render out the HTML as valid markup; then FAUST will pull the HTML and pass that to Flash via FlashVars as XML. This method makes search engines really, really happy (see http://www.bartoncreek.com).
So to answer your question there are tools out there that will help you achieve your goals.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Unit testing the app.config file with NUnit When you guys are unit testing an application that relies on values from an app.config file? How do you test that those values are read in correctly and how your program reacts to incorrect values entered into a config file?
It would be ridiculous to have to modify the config file for the NUnit app, but I can't read in the values from the app.config I want to test.
Edit: I think I should clarify perhaps. I'm not worried about the ConfigurationManager failing to read the values, but I am concerned with testing how my program reacts to the values read in.
A: I usually isolate external dependencies like reading a config file in their own facade class with very little functionality. In tests I can create a mock version of this class and use that instead of the real config file. You can create your own mockups or use a framework like Moq or Rhino Mocks for this.
That way you can easily try out your code with different configuration values without writing complex tests that first write xml-configuration files. The code that reads the configuration is usually so simple that it needs very little testing.
A: You can modify your config section at runtime in your test setup. E.g:
// setup
System.Configuration.Configuration config =
ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
config.Sections.Add("sectionname", new ConfigSectionType());
ConfigSectionType section = (ConfigSectionType)config.GetSection("sectionname");
section.SomeProperty = "value_you_want_to_test_with";
config.Save(ConfigurationSaveMode.Modified);
ConfigurationManager.RefreshSection("sectionname");
// carry out test ...
You can of course setup your own helper methods to do this more elegantly.
A: A more elegant solution is to use plain old dependency injection on the configuration settings themselves. IMHO this is cleaner than having to mock a configuration reading class/wrapper etc.
For example, say a class "Weather" requires a "ServiceUrl" in order to function (e.g. say it calls a web service to get the weather). Rather than having some line of code that actively goes to a configuration file to get that setting (whether that code be in the Weather class or a separate configuration reader that could be mocked as per some of the other responses), the Weather class can allow the setting to be injected, either via a parameter to the constructor, or possibly via a property setter. That way, the unit tests are extremely simple and direct, and don't even require mocking.
The value of the setting can then be injected using an Inversion of Control (or Dependency Injection) container, so the consumers of the Weather class don't need to explicitly supply the value from somewhere, as it's handled by the container.
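A rough sketch of that shape, transliterated to JavaScript for illustration (the Weather/ServiceUrl names come from the example above; everything else here is assumed):

```javascript
// The setting is injected, so this class never touches a config file.
class Weather {
  constructor(serviceUrl) {
    this.serviceUrl = serviceUrl; // injected instead of read from config
  }

  // Builds the request URL; a real class would go on to call it.
  forecastUrl(city) {
    return this.serviceUrl + "?city=" + encodeURIComponent(city);
  }
}

// In production the IoC container (or composition root) supplies the
// value it read from configuration. In a unit test, the
// "configuration" is just a known constructor argument -- no mocking.
const weather = new Weather("http://example.test/weather");
console.log(weather.forecastUrl("Oslo"));
// -> "http://example.test/weather?city=Oslo"
```

The test never needs a config file, a mock reader, or a ConfigurationManager call at all.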
A: You can call the set method of ConfigurationManager.AppSettings to set the values required for that particular unit test.
[SetUp]
public void SetUp()
{
ConfigurationManager.AppSettings.Set("SettingKey", "SettingValue");
// rest of unit test code follows
}
When the unit test runs it will then use these values to run the code
A: That worked for me:
public static void BasicSetup()
{
ConnectionStringSettings connectionStringSettings =
new ConnectionStringSettings();
connectionStringSettings.Name = "testmasterconnection";
connectionStringSettings.ConnectionString =
"server=localhost;user=some;database=some;port=3306;";
ConfigurationManager.ConnectionStrings.Clear();
ConfigurationManager.ConnectionStrings.Add(connectionStringSettings);
}
A: You can both read and write to the app.config file with the ConfigurationManager class
A: I was facing similar problems with web.config... I found an interesting solution. You can encapsulate the configuration reading function, e.g. something like this:
public class MyClass {
public static Func<string, string>
GetConfigValue = s => ConfigurationManager.AppSettings[s];
//...
}
And then normally use
string connectionString = MyClass.GetConfigValue("myConfigValue");
but in unit test initialize "override" the function like this:
MyClass.GetConfigValue = s => s == "myConfigValue" ? "Hi" : string.Empty;
More about it:
http://rogeralsing.com/2009/05/07/the-simplest-form-of-configurable-dependency-injection/
A: You can always wrap the reading-in bit in an interface, and have a specific implementation read from the config file. You would then write tests using Mock Objects to see how the program handled bad values.
Personally, I wouldn't test this specific implementation, as this is .NET Framework code (and I'm assuming - hopefully - the MS has already tested it).
A: System.Configuration.Abstractions is a thing of beauty when it comes to testing this kind of stuff.
The GitHub project site has some good examples.
Here is the NuGet site: https://www.nuget.org/packages/System.Configuration.Abstractions/
I use this in almost all of my .NET projects.
A: Actually, thinking on it further, I suppose what I should do is create a ConfigFileReader class for use in my project and then fake it out in the unit test harness?
Is that the usual thing to do?
A: The simplest option is to wrap the methods that read configuration such that you can substitute in values during testing. Create an interface that you use for reading config and have an implementation of that interface get passed in as a constructor parameter or set on the object as a property (as you would using dependency injection/inversion of control). In the production environment, pass in an implementation that really reads from configuration; in the test environment, pass in a test implementation that returns a known value.
If you don't have the option of refactoring the code for testability yet still need to test it, Typemock Isolator provides the ability to actually mock the .NET framework configuration classes so you can just say "next time I ask for such-and-such appSettings value, return this known value."
A: I had the same issue. You can use nunit-console.exe c:\path1\testdll1.dll c:\path2\testdll2.dll. This works fine even though the two DLLs point to different app.config files, e.g. testdll1.dll.config and testdll2.dll.config. If you want to use an NUnit project config and wrap these two DLLs, there is no way to have two configs: you have to have project1.config (if your NUnit project is project1.nunit) in the same location as project1.nunit. Hope this helps.
A: Well, I just had the same problem...
I wanted to test a BL project that is referenced from a web site, but I wanted to test the BL only. So in the pre-build event of the test project I copy the app.config files into the bin\debug folder and reference them from the app.config...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168931",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "61"
} |
Q: How do I include IBM XLC template *.c files in the make dependency file? For the XLC compiler, templated code goes in a *.c file. Then when a program that uses the template functions is compiled, the compiler finds the template definitions in the .c file and instantiates them.
The problem is that these .c files are not by default included when doing an xlC -qmakedepend to generate the build dependencies. So if you change one of those .c files, you won't automatically build everything that depends on it.
Has anyone found a good solution to this problem?
A: In short, the answer is to migrate off using the XLC's tempinc utility.
The tempinc utility requires you to set up your files with the template declarations in your header (.h or .hpp) file and your implementations in a .c file (this extension is mandatory). As the compiler finds template instantiations, it will put explicit instantiations in another source file in your tempinc directory, forcing code to be generated for them. The compiler knows to find the template definitions declared in foo.h in foo.c.
The problem I specified is that the dependency builders don't know about this, and thus can't include your .c files in the dependencies.
With Version 6.0, IBM recommends using the -qtemplateregistry setting rather than -qtempinc. Then you can use a typical template setup: include the template definitions in your header file, where they will be visible to the dependency finder, or put them in a separate file which you #include from your header file, where they will also be found by the dependency finder.
If you are migrating from using -qtempinc, you can conditionally #include your template implementation file from your declaration file with code like below:
// end of foo.h
#ifndef __TEMPINC__
#include "foo.c"
#endif
Thus your code will build and link if you ever decide to go back to using the -qtempinc setting.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: IIS Returning Old User Names to my application Here's my scenario. I created an application which uses Integrated Windows Authentication in order to work. In Application_AuthenticateRequest(), I use HttpContext.Current.User.Identity to get the current WindowsPrincipal of the user of my website.
Now here's the funny part. Some of our users have recently gotten married, and their names change (i.e. the user's NT login changes from jsmith to jjones). When my application authenticates them, IIS passes me their OLD LOGIN. I continue to see jsmith passed to my application until I reboot my SERVER! Logging off the client does not work. Restarting the app pool does not work. Only a full reboot.
Does anyone know what's going on here? Is there some sort of command I can use to flush whatever cache is giving me this problem? Is my server misconfigured?
Note: I definitely do NOT want to restart IIS, my application pools, or the machine. As this is a production box, these are not really viable options.
AviD -
Yes, their UPN was changed along with their login name. And Mark/Nick... This is a production enterprise server... It can't just be rebooted or have IIS restarted.
Follow up (for posterity):
Grhm's answer was spot-on. This problem pops up in low-volume servers where you don't have a lot of people using your applications, but enough requests are made to keep the users' identity in the cache. The key part of the KB which seems to describe why the cache item is not refreshed after the default of 10 minutes is:
The cache entries do time out, however chances are that recurring
queries by applications keep the existing cache entry alive for the
maximum lifetime of the cache entry.
I'm not exactly sure what in our code was causing this (the recurring queries), but the resolution which worked for us was to cut the LsaLookupCacheExpireTime value from the seemingly obscene default of 1 week to just a few hours. This, for us, cut the probability that a user would be impacted in the real world to essentially zero, and yet at the same time doesn't cause an extreme number of SID-Name lookups against our directory servers. An even better solution IMO would be if applications looked up user information by SID instead of mapping user data to textual login name. (Take note, vendors! If you're relying on AD authentication in your application, you'll want to put the SID in your authentication database!)
A: The problem, as AviD identified, is the Active Directory cache, which you can control via the registry. Whether AviD's group policy options work or fail depends on whether you are actually logging the users on.
How it is being cached depends on how you are authenticating on IIS. I suspect it could be Kerberos; if so, you may want to try klist with the purge option, which should purge Kerberos tickets and force a reauthentication to AD on the next attempt, updating the details.
I would also suggest looking at implementing this which is slightly more complex but far less error prone.
A: I know we've had cached credentials problems in IIS in the past here, too, and after Googling for days we came across an obscure (to us, at least) command you can use to view and clear cached credentials.
Start -> Run (or WinKey+R) and type control keymgr.dll
This fixed our problems for client machines. Haven't tried it on servers but it might be worth a shot if its the server caching credentials. Our problem was we were getting old credentials but only on a client machine basis. If the user logged in on a separate client machine, everything was fine, but if they used their own machine at their desk that they normally work on it had the cached old credentials.
A: If it's not an issue of changing only the NT Username, then it does seem that the authentication service is caching the old username.
You can define this to be disabled, go to the Local Security Settings (in Administrative Tools), and depending on version/edition/configuration the settings that are possible relevant (from memory) are "Number of previous logons to cache" and "Do not allow storage of credentials...".
Additional factors to take into account:
*
*Domain membership might affect this, as member servers may inherit domain settings
*You may still need to restart the whole server once for this to take affect (but then you won't have to worry about updates in the future).
*Logon performance might be affected.
As such, I recommend you test this first before deploying on production (of course).
A: I've had similar issues lately and as stated in Robert MacLean's answer, AviD's group policy changes don't work if you're not logging in as the users.
I found that changing the LSA lookup cache size as described in MS KB946358 worked without rebooting or recycling any app pool or services.
I found this as an answer to this similar question: Wrong authentication after changing user's logon name.
You might want to look into system calls such as the following:
LookupAccountName()
LookupAccountSid()
LsaOpenPolicy()
You could use them to write a C++/CLI (/Managed-C++) app to interrogate the LSA cache.
A: Restarting IIS, not the whole machine, should do the trick.
A: When these users' names were changed, did you change only their NT Login names, or their UPN names too? the UPN names are the proper names, and used by Kerberos - which is the default protocol for IWA; however, if you just click to change their name in ActiveDirectory, only the NT Login name changes - even though thats what they would use to login (using the default windows GINA). Under the covers, windows would translate the (new) NT Login name to the (old) Kerberos name. This persists until AD is forced to update the Kerberos name according to the NT Login name...
A: Login to the server that runs the IIS using the new login name in question. This will refresh the credential without re-starting IIS or rebooting the server.
A: Just as an FYI we had the exact same issue. What appeared to work for us is to go into Active Directory and do a "Refresh". Immediately after this we had to recycle the application pool on the intranet sites that were having this issue.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: Parallel HTTP requests in PHP using PECL HTTP classes [Answer: HttpRequestPool class]
The HttpRequestPool class provides a solution. Many thanks to those who pointed this out.
A brief tutorial can be found at: http://www.phptutorial.info/?HttpRequestPool-construct
Problem
I'd like to make concurrent/parallel/simultaneous HTTP requests in PHP. I'd like to avoid consecutive requests as:
*
*a set of requests will take too long to complete; the more requests the longer
*the timeout of one request midway through a set may cause later requests to not be made (if a script has an execution time limit)
I have managed to find details for making simultaneuos [sic] HTTP requests in PHP with cURL, however I'd like to explicitly use PHP's HTTP functions if at all possible.
Specifically, I need to POST data concurrently to a set of URLs. The URLs to which data are posted are beyond my control; they are user-set.
I don't mind if I need to wait for all requests to finish before the responses can be processed. If I set a timeout of 30 seconds on each request and requests are made concurrently, I know I must wait a maximum of 30 seconds (perhaps a little more) for all requests to complete.
I can find no details of how this might be achieved. However, I did recently notice a mention in the PHP manual of PHP5+ being able to handle concurrent HTTP requests - I intended to make a note of it at the time, forgot, and cannot find it again.
Single request example (works fine)
<?php
$request_1 = new HttpRequest($url_1, HTTP_METH_POST);
$request_1->setRawPostData($dataSet_1);
$request_1->send();
?>
Concurrent request example (incomplete, clearly)
<?php
$request_1 = new HttpRequest($url_1, HTTP_METH_POST);
$request_1->setRawPostData($dataSet_1);
$request_2 = new HttpRequest($url_2, HTTP_METH_POST);
$request_2->setRawPostData($dataSet_2);
// ...
$request_N = new HttpRequest($url_N, HTTP_METH_POST);
$request_N->setRawPostData($dataSet_N);
// Do something to send() all requests at the same time
?>
Any thoughts would be most appreciated!
Clarification 1: I'd like to stick to the PECL HTTP functions as:
*
*they offer a nice OOP interface
*they're used extensively in the application in question and sticking to what's already in use should be beneficial from a maintenance perspective
*I generally have to write fewer lines of code to make an HTTP request using the PECL HTTP functions compared to using cURL - fewer lines of code should also be beneficial from a maintenance perspective
Clarification 2: I realise PHP's HTTP functions aren't built in and perhaps I worded things wrongly there, which I shall correct. I have no concerns about people having to install extra stuff - this is not an application that is to be distributed, it's a web app with a server to itself.
Clarification 3: I'd be perfectly happy if someone authoritatively states that the PECL HTTP cannot do this.
A: Did you try HttpRequestPool (it's part of Http)? It looks like it would pool up the request objects and work them. I know I read somewhere that Http would support simultaneous requests and aside from pool I can't find anything either.
A: I once had to solve a similar problem: doing multiple requests without accumulating the response times.
The solution ended up being a custom-built function which used non-blocking sockets.
It works something like this:
$request_list = array(
# address => http request string
#
'127.0.0.1' => "GET /index.html HTTP/1.1\r\nHost: website.com\r\n\r\n",
'192.169.2.3' => "POST /form.dat HTTP/1.1\r\nForm-data: ...",
);
foreach($request_list as $addr => $http_request) {
# first, create a socket and fire request to every host
$socklist[$addr] = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_set_nonblock($socklist[$addr]); # Make operation asynchronous
# a non-blocking connect reports EINPROGRESS while completing; that is not an error
if (! socket_connect($socklist[$addr], $addr, 80)
&& socket_last_error($socklist[$addr]) != SOCKET_EINPROGRESS)
trigger_error("Cannot connect to remote address");
# the http header is send to this host
socket_send($socklist[$addr], $http_request, strlen($http_request), MSG_EOF);
}
$results = array();
# Keep polling until every socket has been exhausted and closed
while (count($socklist) > 0) {
foreach(array_keys($socklist) as $host_ip) {
$str = socket_read($socklist[$host_ip], 512, PHP_NORMAL_READ);
if ($str != "") {
# add to previous string
if (!isset($results[$host_ip])) $results[$host_ip] = '';
$results[$host_ip] .= $str;
} else {
# Done reading this socket, close it and stop polling it
socket_close($socklist[$host_ip]);
unset($socklist[$host_ip]);
}
}
}
# $results now contains an array with the full response (including http-headers)
# of every connected host.
It's much faster since chunked responses are fetched semi-parallel: socket_read doesn't block waiting for the full response, but returns whatever is currently available in the socket buffer.
You can wrap this in appropriate OOP interfaces. You will need to create the HTTP-request string yourself, and process the server response of course.
A: I'm pretty sure HttpRequestPool is what you're looking for.
To elaborate a little, you can use forking to achieve what you're looking for, but that seems unnecessarily complex and not very useful in an HTML context. While I haven't tested, this code should be it:
// let $requests be an array of requests to send
$pool = new HttpRequestPool();
foreach ($requests as $request) {
$pool->attach($request);
}
$pool->send();
foreach ($pool as $request) {
// do stuff
}
A: A friend pointed me to CurlObjects ( http://trac.curlobjects.com/trac ) recently, which I found quite useful for using curl_multi.
$curlbase = new CurlBase;
$curlbase->defaultOptions[ CURLOPT_TIMEOUT ] = 30;
$curlbase->add( new HttpPost($url, array('name'=> 'value', 'a' => 'b')));
$curlbase->add( new HttpPost($url2, array('name'=> 'value', 'a' => 'b')));
$curlbase->add( new HttpPost($url3, array('name'=> 'value', 'a' => 'b')));
$curlbase->perform();
foreach($curlbase->requests as $request) {
...
}
A: PHP's HTTP functions aren't built in, either - they're a PECL extension. If your concern is people having to install extra stuff, both solutions will have the same problem - and cURL is more likely to be installed, I'd imagine, as it comes by default with every web host I've ever been on.
A: You could use pcntl_fork() to create a separate process for each request, then wait for them to end:
http://www.php.net/manual/en/function.pcntl-fork.php
Is there any reason you don't want to use cURL? The curl_multi_* functions would allow for multiple requests at the same time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Yahoo Username Regex I need a (php) regex to match Yahoo's username rules:
Use 4 to 32 characters and start with a letter. You may use letters, numbers, underscores, and one dot (.).
A: /[a-zA-Z][a-zA-Z0-9_]*\.?[a-zA-Z0-9_]*/
And check if strlen($username) >= 4 and <= 32.
A: A one dot limit? That's tricky.
I'm no regex expert, but I think this would get it, except for that:
[A-Za-z][A-Za-z0-9_.]{3,31}
Maybe you could check for the . requirement separately?
A: /^[A-Za-z](?=[A-Za-z0-9_.]{3,31}$)[a-zA-Z0-9_]*\.?[a-zA-Z0-9_]*$/
Or a little shorter:
/^[a-z](?=[\w.]{3,31}$)\w*\.?\w*$/i
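As a quick sanity check, here's a small Python sketch exercising the short form of the pattern (the helper name is mine, not part of any API):

```python
import re

# Sketch: validate a Yahoo-style username using the short lookahead
# pattern above -- 4 to 32 chars, starts with a letter, letters/digits/
# underscores, and at most one dot. Helper name is hypothetical.
PATTERN = re.compile(r'^[a-z](?=[\w.]{3,31}$)\w*\.?\w*$', re.IGNORECASE)

def is_valid_username(name):
    return PATTERN.match(name) is not None

print(is_valid_username('john.doe_99'))  # True
print(is_valid_username('abc'))          # False: too short
print(is_valid_username('1abc'))         # False: starts with a digit
print(is_valid_username('a.b.c'))        # False: two dots
```

The lookahead enforces the total length once up front, so the rest of the pattern only has to enforce the one-dot rule.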
A: Using lookaheads you could do the following:
^(?=[A-Za-z](?:\w*(?:\.\w*)?$))(\S{4,32})$
Because you didn't specify what type of regex you needed I added a lot of Perl 5 compatible stuff. Like (?: ... ) for non-capturing parens.
Note: I added the missing close paren back in.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168956",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Does a pom.xml.template tell me everything I need to know to use the project as a dependency I'm trying to add the lucene sandbox contribution called term-highlighter to my pom.xml.
I'm not really that familiar with Maven, but the code has a pom.xml.template which
seems to imply if I add a dependency that looks like:
<dependency>
<groupId>org.apache.lucene</groupId>
<artifactId>lucene-highlighter</artifactId>
</dependency>
It might work. Can someone help me out in adding a lucene-community project to my pom.xml file?
Thanks for the comments, it turns out that adding the version was all I needed, and I just guessed it should match the lucene-core version I was using.:
<dependency>
<groupId>org.apache.lucene</groupId>
<artifactId>lucene-highlighter</artifactId>
<version>2.3.1</version>
</dependency>
A: You have to add the version number, but you only have to do it once in a project structure. That is, if the version number is defined in a parent pom, you don't have to give the version number again. (But you don't even have to provide the dependency in this case since the dependency will be inherited anyways.)
A: You have it right, but you probably want to add the version as well:
From The Maven 5 minute tutorial
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.mycompany.app</groupId>
<artifactId>my-app</artifactId>
<packaging>jar</packaging>
<version>1.0-SNAPSHOT</version>
<name>Maven Quick Start Archetype</name>
<url>http://maven.apache.org</url>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>3.8.1</version>
<scope>test</scope>
</dependency>
</dependencies>
</project>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168961",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Stop jQuery .load response from being cached I have the following code making a GET request on a URL:
$('#searchButton').click(function() {
$('#inquiry').load('/portal/?f=searchBilling&pid=' + $('#query').val());
});
But the returned result is not always reflected. For example, I made a change in the response that spit out a stack trace but the stack trace did not appear when I clicked on the search button. I looked at the underlying PHP code that controls the ajax response and it had the correct code and visiting the page directly showed the correct result but the output returned by .load was old.
If I close the browser and reopen it it works once and then starts to return the stale information. Can I control this by jQuery or do I need to have my PHP script output headers to control caching?
A: Another approach, for when you only need fresh data from the server on certain requests: append the line below to your Ajax URL.
'?_='+Math.round(Math.random()*10000)
A: /**
* Use this function as jQuery "load" to disable request caching in IE
* Example: $('selector').loadWithoutCache('url', function(){ //success function callback... });
**/
$.fn.loadWithoutCache = function (){
var elem = $(this);
var func = arguments[1];
$.ajax({
url: arguments[0],
cache: false,
dataType: "html",
success: function(data, textStatus, XMLHttpRequest) {
elem.html(data);
if(func != undefined){
func(data, textStatus, XMLHttpRequest);
}
}
});
return elem;
}
A: Sasha's idea is good; I use a mix.
I create a function
LoadWithoutCache: function (url, source) {
$.ajax({
url: url,
cache: false,
dataType: "html",
success: function (data) {
$("#" + source).html(data);
return false;
}
});
}
And invoke it for different parts of my page, for example on init:
var ExampleJS = {
    Init: function (actionUrl1, actionUrl2, actionUrl3) {
        ExampleJS.LoadWithoutCache(actionUrl1, "div1");
        ExampleJS.LoadWithoutCache(actionUrl2, "div2");
        ExampleJS.LoadWithoutCache(actionUrl3, "div3");
    }
};
A: You have to use a more complex function like $.ajax() if you want to control caching on a per-request basis. Or, if you just want to turn it off for everything, put this at the top of your script:
$.ajaxSetup ({
// Disable caching of AJAX responses
cache: false
});
A: This is of particular annoyance in IE. Basically you have to send 'no-cache' HTTP headers back with your response from the server.
A: One way is to add a unique number to the end of the url:
$('#inquiry').load('/portal/?f=searchBilling&pid=' + $('#query').val()+'&uid='+uniqueId());
Where you write uniqueId() to return something different each time it's called.
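The same idea, sketched language-agnostically in Python (the counter plays the role of the hypothetical uniqueId()):

```python
from itertools import count

# Sketch of the uniqueId() idea: a monotonically increasing counter
# guarantees a different query string on every call, so the browser
# (or any intermediate cache) never sees the same URL twice.
_uid = count()

def unique_url(base):
    return '%s&uid=%d' % (base, next(_uid))

first = unique_url('/portal/?f=searchBilling&pid=42')
second = unique_url('/portal/?f=searchBilling&pid=42')
print(first != second)  # True
```

A timestamp works the same way, as long as two calls can't land in the same millisecond.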
A: For PHP, add this line to your script which serves the information you want:
header("cache-control: no-cache");
or, add a unique variable to the query string:
"/portal/?f=searchBilling&x=" + (new Date()).getTime()
A: If you want to stick with Jquery's .load() method, add something unique to the URL like a JavaScript timestamp. "+new Date().getTime()". Notice I had to add an "&time=" so it does not alter your pid variable.
$('#searchButton').click(function() {
$('#inquiry').load('/portal/?f=searchBilling&pid=' + $('#query').val()+'&time='+new Date().getTime());
});
A: Do NOT use a timestamp to make a unique URL if you are using jQuery Mobile: every page you visit is cached in the DOM, and you will soon run out of memory on mobile devices.
$jqm(document).bind('pagebeforeload', function(event, data) {
var url = data.url;
var savePageInDOM = true;
if (url.toLowerCase().indexOf("vacancies") >= 0) {
savePageInDOM = false;
}
$jqm.mobile.cache = savePageInDOM;
})
This code runs before a page is loaded; you can use url.indexOf() to determine whether the URL is one you want to cache, and set the cache parameter accordingly.
Do not use window.location = ""; to change URL otherwise you will navigate to the address and pagebeforeload will not fire. In order to get around this problem simply use window.location.hash = "";
A: Here is an example of how to control caching on a per-request basis
$.ajax({
url: "/YourController",
cache: false,
dataType: "html",
success: function(data) {
$("#content").html(data);
}
});
A: You can replace the jquery load function with a version that has cache set to false.
(function($) {
var _load = jQuery.fn.load;
$.fn.load = function(url, params, callback) {
if ( typeof url !== "string" && _load ) {
return _load.apply( this, arguments );
}
var selector, type, response,
self = this,
off = url.indexOf(" ");
if (off > -1) {
selector = stripAndCollapse(url.slice(off));
url = url.slice(0, off);
}
// If it's a function
if (jQuery.isFunction(params)) {
// We assume that it's the callback
callback = params;
params = undefined;
// Otherwise, build a param string
} else if (params && typeof params === "object") {
type = "POST";
}
// If we have elements to modify, make the request
if (self.length > 0) {
jQuery.ajax({
url: url,
// If "type" variable is undefined, then "GET" method will be used.
// Make value of this field explicit since
// user can override it through ajaxSetup method
type: type || "GET",
dataType: "html",
cache: false,
data: params
}).done(function(responseText) {
// Save response for use in complete callback
response = arguments;
self.html(selector ?
// If a selector was specified, locate the right elements in a dummy div
// Exclude scripts to avoid IE 'Permission Denied' errors
jQuery("<div>").append(jQuery.parseHTML(responseText)).find(selector) :
// Otherwise use the full result
responseText);
// If the request succeeds, this function gets "data", "status", "jqXHR"
// but they are ignored because response was set above.
// If it fails, this function gets "jqXHR", "status", "error"
}).always(callback && function(jqXHR, status) {
self.each(function() {
callback.apply(this, response || [jqXHR.responseText, status, jqXHR]);
});
});
}
return this;
}
})(jQuery);
Place this somewhere global where it will run after jquery loads and you should be all set. Your existing load code will no longer be cached.
A: Try this:
$("#Search_Result").load("AJAX-Search.aspx?q=" + $("#q").val() + "&rnd=" + String((new Date()).getTime()).replace(/\D/gi, ''));
It worked fine when I used it.
A: I noticed that if some servers (like Apache2) are not configured to specifically allow or deny caching, the server may send a cached response by default, even if you set the HTTP headers to "no-cache". So make sure that your server is not caching anything before it sends a response:
In the case of Apache2 you have to
1) edit the "disk_cache.conf" file - to disable cache add "CacheDisable /local_files" directive
2) load mod_cache modules (On Ubuntu "sudo a2enmod cache" and "sudo a2enmod disk_cache")
3) restart the Apache2 (Ubuntu "sudo service apache2 restart");
This should do the trick disabling cache on the servers side.
Cheers! :)
A: This code may help you
var sr = $("#Search_Result");
sr.load("AJAX-Search.aspx?q=" + $("#q")
.val() + "&rnd=" + String((new Date).getTime())
.replace(/\D/gi, ""));
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "246"
} |
Q: Static and dynamic library linking In C++, static library A is linked into dynamic libraries B and C. If a class, Foo, is used in A which is defined in B, will C link if it doesn't use Foo?
I thought the answer was yes, but I am now running into a problem with xlc_r7 where library C says Foo is an undefined symbol, which it is as far as C is concerned. My problem with that is Library C isn't using the class referencing it. This links in Win32 (VC6) and OpenVMS.
Is this a linker discrepancy or a PBCAK?
New info:
*
*B depends on C, but not vice versa.
*I'm not using /OPT:REF to link on Windows and it links without issue.
A: When you statically link, two modules become one. So when you compile C and link A into it, its as if you had copied all the source code of A into the source code of C, then compiled the combined source. So C.dll includes A, which has a dependency on B via Foo. You'll need to link C to B's link library in order to satisfy that dependency.
Note that according to your info, this will create a circular dependency between B and C.
A: Sounds like it's probably the linker (ld/unix), as (most versions that I've used of) ld links the libraries in from left to right - and if there is a reference in the first one that is required by a later one the usual trick is to append the first library (or any required library) to the end of the command.
It's a try it and see....
A: Is your link line for C including the export lib for B? If so then as Richard suggest it sounds like an ordering thing.
Another suggestion is to see if there is a linker option to ignore unreferenced symbols, if C doesn't need that functionality from A. For the Microsoft linker this achieved with the /OPT:REF switch.
A: The only reason why C wouldn't link is that the compiler thinks it does need the Foo symbol.
Since C doesn't refer to Foo symbols, there has to be another reason why the linker needs the symbol.
The only other reason I know of, is an export of some kind. I know only of Visual C++, so I suggest you search for some equivalent of __declspec( dllexport ) in the preprocessed files, and see what generates it.
Here's what I'd do: have the preprocessor output stored in a separate file and search it for occurrences of Foo. Either it will occur as an export, or it has been referenced in some way by the compiler.
A: If the definition of a particular function is not required, then the library providing it will not be linked in during the linking phase. In your case, the definition of foo is present in library B and not in library C, so library C would not need it when the executable is loaded into memory.
But it seems you are using the foo() function in library C as well, which is why you are getting the corresponding error.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Restoring UI settings in C# WinForms - which event to respond to? When is the proper time to restore the UI layout settings of a System.Windows.Forms.Control?
I tried this:
FooBarGadget control = new FooBarGadget();
parent.Controls.Add(control);
control.FobnicatorWidth = lastLayoutSettings.FobWidth;
No cigar. Reason? control isn't finished laying out its internals; it's in its default size of 100x100 pixels after construction. Once it's done loading and actually displays in the UI, it will be 500x500 pixels. Therefore, setting the FobnicatorWidth to 200 pixels fails; it's larger than the control.
Is there a control.Loaded event - somewhere where I can restore my saved UI settings?
A: If you're creating this control as part of loading a new Form, a good place to reload saved settings would be in Form.OnLoad (or respond to the Form.Load event). Another event that might be helpful is Control.HandleCreated, which happens when the underlying window of your control is created.
If neither of these helps, perhaps more information about your particular scenario will help us get to a better answer.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to get effect of sorted database view? I'd like to be able to (effectively) sort a database view - I know that conceptually order in a db view is invalid, but I have the following scenario to deal with:
*
*a third-party legacy application, that reads data from database tables using a select(*) from tablename statement
*the legacy application is very sensitive to the order of the records
*an application I've written to allow users to manage the data in the tables more easily, but inserts and deletes from the table naturally upset the order of the records.
Changing the statement in the legacy application to select (*) from tablename order by field would fix my problem, but isn't an option.
So - I've set up a staging table into which the data can be exported in the right order, but this is a resource-hungry option, means that the data isn't 'live' in the legacy application and is additional work for users.
I'd like to be able to get at an ordered version of the table with these constraints. Any ideas how?
Update - I'm working with Sybase 12.5, but I'd like to avoid a tightly coupled solution with a specific RDBMS - it might change.
I cannot add an 'order by' clause to a view, because of SQL standards as referred to in this Wikipedia entry
A: First off, I've had to work on this type of project before and it's truly a bitch. My condolences.
This is a little out there, but if your DBMS supports it, perhaps you could create a user defined table function that does an ordered select from your legacy table, then set up your view to select from the UDTF. It's not anything I've ever done before though.
A: It's not nice, but it works
CREATE VIEW OrderedTable
AS SELECT TOP (Select Count(*) from UnorderedTable) *
FROM UnorderedTable Order By field
A: You might try a table-valued function. You didn't specify your database vendor, but here's how you would do it in TSQL (Sql Server):
CREATE FUNCTION orderedTable()
RETURNS @returnTable TABLE
(val varchar(100)) AS
BEGIN
insert @returnTable (val)
select val from MyTable
order by val desc
RETURN
END
GO
SELECT * FROM orderedTable
A: If I'm understanding this correctly, you could solve this by:
1) Renaming the original table
2) Creating a view with the name of the table that the legacy app queries.
3) Define the view to be a query that orders the records in the way that legacy app expects.
A: In MS Sql Server, we can bastardize the standards and create a view such as:
SELECT TOP 100 PERCENT * FROM TABLE ORDER BY 1
which lets us get around issues like this. Since Sybase and Sql Server share T-SQL, I'd think there's a good chance you could do that as well.
Alternatively, you can set a clustered index on the field that it should be ordered by. This will force the storage order, which will* then be returned as the "natural order".
*
*This is, admittedly, an implementation detail. The SQL standards (nor any specific vendor, AFAIK) don't guarantee ordering without an order by clause...but, hey, if you could use that, then you wouldn't be asking.
A: Following up on the info you guys have provided, looks like I can't do what I want to on Sybase ASE 12.5.
MSSQL Server and Sybase ASE 15.x should do what's needed - hopefully I'll be able to arrange something. Not really sure which one to accept til I've got something working, but I'll come back and accept an answer then.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Make tag the same maximum width regardless of capitalization of text within I'm trying to display a series of titles varying from 60 characters to 160 or so and the capitalization varies, some of it all caps, some half caps. When it's mostly lowercase the whole 160 characters of text fits in the width I want, but when it starts getting more caps (they must be wider), it starts over flowing.
Is there a way to use an attractive fixed-width font (upper- and lowercase widths the same too), or dynamically shrink the text to fit, or otherwise recognize how much space the text is going to take on the server side and cut off the end dynamically? Or do you folks have a better solution?
A: Control the Overflow
The real trick is just setting a limit on size of the text box, and making sure that there aren't overflow problems. You can use overflow: hidden to take care of this, and display: block the element in order to give it the exact dimensions you need.
Monospace is Optional
Yes, you can use a monospace font.. there are only a few to choose from if you want a cross-browser solution. You can use a variable-width font, too.. the monospace will just help you get consistency with the capitalization problem you described. Using a monospace font will help you to choose a good width that will work for different text lengths. In my example below, I've arbitrarily chosen a width of 250 pixels, and typed strings until they were well past the limit, just for the purposes of illustration.
Line-heights and Margins
You want the line height of the text to match the height of the box.. in this case, I've used 20 pixels. If you need to create line height, you can add a bottom margin.
Side note: I've used an h3 here, because the text is repeated many times across the page. In general it's a better choice to use a lower level of header for more common text (just a semantic choice). Using an h1 will work the same way..
<html>
<head>
<title>h1 stackoverflow question</title>
<style type="text/css">
* { margin:0; padding:0 }
h3 {
display: block;
width: 250px;
height: 20px;
margin-bottom: 5px;
overflow: hidden;
font-family: Courier, Lucidatypewriter, monospace;
font: normal 20px/20px Courier;
border: 1px solid red;
}
</style>
</head>
<body>
<h3>Hello, World</h3>
<h3>Lorem Ipsum dolor sit Amet</h3>
<h3>Adipiscing Lorem dolor sit lorem ipsum</h3>
<h3>"C" is for Cookie, that's good enough for lorem ipsum</h3>
<h3>Oh, it's a lorem ipsum dolor sit amet. Adipiscing elit.</h3>
</body>
</html>
A: When you say "cut off the end dynamically", am I wrong in assuming that a CSS rule like:
h1 {
width: 400px; /* or whatever width */
overflow: hidden;
}
would "cut the end off" as you want?
A: You could also try an ellipsis solution. Truncate the text at a maximum width and apply an ellipsis. Something like:
My title is way too long for this...
CSS3 has text-overflow: ellipsis you can use, but it's not supported in Firefox.
Hedger Wang has found a workaround that I have used a couple times. Pretty handy.
A: You could fix the width and hide the overflow, style="width: Xpx; overflow: hidden;"
That will limit the width and cut off the end if it's too wide.
A: MrZebra is right in how to hide the overflow, but if there's an attractive fixed width font you want to use you can set it with CSS font-family, just be sure to give it a fallback for people without the font.
You could also use CSS to enforce the capitalization with 'text-transform', if you wanted (though from your reading, that's not your desire).
font-variant:small-caps might work, too.
A: You could try using javascript to programmatically test to see the width of the font, and if it's too large, take it down a step and try again. Instead of testing the width, see if the height of the element is more than one line (measured in ems, since you'll be changing the font size around).
var fontSize = "200%"; // your regular heading font size
var h1 = document.getElementById("myHeading");
while (h1.offsetHeight > oneLine) {
fontSize = (parseInt(fontSize) - 5) + "%";
h1.style.fontSize = fontSize;
}
you'll have to figure out that "oneLine" bit for yourself, sorry.
A: Rather than trying to constrain the height of your <h1> I would adjust your CSS to make your site more fluid - afterall, you don't want your site to appear broken if the user increases their text size.
Try setting the h1's height in ems rather than pixels. If you add this to your CSS:
body {
font:62.5%/140% Courier, Lucidatypewriter, monospace;
}
It will make 1em = 10px, so then you can set your heading's height to:
h1, h3 {
....
height:2em;
....
}
Hope this helps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Regex for parsing directory and filename I'm trying to write a regex that will parse out the directory and filename of a fully qualified path using matching groups.
so...
/var/log/xyz/10032008.log
would recognize group 1 to be "/var/log/xyz" and group 2 to be "10032008.log"
Seems simple but I can't get the matching groups to work for the life of me.
NOTE: As pointed out by some of the respondents this is probably not a good use of regular expressions. Generally I'd prefer to use the file API of the language I was using. What I'm actually trying to do is a little more complicated than this but would have been much more difficult to explain, so I chose a domain that everyone would be familiar with in order to most succinctly describe the root problem.
A: What language? and why use regex for this simple task?
If you must:
^(.*)/([^/]*)$
gives you the two parts you wanted. You might need to quote the parentheses:
^\(.*\)/\([^/]*\)$
depending on your preferred language syntax.
But I suggest you just use your language's string search function that finds the last "/" character, and split the string on that index.
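Both suggestions are easy to check; here's a small Python sketch of the regex split alongside the standard-library split on the last `/` (the helper name is mine):

```python
import re
import os.path

# Sketch: split a POSIX-style path into (directory, filename) using the
# regex from the answer above. Returns ('', path) when there is no slash.
def split_path(path):
    m = re.match(r'^(.*)/([^/]*)$', path)
    if m is None:
        return ('', path)
    return (m.group(1), m.group(2))

print(split_path('/var/log/xyz/10032008.log'))     # ('/var/log/xyz', '10032008.log')
print(os.path.split('/var/log/xyz/10032008.log'))  # ('/var/log/xyz', '10032008.log')
```

Because `(.*)` is greedy, the `/` in the pattern matches the *last* slash, which is exactly the split point the question asks for.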
A: Reasoning:
I did a little research by trial and error, and found that every character available on the keyboard can appear in a file or directory name on a *nix machine, except '/'.
I used the touch command to create a file for each of the following characters, and it succeeded every time.
(Comma separated values below)
'!', '@', '#', '$', "'", '%', '^', '&', '*', '(', ')', ' ', '"', '\', '-', ',', '[', ']', '{', '}', '`', '~', '>', '<', '=', '+', ';', ':', '|'
It failed only when I tried creating '/' (because that is the root directory) and a filename containing / (because it is the file separator).
And touch . just changed the modified time of the current directory. However, .file.log is possible.
And of course, a-z, A-Z, 0-9, - (hypen), _ (underscore) should work.
Outcome
So, by the above reasoning, we know that a file or directory name can contain anything except a forward slash (/). Our regex will therefore be derived from what cannot be present in the file/directory name.
/(?:(?P<dir>(?:[/]?)(?:[^\/]+/)+)(?P<filename>[^/]+))/
Step by Step regexp creation process
Pattern Explanation
Step-1: Start with matching root directory
A directory can start with / when it is absolute path and directory name when it's relative. Hence, look for / with zero or one occurrence.
/(?P<filepath>(?P<root>[/]?)(?P<rest_of_the_path>.+))/
Step-2: Try to find the first directory.
Next, a directory and its child are always separated by /. And a directory name can be anything except /. Let's match /var/ first, then.
/(?P<filepath>(?P<first_directory>(?P<root>[/]?)[^\/]+/)(?P<rest_of_the_path>.+))/
Step-3: Get full directory path for the file
Next, let's match all directories
/(?P<filepath>(?P<dir>(?P<root>[/]?)(?P<single_dir>[^\/]+/)+)(?P<rest_of_the_path>.+))/
Here, single_dir shows xyz/ because it first matched var/, then the next occurrence of the same pattern, log/, then the next occurrence, xyz/. A repeated group reports only its last occurrence.
Step-4: Match filename and clean up
Now, we know that we're never going to use the groups like single_dir, filepath, root. Hence let's clean that up.
Let's keep them as groups, but make them non-capturing.
And rest_of_the_path is just the filename! So rename it. A filename will not contain /, so it's better to keep [^/]
/(?:(?P<dir>(?:[/]?)(?:[^\/]+/)+)(?P<filename>[^/]+))/
This brings us to the final result. Of course, there are several other ways you can do it. I am just mentioning one of the ways here.
Regex Rules used above are listed here
^ means string starts with
(?P<dir>pattern) means a named capture group. We have two named groups, dir and filename
(?:pattern) means don't consider this group or non-capturing group.
? means match zero or one.
+ means match one or more
[^\/] means matches any char except forward slash (/)
[/]? means if it is absolute path then it can start with / otherwise it won't. So, match zero or one occurrence of /.
[^\/]+/ means one or more characters which aren't forward slash (/) which is followed by a forward slash (/). This will match var/ or xyz/. One directory at a time.
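The final named-group pattern can be verified directly in Python (the escaped slash inside the character class is harmless there):

```python
import re

# Quick check of the final named-group pattern derived above, against
# the path from the question. Works for absolute and relative paths.
PATTERN = re.compile(r'(?P<dir>/?(?:[^/]+/)+)(?P<filename>[^/]+)')

m = PATTERN.match('/var/log/xyz/10032008.log')
print(m.group('dir'))       # /var/log/xyz/
print(m.group('filename'))  # 10032008.log
```

Note that, unlike the accepted answer's grouping, this one keeps the trailing slash on the directory part.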
A: Try this:
^(.+)\/([^\/]+)$
EDIT: escaped the forward slash to prevent problems when copy/pasting the Regex
A: In languages that support regular expressions with non-capturing groups:
((?:[^/]*/)*)(.*)
I'll explain the gnarly regex by exploding it...
(
(?:
[^/]*
/
)
*
)
(.*)
What the parts mean:
( -- capture group 1 starts
(?: -- non-capturing group starts
[^/]* -- greedily match as many non-directory separators as possible
/ -- match a single directory-separator character
) -- non-capturing group ends
* -- repeat the non-capturing group zero-or-more times
) -- capture group 1 ends
(.*) -- capture all remaining characters in group 2
Example
To test the regular expression, I used the following Perl script...
#!/usr/bin/perl -w
use strict;
use warnings;
sub test {
my $str = shift;
my $testname = shift;
$str =~ m#((?:[^/]*/)*)(.*)#;
print "$str -- $testname\n";
print " 1: $1\n";
print " 2: $2\n\n";
}
test('/var/log/xyz/10032008.log', 'absolute path');
test('var/log/xyz/10032008.log', 'relative path');
test('10032008.log', 'filename-only');
test('/10032008.log', 'file directly under root');
The output of the script...
/var/log/xyz/10032008.log -- absolute path
1: /var/log/xyz/
2: 10032008.log
var/log/xyz/10032008.log -- relative path
1: var/log/xyz/
2: 10032008.log
10032008.log -- filename-only
1:
2: 10032008.log
/10032008.log -- file directly under root
1: /
2: 10032008.log
A: What about this?
[/]{0,1}([^/]+[/])*([^/]*)
Deterministic :
((/)|())([^/]+/)*([^/]*)
Strict :
^[/]{0,1}([^/]+[/])*([^/]*)$
^((/)|())([^/]+/)*([^/]*)$
A: Most languages have path parsing functions that will give you this already. If you have the ability, I'd recommend using what comes to you for free out-of-the-box.
Assuming / is the path delimiter...
^(.*/)([^/]*)$
The first group will be whatever the directory/path info is, the second will be the filename. For example:
*
*/foo/bar/baz.log: "/foo/bar/" is the path, "baz.log" is the file
*foo/bar.log: "foo/" is the path, "bar.log" is the file
*/foo/bar: "/foo/" is the path, "bar" is the file
*/foo/bar/: "/foo/bar/" is the path and there is no file.
A: A very late answer, but hope this will help
^(.+?)/([\w]+\.log)$
This uses lazy check for /, and I just modified the accepted answer
http://regex101.com/r/gV2xB7/1
A: Try this:
/^(\/([^/]+\/)*)(.*)$/
It will leave the trailing slash on the path, though.
A: Given an example upload folder URL:
https://drive.google.com/drive/folders/14Q6d-KiwgTKE-qm5EOZvHeX86-Wf9Q5f?usp=sharing
The regular expression pattern is:
[-\w]{25,}
This pattern also works in Google Sheets as well as custom functions in Excel:
=REGEXEXTRACT(N2,"[-\w]{25,}")
The result is: 14Q6d-KiwgTKE-qm5EOZvHeX86-Wf9Q5f
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42"
} |
Q: Pseudocode Programming Process vs. Test Driven Development For those who haven't read Code Complete 2, the Pseudocode Programming Process is basically a way to design a routine by describing it in plain English first, then gradually revise it to more detailed pseudocode, and finally to code. The main benefit of this is to help you stay at the right level of abstraction by building systems top-down rather than bottom-up, thereby evolving a clean API in distinct layers. I find that TDD is less effective at this, because it focuses too much on doing the bare minimum to get a test to pass and encourages little up-front design. I also find that having to maintain a suite of unit tests for unstable code (code that's constantly being refactored) is quite difficult, because it's typically the case that you have a dozen unit tests for a routine that's only needed once or twice. When you do refactor - change a method signature, for example - most of the work you do is in updating the tests rather than the prod code. I prefer adding unit tests after a component's code has stabilized a bit.
My question is - of those who've tried both approaches, which do you prefer?
A: My team mixes both approaches and it's an awesome way to develop (at least for us). We need unit tests because we have a large and complex software system. But the Pseudocode Programming Process is hands-down the best approach to software design I've come across. To make them work together:
*
*We start by writing our classes,
and fill in with fully commented
method stubs, with inputs and
outputs.
*We use pair coding and peer review as a dialogue to refine and validate the design, still only with the method stubs.
*At this point we've now both designed our system and have some testable code. So we go ahead and write our unit tests.
*We go back in and start filling in the methods with comments for the logic that needs to be written.
*We write code; the tests pass.
The beauty of it is that by the time we actually write code, most of the work of implementation is already done, because so much of what we think of as implementation is actually code design. Also the early process replaces the need for UML - class and method stubs are just as descriptive, plus it'll actually be used. And we always stay at the appropriate level of abstraction.
Obviously the process is never really quite as linear as I've described - some quirk of implementation may mean that we need to revisit the high-level design. But in general, by the time we write unit tests the design is really quite stable (at the method level), so no need for lots of test rewriting.
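As a rough illustration of the stub-first workflow above, here is a minimal sketch in Python (the class, methods, and numbers are invented for illustration; the comments stand in for the "fully commented method stubs" of step one):

```python
import unittest

# Stages 1-2: the class is first written as commented stubs (the bodies
# originally just raised NotImplementedError); the docstrings are the design.
class InvoiceCalculator:
    def subtotal(self, line_items):
        """Return the sum of quantity * unit_price over all line items."""
        return sum(qty * price for qty, price in line_items)

    def total(self, line_items, tax_rate):
        """Return subtotal plus tax, rounded to two decimal places."""
        return round(self.subtotal(line_items) * (1 + tax_rate), 2)

# Stage 3: with the interface fixed, the tests are written against the stubs,
# before the method bodies are filled in.
class InvoiceCalculatorTest(unittest.TestCase):
    def test_total_applies_tax(self):
        calc = InvoiceCalculator()
        items = [(2, 10.0), (1, 5.0)]
        self.assertEqual(calc.subtotal(items), 25.0)
        self.assertEqual(calc.total(items, 0.08), 27.0)
```

By the time the bodies are written, the tests and the design already agree on every signature, so filling in the logic is the easy part.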
A: With Test Driven Development you should still be doing some planning in the beginning. It should at first be a high level look at what you're trying to do. Don't come up with all the details, but get an idea in plain English of how to solve the problem.
Then start testing the problem. Once you've got the test in place, start to make it pass. If it isn't easy to do, you may need to revise your initial plan. If there are problems, just revise. The test is not there to define the solution; it is there to allow you to make changes so you can have a better solution while ensuring stability.
I would say the best bet is to use TDD. The key is to realize that TDD doesn't mean "skip the planning". TDD means do a little bit of planning to get started well, and adjust as needed. You may not even need to adjust.
A: In general, I find pseudocode only really becomes relevant when the code required to solve the problem is much more complicated than the code required to test the solution. If this is not the case, I do not run into the difficulties you describe, as the simplest thing that could possibly work is usually an acceptable solution for the amount of time worth spending on the problem.
If, on the other hand, the problem is complicated, I need to think through how to approach it before I can write even an initial naive solution - I still need to plan before I code; therefore, I use a combination of both approaches: an English description of what I will initially write, then a test harness, then naive solution code, then refinement.
A: I've used both along with Big Upfront Development, all three have their places depending on issues such as language, team dynamics and program size/complexity.
In dynamic languages (particularly ruby), I highly recommend TDD, it will help you catch errors that other languages would have caught at compile time.
In a large, complex system, the more design you do upfront the better off you will be. It seems like when I designed for a large project, every area that I hand-waved and said "this should be pretty straight forward" was a stumbling point later in the project.
If you are working alone on something small in a statically-typed language, the list approach is reasonable and will save you a good deal of time over TDD (test maintenance is NOT free, although writing the tests in the first place isn't too bad)--When there aren't any tests in the system you're working on, adding in tests isn't always admired and you might even draw some unwanted attention.
A: Just because the test passes, doesn't mean you're done.
TDD is best characterized by Red - Green - Refactor.
Having a test provides one (of two) goal lines. It's just the first, minimal set of requirements. The real goal is the same goal as "Pseudocode Programming Process" or any design discipline.
Also, TDD is driven by testing, but that doesn't mean driven blindly by testing. You can iterate your testing the same way you iterate your code. There's no place for dogmatic adherence to a dumb plan here. This is an Agile technique -- that means adapt it to your team and your circumstances.
Design enough code to have a testable interface. Design enough tests to be sure the interface will work. Design some more tests and some more implementation until you see the need to refactor.
The real goal is Good Software. TDD can't exclude "goodness".
A technique is not a restrictive mandate. A technique should be looked at as a crutch to help you produce good code. If I were smarter, richer and better-looking, I wouldn't need TDD. But since I'm as dumb as I am, I need a crutch to help me refactor.
A: For me TDD has an ace pseudocoding just can't compete with - both help you abstract and plan the development, but once you're finished development in TDD land you still have the unit tests.
As useful an approach as CC2's pseudocoding is, it just can't match that. TDD is only half about designing; it also provides a rigorous scaffold you can evolve the project forward from. However, I see no reason why you can't pseudocode to solve the problems TDD sets.
I must not develop organically.
Pseudocode is the mind-killer.
It is the little-death that brings project memory oblivion.
I will face my 90's methodology.
I will permit it to pass over me and through me.
And when it has gone past I will turn the inner eye to see its path.
Where the pseudocode has gone there will be TDD.
Only unit-tests will remain.
(please don't flame me for that, I'm only half serious :P )
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Memory leaking with [NSKeyedUnarchiver decodeObjectForKey] Every time I call this method my NSMutableData is leaking and I cannot figure out how to plug it. theData's retain count is upped by one after the decoder is allocated and initialized and I have no idea why. I am stuck with a retain count of 2 at the end of the method and attempting to release it causes an app crash.
- (void)readVenueArchiveFile:(NSString *)inFile key:(NSString *)inKey
{
NSMutableData *theData;
NSKeyedUnarchiver *decoder;
theData = [NSData dataWithContentsOfFile:inFile];
decoder = [[NSKeyedUnarchiver alloc] initForReadingWithData:theData];
venueIOList = [[decoder decodeObjectForKey:inKey] mutableCopy];
[decoder finishDecoding];
[decoder release];
}
A: Reducing peak memory footprint
In general, it is considered best practice to avoid generating autoreleased objects.
[Most of this paragraph amended from this question.] Since you typically(1) don't have direct control over their lifetime, autoreleased objects can persist for a comparatively long time and unnecessarily increase the memory footprint of your application. Whilst on the desktop this may be of little consequence, on more constrained platforms this can be a significant issue. On all platforms, therefore, and especially on more constrained platforms, where possible you are strongly discouraged from using methods that would lead to autoreleased objects and instead encouraged to use the alloc/init pattern.
I would suggest replacing this:
theData = [NSData dataWithContentsOfFile:inFile];
with:
theData = [[NSData alloc] initWithContentsOfFile:inFile];
then at the end of the method add:
[theData release];
This means that theData will be deallocated before the method exits.
You should end up with:
- (void)readVenueArchiveFile:(NSString *)inFile key:(NSString *)inKey
{
NSMutableData *theData;
NSKeyedUnarchiver *decoder;
theData = [[NSData alloc] initWithContentsOfFile:inFile];
decoder = [[NSKeyedUnarchiver alloc] initForReadingWithData:theData];
ListClassName *decodedList = [decoder decodeObjectForKey:inKey];
self.venueIOList = decodedList;
[decoder finishDecoding];
[decoder release];
[theData release];
}
This makes the memory management semantics clear, and reclaims memory as quickly as possible.
(1) You can take control by using your own local autorelease pools. For more on this, see Apple's Memory Management Programming Guide.
A: I would suggest replacing this line:
venueIOList = [[decoder decodeObjectForKey:inKey] mutableCopy];
with:
ListClassName *decodedList = [decoder decodeObjectForKey:inKey];
self.venueIOList = decodedList;
This makes the memory management of decodedList clear. It is considered best practice to assign instance variables using an accessor method (except in init methods). In your current implementation, if you ever invoke readVenueArchiveFile: a second time on the same object, you will leak (as you will if decodedList already has a value). Moreover, you can put the copy logic in your accessor method and forget about it rather than having to remember mutableCopy every time you assign a new value (assuming there's a good reason to make a mutable copy anyway?).
A: Don't worry about retain counts, worry about balance within a method. What you're doing in this method looks correct, assuming venueIOList is an instance variable.
To expand on my answer a little bit: The unarchiver might be retaining your data during the unarchive operation, and then sending the data -autorelease when it's done instead of -release. Since that's not something you did, it's not something you have to care about.
A: The ultimate source for refcount-related memory management enlightenment is still, IMO, "Hold Me, Use Me, Free Me" from Stepwise.
A: Your code is correct; there is no memory leak.
theData = [NSData dataWithContentsOfFile:inFile];
is equivalent to
theData = [[[NSData alloc] initWithContentsOfFile:inFile] autorelease];
At this point theData has a reference count of 1 (if less, it would be deallocated). The reference count will be automatically decremented at some point in the future by the autorelease pool.
decoder = [[NSKeyedUnarchiver alloc] initForReadingWithData:theData];
The decoder object keeps a reference to theData which increments its reference count to 2.
After the method returns, the autorelease pool decrements this value to 1. If you release theData at the end of this method, the reference count will become 0, the object will be deallocated, and your app will crash when you try to use it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to compile classes to JDK1.5 when ant is running in JDK1.6 My development environment is running in JDK1.6, and I need to compile some classes so they are compatible with a client running JDK1.5. How would I do this with the 'javac' ant target?
A: Command line : javac -target 1.5 sourcefiles
Ant: <javac srcdir="${src}" destdir="${build}" target="1.5" />
A: <javac source="1.5"... />
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Memory leak detection while running unit tests I've got a Win32 C++ app with a suite of unit tests. After the unit tests have finished running, I'd like a human-readable report on any unfreed memory to be automatically generated. Ideally, the report will have a stack with files & line number info for each unfreed allocation. It would be nice to have them generated in a consistent order to make it easy to diff it from one run to the next. (Basically, I would like the results of valgrind --leak-check=full, but on windows).
I've had success with UMDH getting this kind of info from running processes, but that tool only seems to work if you attach to an existing process. I want this to happen automatically every time I run my unit tests.
Is there a tool that can do this? If so, how do I use it?
Thanks!
A: To obtain this sort of information, we override new/delete and malloc/free, providing our own heap implementations that store stacktraces on allocation and produce a report when the heap is destroyed (as well as adding sentinels to detect buffer overruns).
This is a fair bit of work the first time you do it. This guy has written a freeware tool that handles all the hard bits - I have not tried it out myself, but his explanation of how he wrote it is useful when rolling your own.
A: If you're using MSVC, Microsoft's Debug heap functions can be used to generate the report you want, but it may not be as automatic as you'd like (you may need to write some custom code):
_CrtSetReportMode
_CrtSetReportFile
_CrtMemState
_CrtMemCheckpoint
_CrtMemDumpStatistics
_CrtSetReportFile
_CrtSetDbgFlag
A: You can define DEBUG_NEW and that turns on some leak detection, you need to define it before including any system include files. It only checks for leaks using the new operator and of course you must recompile your code so you can't attach it like valgrind.
See more info here:
http://msdn.microsoft.com/en-us/library/tz7sxz99(VS.80).aspx
A: I did this once, but it wasn't quite as automatic. I don't have access to that code now, but here's the idea:
I used the debug functions that Mike B has mentioned (btw, they only work in Debug).
The test runner ran all tests twice, because during the first run memory is allocated for globals. The second time, the total number of allocated blocks was checked before and after each test (I think you can do it in setUp() and tearDown()). If the number was different, it meant a memory leak, and the test failed with an appropriate message. Of course, if the test itself fails, you should preserve its error message. Now to find the leak, I had to read the block allocation number of the last allocation using pBlockHeader, then set a breakpoint on it using _CrtSetBreakAlloc and run again.
More on this here: http://levsblog.wordpress.com/2008/10/31/unit-testing-memory-leaks/
A: I played around with the CRT Debug Heap functions Mike B pointed out, but ultimately I wasn't satisfied just getting the address of the leaked memory. Getting the stacks like UMDH provides makes debugging so much faster. So, in my main() function now I launch UMDH using CreateProcess before and after I run the tests to take heap snapshots. I also wrote a trivial batch file that runs my test harness and then diffs the heap snapshots. So, I launch the batch file and get my test results and a text file with the full stacks of any unfreed allocations all in one shot.
UMDH picks up a lot of false positives, so perhaps some hybrid of the CrtDebug stuff and what I'm doing now would be a better solution. But for right now I'm happy with what I've got.
Now if I just had a way to detect if I was not closing any handles...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do I write a decorator that restores the cwd? How do I write a decorator that restores the current working directory to what it was before the decorated function was called? In other words, if I use the decorator on a function that does an os.chdir(), the cwd will not be changed after the function is called.
A: You don't need to write it yourself. As of Python 3.11, the developers have written it for you. Check out their code at github.com/python/cpython. It's in the contextlib module.
import contextlib
with contextlib.chdir('/path/to/cwd/to'):
pass
A: The answer for a decorator has been given; it works at the function definition stage as requested.
With Python 2.5+, you also have an option to do that at the function call stage using a context manager:
from __future__ import with_statement # needed for 2.5 ≤ Python < 2.6
import contextlib, os
@contextlib.contextmanager
def remember_cwd():
curdir= os.getcwd()
try: yield
finally: os.chdir(curdir)
which can be used if needed at the function call time as:
print "getcwd before:", os.getcwd()
with remember_cwd():
walk_around_the_filesystem()
print "getcwd after:", os.getcwd()
It's a nice option to have.
EDIT: I added error handling as suggested by codeape. Since my answer has been voted up, it's fair to offer a complete answer, all other issues aside.
A: def preserve_cwd(function):
def decorator(*args, **kwargs):
cwd = os.getcwd()
result = function(*args, **kwargs)
os.chdir(cwd)
return result
return decorator
Here's how it's used:
@preserve_cwd
def test():
print 'was:',os.getcwd()
os.chdir('/')
print 'now:',os.getcwd()
>>> print os.getcwd()
/Users/dspitzer
>>> test()
was: /Users/dspitzer
now: /
>>> print os.getcwd()
/Users/dspitzer
A: The path.py module (which you really should use if dealing with paths in python scripts) has a context manager:
subdir = d / 'subdir' #subdir is a path object, in the path.py module
with subdir:
# here current dir is subdir
#not anymore
(credits goes to this blog post from Roberto Alsina)
A: The given answers fail to take into account that the wrapped function may raise an exception. In that case, the directory will never be restored. The code below adds exception handling to the previous answers.
as a decorator:
def preserve_cwd(function):
@functools.wraps(function)
def decorator(*args, **kwargs):
cwd = os.getcwd()
try:
return function(*args, **kwargs)
finally:
os.chdir(cwd)
return decorator
and as a context manager:
@contextlib.contextmanager
def remember_cwd():
curdir = os.getcwd()
try:
yield
finally:
os.chdir(curdir)
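A quick demonstration that the try/finally version really does restore the directory when the wrapped function raises — the decorator is repeated here so the snippet is self-contained:

```python
import functools
import os
import tempfile

def preserve_cwd(function):
    # Same decorator as above, repeated so this demo runs standalone.
    @functools.wraps(function)
    def decorator(*args, **kwargs):
        cwd = os.getcwd()
        try:
            return function(*args, **kwargs)
        finally:
            os.chdir(cwd)
    return decorator

@preserve_cwd
def go_somewhere_and_fail():
    os.chdir(tempfile.gettempdir())
    raise RuntimeError("boom")

before = os.getcwd()
try:
    go_somewhere_and_fail()
except RuntimeError:
    pass
assert os.getcwd() == before  # restored even though the function raised
```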
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: Designing a GUI How would you, as a developer with little (or no) artistic inclination, design a GUI for an application? In particular, I'm thinking about desktop apps but anything that relates to Web apps is welcome as well.
I find it extremely hard to design something that both I and potential users find pleasing. I can look up color schemes on the net, but how would I know where to place buttons/textboxes/etc.?
Update: To clarify, I don't mean what controls and such to use. Rather, are there any guidelines/hints for when I should use buttons, combos, textboxes and so on? How long should they be, and where would I place them on the form?
A: The applications I developed get clicked thousands of times per hour, so everything comes down to efficiency. I like to think of it as a sort of currency with which you can generate lots of useful axioms:
If it saves a click, +1.
If it costs a click, -1.
If the user wastes time figuring out how it works, -1. (Most custom UI elements)
If it is fundamentally intuitive, +1. (Coverflow)
If it saves first-time users a click, +1. (Wizards)
If it costs long-term users a click, -1. (Wizards)
(Thus why you have to make sure your keyboard shortcuts and tab orders make sense.)
Etc.
Etc.
Everything gets subjectively weighted and tallied and you compromise where you have to. Ultimately, it might be a naive philosophy, but its served me fairly well. Extrapolate as you see fit.
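The "click currency" tally above can be made concrete with a toy example; the proposed changes and their weights here are invented, and in practice the weighting is subjective:

```python
# Toy tally of the "click currency" heuristics described above.
# The proposed UI changes and their weights are invented for illustration.
changes = {
    "add keyboard shortcut": +1,        # saves a click
    "wizard for first-time users": +1,  # saves first-timers a click
    "same wizard for veterans": -1,     # costs long-term users a click
    "custom slider widget": -1,         # time wasted figuring out how it works
}
score = sum(changes.values())
print("net score:", score)
```

A net score near zero, as here, is the signal to compromise: keep the changes that pay for themselves and drop or rework the rest.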
A: Have you read http://msdn.microsoft.com/en-us/library/aa511258.aspx ?
Or read the wikipedia article: http://en.wikipedia.org/wiki/Human_interface_guidelines it contains links to some HIG
A: The first thing you need to do is get out of your developer-point-of-view. We tend to think in terms of forms, controls, buttons, lists, grids etc. And this tends to push us to solutions that are not always optimal for the user.
Users don't want to use our software (except when you're programming games). They just want to get stuff done. So when designing UI and user interactions it makes sense to start from there. Write down what a user wants to do with your software. Think about how a user would go about doing these things and what your application could do to make things easier.
Try to work with different tools than you use for programming. These make you think in UI widgets again. Start with a pencil and a piece of paper to sketch things, also try to think about the behaviour as well as the layout etc. If you've got a clear picture of what you want to build you can start thinking about how you're going to build it. That's when the widgets, buttons and pages come in.
A: This may be helpful: best-practices-principles-for-gui-design
A: On top of everything else that has been said in this page, I'd add that the less you notice a GUI, the better it is.
I mean, when the user interface isn't perceived by the user, it's because the user is getting his/her job done. Users notice the GUI when (a) it's beautiful (think Apple) or (b) it's crappy (think whatever GUI you have used that has got you frustrated).
A: If you're designing for the desktop, you can find guidelines for the operating system interface, which can help.
A: Joel Spolsky has a pretty good high level design tips:
http://www.joelonsoftware.com/uibook/chapters/fog0000000057.html
Above that, the biggest thing is just trying something out, making mockups. Maybe initial ones on paper, but at some point just try to put together a GUI without hooking up the code behind. See what you think, try some changes, ask other people for their opinions, and just experiment. Best design technique is to get feedback from people, preferably from the target audience if possible.
A: It's not clear if you're talking about how to create a good dialog window or how to create a coherent look and feel for a huge application.
A pretty good reference for how do design an effective and clear UI is User Interface Design for Programmers by ... wait for it ... Joel Spolsky.
A: Just think about who your users are from the perspective of your application. Is there one kind? More? Then for each kind of user, think about the big overarching things they want to accomplish. Present them those general choices, and then go from there into more appropriate interfaces for each task.
Finally, if you have a big set of steps, a wizard is nice because it lets you validate each step one at a time. This is obvious on a native app, but very handy on the web.
A: I have never seen GUI design as a fundamentally artistic activity. While it is true that a well designed user interface can be enhanced with artistic elements, designing the underlying user interface is really an engineering effort. Certainly in larger projects a GUI specialist is a natural specialization, just like having a build specialist, etc. But I think it is the rare software engineer who cannot create an effective GUI - when given the time and resources to do so.
Most of us have learned most of what we know about building systems by recognizing goodness in existing systems and then rolling up our sleeves and innovating, and by sticking with it until it works. The GUI is no exception.
A: There are a few UI patterns presented on the UI patterns site that you may find useful
Generally, if you are designing forms for Windows, follow the various Microsoft guidelines.
One principle I keep in mind is to use existing patterns of behaviour (double click to execute/activate an item, etc.) as people already know these.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Is there a way to change something in Tool->Options via a macro? I'd like to be able to toggle easily between two values for "maximum number of parallel project builds" in Visual Studio 2008 (in Tools->Options->Projects and Solutions->Build and Run). (When I'm planning on doing concurrent work I'd like to reduce it from 4 to 3.) I'm not too well versed in writing macros for the IDE. When I try recording a macro, and perform all the actions (open the dialog, change the setting, click OK), the only thing that gets recorded is this:
DTE.ExecuteCommand ("Tools.Options")
Is my goal unattainable?
A: It appears to be impossible, according to the MSDN page for Determining Names of Property Items in Tools Options Pages
If it was possible, it would have been something like this:
Dim p = DTE.Properties("ProjectsAndSolutions","BuildAndRun")
p.Item("MaxNumParallelBuilds")
A: This appears to now be possible in VS2010. I'm no VB programmer, but here's what I got to work:
Sub EditConcurrentBuilds()
Dim p As EnvDTE.Properties = DTE.Properties("Environment", "ProjectsAndSolution")
Dim item As EnvDTE.Property = p.Item("ConcurrentBuilds")
Dim text As String = InputBox("Enter number of concurrent builds", "Concurrent Build Option")
Dim v As Integer = Val(text)
If (v > 0 And v < 5) Then
item.Value = text
End If
End Sub
In this case, 4 is the most processors I've got on my machine.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Suggestions for replication of data from MS Sql 2005 and MySql My company currently has a transactional db running on Sql Server 2005. We are going to add a MySql (running on linux) reporting db. We'll need to get replication running from the MS-Sql db to the MySql db. It doesn't have to be real time but should be within a few minutes.
I've got pretty good MSSql Dev skills and so-so dba skills but no MySql background. The MySql guy on our team has no MSSql experience.
I was wondering if anybody has setup anything similar and might have some suggestions. I've seen some things on migrating data between the two but not much for on-going replication. Right now my best guess is to set something up in SSIS and run it under the Sql Agent. I'm going to work on the SSIS idea for now but welcome any suggestions.
A: Friend of mine for almost the same case (he copies some data from just a few tables from MSSQL to MySQL) built something like that:
*
*Added a trigger to each table which will be replicated. The trigger saves the primary key, operation type (i)nsert/(u)pdate/(d)elete and source table name in a special table (more or less).
*A small .NET app scans these special tables for new keys every few minutes, reads the data from the source MSSQL tables, and saves it in the destination tables in MySQL (more or less).
This works fine because:
*
*Tables don't change a lot.
*He copies just a few columns.
Pros:
*
*Fast & easy to implement & change.
Cons:
*
*An in-house tool is not perfect :).
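Here is a hedged sketch of that trigger-plus-poller pattern, using sqlite3 on both sides so it runs anywhere (the real setup used MSSQL triggers and a .NET poller; all table and column names are invented, and only the insert case is handled):

```python
import sqlite3

src = sqlite3.connect(":memory:")  # stands in for the MSSQL source
dst = sqlite3.connect(":memory:")  # stands in for the MySQL destination

src.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE change_log (pk INTEGER, op TEXT, tbl TEXT);
-- Plays the role of the trigger in step 1: record the key and operation.
CREATE TRIGGER customers_ins AFTER INSERT ON customers
BEGIN
    INSERT INTO change_log VALUES (NEW.id, 'i', 'customers');
END;
""")
dst.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

src.execute("INSERT INTO customers VALUES (1, 'Acme')")
src.commit()

def poll_once():
    # Step 2: read pending keys, copy the rows across, then clear the log.
    for pk, op, tbl in src.execute("SELECT pk, op, tbl FROM change_log"):
        if op == 'i':
            row = src.execute(
                "SELECT id, name FROM customers WHERE id = ?", (pk,)).fetchone()
            dst.execute("INSERT OR REPLACE INTO customers VALUES (?, ?)", row)
    src.execute("DELETE FROM change_log")
    src.commit()
    dst.commit()

poll_once()
```

Run on a schedule every few minutes, this gives the near-real-time lag the question asks for, without touching the source application's code.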
A: I think it depends on what reporting software you'll be using on top of the MySQL database. If you're using Pentaho - they have software to handle this situation. If reporting is just going to be ad hoc and the structure will remain exactly the same, I would seriously consider setting up another MSSQL instance and working with that. If you already have MSSQL, don't putz around trying to make the two friendly with each other. You should be able to have the second MSSQL instance tied down to only limited resources so that the transactional db never gets impacted even if they're on the same machine.
A: A third-party application claims the ability to do this: Daffodil Replicator. I think it's available both as Open Source and Enterprise.
A: SSIS ETL seems like the simplest way to go. You could actually export to a staging area (CSV files) and than import to MySQL. This would take care of different format problems. If you get creative, MySQL supports the CSV storage engine (see here), so this could save the load step in SSIS.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How big is too big for XP/SCRUM? In the earliest stages of planning the development of a new system, which development model to follow seems paramount. I've always held onto the belief that a classic waterfall (or hybrid waterfall/iterative prototyping) is the best approach for medium to large projects. It seems that once a project gets to be a certain size, the Agile/XP/Scrum paradigms can't account for complex requirements, a large team, the complexities between multiple sub-systems, the need for documentation, personnel changes, etc, etc.
What's the limit of such agile methodologies in terms of system size, team size, LOC, etc?
A: Scrum can be scaled using "Scrum of Scrums".
From the Scrum alliance comes this advice on conducting Scrum of Scrums meetings:
The scrum of scrums meeting is an important technique in scaling Scrum to large project teams. These meetings allow clusters of teams to discuss their work, focusing especially on areas of overlap and integration.
The book Agile and Iterative Development also discuss this issue.
A: I don't think there is a boundary; after all, the ideas of Scrum came out of car manufacturing, and that's pretty big in terms of people. The thing with big projects is that you need to start with a small team and grow it over time. Keep separate teams that interact via Scrum of Scrums and it will scale; if the people are willing to collaborate it will work. It's like always in our business: divide and conquer. Break the big hard problem into smaller manageable chunks.
A: Have a look at this blog post by Bernie Thompson.
It outlines a lot of the issues and trade-offs he ran into when scaling up Scrum / XP at Microsoft, and has some very thoughtful and interesting solutions.
There are other posts on the same blog that also deal with these issues of scale that concern you - IMO it's a gold-mine of ideas on "agile for grown-ups".
A: Within a team the communication channels are proportional to (N * N-1) / 2 as an upper bound, so could loosely be viewed as O(N^2). The decentralised nature of agile teams means that there is no central point of reference and the communication will grow closer to the upper bound than if there was such a point of reference.
Where you have a written specification and a more formal structure (see Painless Functional Specification for a discussion of spec documents) the communication is closer to a hub-and-spoke model, which has closer to O(N) channels (for N staff on the project). Most of the rule-of-thumb commentary I've seen puts the sweet spot for Agile teams at 6 or less and the upper bound at around 10, although your mileage may vary.
In the PFS articles Joel (yes, that Joel) discusses the role of a Programme Manager, whose role is to develop and own the specification. The Painless Functional Specifications series goes into this in quite a bit of detail and is also quite accessible to non-technical management - I've referred quite a few people to this article.
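The difference between the two models is easy to see numerically (a small illustrative calculation, not from the articles cited):

```python
# Channel counts for the two communication models described above.
def full_mesh_channels(n):
    # Everyone talks to everyone: N * (N - 1) / 2 pairs.
    return n * (n - 1) // 2

def hub_and_spoke_channels(n):
    # Everyone communicates through one point of reference (the spec owner).
    return n - 1

for n in (6, 10, 20):
    print(n, full_mesh_channels(n), hub_and_spoke_channels(n))
```

At six people a fully connected team has 15 channels; at ten it has 45, which is roughly where the rule-of-thumb upper bound for agile teams sits, while the hub-and-spoke model is still at only 9.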
A: Picture Scrum/XP as a series of mini-Waterfalls. Initially, you want to do an upfront effort to get a good, well-defined backlog. Not necessarily the whole system, I'd argue that once you get one or two sprints worth of product backlog items, it's time to start sprinting. Concurrently with the sprint, you should be creating additional PBIs (and reprioritizing them appropriately).
The idea is that you can get business value delivered before the system is FULLY defined.
A: Scaling scrum or any agile approach is dependent on your environment.
If you have multiple projects with multiple teams, scaling is simply sharing best practices across teams. As soon as you start requiring integration between systems/projects, be wary. Tighter integration between teams is preferable at that point.
If you have one large project (I had a team of 45 at one point), there are different approaches to scaling. We chose to keep one team with multiple standups - developer standup separate from BA/QA standup. The iteration manager attended both and at least one from each side attended the other. We had one card wall, but it included pre-iteration stuff (stories in process of analysis, production bugs to chase) and post-iteration stuff (release/deployment work).
I've also been a part of one very large project with many scrum teams (~20 teams - some distributed - ranging from 10-20 members each). Each had separate standups, and there was a scrum-of-scrums and even a scrum-of-scrum-of-scrums. I think we made a mistake by segmenting the teams by functional area rather than workflows. Our segmentation created silos of code ownership with onerous integration management issues between teams.
In sum, it's not just about size for scaling... it's also about the content of the project. Feel free to share more specifics about your environment to hear more specific approaches to addressing scale in your environment.
A: Agile scales fine. It is not rocket science. In fact it is all about modularity. Software development is a CAS (Complex Adaptive System) and, like almost any CAS, it has modules to rule the complexity better. Scrum of Scrums is one possible modular approach for development process scaling. Functional divisions (Developers, QA, etc.) are another modular approach. The worst case is when you do not have modules at all in a large project.
Depending on the project's nature, the team may decide which modules will work for the project. The general pattern is to form several teams that work on low-cohesion modules. Each team should be quite autonomous, but interaction with other teams should be good.
The analogy from CAS is a human body for example. We have organs like heart and liver. They are separate modules (teams of cells :) that interacts via nervous system/blood/etc.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Is there a "poor man's" alternative to RedGate for scripting out entire database schema? I'm in a situation where I would like to generate a script for a database that I could run on another server and get a database identical to the original one, but without any of the data. In essence, I want to end up with a big create script that captures the database schema.
I am working in an environment that has SQL Server 2000 installed, and I am unable to install the 2005 client tools (in the event that they would help). I can't afford RedGate, but I really would like to have a database with identical schema on another server.
Any suggestions? Any simple .exe (no installation required) tools, tips, or T-SQL tricks would be much appreciated.
Update: The database I'm working with has 200+ tables and several foreign-key relationships and constraints, so manually scripting each table and pasting together the script is not a viable option. I'm looking for something better than this manual solution.
Additional Update Unless I'm completely missing something, this is not a viable solution using the SQL 2000 tools. When I select the option to generate a create script on a database, I end up with a script that contains a CREATE DATABASE command and creates none of the objects - the tables, the constraints, etc. SQL 2005's Management Studio may handle the objects as well, but the database is in an environment where there is no way for me to connect an installation of Management Studio to it.
A: So, I'm assuming that you cannot install SQL Server Management Studio - not even the free version. If this is a one-time thing, I would install the Red Gate tools. The trial version is fully functional for 14 days. Do your work, then uninstall. You might also check out http://www.xsqlsoftware.com/; they have similar functionality to Red Gate. If your database is small, they have a free option for you. I also like http://www.apexsql.com/.
Use the trial of one of these and then try again to convince your boss to buy one.
A: In addition to the above answers, I'd like to suggest that (for future projects, at least) you don't keep your master database design in the database itself.
One way to achieve this is to simply maintain your database tables, procedures, etc. as 'CREATE' scripts from day one, and devise a master script that will pull all of the individual scripts together for deployment to a database of your choosing.
A more sophisticated solution is to use something like Visual Studio Database Edition (Probably too pricey, if your comments are anything to go by) which allows you to treat each database object as a node in a project, whilst still allowing the whole thing to be deployed with a few clicks.
The explanation of both of these options is over-simplified, as there are a lot of other issues - versioning, data migration etc - but the main principle is the same.
Once you've extracted your script - using one of the other answers - you may want to consider using this script as the basis for your 'master'.
Just keep the 'design' out of the database - that's purely for 'implementations'.
Try to think of the process as similar to developing code - the source and the binaries are kept separate, with the latter being generated from the former.
A: I have used this program from CodeProject successfully many times before. Not only does it script out the schema, it can (optionally) script out all the INSERT statements you will need for recreating the database.
I've used it against SQL Server 2000 in the past. Obviously if you have a million rows in a table it might not be a good idea to script out the contents, but it's actually a really neat tool for producing a series of SQL scripts to replicate a database.
A: If it is a one-off operation and you do not fancy ordering the object scripts yourself, just download a free trial version of RedGate SQL Compare.
It is a fully working 14-day trial, so it will do the entire job for you – it is a completely legitimate solution. And you will have all the scripts for future use with your manual scripting. Anyway, I am pretty sure that if you find out how handy the tool is, you will buy it at some later stage, and that is probably what they hope for by offering a fully working trial. That is how it worked in my case, anyway.
Just be aware that once the trial expires it affects all the tools so make sure to make the most of it.
A: Run SQL Server Management Studio, right click on the database and select Script Database as > Create to > file
That's for SQL Server 2005. SQL Server 2000 Enterprise Manager has a similar command. Just right-click on the database > All Tasks > Generate Scripts.
EDIT: In SQL Server 2005, you can select "Database" in the object explorer pane and select several databases in the details pane. Then, right-click on your selection and "Script Database as > Create to > file". This will cause it to put them all into one script and it will include all tables, keys, stored procedures, and constraints.
A: Michael Lang, when you right-click on the database and choose to create a script, there are several option boxes that you will need to check in order for everything to be generated. 2005 is much easier in this respect, but 2000 can do this; you just need to select the proper options.
A: If you're looking for a command line version, Sql Server Hosting Toolkit has worked for me in the past.
A: Microsoft Database Publishing Wizard (in Visual Studio 2008).
A: As you mentioned, in SQL Server 2000 the command to use is:
While in Enterprise Manager, select the database you want to script objects from, then right-click and select All Tasks -> Generate SQL Scripts...
In the Options pane it is handy to select the option Create one file per object; that way you can store each object in source control separately.
Then, whenever you update, say, tableA, you can check it out in source control to let others know that you are working on it, and after you finish you can script that single object and check it back in.
To script a single object you can use the same option, All Tasks -> Generate SQL Scripts..., and then select just that one object.
The only problem with that approach is that when you want to restore the complete database, you need to take care of dependent objects: a top-level object must be created before the ones that depend on it.
Ironically, when you script the whole database to one file, the objects are not ordered by dependency but rather by creation date. That leads to errors when you try to use it to restore the complete DB.
We ended up creating a batch file that would create each object separately by calling osql.
And that all worked pretty well at the time.
Now we use SQL Compare Pro and it saves us from all that hassle, but if you do not make releases frequently you can live without it.
A: The others are correct, but in order to create a full database from scratch, you need to create a 'device' in SQL before you run the create-table and procedure scripts...
Use ADODB, since just about every (if not every) Windows box has it installed to execute the script.
Hell, you could even write a VBScript that builds your entire database. For any domain tables you have, remember to turn on identity insert before you add the rows to the DB.
I'm open to source code sharing if you need it. I had to write the very same thing a couple years ago, and ended up with 3 scripts: one that created the device, then the tables, then the procedures/triggers and domain data. I wrote the code to parse the default script and allow the user to name his own database, logins, etc. You may not have to go that far.
Hope this helps at all.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: How to create enum object from its type and name of the value? I have a type (System.Type) of an enum and a string containing enumeration value to set.
E.g. given:
enum MyEnum { A, B, C };
I have typeof(MyEnum) and "B".
How do I create MyEnum object set to MyEnum.B?
A: I assume you don't have access to MyEnum, only to typeof(MyEnum):
void foo(Type t)
{
Object o = Enum.Parse(t, "B");
}
A: MyEnum enumValue = (MyEnum)Enum.Parse(typeof(MyEnum), "B");
You also have a case-insensitive overload.
A: You can do this with generics. I created a Utility class to wrap this:
public static class Utils {
public static T ParseEnum<T>(string value) {
return (T)Enum.Parse(typeof(T), value, true);
}
}
Then invoked like:
string s = "B";
MyEnum enumValue = Utils.ParseEnum<MyEnum>(s);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Binding a socket to port 80 in ansi c When I try to bind a socket to port 80 in C, I always get an error saying that I don't have permission to use this port. Is there an easy way to get this permission?
A: You will find this tutorial very helpful on network programming with C/C++.
And, by the way, ANSI C has no way to access the network. It is the OS supplied libraries (the BSD socket API, also ported to Windows as winsock) that provide this capability.
A: Ports below 1024 (0-1023) are called privileged ports; binding to these ports requires elevated permission.
Ports 1024 and above are unprivileged (ephemeral ports are allocated from the upper part of this range); binding to these requires no special permissions.
The easiest way to gain access to privileged ports is to be the root user.
A: Usually only the superuser (root) can bind to 'privileged' ports (i.e. those port numbers below 1024).
This means that you either have to run your program as root or make your executable 'suid root'.
Both of these have security consequences so you may want to consider using the suid approach and relinquishing superuser privileges once the bind call has been made.
A: If you are on a shared system (like a university computer) and not root then there is no 'easy' way to get that permission, by design.
A: It's just as @Charles Bailey puts it... and I would like to add that this is why you used to see HTTP server addresses on port 8080, specified in the URL as http://some.url:8080/
A: Traditionally only root can bind sockets to ports under 1024.
A: S.Lott's reply may have triggered very negative reactions but his idea is far from stupid: if the original question is for a real program (not a school assignment), developing it as an application behind an HTTP server is often a reasonable choice. That way, you can leave a lot of low-level details to a good and well debugged program, Apache.
The application does not have to be a CGI, it can be an Apache module. Apache, from version 2, is no longer just a HTTP server. It is now a platform to develop network programs. Writing an Apache module may be the correct answer to the original question (see Apache documentation)
A: Normal programs can't bind to "privileged" ports - those below 1024. This is a mostly obsolete security feature of UNIX-like operating systems.
Running as a superuser, although suggested by many others here, is a bad solution to this problem. If you are running on a Debian or Ubuntu system, I suggest installing the authbind package, which will allow you to grant your program permission to open privileged ports without actually having to give your program any other special permissions.
If you're running on any other system, I suggest installing debian or ubuntu ;-).
A: Yes, you can easily bind to port 80. Use Apache. Write a web application. Apache binds to port 80 and runs your web application.
Are you trying to write the next Apache? If so, you'll need to learn about the setuid API call in your operating system.
If you're not writing a new version of Apache, most people use a non-privileged port. 8000 is popular, so is 8080.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Factoring web services? What are some guidelines for factoring of web services? On one extreme you have different procedures for every operation, and expose all types through WSDL. This seems to result in the WS interface changing as often as the business logic. On the other, a generic interface where the types and validation are performed a layer down from the WS interface. This second option seems to provide more interface stability, and other possibilities such as service chaining. I've flip-flopped between the two on multiple projects, and wanted some feedback on how others have approached this.
A: I don't think there's a clear-cut answer here. What is your business process like? What would express the semantics of your domain in the best way?
Additionally, you need to take into consideration issues of chatty vs. chunky interfaces. Are you running on a LAN? Over the Internet?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What's the right way to display emoticons? In your own application that is.
[edit]
Alright, so I agree completely -- more than you know -- without equivocation that graphical emoticons are an abomination.
But! That doesn't help me when the project owners tell me that we must support graphical emoticons.
[/edit]
The problem is more complex than it initially seems, especially when you take into account editing, word wrapping, variable-width fonts, and color emoticons.
So my question is really to people who have done this, have you come up with a satisfactory way of rendering emoticons that isn't just one massive hack?
To start the discussion:
In two implementations I've tried the following approaches.
In a 3D application where lines of text were rendered to "textures", I replaced emoticon strings with images matching the style of the rendered text, so the emoticon becomes just another element of the text.
Essentially the text rendering engine had to be modified at the deepest levels. This produces good results, but is very time consuming and very hard to get right (or at least for me anyway ;P)
In the second approach (with a different language and platform) I decided to try a higher-level "fake" by replacing emoticon strings with a single character and drawing emoticons over the replaced character. This of course has lots of limitations, but has the benefit of being fairly fast to implement, and it's possible to reach a reasonably stable state without an excess of effort.
A: *
*Create a new font. This is left as an exercise for the reader.
*Transcode your strings into the character set of your new font.
*Draw:
*Make repeated calls to the operating environment's text-metrics function to locate the position of your emoticons.
*Draw the emoticons before the text (if you're using the font to enhance whatever the background image is - say a face, with the font glyphs drawing the black on the face) or after the text (in which case the glyph for each emoticon is moot, and only the size of the glyphs is important).
A: Is it an acceptable answer to suggest you should consider not converting emoticons? The entire point of textual emoticons is that they're recognizable...in text form.
[edit] Please don't let this opinion/suggestion dissuade anyone from helping answer this question. Sometimes you can't fight the clients, though it may be worth a couple more attempts.
A: Here's an idea: don't convert them in any way, but rotate them by 90 degrees.
For example, you can render the image of the emoticon and then rotate that image 90 degrees clockwise and display it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: unexpected T_CONCAT_EQUAL I'm getting an unexpected T_CONCAT_EQUAL error on a line of the following form:
$arg1 .= "arg2".$arg3."arg4";
I'm using PHP5. I could simply go an do the following:
$arg1 = $arg1."arg2".$arg3."arg4";
but I'd like to know whats going wrong in the first place. Any ideas?
Thanks,
sweeney
A: This would happen when $arg1 is undefined (doesn't have a value, was never set.)
A: So the most accurate reason is that the above posted line of code:
$arg1 .= "arg2".$arg3."arg4";
was actually as follows in my source:
arg1 .= "arg2".$arg3."arg4";
The $ was missing from arg1. I don't know why the interpreter did not catch that first, but whatever. Thanks for the input Jeremy and Bailey - it led me right to the problem.
A: sounds like you forgot a semicolon on the line above this one.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Set server or workstation GC How can I configure an application, or even an entire machine, to use either the server or workstation flavor of the CLR's garbage collection?
A: Have a look here.
I recommend giving the entire series of blog posts a good read - very informative.
A: I should mention that I found there are two ways of handling this: either for the entire application using a .config file (application or machine), via the gcConcurrent and gcServer elements, or at a code-block level using GCSettings.LatencyMode.
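The application-wide variant can be sketched as a runtime configuration fragment in the application's .config file (element names per the standard .NET runtime configuration schema; verify against your framework version):

```xml
<configuration>
  <runtime>
    <!-- Use the server GC flavor; set enabled="false" for workstation GC. -->
    <gcServer enabled="true"/>
    <!-- Optionally control concurrent (background) collection as well. -->
    <gcConcurrent enabled="false"/>
  </runtime>
</configuration>
```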
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: SetCursor reverts after a mouse move I am using SetCursor to set the system cursor to my own image. The code looks something like this:
// member on some class
HCURSOR _cursor;
// at init time
_cursor = LoadCursorFromFile("somefilename.cur");
// in some function
SetCursor(_cursor);
When I do this the cursor does change, but on the first mouse move message it changes back to the default system arrow cursor. This is the only code in the project that is setting the cursor. What do I need to do to make the cursor stay the way I set it?
A: You need to respond to the Windows message WM_SETCURSOR.
A: You need to make your HCURSOR handle not go out of scope. When the mouse moves, windows messages start flying all over the place, and it will wipe out your handle (in the example above).
Make an HCURSOR a private member of the class, and use that handle when you call LoadCursor...() and SetCursor(). When you are done, do not forget to free it, and clean it up, or you will end up with a resource leak.
A: It seems that I have two options. The first is the one that Mark Ransom suggested here, which is to respond to the windows WM_SETCURSOR message and call SetCursor at that time based on where the mouse is. Normally windows will only send you WM_SETCURSOR when the cursor is over your window, so you would only set the cursor in your window.
The other option is to set the default cursor for the window handle at the same time as I call SetCursor. This changes the cursor set by the default handler to WM_SETCURSOR. That code would look something like this:
// defined somewhere
HWND windowHandle;
HCURSOR cursor;
SetCursor(cursor);
SetClassLong(windowHandle, GCL_HCURSOR, (DWORD)cursor);
If you use the second method you have to call both SetCursor and SetClassLong or your cursor will not update until the next mouse move.
A: This behavior is intended. I think the simplest solution is this: when creating your window class (RegisterClass || RegisterClassEx), set the WNDCLASS.hCursor || WNDCLASSEX.hCursor member to NULL.
A: As @Heinz Traub said the problem comes from the cursor defined on the RegisterClass or RegisterClassEx call. You probably have code like:
BOOL CMyWnd::RegisterWindowClass()
{
WNDCLASS wndcls;
// HINSTANCE hInst = AfxGetInstanceHandle();
HINSTANCE hInst = AfxGetResourceHandle();
if (!(::GetClassInfo(hInst, _T("MyCtrl"), &wndcls)))
{
// otherwise we need to register a new class
wndcls.style = CS_DBLCLKS | CS_HREDRAW | CS_VREDRAW;
wndcls.lpfnWndProc = ::DefWindowProc;
wndcls.cbClsExtra = wndcls.cbWndExtra = 0;
wndcls.hInstance = hInst;
wndcls.hIcon = NULL;
wndcls.hCursor = AfxGetApp()->LoadStandardCursor(IDC_ARROW);
wndcls.hbrBackground = (HBRUSH) (COLOR_3DFACE + 1);
wndcls.lpszMenuName = NULL;
wndcls.lpszClassName = _T("MyCtrl");
if (!AfxRegisterClass(&wndcls))
{
AfxThrowResourceException();
return FALSE;
}
}
return TRUE;
}
where wndcls.hCursor says which cursor will be used when the WM_SETCURSOR message is processed; this happens on every mouse move, and not only then.
I solved a similar problem this way:
In the class' message map add an entry for the WM_SETCURSOR message:
BEGIN_MESSAGE_MAP(CMyWnd, CWnd)
//... other messages
ON_WM_SETCURSOR()
END_MESSAGE_MAP()
Add the method OnSetCursor, which will override the parent class' implementation:
BOOL CMyWnd::OnSetCursor(CWnd* pWnd, UINT nHitTest, UINT message)
{
if (SomeCondition())
return FALSE;
return __super::OnSetCursor(pWnd, nHitTest, message);
}
Explanation: when SomeCondition() is true, you will not call the parent's implementation. Maybe you always want a cursor that is not superseded by the parent class behavior, in which case you just need an even shorter method:
BOOL CMyWnd::OnSetCursor(CWnd* pWnd, UINT nHitTest, UINT message)
{
return FALSE;
}
And the declaration of the method in the header file is:
afx_msg BOOL OnSetCursor(CWnd* pWnd, UINT nHitTest, UINT message);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How do I disable the "smart insert" function that is bound to the Tab key in the Visual Studio emacs mode? In both the Visual Studio emacs mode and the default mode the tab key is bound to Edit.InsertTab. However, in the emacs mode the tab button jumps to where it expects the next line to start instead of inserting a tab.
Is there a way to disable this "smart insert" while keeping the emacs key bindings?
A: Man, I found it for you :)))
In customization of keyboard shortcuts it is Edit.IncreaseLineIndent.
Set your own key and be happy.
(BTW, I cannot set the Tab key explicitly. Even changing CurrentSettings.vssettings did not help. But that is a different story...)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Should I convert from MooTools to jQuery? I have a fairly large codebase that depends on MooTools v1.11 and am about to convert to version 1.2. Since this is a pretty major overhaul, I've toyed with the idea of converting to jQuery.
Anyone have advice on whether to update to jQuery or just stick with MooTools?
I mostly use MooTools for Ajax, drag and drop, and some minor effects.
A: The switch from MooTools 1.1.1 to 1.2.1 isn't that big of a deal. http://github.com/mootools/mootools-core/wikis/conversion-from-1-11-to-1-2
There's even a compatibility layer that makes MooTools 1.1.1 code function in 1.2.x. You may have to manually fix a few things here and there, but it's relatively minor.
Switching to jQuery, or YUI or Dojo, or anything else would require completely throwing out all the code you have and starting over. No client of mine would ever allow that kind of waste.
Also, if you're used to coding using proper MooTools Classes, then jQuery may come as a huge shock to your system. It's not that jQuery forces you to write unreadable and unmaintainable code, it's certainly possible to code very readable and maintainable code in any language. But jQuery has no built-in Class system to help you.
Especially with a large codebase, it is important to keep your code nicely organized.
I am quite biased of course.
A: There's a site up about it describing the differences in philosophy jqueryvsmootools.com
I think that's what it comes down to in the end. A functional DOM centric approach or an Object Oriented JavaScript approach.
A: As others have pointed out, jQuery is a library (for toying with DOM mostly), Mootools (1.2) is a full fledged javascript framework that allows you to organize your code in an object oriented fashion and thus keeping it easy to maintain.
I recommend you read this to really know what each one really is (jqueryvsmootools.com)
And this other link, so you know how to get best from both worlds ;):
http://ryanflorence.com/object-oriented-jquery-with-mootools-pigs-take-flight/
In the end it boils down to what you need. My general recommendation: if you need to grab some quick snippets and fancify your web application jQuery alone is good; if you are going to center your development in javascript, you should really try mootools: the code will scale and you should better be ready.
To upgrade your code from Mootools 1.1 you can use the upgrade helper, it will help you identify the code in conflict through the javascript console (mootools.net/blog/2009/12/31/mootools-1-1-upgrade-layer-beta/)
[Sorry for posting only 1 active link, this is my first answer]
A: It depends on how well you know jQuery and what your deadline is. If you know it well, it does take less lines of code, which means less bandwidth for your customers.
Also if you look at this site you can see that in the leading browser IE, jQuery has better performance than Mootools.
That being said, if everything is working in Mootools v1.11, why are you updating the scripts at all? Like the previous poster says, if it's not broken...
If it is not working properly in Mootools v1.11, how do you know it will work in Mootools v1.2 or even jQuery for that matter? It would be a shame to put a bunch of development time and either have some of the same bugs, or introduce new bugs because of the framework you use.
A: At this point SlickSpeed is becoming totally unimportant; selectors are too fast anyway. Also, just so you know, Harald Kirschner, a member of the MooTools dev team, just released Sly, a selector engine that beats Sizzle. The stuff about animations being the defining factor for NOT choosing MooTools that was left by one of the posters was basically ass backwards: MooTools has been the king of animations for years now. Whatever you pick, it will all work. MooTools has a more classical feel that I think keeps me organized; jQuery is more function based and does not venture into classes much. And one augments native types and the other doesn't - a different strategy, but that is it.
Daniel
A: I'm quite happy with MooTools. I've been tempted several times to try out jQuery because more and more people are using it these days, but somehow I still don't get the nice OO feel that MooTools gives me. Another thing that I don't really like about jQuery is selecting an id with the dollar function by adding the hash key (#). This can be problematic if you want to create HTML ids using the framework. If I were you, I'd just upgrade to the latest version of MooTools. MooTools is not a bad library at all.
A: In your question you mentioned that you're using MooTools for "Ajax, drag and drop, and some minor effects". While Mootools can do that well, (and I may get blasted for saying this) in my opinion you're not really using Mootools for the right reason. We use Mootools in our applications, and really we cannot think about substituting JQuery for it. If the aim is to write object oriented, long term maintainable code that other applications in your organization can leverage, Mootools wins hands down.
And just selector speed is a wrong criteria to judge by (though I believe Mootools is up there now). Most places in our code we already have references to the elements we want to change. The most time is spent in actually manipulating the DOM once you have the element, and in our internal tests (will try to publish them) Mootools is much faster than JQuery in the usual kind of operations we perform. Our applications are the kind where we rely on previously built Mootools controls (built by us or others) to create application screens using data coming in from Web Services.
A: If it's not broken. Don't fix it.
jQuery might have X or Y but if everything is dependent on MooTools, you might have a lot of work ahead of you to convert from MooTools.
Keep MooTools if you used it extensively through out your site. However, if you only have 2-3 pages with minor effects... the change might be worth it.
A: Assess whether you have the programmer hours to do the overhaul.
Doing so will mean that you are going to rewrite the code from scratch. This in turn implies that you will have to go through the cycle of functionality, review, testing, fixing bugs, etc. That said, one of the most important problems programmers not-so-competent with JavaScript run into is that most pieces of code accessing DOM elements tend to create memory leaks (which is the usual case for most web developers). jQuery by nature does a lot to mitigate this. Or rather, jQuery takes the JavaScript out of JavaScript.
Second, one of the more compelling reasons to move to jQuery is that your JavaScript code weight will decrease dramatically. This makes sense for a client-side code-intensive page. The concise nature of jQuery will allow you to review code easily too.
The company I work with (support.com) had tons of MooTools code. In early 2008 (after hours of heated debate - I was against moving to jQuery) we started migrating to jQuery in a phased manner. I haven't regretted it to date.
A: This argument is boring. MooTools is O-O, so people from a proper O-O background appreciate its intelligence more than people from a PHP4 or HTML background.
A: If you're upgrading anyway, then it may be worth looking into.
jQuery seems to be well on its way to becoming the One True Javascript library (given that MS and others have decided to embrace it), so if this is code you intend to work on for a while, then it's probably a good idea to switch at some point (if only because there will be more places to get help and plugin code, as it's very likely to continue to be popular for a while, which will help ensure the long-term flexibility and maintainability of your code). So, given that you're having to convert it anyway, now might be the best time to do it.
I think jQuery becoming the framework to use is a good thing. It wouldn't have been my choice (I like MooTools, too), but it's certainly an excellent bit of code and definitely fits the purpose with at least the competence of its competition. I'm happy to see any kind of consistency, and I will be moving my code to jQuery at some point.
A: Why make the switch? I've converted code bases from 1.11 to 1.2, and it's pretty quick and easy (and I'm using it for more than just a few effects).
jQuery may be adopted by MS, according to one site it performs better in IE - but this is not about how well it does with IE, it's about how well it works for your site (is IE a major player on your site?).
Do you know jQuery? If you don't, then you've got to rewrite the code from scratch, and you'll be rewriting your code altogether.
Or are you just trying to come up with reasons to tell your manager that "we should do this in jQuery" because you want to learn it?
As far as "the one true framework" - that's a ridiculous claim that only users make, not developers.
A: The only compelling reason I could give for such a migration would be if making the switch would reduce the amount of code you have to maintain, and/or make things simpler. There is usually a lot of work involved in such a switch, so you would want to be able to go back after all that work is done and say "Yeah, it was worth it."
A: You should take this choice based on the purpose of your application.
jQuery is amazingly cool for animations; however, I feel MooTools is more sophisticated. So if the important thing is the app and not the animations, stick with MooTools.
Speed is also a factor. As of today MooTools has slightly slower performance, but I'd rather not pay attention to it until MooTools 1.3 is released.
Check out the performance of the latest frameworks at http://slicktest.perrohunter.com
A: jQuery is a smaller codebase with wider support. If it meets your needs it might be a good switch. The trade-off to weigh is whether the migration effort and learning curve are justified by jQuery's wider feature set, smaller code size, popularity, and support.
If the change between versions of MooTools is really that steep then the migration might well be justified.
A: The article referred to at the jqueryvsmootools website puts it quite nicely:
If jQuery makes the DOM your playground, MooTools aims to make JavaScript your playground
So, coupled with the answer here "If it ain't broke, don't fix it", I'd say address your needs and not public opinion.
Given that your site is already in Mootools, you need to assess whether jQuery offers anything you need which MooTools doesn't offer; and whether the bother of converting is less than the bother of writing the extension.
It seems that jQuery has the ability to quickly allow you to play with the DOM, but all the extra fancy stuff and other areas of work (like Dates) requires a plugin.
This helped me to answer my own question too!
There are some areas where they don't overlap, which is a pity. GUIs are easy to put together in jQuery, with lovely widgets and animations. The MooTools equivalents, I find, either have too few features or are too heavy for general web use.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: what is the best way to do keep alive socket checking in .NET? I am looking for a way to do a keep alive check in .NET. The scenario is for both UDP and TCP.
Currently in TCP what I do is that one side connects and when there is no data to send it sends a keep alive every X seconds.
I want the other side to check for data, and if none was received in X seconds, to raise an event or so.
One way I tried was to do a blocking receive and set the socket's ReceiveTimeout to X seconds. But the problem was that whenever the timeout happened, the socket's Receive would throw a SocketException and the socket on this side would close. Is this the correct behaviour? Why does the socket close/die after the timeout instead of just carrying on?
Checking whether there is data and then sleeping isn't acceptable (since I might lag on receiving data while sleeping).
So what is the best way to go about this, and why is the method I described on the other side failing?
A: In case you have a TCP server which just writes data at irregular intervals and you'd like to have keepalive running in the background:
tcpClient.Client.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveInterval, 1);
tcpClient.Client.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveTime, 2);
tcpClient.Client.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveRetryCount, 2);
tcpClient.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);
This will cause an async read to throw a timeout exception if the server doesn't reply to TCP keepalives (which it usually does automatically).
A: If you literally mean "KeepAlive", try the following.
public static void SetTcpKeepAlive(Socket socket, uint keepaliveTime, uint keepaliveInterval)
{
/* the native structure
struct tcp_keepalive {
ULONG onoff;
ULONG keepalivetime;
ULONG keepaliveinterval;
};
*/
// marshal the equivalent of the native structure into a byte array
uint dummy = 0;
byte[] inOptionValues = new byte[Marshal.SizeOf(dummy) * 3];
BitConverter.GetBytes((uint)1).CopyTo(inOptionValues, 0); // onoff: nonzero enables keepalive
BitConverter.GetBytes((uint)keepaliveTime).CopyTo(inOptionValues, Marshal.SizeOf(dummy));
BitConverter.GetBytes((uint)keepaliveInterval).CopyTo(inOptionValues, Marshal.SizeOf(dummy) * 2);
// write SIO_VALS to Socket IOControl
socket.IOControl(IOControlCode.KeepAliveValues, inOptionValues, null);
}
Note the time units are in milliseconds.
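For comparison, the same three knobs (on/off, idle time, probe interval) are exposed as plain socket options on many platforms. Here is a minimal sketch in Python, used purely for illustration; the `TCP_*` option names are the Linux ones and availability varies by OS:

```python
import socket

def enable_keepalive(sock, idle_s=2, interval_s=1, count=2):
    # Turn keepalive on; the three TCP_* options tune when and how often
    # probes are sent. hasattr guards keep this portable across platforms.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):   # seconds of idle before the first probe
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle_s)
    if hasattr(socket, "TCP_KEEPINTVL"):  # seconds between probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval_s)
    if hasattr(socket, "TCP_KEEPCNT"):    # failed probes before the peer is declared dead
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(s)
```

The values here map onto the fields of the native tcp_keepalive structure marshaled in the C# snippet above (except that the socket-option route takes seconds, whereas IOControl takes milliseconds).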
A: According to MSDN, a SocketException thrown when ReceiveTimeout is exceeded in a Receive call will not close the socket. There is something else going on in your code.
Check the caught SocketException details - maybe it's not a timeout after all. Maybe the other side of the connection shuts down the socket.
Consider enabling network tracing to diagnose the exact source of your problems: look for "Network Tracing" on MSDN (can't provide you with a link, since right now MSDN is down).
A: Since you cannot use the blocking (synchronous) receive, you will have to settle for the asynchronous handling. Fortunately that's quite easy to do with .NET. Look for the description of BeginReceive() and EndReceive(). Or check out this article or this.
As for the timeout behaviour, I found no conclusive description of it. Since it's not documented otherwise, you have to assume it's the intended behaviour.
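The shape of the asynchronous-receive-with-deadline idea can be sketched like this, with Python's asyncio standing in for BeginReceive/EndReceive (all names here are illustrative, and the echo server only exists to make the sketch self-contained):

```python
import asyncio

async def read_with_deadline(reader, seconds):
    # Complete when data arrives, or return None if the peer stays
    # silent past the deadline - the caller can then raise its event.
    try:
        return await asyncio.wait_for(reader.read(1024), timeout=seconds)
    except asyncio.TimeoutError:
        return None

async def demo():
    async def handler(reader, writer):
        writer.write(b"alive")          # stand-in for the keepalive payload
        await writer.drain()

    server = await asyncio.start_server(handler, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    data = await read_with_deadline(reader, 5.0)
    writer.close()
    server.close()
    await server.wait_closed()
    return data

result = asyncio.run(demo())
```

Unlike the blocking Receive with ReceiveTimeout, a missed deadline here leaves the connection untouched, so the caller is free to decide whether to retry or tear down.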
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Asp.Net 2 Custom Server Controls Properties I am having a very hard time finding a standard pattern / best practice that deals with rendering child controls inside a composite based on a property value.
Here is a basic scenario. I have a composite control that has two child controls, a textbox and a dropdown. Let's say there is a property that toggles which child to render.
so:
myComposite.ShowDropdown = true;
If true, it shows a dropdown, otherwise it shows the textbox.
The property value should be saved across postbacks, and the correct control should be displayed based on the postback value.
Any good examples out there?
A: You use ViewState to store the property value so that it persists between postbacks, but you have to do it correctly.
public virtual bool ShowDropdown
{
get
{
object o = ViewState["ShowDropdown"];
if (o != null)
return (bool)o;
return false; // Default value
}
set
{
bool oldValue = ShowDropdown;
if (value != oldValue)
{
ViewState["ShowDropdown"] = value;
}
}
}
Probably somewhere in your Render method you show or hide DropDown control based on the property value:
dropDown.Visible = ShowDropDown;
textBox.Visible = !ShowDropDown;
See also Composite Web Control Example.
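The pattern above — read through a backing store with a null check and a default, write only on change — is language-agnostic. A rough Python sketch of the same idea, with a plain dictionary standing in for ViewState (all names are illustrative):

```python
class CompositeControl:
    def __init__(self):
        self._viewstate = {}   # stand-in for ASP.NET's ViewState bag

    @property
    def show_dropdown(self):
        # Equivalent of the null check: fall back to the default (False)
        return self._viewstate.get("ShowDropdown", False)

    @show_dropdown.setter
    def show_dropdown(self, value):
        if value != self.show_dropdown:   # store only when the value changes
            self._viewstate["ShowDropdown"] = value
```

The point of reading back through the property in the setter is the same as in the C# version: the default never has to be serialized, only deviations from it.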
A: I would think something like:
public bool ShowDropDown
{
get{ return ViewState["ShowDropDown"] != null && (bool)ViewState["ShowDropDown"]; }
set{ ViewState["ShowDropDown"] = value; }
}
private void Page_Load(object sender, EventArgs e)
{
DropDownControl.Visible = ShowDropDown;
TextBoxControl.Visible = !ShowDropDown;
}
/* some more code */
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Scroll of a textbox always on bottom Is there a way to keep the scroll at the bottom of a multi-line textbox?
Something like in the vb6
txtfoo.selstart=len(txtfoo.text)
I'm trying with txtfoo.selectionstart=txtfoo.text.length without success.
Regards.
A: Ok, I found that the solution was to use
txtfoo.AppendText
instead of
txtfoo.text+="something"
A: The other solution is to use:
txtfoo.Text += "something";
txtfoo.SelectionStart = txtfoo.Text.Length;
txtfoo.ScrollToCaret();
A: Interesting question. I'm guessing that you are trying to select the text on form load? I can't get it working on form load, but I can on form click. Weird. :)
Public Class Form1
Private Sub Form1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Click
ScrollTextbox()
End Sub
Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
ScrollTextbox()
End Sub
Private Sub ScrollTextbox()
TextBox1.SelectionStart = TextBox1.TextLength
TextBox1.ScrollToCaret()
End Sub
End Class
If it is completely necessary, you could use a timer.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to communicate with an Arduino over its serial interface in C++ on Linux? I have an RFID reader connected to an Arduino board. I'd like to connect to it over its serial interface, and whenever the RFID reader emits a signal (when it has read an (RF)ID), I'd like to retrieve it in my C++ program.
I already have the code for simply printing the RFID to serial from the Arduino.
What I don't know, is how to read it from C++ in Linux ?
I have looked at libserial, which looks straightforward. However, how can I have the C++ program react to a signal and then read the RFID, instead of listening continuously? Is this necessary?
EDIT: In most examples I have read, the (C++) program sends input and receives output. I just want to listen and receive output from the Arduino.
A: On unix you use the select() call to wait for an input.
The select() call acts like a sleep - using no CPU until the kernel receives the hardware interrupt and triggers the select().
http://tldp.org/HOWTO/Serial-Programming-HOWTO/index.html
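The shape of that select() loop, sketched in Python for brevity (a socketpair stands in for the real descriptor; on Linux a serial-port file descriptor can be passed to select() the same way):

```python
import select
import socket

# socketpair() stands in for the real descriptor (serial port fd, TCP socket).
a, b = socket.socketpair()
b.sendall(b"RFID:12345")        # pretend the Arduino wrote a tag ID

# Sleep inside the kernel until data is readable or 5 s pass - no busy polling.
readable, _, _ = select.select([a], [], [], 5.0)
if readable:
    data = a.recv(1024)         # the tag arrived within the window
else:
    data = None                 # timeout: nothing was sent
```

The process uses no CPU while blocked in select(); the kernel wakes it only when the hardware interrupt delivers data or the timeout elapses.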
A: I found the Boost::Asio library, which reads from serial interfaces asynchronously. Boost::Asio Documentation
A: The Communications part of the Interface section in the Arduino Playground has several examples of interfacing, including one with the Arduino as Linux TTY.
Try the Syntax and Programs forum and the Software Development forum on the Arduino site. There have been discussions about interfacing to many different languages and computers in the past.
And finally check out the Processing and Wiring sites. The Arduino IDE is based on the Processing language, and the Wiring environment and dev board is related to Arduino. Both sites have lots more examples and links to even more resources.
Edit: I just realized that I didn't answer your actual question. These are all general communications resources, but some may have hints towards how to alert the computer of a new RFID input.
A: Hi, I created a simple library for this: cArduino
https://github.com/ranma1988/cArduino
From C++ it can auto-detect and connect to the Arduino's serial port, then read and write.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Does Scrum alone = agile? I'm hearing about a lot of companies that act like they're agile, but the only agile thing they do is the Scrum process. Is this enough to be considered agile? Using Scrum alone seems like the perfect excuse for a bad manager to hold more meetings more often. Should I be wary of such companies?
A:
"I'm hearing about a lot of companies that act like they're agile but the only agile thing > they do is the Scrum process. Is this enough to be considered agile"
Short answer - yes. In my opinion anyway :-)
Of course - they have to be actually doing Scrum - rather than just sticking the name on the wall. There's a lot more to Scrum than daily stand-ups... and if that's all they're doing they're not doing it right.
Done correctly Scrum forces companies to identify the bottlenecks in how the organisation is running. By setting up regular timeboxed sprints, getting a decent feedback loop, and splitting responsibility across product owner and team appropriately you actually get useful baseline information on how to improve your process.
The organisation has to listen to that feedback - and act on it.
It's certainly not the only way to do agile. It might not even be the best way to introduce agile into an organisation. I'm more of an XP fan myself - and find that the extra practices provide a useful framework for kick-starting those process improvements.
That said - for many organisations - the biggest problem is bad split of responsibilities & the complete lack of a sane and rapid feedback loop. Scrum fixes that out of the gate.
Meetings are a very small part of that :-)
A: Bad managers will be outed by the transparency that Scrum promotes. Companies truly embracing Scrum are definitely worth a look.
A: Using Scrum alone is not necessarily an excuse to get more meetings. Being able to track the work that's done every day and make decisions on how to modify the rest of the sprint (by cutting or rebalancing work) is quite useful on its own, and sounds agile to me. :-)
Of course, if you don't have the other components of the agile process, you will have a harder time measuring the success of your work, so you might think you are on track with the sprint but in fact be nowhere near the point you should be at to deliver a quality product on schedule.
Update: You shouldn't dismiss such a company on that premise alone. However, during the interview, you should use the chance to understand why they are using only Scrum. If it's a matter of not having people to champion things like TDD or CI, then it might be a good fit for you, if you are willing to become the technical lead. If it's because they dismiss these processes as "overhead" or "stupid" or "unnecessary", then you should be wary of the company.
A: I've noticed that just using Scrum meetings alone is a pretty clear sign that the company has not correctly implemented Agile concepts.
Think about how easy Scrum meetings are: just fire up Outlook and give everyone a daily 15-minute meeting. But slicing everything up into quick iterations and making sure new functionality is rapidly tested by end users takes a lot more work.
I'd guess that most managers stop reading right after the Scrum part and lose interest. But their daily meeting requests live on forever.
A: Scrum is a project management methodology, first and foremost. Yes, if you are doing Scrum, you are probably beginning to think more about being agile, and delivering value to your customer. But it does not necessarily make you agile. For starters, Scrum doesn't talk about HOW you do software development. This is where things like XP come in - other methodologies and ideas that force you to review and change your working practices in order to become more efficient and effective.
So, rather than asking "do you do Scrum / XP / whatever" I would ask these companies about their overall processes and take a holistic view. Is the company focussing on delivery of maximum business value and driven by an ethos of continuous improvement? If so, then they are probably a lot more agile than one that says it does Scrum.
A: It's not possible to tell whether a team is agile just because somebody says that they're doing scrum.
There are good and bad scrum implementations but they key things about agile are:
*
*the ability of the project and team to think flexibly
*how self-organising the team is (do they have a control freak "architect" or manager? or is there a considerable amount of consensus decision making?)
It's all too easy to conform to the minimum requirements of what a team needs to do to be doing scrum without being truly agile. Those minimum requirements are only there to bring about a certain attitude and way of working.
It's possible for decision-making in a project to be rigidly inflexible and controlled top-down and yet conform to the minimum requirements of Scrum. Sadly, when I look for contracting engagements, I find that scrum-in-name-only implementations outnumber the real thing by a considerable margin.
Personally, I'd choose to implement extreme programming within scrum. (In fact, Jeff Sutherland says he's never seen a top productivity scrum team that didn't do the XP practices.) However, I'm pretty confident that people could implement XP really badly too... ;-) It really comes down to the attitude in the team.
A: Agile != scrum.
Agile is about readiness for change.
Agile is often presented as an umbrella: a set of different techniques and methods for working in an environment that supports change. Scrum is for project management; for development techniques there is XP; for a better requirements process you can use BDD; for testing, TDD.
Starting with Scrum is the first step on your way to agility. Consider other techniques as well. It will take time, but there are real benefits. And there is nothing better than common understanding and good team spirit. Achieve that first.
A: The Agile manifesto is really a philosophy that pertains to better ways of working. Scrum is an agile methodology, so yes, a company using Scrum would typically be considered agile.
It is however entirely possible to forget the Agile philosophy when trying to implement Scrum. It can be easy to get caught up in the pursuit of the perfect scrum process and neglect the individuals and their interactions.
You should be wary of the companies that neglect individuals and interactions and instead blindly favour strict process and tools.
However, this holds true regardless of their stated methodology.
A: Agile is a big, vague concept. Lots of things are Agile.
Scrum is a specific set of techniques for doing sprints and releases. It's agile because it fits the Agile Manifesto.
There are lots of other specific Agile techniques (all of the xDD's, for example.)
When in doubt, compare the companies actual practices against the Agile Manifesto.
A: "Scrum alone equals agile" is totally a misconception. Agile is the umbrella under which there are several methods: Scrum, Kanban, Lean, XP. A part of the umbrella cannot fulfil the idea of the umbrella as a whole; therefore, Scrum is only a part of agile.
A: Scrum provides you with a framework to fix/improve your development process. It should be considered as a starting point to "jelled team" and more productive team. Most likely you will go beyond standard Scrum practices soon, but as a starting point it has some attractive properties:
*
*It is very easy to understand
*It can be applied to almost any project and team
*There are quite many people who make money and help companies with Scrum adoption
Also, it is really not so important to know whether Scrum = agile. It is better to focus on better productivity and not bother yourself with such questions.
A: Yeah, I'd agree with some of the sentiment here. Being agile means following the manifesto and ensuring that you have the right alignment of priorities. Scrum is just another variant with specific pieces written down. It is, if anything, a management "tool".
With that said, remember: the tools are secondary, your people are your priority. Don't over-focus on the management style; focus instead on the people and the product.
A: An organization practicing only Scrum would most likely be seeing gains on the software management and project visibility fronts. However, they are most likely not achieving their higher engineering quality and throughput potential by not incorporating XP principles like unit testing, continuous integration, pair programming etc., leaving their end-of-sprint product NOT "potentially shippable".
A: People fall victim to their subjective perspectives. What I think Agile and Scrum is, another person may think somewhat differently. Luckily we have a set of guidelines in the Agile manifesto and principles and Scrum values but often companies end up becoming fixated on following the process instead of understanding it and its goals.
Agile Manifesto
We are uncovering better ways of developing software by doing it and
helping others do it. Through this work we have come to value:
*
*Individuals and interactions over processes and tools
*Working software over comprehensive documentation
*Customer collaboration over contract negotiation
*Responding to change over following a plan
That is, while there is value in the items on the right, we value the
items on the left more.
Scrum values
You can learn a lot about a company that uses Scrum by asking them about the values and how they adhere to them. This can give you an idea if the process of Scrum is just enforced without really considering the values that are linked to it.
All work performed in Scrum needs a set of values as the foundation
for the team's processes and interactions. And by embracing these five
values, the team makes them even more instrumental to its health and
success. - See more at:
https://www.scrumalliance.org/why-scrum/core-scrum-values-roles
*
*Focus
*Courage
*Openness
*Commitment
*Respect
Goal
The goal is to release quality software at the end of each iteration.
With the right influence, the values within the company can change. Unfortunately people are unpredictable so companies can slide back into bad habits when other changes are introduced. This is what makes software more challenging and exciting. It's finding ways to create the balance within the forces between technology and product.
Red flags
*
*If a company is more focused on the process instead of the goal.
*If you have to jump through all sorts of hoops and procedures to sign off the smallest change.
*The company doesn't have to get the process 100% right but if they are not continuously adapting and improving to reach their goal, instead of just following a process then they probably end up with a "Half-Arsed" implementation of Agile:
We have heard about new ways of developing software by paying
consultants and reading Gartner reports. Through this we have been
told to value:
*
*Individuals and interactions over processes and tools and we have mandatory processes and tools to control how those individuals (we
prefer the term ‘resources’) interact
*Working software over comprehensive documentation as long as that software is comprehensively documented
*Customer collaboration over contract negotiation within the boundaries of strict contracts, of course, and subject to rigorous
change control
*Responding to change over following a plan provided a detailed plan is in place to respond to the change, and it is followed precisely
That is, while the items on the left sound nice in theory, we’re an
enterprise company, and there’s no way we’re letting go of the items
on the right.
Compliance
Some companies may have heavy compliance procedures in place that hinders being Agile. This can include governance and other regulations that cannot be escaped. This can impact the Agile methodology by making it feel more cumbersome and heavy but it doesn't mean that those processes cannot be streamlined to be more accommodating.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: ActionScript 3.0 + Calculate timespan between two dates? In ActionScript 3.0, is there an automatic way to calculate the number of days, hours, minutes and seconds between two specified dates?
Basically, what I need is the ActionScript equivalent of the .NET TimeSpan class.
Any idea?
A: For some, a single function like this may be preferable...
[condensed from Richard Szalay's code]
public function timeDifference(startTime:Date, endTime:Date) : String
{
if (startTime == null) { return "startTime empty."; }
if (endTime == null) { return "endTime empty."; }
var aTms:Number = Math.floor(endTime.valueOf() - startTime.valueOf());
return "Time taken: "
+ String( int(aTms/(24*60*60*1000)) ) + " days, "
+ String( int(aTms/( 60*60*1000)) %24 ) + " hours, "
+ String( int(aTms/( 60*1000)) %60 ) + " minutes, "
+ String( int(aTms/( 1*1000)) %60 ) + " seconds.";
}
A: I created an ActionScript TimeSpan class with a similar API to System.TimeSpan to fill that void, but there are differences due to the lack of operator overloading. You can use it like so:
TimeSpan.fromDates(later, earlier).totalDays;
Below is the code for the class (sorry for the big post - I won't include the Unit Tests ;)
/**
* Represents an interval of time
*/
public class TimeSpan
{
private var _totalMilliseconds : Number;
public function TimeSpan(milliseconds : Number)
{
_totalMilliseconds = Math.floor(milliseconds);
}
/**
* Gets the number of whole days
*
* @example In a TimeSpan created from TimeSpan.fromHours(25),
* totalDays will be roughly 1.04, but days will be 1
* @return A number representing the number of whole days in the TimeSpan
*/
public function get days() : int
{
return int(_totalMilliseconds / MILLISECONDS_IN_DAY);
}
/**
* Gets the number of whole hours (excluding entire days)
*
* @example In a TimeSpan created from TimeSpan.fromMinutes(1500),
* totalHours will be 25, but hours will be 1
* @return A number representing the number of whole hours in the TimeSpan
*/
public function get hours() : int
{
return int(_totalMilliseconds / MILLISECONDS_IN_HOUR) % 24;
}
/**
* Gets the number of whole minutes (excluding entire hours)
*
* @example In a TimeSpan created from TimeSpan.fromSeconds(90),
* totalMinutes will be 1.5, but minutes will be 1
* @return A number representing the number of whole minutes in the TimeSpan
*/
public function get minutes() : int
{
return int(_totalMilliseconds / MILLISECONDS_IN_MINUTE) % 60;
}
/**
* Gets the number of whole seconds (excluding entire minutes)
*
* @example In a TimeSpan created from TimeSpan.fromMilliseconds(65500),
* totalSeconds will be 65.5, but seconds will be 5
* @return A number representing the number of whole seconds in the TimeSpan
*/
public function get seconds() : int
{
return int(_totalMilliseconds / MILLISECONDS_IN_SECOND) % 60;
}
/**
* Gets the number of whole milliseconds (excluding entire seconds)
*
* @example In a TimeSpan created from TimeSpan.fromMilliseconds(2123),
* totalMilliseconds will be 2123, but milliseconds will be 123
* @return A number representing the number of whole milliseconds in the TimeSpan
*/
public function get milliseconds() : int
{
return int(_totalMilliseconds) % 1000;
}
/**
* Gets the total number of days.
*
* @example In a TimeSpan created from TimeSpan.fromHours(25),
* totalDays will be roughly 1.04, but days will be 1
* @return A number representing the total number of days in the TimeSpan
*/
public function get totalDays() : Number
{
return _totalMilliseconds / MILLISECONDS_IN_DAY;
}
/**
* Gets the total number of hours.
*
* @example In a TimeSpan created from TimeSpan.fromMinutes(1500),
* totalHours will be 25, but hours will be 1
* @return A number representing the total number of hours in the TimeSpan
*/
public function get totalHours() : Number
{
return _totalMilliseconds / MILLISECONDS_IN_HOUR;
}
/**
* Gets the total number of minutes.
*
* @example In a TimeSpan created from TimeSpan.fromSeconds(90),
* totalMinutes will be 1.5, but minutes will be 1
* @return A number representing the total number of minutes in the TimeSpan
*/
public function get totalMinutes() : Number
{
return _totalMilliseconds / MILLISECONDS_IN_MINUTE;
}
/**
* Gets the total number of seconds.
*
* @example In a TimeSpan created from TimeSpan.fromMilliseconds(65500),
* totalSeconds will be 65.5, but seconds will be 5
* @return A number representing the total number of seconds in the TimeSpan
*/
public function get totalSeconds() : Number
{
return _totalMilliseconds / MILLISECONDS_IN_SECOND;
}
/**
* Gets the total number of milliseconds.
*
* @example In a TimeSpan created from TimeSpan.fromMilliseconds(2123),
* totalMilliseconds will be 2123, but milliseconds will be 123
* @return A number representing the total number of milliseconds in the TimeSpan
*/
public function get totalMilliseconds() : Number
{
return _totalMilliseconds;
}
/**
* Adds the timespan represented by this instance to the date provided and returns a new date object.
* @param date The date to add the timespan to
* @return A new Date with the offseted time
*/
public function add(date : Date) : Date
{
var ret : Date = new Date(date.time);
ret.milliseconds += totalMilliseconds;
return ret;
}
/**
* Creates a TimeSpan from the different between two dates
*
* Note that start can be after end, but it will result in negative values.
*
* @param start The start date of the timespan
* @param end The end date of the timespan
* @return A TimeSpan that represents the difference between the dates
*
*/
public static function fromDates(start : Date, end : Date) : TimeSpan
{
return new TimeSpan(end.time - start.time);
}
/**
* Creates a TimeSpan from the specified number of milliseconds
* @param milliseconds The number of milliseconds in the timespan
* @return A TimeSpan that represents the specified value
*/
public static function fromMilliseconds(milliseconds : Number) : TimeSpan
{
return new TimeSpan(milliseconds);
}
/**
* Creates a TimeSpan from the specified number of seconds
* @param seconds The number of seconds in the timespan
* @return A TimeSpan that represents the specified value
*/
public static function fromSeconds(seconds : Number) : TimeSpan
{
return new TimeSpan(seconds * MILLISECONDS_IN_SECOND);
}
/**
* Creates a TimeSpan from the specified number of minutes
* @param minutes The number of minutes in the timespan
* @return A TimeSpan that represents the specified value
*/
public static function fromMinutes(minutes : Number) : TimeSpan
{
return new TimeSpan(minutes * MILLISECONDS_IN_MINUTE);
}
/**
* Creates a TimeSpan from the specified number of hours
* @param hours The number of hours in the timespan
* @return A TimeSpan that represents the specified value
*/
public static function fromHours(hours : Number) : TimeSpan
{
return new TimeSpan(hours * MILLISECONDS_IN_HOUR);
}
/**
* Creates a TimeSpan from the specified number of days
* @param days The number of days in the timespan
* @return A TimeSpan that represents the specified value
*/
public static function fromDays(days : Number) : TimeSpan
{
return new TimeSpan(days * MILLISECONDS_IN_DAY);
}
/**
* The number of milliseconds in one day
*/
public static const MILLISECONDS_IN_DAY : Number = 86400000;
/**
* The number of milliseconds in one hour
*/
public static const MILLISECONDS_IN_HOUR : Number = 3600000;
/**
* The number of milliseconds in one minute
*/
public static const MILLISECONDS_IN_MINUTE : Number = 60000;
/**
* The number of milliseconds in one second
*/
public static const MILLISECONDS_IN_SECOND : Number = 1000;
}
A: You can convert the two date times into milliseconds since the epoch, perform your math, and then use the resulting milliseconds to calculate these higher timespan numbers.
var someDate:Date = new Date(...);
var anotherDate:Date = new Date(...);
var millisecondDifference:int = anotherDate.valueOf() - someDate.valueOf();
var seconds:int = millisecondDifference / 1000;
....
The LiveDocs are useful for this type of thing too. Sorry if the ActionScript is a bit off, but it has been a while.
I'd also recommend creating a set of static class methods that can perform these operations if you're doing a lot of this type of math. Sadly, this basic functionality doesn't really exist in the standard APIs.
A: There is no automatic way of doing this. The best you can achieve with the supplied classes is to fetch date1.time and date2.time, which give the number of milliseconds since 1 Jan 1970 as two numbers. You can then work out the number of milliseconds between them. With some basic maths, you can then derive the seconds, hours, days etc.
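The millisecond arithmetic described above can be sketched compactly — Python here purely for illustration; the same divmod chain maps directly to ActionScript's integer division and % operators:

```python
# Take the difference of the two epoch values, then peel off each unit
# with divmod. The constants are milliseconds per day/hour/minute/second.
def breakdown(total_ms):
    days, rem = divmod(total_ms, 86_400_000)
    hours, rem = divmod(rem, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    seconds, millis = divmod(rem, 1_000)
    return days, hours, minutes, seconds, millis

# 1 day + 1 hour + 1 minute + 1 second + 1 ms
result = breakdown(86_400_000 + 3_600_000 + 60_000 + 1_000 + 1)
```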
A: For the sake of accuracy: the post above by Russell is correct until you get to a 25-day difference, at which point the number becomes too large for an int variable.
Therefore declare millisecondDifference as a Number.
There may be some difference between the documented getTime() and valueOf(), but in effect I can't see it.
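The 25-day figure comes straight from the arithmetic: a signed 32-bit int tops out at 2,147,483,647, and 25 days of milliseconds already exceeds it (Python used here just to show the numbers):

```python
INT32_MAX = 2**31 - 1              # 2,147,483,647
MS_IN_DAY = 24 * 60 * 60 * 1000    # 86,400,000

assert 24 * MS_IN_DAY < INT32_MAX  # 24 days of milliseconds still fits
assert 25 * MS_IN_DAY > INT32_MAX  # 25 days overflows a signed 32-bit int
limit_days = INT32_MAX / MS_IN_DAY # about 24.86 days
```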
A: var timeDiff:Number = endDate - startDate;
var days:Number = timeDiff / (24*60*60*1000);
var rem:Number = int(timeDiff % (24*60*60*1000));
var hours:Number = int(rem / (60*60*1000));
rem = int(rem % (60*60*1000));
var minutes:Number = int(rem / (60*1000));
rem = int(rem % (60*1000));
var seconds:Number = int(rem / 1000);
trace(days + " << >> " +hours+ " << >> " +minutes+ " << >> " +seconds);
or
var time:Number = targetDate - currentDate;
var secs:Number = time/1000;
var mins:Number = secs/60;
var hrs:Number = mins/60;
var days:Number = int(hrs/24);
secs = int(secs % 60);
mins = int(mins % 60);
hrs = int(hrs % 24);
trace(secs + " << >> " + mins + " << >> " + hrs + " << >> " + days);
A: ArgumentValidation is another class of Mr Szalay's that does some checks to make sure each method has the right values to perform its tasks without throwing unrecognisable errors. They are non-essential to get the TimeSpan class working, so you could just comment them out and the class will work correctly.
Rich may post the ArgumentValidation class on here as well, as it's very handy, but I'll leave that down to him ;P
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Server-side Report in Crystal 2008? I am looking to integrate Crystal Reports 2008 into a Windows Forms application. I would like to avoid direct connections from my client application to the database, while giving the user the "complete" report experience. Is it possible to for Crystal Reports 2008 to execute a report on a server into a client-side Windows Forms client control, similar to Microsoft Reporting Services?
A: I don't know if it is exactly what you are after, but I can think of 2 ways you could fudge it :
*
*You can set up your report so that the 'database' is an XSD file, with no knowledge of the real backend. Then at runtime you push the data to the report.
// Create an instance at runtime appropriate to your environment - example only :
ReportClass rc = new ReportClass();
rc.Load(crystalReportFileName);
rc.SetDataSource(myIEnumerableData);
CrystalReportViewer crv = new CrystalReportViewer();
crv.ReportSource = rc;
// Display the crystal viewer.
2 - You could do the same as 1 on a server (regardless of the database approach) , then save the report and push it out to the client.
// Some Server-side service / method etc
public byte[] GetMyReport()
{
ReportClass rc = new ReportClass();
rc.Load(crystalReportFileName);
rc.SetDataSource(myIEnumerableData);
rc.SaveAs(serverSideFile, true); // true is critical to save data with the report
return .... // convert the created file to a byte array I suppose
}
// Client side
byte[] rep = Server.GetMyReport();
ReportClass rc = ..... // convert rep back to a crystal report
CrystalReportViewer crv = new CrystalReportViewer();
crv.ReportSource = rc;
A: This isn't really what you're asking, but Crystal Reports Server does server-side reporting.
On the downside, it's annoyingly expensive.
http://www.businessobjects.com/product/catalog/crystalreports_server/
A: I can't add a comment to the above as I have no points, but hope this helps.
Crystal Reports Server runs reports itself against the datasources, the idea being that clients without Crystal Reports or data access can run reports via the web, or the server runs scheduled reports and sends the results out. I don't know if you can integrate it, though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: If you are using getters and setters, how should you name the private member variables? As kind of a follow-up to this question about prefixes, I agree with most people on the thread that prefixes are bad. But what about if you are using getters and setters? Then you need to differentiate the publicly accessible getter name from the privately stored variable. I normally just use an underscore, but is there a better way?
A: This is a completely subjective question. There is no "better" way.
One way is:
private int _x;
public get x():int { return _x; }
public set x(int val):void { _x = val; }
Another is:
private int x;
public get X():int { return x; }
public set X(int val):void { x = val; }
Neither is the right answer. Each has style advantages and disadvantages. Pick the one you like best and apply it consistently.
A: I like prefixing fields with an underscore, as others have mentioned.
private int _x;
I think this goes beyond straight personal preference though (as David Arno said in this thread). I think there's some real objective reasons for doing this:
*
*It means you avoid having to write "this.x = x" for assignments (especially in setters and constructors).
*It distinguishes your fields from your local variables/arguments. It's important to do this: fields are trickier to handle than locals, as their scope is wider / lifetime is longer. Adding in the extra character is a bit of a mental warning sign for coders.
*In some IDEs, the underscore will cause the auto-complete to sort the fields to the top of the suggestion list. This makes it easier to see all the fields for the class in one block. This in turn can be helpful; on big classes, you may not be able to see the fields (usually defined at the top of the class) on the same screen as the code you're working on. Sorting them to the top gives a handy reference.
(These conventions are for Java, but similar ones exist for other languages)
These things seems small but their prevalence definitely makes my life easier when I'm coding.
A: In Java there is this.foo, in Python there is self.foo, and other languages have similar constructs, so I don't see a need to name something in a special way when I can already use a language construct. In the same vein, good IDEs and editors understand member variables and give them special highlighting, so you can really see them without using special names.
A: In a case sensitive language I just use:
private int myValue;
public int MyValue
{
get { return myValue; }
}
Otherwise I would use an underscore
Private _myValue As Integer
Public ReadOnly Property MyValue As Integer
Get
Return _myValue
End Get
End Property
A: There are almost as many different ways of doing this as there are programmers doing this, but some of the more popular ways include (for a property Foo):
*
*mFoo
*m_foo
*_foo
*foo
A: I like writing "this.x = x". It's very clear to me. Plus, when using Eclipse, you can have it automatically generate your getters/setters this way.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: C# equivalent of the IsNull() function in SQL Server In SQL Server you can use the IsNull() function to check if a value is null, and if it is, return another value. Now I am wondering if there is anything similar in C#.
For example, I want to do something like:
myNewValue = IsNull(myValue, new MyValue());
instead of:
if (myValue == null)
myValue = new MyValue();
myNewValue = myValue;
Thanks.
A: public static T isNull<T>(this T v1, T defaultValue)
{
return v1 == null ? defaultValue : v1;
}
myValue.isNull(new MyValue())
A: Use the Equals method:
object value2 = null;
Console.WriteLine(object.Equals(value2,null));
A: It's called the null coalescing (??) operator:
myNewValue = myValue ?? new MyValue();
A: Sadly, there's no equivalent to the null coalescing operator that works with DBNull; for that, you need to use the ternary operator:
newValue = (oldValue is DBNull) ? null : oldValue;
A: For working with DB Nulls, I created a bunch for my VB applications. I call them Cxxx2 as they are similar to VB's built-in Cxxx functions.
You can see them in my CLR Extensions project
http://www.codeplex.com/ClrExtensions/SourceControl/FileView.aspx?itemId=363867&changeSetId=17967
A: You can write two functions:
//When Expression is Number
public static double? isNull(double? Expression, double? Value)
{
if (Expression ==null)
{
return Value;
}
else
{
return Expression;
}
}
//When Expression is a string (cannot pass a null value in a string Expression)
public static string isEmpty(string Expression, string Value)
{
if (Expression == "")
{
return Value;
}
else
{
return Expression;
}
}
They work very well.
A: I've been using the following extension method on my DataRow types:
public static string ColumnIsNull(this System.Data.DataRow row, string colName, string defaultValue = "")
{
string val = defaultValue;
if (row.Table.Columns.Contains(colName))
{
if (row[colName] != DBNull.Value)
{
val = row[colName]?.ToString();
}
}
return val;
}
usage:
MyControl.Text = MyDataTable.Rows[0].ColumnIsNull("MyColumn");
MyOtherControl.Text = MyDataTable.Rows[0].ColumnIsNull("AnotherCol", "Doh! I'm null");
I'm checking for the existence of the column first because if none of the query results has a non-null value for that column, the DataTable object won't even provide that column.
A: Use below methods.
/// <summary>
/// Returns replacement value if expression is null
/// </summary>
/// <param name="expression"></param>
/// <param name="replacement"></param>
/// <returns></returns>
public static long? IsNull(long? expression, long? replacement)
{
if (expression.HasValue)
return expression;
else
return replacement;
}
/// <summary>
/// Returns replacement value if expression is null
/// </summary>
/// <param name="expression"></param>
/// <param name="replacement"></param>
/// <returns></returns>
public static string IsNull(string expression, string replacement)
{
if (string.IsNullOrWhiteSpace(expression))
return replacement;
else
return expression;
}
A: public static T IsNull<T>(this T defaultValue, T insteadValue)
{
if ((object)defaultValue == DBNull.Value || defaultValue == null || defaultValue.ToString() == "")
{
return insteadValue;
}
return defaultValue;
}
//This method works with both DBNull and null values, which is what the question asks for.
A: This is meant half as a joke, since the question is kinda silly.
public static bool IsNull (this System.Object o)
{
return (o == null);
}
This is an extension method, however it extends System.Object, so every object you use now has an IsNull() method.
Then you can save tons of code by doing:
if (foo.IsNull())
instead of the super lame:
if (foo == null)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "123"
} |
Q: Initializing C# auto-properties I'm used to writing classes like this:
public class foo {
private string mBar = "bar";
public string Bar {
get { return mBar; }
set { mBar = value; }
}
//... other methods, no constructor ...
}
Converting Bar to an auto-property seems convenient and concise, but how can I retain the initialization without adding a constructor and putting the initialization in there?
public class foo2theRevengeOfFoo {
//private string mBar = "bar";
public string Bar { get; set; }
//... other methods, no constructor ...
//behavior has changed.
}
You could see that adding a constructor isn't in line with the effort savings I'm supposed to be getting from auto-properties.
Something like this would make more sense to me:
public string Bar { get; set; } = "bar";
A: You can do it via the constructor of your class:
public class foo {
public foo(){
Bar = "bar";
}
public string Bar {get;set;}
}
If you've got another constructor (ie, one that takes paramters) or a bunch of constructors you can always have this (called constructor chaining):
public class foo {
private foo(){
Bar = "bar";
Baz = "baz";
}
public foo(int something) : this(){
//do specialized initialization here
Baz = string.Format("{0}Baz", something);
}
public string Bar {get; set;}
public string Baz {get; set;}
}
If you always chain a call to the default constructor you can have all default property initialization set there. When chaining, the chained constructor will be called before the calling constructor so that your more specialized constructors will be able to set different defaults as applicable.
A: This will be possible in C# 6.0:
public int Y { get; } = 2;
A: In the default constructor (and any non-default ones if you have any too of course):
public foo() {
Bar = "bar";
}
This is no less performant than your original code, I believe, since this is what happens behind the scenes anyway.
A: Update - the answer below was written before C# 6 came along. In C# 6 you can write:
public class Foo
{
public string Bar { get; set; } = "bar";
}
You can also write read-only automatically-implemented properties, which are only writable in the constructor (but can also be given a default initial value):
public class Foo
{
public string Bar { get; }
public Foo(string bar)
{
Bar = bar;
}
}
It's unfortunate that there's no way of doing this right now. You have to set the value in the constructor. (Using constructor chaining can help to avoid duplication.)
Automatically implemented properties are handy right now, but could certainly be nicer. I don't find myself wanting this sort of initialization as often as a read-only automatically implemented property which could only be set in the constructor and would be backed by a read-only field.
This hasn't happened up until and including C# 5, but is being planned for C# 6 - both in terms of allowing initialization at the point of declaration, and allowing for read-only automatically implemented properties to be initialized in a constructor body.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "215"
} |
Q: How do you get a System.Web.HttpWebRequest object to use SSL 2.0? I don't know if I have all the information needed to phrase this question well, so bear with me.
I have a local web page (local meaning 192.168.*) that is protected with a self-signed SSL cert. I'm trying to access this page using a System.Net.HttpWebRequest object, but I'm running into a weird problem.
If this page is accessed in Internet Explorer with the "Use SSL 2.0" option turned off, the browser returns back an error as if it can't establish a connection. (In other words, a browser connection error, as opposed to a server-sent error.) If the "Use SSL 2.0" option is turned on, the page works fine and you get the standard warning that this is a self-signed cert, do you want to continue, etc. (Oddly enough, Firefox, which supposedly does not have SSL 2.0 turned on, works just fine.)
Now my problem is that I'm trying to access this page with an HttpWebRequest object and the error it's returning back is that the connection has been unexpectedly closed, just like the error IE throws when "Use SSL 2.0" is turned off. (I already have code in place to ignore the fact that it's a self-signed cert, but it's not even getting that far.)
How do I get the System.Net.HttpWebRequest to, well, "Use SSL 2.0" when it's making its request?
A: I've hit this issue myself when dealing with Ssl3, though I'm not sure if the same advice works for SSL 2.0.
To work around the issue I set the Ssl3 flag on the security protocol like so:
ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3;
Check out these links for more details:
system.net.servicepointmanager.securityprotocol on MSDN
security protocol enumeration on MSDN
They might point you in the right direction if you're lucky :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: MySQL: Conditionally selecting next and previous rows http://thedailywtf.com/Articles/The-Hot-Room.aspx
You see how at the bottom there're links to the next and previous articles ("Unprepared For Divide_By_Zero" and "A Completely Different Game")? How do I do that, but selecting the next and previous non-private articles? This works for selecting the next article:
SELECT * FROM articles WHERE id > ? AND private IS NULL
But I cannot find a way to select the previous article.
What is the proper/efficient way to do this, preferably in one query?
A: Or extending Jeremy's answer...
In one query
(SELECT * FROM articles WHERE id > ?
AND private IS NULL
ORDER BY id ASC LIMIT 1)
UNION
(SELECT * FROM articles WHERE id < ?
AND private IS NULL
ORDER BY id DESC LIMIT 1)
A: Here's how I would do it:
-- next
SELECT * FROM articles WHERE id > ? AND private IS NULL ORDER BY id ASC LIMIT 1
-- previous
SELECT * FROM articles WHERE id < ? AND private IS NULL ORDER BY id DESC LIMIT 1
I'm not sure how to do it in one query. The only thing I can think of is possibly getting both the article you're displaying and the next article in one query, but that might be too confusing.
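For what it's worth, the two-query version above is easy to sanity-check with an in-memory database (Python's sqlite3 here; the statements are the same shape as the MySQL ones):

```python
import sqlite3

# Article 2 is marked private, so from article 3 the "previous" query
# should skip back to article 1, and "next" should land on article 4.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, private INTEGER)")
conn.executemany("INSERT INTO articles (id, private) VALUES (?, ?)",
                 [(1, None), (2, 1), (3, None), (4, None)])

current_id = 3
nxt = conn.execute(
    "SELECT id FROM articles WHERE id > ? AND private IS NULL "
    "ORDER BY id ASC LIMIT 1", (current_id,)).fetchone()
prev = conn.execute(
    "SELECT id FROM articles WHERE id < ? AND private IS NULL "
    "ORDER BY id DESC LIMIT 1", (current_id,)).fetchone()

print(prev, nxt)
```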
A: How about a nested select?
SELECT * FROM articles WHERE id IN (
SELECT id FROM articles WHERE id > ? AND private IS NULL ORDER BY id ASC LIMIT 1
)
OR id IN (
SELECT id FROM articles WHERE id < ? AND private IS NULL ORDER BY id DESC LIMIT 1
);
A: You can get away with subselects etc in your particular case, but if you need anything more complicated (for example: given an initial balance and a list of payments and chargebacks, calculate account balance at every point of time) you probably would want to write a stored procedure that uses SQL REPEAT/WHILE/LOOP clauses and allows use of variables and so on.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Merging Rails databases I have two databases with the same structure. The tables have an integer as a primary key as used in Rails.
If I have a patients table, I will have one patient using primary key 123 in one database and another patient using the same primary key in the other database.
What would you suggest for merging the data from both databases?
A: If your databases are exactly the same (the data doesn't require custom processing) and there aren't too many records, you could do this (which allows for foreign keys):
Untested... But you get the idea
#All models and their foreign keys
tables = {Patients => [:doctor_id, :hospital_id],
Doctors => [:hospital_id],
Hospitals => []}
ActiveRecord::Base.establish_connection :development
max_id = tables.keys.map do |model|
model.maximum(:id)
end.max + 1000
tables.each do |model, fks|
ActiveRecord::Base.establish_connection :development
records = model.find(:all)
ActiveRecord::Base.establish_connection :production
records.each do |record|
#update the foreign keys
fks.each do |attr|
record[attr] += max_id if not record[attr].nil?
end
record.id += max_id
model.create record.attributes
end
end
If you have a LOT of records you might have to portion this out somehow... do it in groups of 10k or something.
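The "bump every id by a constant offset" bookkeeping in the answer above can be checked in isolation before touching a real database. A Python sketch with plain dictionaries (table and column names here are illustrative, not from the question):

```python
# Merge dev rows into prod by shifting primary keys and foreign keys
# past the highest existing id, so the two record sets can't collide.

def merge_with_offset(dev_rows, prod_rows, fk_columns, offset):
    """Copy dev_rows into prod_rows, shifting ids and foreign keys by offset."""
    merged = [dict(row) for row in prod_rows]
    for row in dev_rows:
        row = dict(row)
        row["id"] += offset
        for fk in fk_columns:
            if row.get(fk) is not None:  # preserve NULL foreign keys
                row[fk] += offset
        merged.append(row)
    return merged


patients_prod = [{"id": 1, "doctor_id": 1}]
patients_dev = [{"id": 1, "doctor_id": 2}, {"id": 2, "doctor_id": None}]
result = merge_with_offset(patients_dev, patients_prod, ["doctor_id"], 1000)
```

The shifted foreign keys still point at the shifted dev records, which is the whole point of applying the same offset everywhere.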
A: Set both your databases up with entries in config/database.yml, then generate a new migration.
Use ActiveRecord::Base.establish_connection to switch between the two databases in the migration like this:
def self.up
ActiveRecord::Base.establish_connection :development
patients = Patient.find(:all)
ActiveRecord::Base.establish_connection :production
patients.each { |patient| Patient.create patient.attributes.except("id") }
end
YMMV depending on the number of records and the associations between models.
A: BTW it probably makes more sense for this to be a rake or capistrano task rather than a migration.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Is Http Streaming Comet possible in Safari? By HTTP Streaming Comet, I mean the "forever iframe" / "forever xhr" variations that don't close the connection after data has been pushed from the server, as opposed to standard polling and long polling which close and resend a new request for every server push event.
I looked at the dojo.io.cometd package and it seems they only have polling implementations. I also found this example, but it doesn't seem to work in webkit even after a fair bit of tinkering (I got it to work everywhere else). This announcement from the safari blog seems to suggest that it's possible with xhr, but I couldn't find any code or documentation, nor I could get it to work.
Does anyone know of a technique, script, library or demo that accomplishes HTTP streaming comet in Webkit browsers (Safari and Chrome)?
Update
After a bit more tinkering, I found that there are two things that need to be done in order to get http streaming working in Safari via XHR:
*
*The response needs to have a Content-Type: multipart/x-mixed-replace
*The response needs to send a few "noise" characters before the browser begins to display updates consistently. I'm assuming this has something to do with filling some internal buffer.
Update 2
I finally got it to work in all browsers using an iframe technique. The caveat to the solution is that only WebKit-based browsers should receive the multipart/x-mixed-replace header.
A: According to Wikipedia, HTTP Streaming comet is supposed to be possible in every browser. "Page Layout with Frames that Aren't", Ajax: The Definitive Guide. O'Reilly Media, pp. 320. ISBN 0596528388, is the reference that is quoted for this information, so maybe this book has a suggestion on how to do this.
Also http://meteorserver.org/ has a demo which I just confirmed works in Chrome, of a client side library + a server which pushes data to the client.
A: It's definitely possible: GMail does it. If you watch the Resources section of the developer tools in the latest Webkit, you can watch it in action. They have a request called "bind" that stays open more or less indefinitely. My understanding is that when new mail arrives, it comes across that connection.
A: Yes.
You need to include a large amount (at least 256 bytes) of junk at the front of the response in order to get Safari to behave.
A: Although this is an old post, I did some searching and found the following article really helpful:
http://www.shanison.com/2010/05/10/stop-the-browser-%E2%80%9Cthrobber-of-doom%E2%80%9D-while-loading-comet-forever-iframe/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: What coding projects are used to create art and beauty? Today the blinkenlights stereoscope project starts as part of the nuit blanche art event in Toronto. Toronto City Hall is transformed into a giant matrix display. There are tools to create custom animations and an iPhone application to view the live stream.
I think this is a great example of using technology for art and beauty. What other coding/programming projects are out there for the sake of art and beauty?
Update:
Youtube Video of blinkenlights in action: http://www.youtube.com/watch?v=jTZosieGhIQ
A: There's a ton of art projects involving technology / code over at processing.org
A: I've tinkered with http://processing.org here and there over the last few years. Along similar lines, there's the Context Free project, at http://www.contextfreeart.org
The idea behind Context Free is very cool. Rather than directly placing graphic elements, using imperative semantics, you define a grammar that declaratively defines the relationship of graphic elements. The runtime system generates an image by walking the grammar and instantiating elements that conform with its rules.
Fun stuff.
A: I find code_swarm totally beautiful, awe inspiring and hypnotic, does that count?
A: Personally I think this is a great example of using code to produce art, and not only that, but the art expresses something all geek types can understand and respect: http://www.opte.org/maps/
A: Any place where you will find contemporary art, so museums and art festivals. Interactive installations often use a dataflow programming language such as max/msp or puredata to process input and do something interesting with it.
There are also festivals which focus more on how code can be used to create art. An example is http://piksel.no in Bergen, Norway. Piksel.no has seminars and workshops, live events, etc.. There are similar festivals in other countries.
And you could stretch the definition of art a bit and include the demoscene. I do consider some demoscene productions to be art, but they are rare gems among the drudge. Still, have a look at http://scene.org/awards.php to get some idea of what kind of stuff that community comes up with.
A: Every code project you work on should be a form of art, and wonderful to behold.
I am only being partly facetious here. Part of why people enjoy computer programming is the sculptural/artistic nature of the whole thing. Code for beauty of structure and appearance, and you often also have better code...
A: The computer program AARON by Harold Cohen has been painting artwork for over 30 years.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Is this an acceptable way to make a simple "scheduler" in C#? I'm making a simple scheduler with C# in .Net. All it does is execute a simple select statement on a table in a SQL Server DB once per minute (this does not need to scale or anything... the db does not have a high load). Here is my proposed implementation:
static void Main(string[] args)
{
while (true)
{
System.Threading.Thread.Sleep(timeout); // timeout is, say, 60000
CheckTable();
}
}
Is this ok? What is a better way?
p.s. Someone suggested using the Windows Forms Timer class... however that seems like overkill.
Cheers!
A: While it is technically legal, you are probably better off using a timer. Timers are not much more code to set up, and you can let the runtime take care of spawning new threads. If you ever needed to reuse this in another program, the sleep loop could also become a performance bottleneck where a timer would not.
The timer will add more code though since you need to use a timer trigger event.
A: Close.
*
*This will run every (1 minute + time to call proc). Maybe that's OK, maybe it isn't. If it isn't OK you need to subtract the amount of time it took to run.
*You should have a try-catch block around it. You don't want it to die entirely just because of a temporary database or network issue.
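Both fixes in this answer can be sketched together: compensate for the work time so the loop stays on a fixed cadence, and wrap the check so a transient failure doesn't kill the process. Python shown; check_table, the interval, and the iterations parameter are stand-ins, not the poster's actual code.

```python
import time

def run_scheduler(check_table, interval=60.0, iterations=None):
    """Call check_table every `interval` seconds, surviving exceptions.

    `iterations` limits the loop for testing; None means run forever.
    """
    next_run = time.monotonic()
    done = 0
    while iterations is None or done < iterations:
        try:
            check_table()
        except Exception as exc:  # e.g. a transient network/DB failure
            print("check failed, will retry:", exc)
        done += 1
        next_run += interval
        # sleep only for the time remaining in this interval, so the
        # duration of check_table itself doesn't accumulate as drift
        time.sleep(max(0.0, next_run - time.monotonic()))
```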
A: Quartz.net is a good solution for timing needs. It's pretty easy to setup (good tutorial on the site), and gives you a lot more flexibility than Timers. The CronTrigger is really powerful, and easy to configure.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: ReSharper-- Unstable for anybody else? I have seen JetBrains' ReSharper tool on many "must-have" tool lists. I've installed it on a few occasions over the last few years and it's turned my Visual Studio sluggish and erratic. I generally uninstall it after a week or two because it makes VS flaky. I want to like it, but I can't get past the instability.
So what's the deal? Am I having bad luck? Does the tool have issues but the usefulness out weighs the issues? Anyone else out there have trouble with it? Are there some troublesome options to turn off?
A: These previously asked questions should help in answering your duplicate question:
*
*Do you have any tips to improve
resharper and/or visual studio
performance ?
*ReSharper sluggishness
A: The one thing that I have seen that makes VS slow when R# is on is the lack of RAM and a slow CPU.
That being said, the only time I see slowness is when working in VB. C# is blazing fast ALL the time. The current computer I have isn't as good as my last one, but it does have 2GB of RAM and dual core P4 (3.20GHz).
Things that CAN slow R# down though are:
*
*Solution Errors setting
*Code that has lots and lots and lots of errors
*Code that has lots of analysis errors
*Code Rush installed as well
A: I keep having the same issues. Performance is fine really, but my machine has 6GB of memory and a quad-core processor. It is the constant system hangs, unit-test runner hangs, Visual Studio crashing when opening solutions, etc., that made me uninstall it. I do have Vista Business 64, but every other add-on and program I have works fine. If I uninstall ReSharper 4.1 everything starts working again.
There again, it behaved in the same way on my dual core laptop too.
Visual Assist X seems rock solid in comparison but it does not have as many features (it is half the price though).
Ste
A: I've experienced a lot of what you're talking about over the years as well, but I have to say having recently moved to the 4.0 version of ReSharper, a lot of that overhead has been cut down dramatically and it seems to be quite a bit more functional to boot.
Try it again. What's the worst that can happen? You'll uninstall it again? No big loss.
A: The only thing I've found remotely sluggish about it is right-clicking anything.
The context menu strip takes ages to load, but the time i lose right-clicking anything is far less compared to the time it saves me refactoring.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Is the #region directive really useful in .NET? After maintaining lots of code littered with #region (in both C# and VB.NET), it seems to me that this construct is just a bunch of "make work" for the programmer. It's work to PUT the dang things into the code, and then they make searching and reading code very annoying.
What are the benefits? Why do coders go to the extra trouble to put this in their code.
Make me a believer!
A: I use them all the time. Again, like anything else, they can be used for both evil and good, and can certainly be the hallmark of bad design, but they can be used to help organize code very well.
#region Properties
#region Update Section
#region Accessors
Certainly you should avoid Jeff's example of
#Sweep under carpet
What I find odd about them, as Jeff pointed out, is that they are a compiler preprocessor command for UI purposes. I'm sure the VS team could have done something just as useful in another way.
A: Our Business Objects all have Regions - and we love them.
We have;
*
*Business Properties and Methods
*Shared Methods
*Constructors
*Authorization
*Data Access
*Events
We have a few others depending on the type of Business Object we are dealing with (Subscriber etc)
For many classes regions just get in the way - but for our standard business objects they save us a ton of time. These Business Objects are code gen'd, so they are very consistent. I can get to where I want to be far faster than with the clutter, and the consistency makes it easy to find each other's stuff.
A: I don't generally use code regions, except in one specific case - dependency properties. Although dependecy properties are a pleasure to work with in most respects, their declaraions are an eyesore and they quickly clutter your code. (As if managing GUI code was not already enough of a challenge...)
I like to give the region the same exact name as the CLR property declaration (copy/paste it in there). That way you can see the scope, type and name when it's collapsed - which is really all you care about 95% of the time.
#region public int ObjectDepthThreshold
public int ObjectDepthThreshold
{
get { return (int)GetValue(ObjectDepthThresholdProperty); }
set { SetValue(ObjectDepthThresholdProperty, value); }
}
public static readonly DependencyProperty ObjectDepthThresholdProperty = DependencyProperty.Register(
"ObjectDepthThreshold",
typeof(int),
typeof(GotoXControls),
new FrameworkPropertyMetadata((int)GotoXServiceState.OBJECT_DEPTH_THRESHOLD_DEFAULT,
FrameworkPropertyMetadataOptions.AffectsRender,
new PropertyChangedCallback(OnControlValueChanged)
)
);
#endregion
When it's collapsed you just see
public int ObjectDepthThreshold
If I have more than one dependency property, I like to start the next #region on the very next line. That way you end up with all of them grouped together in your class, and the code is compact and readable.
BTW if you just want to peek at the declaration, mouse hover over it.
A: A similar question has already been asked.
but...
I would say not anymore. It was originally intended to hide generated code from WinForms in early versions of .NET. With partial classes the need seems to go away. IMHO it gets way overused now as an organizational construct and has no compiler value whatsoever. It's all for the IDE.
A: Going on with what has been previously said by Russell Myers, if you learn how to refactor your code properly (a skill proficient developers must learn), there really isn't too much of a need for regions.
A couple of weeks ago I thought regions were great because they allowed me to hide my fat code, but after exercising my code skills I was able to make it slimmer, and now I fit into a size 7 class (someone on SO should make that a measurement for refactoring in the future! :P)
A: I find that they obfuscate the code in all but the simplest of uses. The only use we advocate in our projects are the ones the IDE uses (interface implementations and designer code).
The right tools should be used for the right purpose. Code should be written to show intent and function rather than arbitrarily grouping things. Organizing things into access modifier grouping or some other grouping just seems to be illogical. I find the code should be organized in a manner that makes sense for the particular class; after all, there are other tools for viewing class members by access modifier. This is also the case for almost every other use of regions; there is a better way.
For example, grouping properties, events, constants or otherwise together doesn't really make sense either as code is generally more maintainable if the things are grouped together by function (as in, a property that uses a constant should be near that constant, not near other unrelated properties just because it's a property).
A: There are times when your methods HAVE to be long, especially with web development. In those cases (such as when I've got a gridview with a large, complex object bound to it) I've found it useful to use regions:
#region Declaring variables for fields and object properties
#region Getting the fields in scope
#region Getting the properties of the object
#region Setting Fields
These are discrete sections of the method that COULD be broken out, but it would be difficult (I'd have to use variables with larger scope than I like or pass a LOT of variables as 'out'), and it is basic plumbing.
In this case, regions are perfectly acceptable. In others, they are needless.
I will also use regions to group methods into logical groups. I reject partial classes for this purpose, as I tend to have a lot of pages open when I'm debugging, and the fewer partial classes there are in an object (or page, or dialog), the more of them I can have on my tab list (which I limit to one line so I can see more code).
Regions are only a problem when used as a crutch, or when they cover poor code (for instance, if you are nesting regions inside of each other within the same scope, it's a bad sign).
A: I often use them instead of comments to order groups of functionality in the body of a class, e.g. "Configuration public interface", "Status public interface", "internal processing" and "internal worker thread management".
Using the keyboard shortcuts to "collapse to definitions" and "expand current block", I can easily navigate even larger classes.
Unfortunately, regions are broken for C++, and MS doesn't think they need to be fixed.
A: I hate the over-use of these. The only thing I find them useful for is hiding away things you probably never want to see again. Then again, those things should probably be off in a library somewhere.
A: Often times, both partials and #regions are used as a crutch for bad design (e.g. class is too big or tries to do too many things).
The best use I've had for #regions so far is the grouping of functionality that is seen in many different classes. For example, value objects that have getters, setters, constructors and supporting fields. I might very well group those ideas into regions. It's a matter of opinion, however, whether that makes code cleaner or harder to read.
A: http://www.rauchy.net/regionerate/ - Automatically regionizes your code ;)
I'm a fan of regions for grouping sections of large classes - say, all the properties together, all the constants, etc. I'm someone who's constantly collapsing code I don't need to see at the time, so I love regions for that.
Also, I find regions really useful when implementing interfaces, particularly multiple interfaces. I can group each interface's methods, properties, events, etc. so it's easier to see at a glance which member belongs to which interface.
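As a rough sketch (the class and interface choices here are illustrative, not taken from any answer above), that grouping might look like:

```csharp
using System;

// Illustrative class implementing two interfaces, with each
// interface's members collected under its own region.
public class Order : IComparable<Order>, IDisposable
{
    public int Id { get; set; }

    #region IComparable<Order> Members

    public int CompareTo(Order other)
    {
        return Id.CompareTo(other.Id);
    }

    #endregion

    #region IDisposable Members

    public void Dispose()
    {
        // Release any held resources here.
    }

    #endregion
}
```

Collapsing the regions then leaves one line per interface, which is what makes the ownership obvious at a glance.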
A: They can be overused, but I like them for separating private methods, public methods, properties, and instance variables.
A: Like any language feature, regions have the potential to be misused and abused but they also have their benefits.
They are great for creating "folding" groups around:
*
*methods, especially if you have a lot of overloaded methods
*interface implementations
*operator overloads
You can also use it to group properties, public/private methods, events, class-wide variables, etc.
I use regions in my code to help create a consistent structure in my code so I always know where things are at a glance. Yes, it makes things a bit harder during refactoring or adding new functions (especially when autogenerated by Visual Studio) but I feel it's a small price to pay to keep things consistent and structured.
A: Nice answers. I agree with those who say it sometimes reflects bad coding and design, but #region is actually useful if you're creating documentation (MSDN style) with Sandcastle.
Let's say you have a public API and there is some base class that you want to give an example of usage for. Then you would properly document your public methods and add an example region where you could copy and paste some code. The problem with this is that when/if your base class changes, you're supposed to change the example eventually. A better solution is to include a sample code project in your solution and build it all together, so every time you build your solution, if the sample code is not up to date it will not compile. So what does that have to do with regions, you will be asking yourself by now. Well, look at this sample:
/// <example>
/// The following code sample is an implementation of LoadPublishedVersion() for XmlPageProvider.
/// <code source="../CodeSamples/EPiServerNET/PageProvider/XmlPageProvider.cs" region="LoadPublishedVersion" lang="cs"/>
/// </example>
Notice there is a link to the source code sample file and the region for the method that you want to expose as a sample in your documentation. See the result here. The method needs to be in a properly named region and will be automatically included in your documentation. That's why I wouldn't throw away #region yet.
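For completeness, the referenced sample file would then wrap the method in a matching named region. A sketch of what that file might contain (the class name comes from the snippet above; the method body is illustrative, not the real provider implementation):

```csharp
// CodeSamples/EPiServerNET/PageProvider/XmlPageProvider.cs
// Built as part of the solution so the sample stays compilable.
public class XmlPageProvider
{
    #region LoadPublishedVersion

    public string LoadPublishedVersion(string pageLink)
    {
        // Illustrative body; the real sample would call the provider API.
        return "published:" + pageLink;
    }

    #endregion
}
```

Sandcastle extracts exactly the lines between `#region LoadPublishedVersion` and `#endregion` into the generated documentation.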
A: I love regions because it helps me focus on just what I am working on. I use them even if the class just has a method.
I use code snippets with regions already pre-populated, which means less typing. I feel the class is more organized and, as Code Complete suggests, nicer for other people to read. The compiler just ignores them; they are there to make the code more readable.
A: There really isn't a benefit. They are a code smell. After using them for a while, I got sick of them. If you need to break things out by functionality, use a partial class.
A: My working day starts with opening files in editor and clicking on "Expand All" to hide all regions. After that I can begin to work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Can I use Collections.EMPTY_LIST without an unchecked warning? Is there a generics-friendly way of using Collections.EMPTY_LIST in my Java program?
I know I could just declare one myself, but I'm just curious to know if there's a way in the JDK to do this.
Something like users = Collections<User>.EMPTY_LIST;
A: By doing the following:
List<User> users = Collections.emptyList();
The type of the list returned from Collections.emptyList() will be inferred as List<User> from the left-hand side of the assignment. However, if you prefer not to rely on this inference, you can specify the type argument explicitly:
List<User> users = Collections.<User>emptyList();
In this particular instance this may appear redundant to most people (in fact, I've seen very little code in the wild that makes use of explicit type arguments), however for a method with the signature void doStuff(List<String> users) it would be perfectly clear to invoke doStuff() with an explicit type argument as follows:
doStuff(Collections.<String>emptyList());
A: List<User> users = Collections.emptyList();
A: After creating the empty list, I would recommend storing it as a constant rather than creating a new one each time.
Also, there are performance benefits to using Collections.emptyList() versus new ArrayList(0), although the difference is probably small. The list returned by emptyList() is optimized to be an immutable empty list. For example, the size() method simply returns 0, rather than a field lookup or whatever ArrayList does.
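Pulling the points above together, here is a small self-contained sketch (the class and method names are mine, for illustration) showing inference from the assignment target, an explicit type argument at a call site, and the immutability of the returned list:

```java
import java.util.Collections;
import java.util.List;

public class EmptyListDemo {
    // Target-typed inference: the compiler infers the element type
    // (here, String) from the declared type of the variable.
    static List<String> inferred() {
        List<String> users = Collections.emptyList();
        return users;
    }

    // A method parameter with a generic type; an explicit type
    // argument at the call site keeps the invocation unambiguous.
    static int countUsers(List<String> users) {
        return users.size();
    }

    public static void main(String[] args) {
        System.out.println(inferred().isEmpty());                        // true
        System.out.println(countUsers(Collections.<String>emptyList())); // 0

        // The returned list is immutable; mutating it throws.
        try {
            inferred().add("bob");
        } catch (UnsupportedOperationException e) {
            System.out.println("immutable");
        }
    }
}
```

Because emptyList() returns a shared immutable singleton, there is no per-call allocation, which is the performance point made above.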
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: How do I get modrewrite to not apply rules to a specific folder? How do I get modrewrite to ENTIRELY ignore the /vip/ directory so that all requests pass directly to the folder?
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^vip/.$ - [PT]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
See also How do I ignore a directory in mod_rewrite? -- reposting because I wasn't sufficiently clear about the problem first time around.
A: Replace:
RewriteRule ^vip/.$ - [PT]
with:
RewriteRule ^vip/.*$ - [PT,L]
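Putting that fix back into the full block from the question, the corrected rules would read (a sketch, assuming the same front-controller setup):

```apache
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /

# Pass anything under /vip/ straight through and stop rule processing
RewriteRule ^vip/.*$ - [PT,L]

# Route everything that isn't an existing file or directory to index.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
```

The L flag is what stops the later index.php rule from ever seeing /vip/ requests.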
A: The easy way I do it when needed is to add a new .htaccess file to that folder and in it put
RewriteEngine Off
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Understanding dlls and how they work in Visual Studio Does anyone have a good resource on dlls and how they are used / generated in Visual Studio? A few questions I'm rather hazy on specifically are:
*
*How refresh files work
*How dll version numbers are generated
*The difference between adding a reference by project vs browsing for the specific dll
Any other tips are welcome as well.
A: See the question on DLL information for some background.
Version numbers for unmanaged DLLs are stored in the DLL's .rc file, the same as for an exe. For managed DLLs I believe it uses the AssemblyFileVersion attribute, usually in AssemblyInfo.cs for a Visual Studio generated project:
[assembly: AssemblyFileVersion("1.0.0.0")]
If you add the reference by project then VS will be able to copy the correct flavour (debug/release) of the referenced assembly to your output directory. It can also use this information to implicitly add a dependency between the projects so it builds them in the right order.
A: .NET DLL's
The general term for a .NET DLL is an assembly. They are a single atomic unit of deployment and consist of one or more CLR 'modules' (for most developers usually just one unless they are combining compiler output from two or more languages for example). Assemblies contain both CIL code and CLR metadata such as the assembly manifest.
.refresh Files
.refresh files are simply text files that tell VS where to check for new builds of referenced dll's. They are used in file based web projects where there isn't a project file to store this info.
Version Numbers
.NET Assembly version numbers are generated by an assembly scoped attribute AssemblyVersion which is usually found in a source file named 'AssemblyInfo.cs' (found under a project folder named 'Properties' from VS2005 onwards). Version numbers are comprised of major.minor.build.revision, for example -
[assembly: AssemblyVersion("1.0.0.0")]
AssemblyVersion is used as part of an assembly's identity (i.e. in its strong name) and plays an important role in the binding process and during version policy decisions.
For example if I had two assemblies of the same name in the GAC then the AssemblyVersion attribute would differentiate them for the purposes of loading a specific version of the assembly.
AssemblyVersion number can be fixed and incremented manually or you can allow the compiler to generate the build and revision numbers for you by specifying:
[assembly: AssemblyVersion("1.0.*")] - generates build and revision number
[assembly: AssemblyVersion("1.0.0.*")] - generates revision number
If the AssemblyVersion attribute is not present then the version number default to '0.0.0.0'.
The value of the AssemblyVersion attribute becomes part of an assembly's manifest, the AssemblyFileVersion attribute value does not.
The AssemblyFileVersion attribute is used to embed a Win32 file version into the DLL. If this is not present then AssemblyVersion is used. It has no bearing on how the .NET assembly loader/resolver chooses which version of an assembly to load.
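To see the two attributes side by side, a typical AssemblyInfo.cs might contain (version values are illustrative):

```csharp
using System.Reflection;

// Part of the assembly's identity (strong name); used by the
// loader when binding to a specific assembly version.
[assembly: AssemblyVersion("1.2.0.0")]

// Embedded only as the Win32 file version resource; has no
// effect on assembly binding or resolution.
[assembly: AssemblyFileVersion("1.2.3.4")]
```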
Project References vs Browsing For DLL
If you're adding a project reference it means that the referenced project will be part of your solution. This makes debugging simpler by being able to step directly into your referenced project's code. If you only add a dll reference then you don't have the benefits of the project being part of the solution and being able to step into the code within the solution.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/169287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |