Q: Does ASP.NET transfer ALL session data from SQL server at the start of a request, or only as needed? I'm using ASP.NET, with session state stored out of process in SQL Server. When a page request begins, are the entire contents of a user's session retrieved from the DB, deserialized, and sent to the ASP.NET process in one fell swoop, or are individual objects transferred to the ASP.NET process only as needed? Basically, I have a page that stores some large objects in session, and it's hard for my application to determine when the data can be disposed. If the data is only pulled out of the DB when it's used then there isn't an issue; if the entire session state is chunked to ASP.NET for each page request, I might have a performance issue. A: It's all in one go. The session object is recreated from the store at the beginning of the request. It lets ASP.NET work the same way no matter what the underlying store is. You can find the gory details here.
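The all-at-once behaviour described in the answer can be pictured with a short sketch. This is illustrative Python, not ASP.NET code — the store and function names are made up — but it shows why storing large objects in an out-of-process session hurts: every request pays to deserialize the whole blob, no matter how little of it the page actually reads.

```python
import pickle

# Hypothetical stand-in for an out-of-process session store: the web
# process receives one serialized blob per request, not individual
# session keys on demand.
session_store = {}

def save_session(session_id, session):
    # The entire session dictionary is serialized as a single blob.
    session_store[session_id] = pickle.dumps(session)

def load_session(session_id):
    # Every request deserializes the ENTIRE blob, even if the page
    # only needs one small value out of it.
    return pickle.loads(session_store[session_id])

save_session("user1", {"large_report": list(range(100_000)),
                       "user_name": "alice"})

session = load_session("user1")   # pays for the big list...
print(session["user_name"])       # ...just to read a short string
```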
{ "language": "en", "url": "https://stackoverflow.com/questions/165399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to compare/validate a SQL schema I'm looking for a way to validate the SQL schema on a production DB after updating an application version. If the application does not match the DB schema version, there should be a way to warn the user and list the changes needed. Is there a tool or a framework (to use programmatically) with built-in features to do that? Or is there some simple algorithm to run this comparison? Update: Red Gate lists "from $395". Anything free? Or more foolproof than just keeping the version number? A: I hope I can help - this is the article I suggest reading: Compare SQL Server database schemas automatically It describes how you can automate the SQL Server schema comparison and synchronization process using T-SQL, SSMS or a third-party tool. A: You can do it programmatically by looking in the data dictionary (sys.objects, sys.columns etc.) of both databases and comparing them. However, there are also tools like Redgate SQL Compare Pro that do this for you. I have specified this as a part of the tooling for QA on data warehouse systems on a few occasions now, including the one I am currently working on. On my current gig this was no problem at all, as the DBAs here were already using it. The basic methodology for using these tools is to maintain a reference script that builds the database and keep this in version control. Run the script into a scratch database and compare it with your target to see the differences. It will also generate patch scripts if you feel so inclined. As far as I know there's nothing free that does this unless you feel like writing your own. Redgate is cheap enough that it might as well be free. Even as a QA tool to prove that the production DB is not in the configuration it was meant to be, it will save you its purchase price after one incident. A: You can now use my SQL Admin Studio for free to run a Schema Compare, Data Compare and Sync the Changes.
No longer requires a license key; download from here: http://www.simego.com/Products/SQL-Admin-Studio Also works against SQL Azure. [UPDATE: Yes, I am the author of the above program; as it's now free I just wanted to share it with the community] A: Try this SQL. - Run it against each database. - Save the output to text files. - Diff the text files. /* get list of objects in the database */ SELECT name, type FROM sysobjects ORDER BY type, name /* get list of columns in each table / parameters for each stored procedure */ SELECT so.name, so.type, sc.name, sc.number, sc.colid, sc.status, sc.type, sc.length, sc.usertype , sc.scale FROM sysobjects so , syscolumns sc WHERE so.id = sc.id ORDER BY so.type, so.name, sc.name /* get definition of each stored procedure */ SELECT so.name, so.type, sc.number, sc.text FROM sysobjects so , syscomments sc WHERE so.id = sc.id ORDER BY so.type, so.name, sc.number A: If you are looking for a tool that can compare two databases and show you the differences, Red Gate makes SQL Compare A: You didn't mention which RDBMS you're using: if the INFORMATION_SCHEMA views are available in your RDBMS, and if you can reference both schemas from the same host, you can query the INFORMATION_SCHEMA views to identify differences in: -tables -columns -column types -constraints (e.g. primary keys, unique constraints, foreign keys, etc) I've written a set of queries for exactly this purpose on SQL Server for a past job - it worked well to identify differences. Many of the queries were using LEFT JOINs with IS NULL to check for the absence of expected items, others were comparing things like column types or constraint names. It's a little tedious, but it's possible. A: I found this small and free tool that fits most of my needs. http://www.wintestgear.com/products/MSSQLSchemaDiff/MSSQLSchemaDiff.html It's very basic but it shows you the schema differences of two databases.
It doesn't have any fancy stuff like auto-generated scripts to make the differences go away, and it doesn't compare any data. It's just a small, free utility that shows you schema differences :) A: Make a table and store your version number in there. Just make sure you update it as necessary. CREATE TABLE version ( version VARCHAR(255) NOT NULL ) INSERT INTO version VALUES ('v1.0'); You can then check that the version number stored in the database matches the application code during your app's setup or wherever is convenient. A: SQL Compare by Red Gate. A: Which RDBMS is this, and how complex are the potential changes? Maybe this is just a matter of comparing row counts and index counts for each table -- if you have trigger and stored procedure versions to worry about as well, then you need something more industrial. A: Try dbForge Data Compare for SQL Server. It can compare and sync any databases, even very large ones. Quick, easy, always delivers a correct result. Try it on your database and comment upon the product. We can recommend you a reliable SQL comparison tool that offers 3-times-faster comparison and synchronization of table data in your SQL Server databases. It's dbForge Data Compare for SQL Server. Main advantages: * *Speedier comparison and synchronization of large databases *Support of native SQL Server backups *Custom mapping of tables, columns, and schemas *Multiple options to tune your comparison and synchronization *Generating comparison and synchronization reports Plus a free 30-day trial and risk-free purchase with a 30-day money-back guarantee.
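The query-the-data-dictionary-and-compare approach suggested above can be sketched in a few lines. This illustrative Python example uses SQLite's catalog so it is self-contained; against SQL Server you would query sys.objects/sys.columns or the INFORMATION_SCHEMA views instead, and the helper names here are made up:

```python
import sqlite3

def schema_of(db):
    # Read the data dictionary: one entry per table, with its column list.
    schema = {}
    for (table,) in db.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'"):
        cols = db.execute(f"PRAGMA table_info({table})").fetchall()
        schema[table] = [(c[1], c[2]) for c in cols]  # (name, type) pairs
    return schema

def compare_schemas(a, b):
    sa, sb = schema_of(a), schema_of(b)
    diffs = []
    for table in sorted(set(sa) | set(sb)):
        if table not in sb:
            diffs.append(f"table {table} missing from target")
        elif table not in sa:
            diffs.append(f"table {table} missing from reference")
        elif sa[table] != sb[table]:
            diffs.append(f"table {table} columns differ")
    return diffs

reference = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
reference.execute("CREATE TABLE customer (id INTEGER, name TEXT)")
target.execute("CREATE TABLE customer (id INTEGER)")  # drifted schema
print(compare_schemas(reference, target))  # → ['table customer columns differ']
```

The same shape works for the version-number-in-a-table approach: run the comparison at application startup and warn the user with the returned list.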
{ "language": "en", "url": "https://stackoverflow.com/questions/165401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Can HTML controls and server controls be used in a single aspx page? I need help with the following aspx code. aspx code: <asp:Label ID ="lblName" runat ="server" Text ="Name"></asp:Label> <asp:TextBox ID ="txtName" runat ="server"></asp:TextBox> Consider this as my aspx page content. I am going to populate the values for the TextBox only after the postback from the server. But the label is also posting to the server (runat="server") even though it's not necessary. Should I write my code like this to reduce the load on the server? Corrected code: <label id ="lblNames">Name</label> <asp:TextBox ID ="txtName" runat ="server"></asp:TextBox> Only my server control will be sent to the server on postback, and not my HTML control, which has a static value. Please suggest whether this is the correct way of coding. A: If you take the runat='server' out of the <label> element then it won't be parsed as a server control. If you're not going to do anything with lblNames from the server then it is perfectly okay to leave it out. A: If you're not doing anything with the label server-side, then just use a <span>. It'll end up as the same HTML at the browser. A: .NET Label controls are rendered as HTML label elements and do not get posted back to the server. Labels just don't post back. The server control, however, allows you to manipulate the properties of the control in code, which is very useful. There is nothing wrong with using HTML tags as well in your aspx/ascx page though if you don't need any programmatic control of the element.
{ "language": "en", "url": "https://stackoverflow.com/questions/165402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Resources for 2d game physics I'm looking for some good references for learning how to model 2d physics in games. I am not looking for a library to do it for me - I want to think and learn, not blindly use someone else's work. I've done a good bit of Googling, and while I've found a few tutorials on GameDev, etc., I find their tutorials hard to understand because they are either written poorly, or assume a level of mathematical understanding that I don't yet possess. For specifics - I'm looking for how to model a top-down 2d game, sort of like a tank combat game - and I want to accurately model (among other things) acceleration and speed, heat buildup of 'components,' collisions between models and level boundaries, and missile-type weapons. Websites, recommended books, blogs, code examples - all are welcome if they will aid understanding. I'm considering using C# and F# to build my game, so code examples in either of those languages would be great - but don't let language stop you from posting a good link. =) Edit: I don't mean that I don't understand math - it's more the case that I don't know what I need to know in order to understand the systems involved, and don't really know how to find the resources that will teach me in an understandable way. A: Physics for Game Developers by O'Reilly A: Speaking from experience, implementing a 2D physics engine is pretty difficult. I'll detail the steps I took when creating my engine. * *Collision detection. Collision detection can be a difficult problem, even when you're not dealing with 3D worlds or networked simulations. For 2D physics, you definitely want to use the Separating Axis Theorem. Once you've implemented SAT, you're halfway done with the dynamics portion of your engine. *Kinematics/Dynamics. Chris Hecker has written an excellent online resource which walked me through collision response step-by-step. *Everything Else. 
Once you've got the collision detection/response finished, it's a matter of implementing everything else you want in the engine. This can include friction, contact forces, joints, along with whatever else you can think of. Have fun! Creating your own physics simulation is an incredibly rewarding experience. A: This is a great tutorial that demonstrates 2D physics concepts using Flash and is not specific to Flash. http://www.rodedev.com/tutorials/gamephysics/game_physics.swf A: Even if you want to learn it all from the bottom up, an open source physics library that is well coded and documented contains far more information than a book. How do I deal with situation x... find in files can be faster than a paper index. Original response: What, no mention of Box2D? It's an open source side project of a Blizzard employee, has a good community, and, well, works great. In my (brief) experience with Box2D, integrating it with Torque Game Builder, I found the API clean to use, documentation was clear, it supported all the physics objects I expected (joints in particular were a requirement), and the community looked friendly and active (sometime around early 2010). Judging by forum posters, it also appeared that maintainers were receptive to source contributions (that did not carry license baggage). Its island-based solver seemed quite fast, as I expected from its reputation, not that I did any major performance testing. A: Here are some resources I assembled a few years ago. Of note is the Verlet Integration. I am also including links to some open source and commercial physics engines I found at that time. There is a stackoverflow article on this subject here: 2d game physics? Physics Methods * *Verlet Integration (Wikipedia Article) *Advanced Character Physics (Great article! Includes movement, collisions, joints, and other constraints.) Books * *"Game Physics Engine Development", Ian Millington -- I own this book and highly recommend it. 
The book builds a physics engine in C++ from scratch. The author starts with basic particle physics and then adds "laws of motion", constraints, rigid-body physics and on and on. He includes well-documented source code all the way through. Physics Engines * *Tokamak (Open source physics API) *APE (Actionscript Physics Engine) *FLADE (Flash Dynamics Engine) *Fisix Engine (another Flash Actionscript engine) *Simple Physics Engine (commercial) A: F# has a feature called Units of Measure which does dimensional analysis for you, providing errors if you get it wrong. For example if you say: let distance : float<meters> = gravity * 3.0<seconds> That would yield a compile error, since gravity is <meters/seconds^2> and not <meters>. Also, since F# is just .NET you can write your math/physics code in a class library and reference that from your C#. I'd recommend you check out these blog posts for more information: * *Simple WPF game in F# using Units of Measure *Andrew Kennedy's blog A: This is a great resource for writing your first engine. It's in 3D but it's very easy to convert down to 2D. I know at least one big company that followed this tutorial for their internal engine, and I personally have followed his steps for my own engine. He explains all the basic concepts in spring/impulse-based physics, and shows you how to write your own integrator. A: The F#.NET Journal has published two articles about this: * *Real-time Finite Element Materials simulation (15th June 2008). *Rigid body dynamics (15th January 2010)
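As a taste of the Verlet integration mentioned in the resource list above, here is a minimal position-Verlet step for a single particle under constant gravity. This is an illustrative Python sketch, not code from any of the linked resources: velocity never appears explicitly, because it is implicit in the difference between the current and previous positions.

```python
GRAVITY = -9.8   # m/s^2, constant downward acceleration
DT = 0.01        # fixed timestep in seconds

def verlet_step(pos, prev_pos, accel, dt=DT):
    # Position Verlet: x_{n+1} = 2*x_n - x_{n-1} + a*dt^2.
    # The (pos - prev_pos) difference carries the velocity for free,
    # which is what makes constraint solving (ropes, joints) so easy.
    new_pos = 2.0 * pos - prev_pos + accel * dt * dt
    return new_pos, pos

pos, prev = 100.0, 100.0          # start at rest, 100 m up
for _ in range(100):              # simulate one second of free fall
    pos, prev = verlet_step(pos, prev, GRAVITY)

# After 1 s of free fall the drop should be roughly g*t^2/2 = 4.9 m,
# so pos lands near 95.
print(round(pos, 2))
```

The same update applies per-axis for 2D vectors, and damping or tank-track friction can be added by scaling the implicit velocity term.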
{ "language": "en", "url": "https://stackoverflow.com/questions/165404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: Adding field to open recordset Is there a simple way to append a new field to an existing open ADO RecordSet? fields.append() won't work if the RecordSet is open, and closing appears to kill the existing data. NB: I'm using the Microsoft ActiveX Data Objects 2.8 Library A: You can't append fields to a recordset while it's open. You can create a clone of the recordset, append your required fields, open it, and copy the data from the original. The other option is to persist the recordset as XML, modify the rowset schema, add the required fields, and then load the XML into a new recordset.
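The clone-and-copy workaround can be pictured abstractly. This Python sketch is not ADO code — the helper name is made up — it just shows the shape of the fix: since the open recordset cannot be widened in place, you build a new one with the extra field and copy the rows across, leaving the original untouched.

```python
# A stand-in "recordset": a list of row dictionaries.
original = [
    {"id": 1, "name": "widget"},
    {"id": 2, "name": "gadget"},
]

def clone_with_field(records, field_name, default=None):
    # "Clone" the structure with one extra field, then copy the data,
    # mirroring the clone/append/copy steps described above.
    return [{**row, field_name: default} for row in records]

copy = clone_with_field(original, "price", default=0)
print(copy[0])  # → {'id': 1, 'name': 'widget', 'price': 0}
```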
{ "language": "en", "url": "https://stackoverflow.com/questions/165421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How does ItemContainerGenerator.ContainerFromItem work with a grouped list? I have a ListBox which until recently was displaying a flat list of items. I was able to use myList.ItemContainerGenerator.ContainerFromItem(thing) to retrieve the ListBoxItem hosting "thing" in the list. This week I've modified the ListBox slightly in that the CollectionViewSource that it binds to for its items has grouping enabled. Now the items within the ListBox are grouped underneath nice headers. However, since doing this, ItemContainerGenerator.ContainerFromItem has stopped working - it returns null even for items I know are in the ListBox. Heck - ContainerFromIndex(0) is returning null even when the ListBox is populated with many items! How do I retrieve a ListBoxItem from a ListBox that's displaying grouped items? Edit: Here's the XAML and code-behind for a trimmed-down example. This raises a NullReferenceException because ContainerFromIndex(1) is returning null even though there are four items in the list. 
XAML: <Window x:Class="WpfApplication1.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:scm="clr-namespace:System.ComponentModel;assembly=WindowsBase" Title="Window1"> <Window.Resources> <XmlDataProvider x:Key="myTasks" XPath="Tasks/Task"> <x:XData> <Tasks xmlns=""> <Task Name="Groceries" Type="Home"/> <Task Name="Cleaning" Type="Home"/> <Task Name="Coding" Type="Work"/> <Task Name="Meetings" Type="Work"/> </Tasks> </x:XData> </XmlDataProvider> <CollectionViewSource x:Key="mySortedTasks" Source="{StaticResource myTasks}"> <CollectionViewSource.SortDescriptions> <scm:SortDescription PropertyName="@Type" /> <scm:SortDescription PropertyName="@Name" /> </CollectionViewSource.SortDescriptions> <CollectionViewSource.GroupDescriptions> <PropertyGroupDescription PropertyName="@Type" /> </CollectionViewSource.GroupDescriptions> </CollectionViewSource> </Window.Resources> <ListBox x:Name="listBox1" ItemsSource="{Binding Source={StaticResource mySortedTasks}}" DisplayMemberPath="@Name" > <ListBox.GroupStyle> <GroupStyle> <GroupStyle.HeaderTemplate> <DataTemplate> <TextBlock Text="{Binding Name}"/> </DataTemplate> </GroupStyle.HeaderTemplate> </GroupStyle> </ListBox.GroupStyle> </ListBox> </Window> CS: public Window1() { InitializeComponent(); listBox1.ItemContainerGenerator.StatusChanged += ItemContainerGenerator_StatusChanged; } void ItemContainerGenerator_StatusChanged(object sender, EventArgs e) { if (listBox1.ItemContainerGenerator.Status == System.Windows.Controls.Primitives.GeneratorStatus.ContainersGenerated) { listBox1.ItemContainerGenerator.StatusChanged -= ItemContainerGenerator_StatusChanged; var i = listBox1.ItemContainerGenerator.ContainerFromIndex(1) as ListBoxItem; // select and keyboard-focus the second item i.IsSelected = true; i.Focus(); } } A: You have to listen and react to the ItemsGenerator.StatusChanged Event and wait until the ItemContainers are generated before you 
can access them with ContainerFromElement. Searching further, I've found a thread in the MSDN forum from someone who has the same problem. This seems to be a bug in WPF, when one has a GroupStyle set. The solution is to punt the access of the ItemGenerator after the rendering process. Below is the code for your question. I tried this, and it works: void ItemContainerGenerator_StatusChanged(object sender, EventArgs e) { if (listBox1.ItemContainerGenerator.Status == System.Windows.Controls.Primitives.GeneratorStatus.ContainersGenerated) { listBox1.ItemContainerGenerator.StatusChanged -= ItemContainerGenerator_StatusChanged; Dispatcher.BeginInvoke(System.Windows.Threading.DispatcherPriority.Input, new Action(DelayedAction)); } } void DelayedAction() { var i = listBox1.ItemContainerGenerator.ContainerFromIndex(1) as ListBoxItem; // select and keyboard-focus the second item i.IsSelected = true; i.Focus(); } A: If the above code doesn't work for you, give this a try public class ListBoxExtenders : DependencyObject { public static readonly DependencyProperty AutoScrollToCurrentItemProperty = DependencyProperty.RegisterAttached("AutoScrollToCurrentItem", typeof(bool), typeof(ListBoxExtenders), new UIPropertyMetadata(default(bool), OnAutoScrollToCurrentItemChanged)); public static bool GetAutoScrollToCurrentItem(DependencyObject obj) { return (bool)obj.GetValue(AutoScrollToSelectedItemProperty); } public static void SetAutoScrollToCurrentItem(DependencyObject obj, bool value) { obj.SetValue(AutoScrollToSelectedItemProperty, value); } public static void OnAutoScrollToCurrentItemChanged(DependencyObject s, DependencyPropertyChangedEventArgs e) { var listBox = s as ListBox; if (listBox != null) { var listBoxItems = listBox.Items; if (listBoxItems != null) { var newValue = (bool)e.NewValue; var autoScrollToCurrentItemWorker = new EventHandler((s1, e2) => OnAutoScrollToCurrentItem(listBox, listBox.Items.CurrentPosition)); if (newValue) listBoxItems.CurrentChanged += 
autoScrollToCurrentItemWorker; else listBoxItems.CurrentChanged -= autoScrollToCurrentItemWorker; } } } public static void OnAutoScrollToCurrentItem(ListBox listBox, int index) { if (listBox != null && listBox.Items != null && listBox.Items.Count > index && index >= 0) listBox.ScrollIntoView(listBox.Items[index]); } } Usage in XAML <ListBox IsSynchronizedWithCurrentItem="True" extenders:ListBoxExtenders.AutoScrollToCurrentItem="True" ..../> A: Try parsing the VisualTree up from the 'thing' until you reach a ListBoxItem type
{ "language": "en", "url": "https://stackoverflow.com/questions/165424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: How do I convert a "title" attribute to a mouseover event with jQuery? I have a "span" element inside a "table" "td" element. The span tag has a Title. I want to get the title of that span tag and pull it out to make it the "mouseover" tip for the "td" element. For example: I want to turn this: <td> <a href="#"><span id="test" title="Acres for each province">Acres</span></a> </td> Into this: <td onmouseover="tip(Acres for each province)"> <a href="#"><span id="test">Acres</span></a> </td> EDIT: I don't think you guys understand. I am trying to put the onmouseover function into the "td" tag. I am NOT trying to put it into the "span" tag. A: Based on your edit, you might check out jQuery's DOM traversal methods: http://docs.jquery.com/Traversing Something along these lines (not tested, I don't claim it's syntactically correct, just general ideas here)... $("td").each(function() { $(this).mouseover(function() { tip($(this).find("span").attr("title")); }); }); A: something like: $("span#test").mouseover( function () { tip($(this).attr("title")); }); A: If you can't put a class on the td or select it in some way, then start by selecting the span, then go to the span's grandparent and attach to the mouseover: // get each span with id = test $("span#test").each(function(){ var $this = $(this); // attach to mouseover event of the grandparent (td) $this.parent().parent().mouseover( function () { tip($this.attr("title")); } ); }); A: Ok, I'll try it too :P $("#yourTable").find("td").hover(function() { generateTip($(this).find("span:first").attr("title")); } , function() { removeTip(); } ) What this does: * *Get the table with id yourTable *Select all its td *attach mouseover and mouseout events *mouseover event : call the generateTip function with the title value of the first span in that td *mouseout event : call the removeTip() (optional) function. A: With jQuery: $('#test').attr('title')
{ "language": "en", "url": "https://stackoverflow.com/questions/165443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: PHP Preserve scope when calling a function I have a function that includes a file based on the string that gets passed to it, i.e. the action variable from the query string. I use this for filtering purposes etc. so people can't include files they shouldn't be able to, and if the file doesn't exist a default file is loaded instead. The problem is that when the function runs and includes the file, scope is lost because the include ran inside a function. This becomes a problem because I use a global configuration file, then I use specific configuration files for each module on the site. The way I'm doing it at the moment is defining the variables I want to be able to use as global and then adding them into the top of the filtering function. Is there any easier way to do this, i.e. by preserving scope when a function call is made, or is there such a thing as PHP macros? Edit: Would it be better to use extract($GLOBALS); inside my function call instead? Edit 2: For anyone that cared. I realised I was overthinking the problem altogether and that instead of using a function I should just use an include, duh! That way I can keep my scope and have my cake too. A: Edit: Okay, I've re-read your question and I think I get what you're talking about now: you want something like this to work: // myInclude.php $x = "abc"; // ----------------------- // myRegularFile.php function doInclude() { include 'myInclude.php'; } $x = "A default value"; doInclude(); echo $x; // should be "abc", but actually prints "A default value" If you are only changing a couple of variables, and you know ahead of time which variables are going to be defined in the include, declare them as global in the doInclude() function. 
Alternatively, if each of your includes could define any number of variables, you could put them all into one array: // myInclude.php $includedVars['x'] = "abc"; $includedVars['y'] = "def"; // ------------------ // myRegularFile.php function doInclude() { global $includedVars; include 'myInclude.php'; // perhaps filter out any "unexpected" variables here if you want } doInclude(); extract($includedVars); echo $x; // "abc" echo $y; // "def" original answer: this sort of thing is known as "closures" and is being introduced in PHP 5.3 http://steike.com/code/php-closures/ Would it be better to use extract($GLOBALS); inside my function call instead? dear lord, no. If you want to access a global variable from inside a function, just use the global keyword. eg: $x = "foo"; function wrong() { echo $x; } function right() { global $x; echo $x; } wrong(); // undefined variable $x right(); // "foo" A: When it comes to configuration options (especially file paths and such) I generally just define them with absolute paths using a define(). Something like: define('MY_CONFIG_PATH', '/home/jschmoe/myfiles/config.inc.php'); That way they're always globally accessible regardless of scope changes, and unless I migrate to a different file structure it's always able to find everything. A: If I understand correctly, you have code along the lines of: function do_include($foo) { if (is_valid($foo)) include $foo; } do_include(@$_GET['foo']); One solution (which may or may not be simple, depending on the codebase) is to move the include out into the global scope: if (is_valid(@$_GET['foo'])) include $_GET['foo']; Other workarounds exist (like you mentioned: declaring globals, working with the $GLOBALS array directly, etc), but the advantage of this solution is that you don't have to remember such conventions in all the included files. 
A: Why not return a value from your include and then set the value of the include call to a variable: config.php return array( 'foo'=>'bar', 'x'=>23, 'y'=>12 ); script.php $config = require('config.php'); var_dump($config); No need to mess up the place with global variables A: Is there any easier way to do this, i.e. by preserving scope when a function call is made You could use: function doInclude($file, $args = array()) { extract($args); include($file); } If you don't want to explicitly pass the variables, you could call doInclude with get_defined_vars as argument, eg.: doInclude('test.template.php', get_defined_vars()); Personally I would prefer to pass an explicit array, rather than use this, but it would work. A: You can declare variables within the included file as global, ensuring they have global scope: //inc.php global $cfg; $cfg['foo'] = bar; //index.php function get_cfg($cfgFile) { if (valid_cfg_file($cfgFile)) { include_once($cfgFile); } } ... get_cfg('inc.php'); echo "cfg[foo]: $cfg[foo]\n";
{ "language": "en", "url": "https://stackoverflow.com/questions/165445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Where can I find a good jQuery drop shadow plugin? Does anyone have a good recommendation for a drop shadow jQuery plugin? I've been working on a project that had every element on the page with a subtle drop shadow; we started using RUZEE to do the shadows but there was a severe performance hit when you had more than 4 or 5 shadows being calculated on the page. I resorted to writing my own plugin, I call it Simple Shadow, and it only uses jQuery to inject images in floating divs around the div you want a drop shadow on. Nothing elegant, but for the purpose of completing that site it worked without performance hits. Now my plugin isn't anything special, but I am still in search of a good lightweight shadow plugin. A: CSS 3 will support drop shadows. Firefox and Safari already support the feature. You might want to use that instead of the jQuery functionality, since it will work in browsers that have JavaScript turned off. Take a look at http://www.css3.info/preview/box-shadow/ for a demo of the shadow. A: The original site hosting the jQuery Dropshadow plugin has apparently gone down. For anyone looking for it, I'm currently hosting it on my Dropbox account. A: jQuery UI also provides drop shadow functionality. A: jQuery UI no longer supports shadow functionality. A: Try the FontEffect jQuery plugin; sorry I can't post the link, but you can find it easily on Google or the jQuery plugin site.
{ "language": "en", "url": "https://stackoverflow.com/questions/165446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Why do people like case sensitivity? Just wondering why people like case sensitivity in a programming language? I'm not trying to start a flame war, just curious, that's all. Personally I have never really liked it because I find my productivity goes down whenever I've tried a language that has case sensitivity; mind you, I am slowly warming up/getting used to it now that I'm using C# and F# a lot more than I used to. So why do you like it? Cheers A: An advantage of VB.NET is that although it is not case-sensitive, the IDE automatically re-formats everything to the "official" case for an identifier you are using - so it's easy to be consistent, easy to read. Disadvantage is that I hate VB-style syntax, and much prefer C-style operators, punctuation and syntax. In C# I find I'm always hitting Ctrl-Space to save having to use the proper type. Just because you can name things which only differ by case doesn't mean it's a good idea, because it can lead to misunderstandings if a lot of that leaks out to larger scopes, so I recommend steering clear of it at the application or subsystem level, but allowing it only internally to a function or method or class. A: Case sensitivity doesn't enforce coding styles or consistency. If you pascal case a constant, the compiler won't complain. It'll just force you to type it in using pascal case every time you use it. I personally find it irritating to have to try and distinguish between two items which only differ in case. It is easy to do in a short block of code, but very difficult to keep straight in a very large block of code. Also notice that the only way people can actually use case sensitivity without going nuts is if they all rigidly follow the same naming conventions. It is the naming convention which added the value, not the case sensitivity. A: Consistency. Code is more difficult to read if "foo", "Foo", "fOO", and "fOo" are considered to be identical. 
SOME PEOPLE WOULD WRITE EVERYTHING IN ALL CAPS, MAKING EVERYTHING LESS READABLE. Case sensitivity makes it easy to use the "same name" in different ways, according to a capitalization convention, e.g., Foo foo = ... // "Foo" is a type, "foo" is a variable with that type A: I maintain an internal compiler for my company, and am tempted to make it a hybrid - you can use whatever case you want for an identifier, and you have to refer to it with the same casing, but naming something else with the same name and different case will cause an error. Dim abc = 1 Dim y = Abc - 1 ' error, case doesn't match "abc" Dim ABC = False ' error, can't redeclare variable "abc" It's currently case-insensitive, so I could probably fix the few existing errors and nobody would complain too much... A: Many people who like case-sensitivity misunderstand what case-insensitivity means. VB .NET is case-insensitive. That doesn't mean that you can declare a variable as abc, then later refer to it as ABC, Abc, and aBc. It means that if you type it as any of those others, the IDE will automatically change it to the correct form. Case-insensitivity means you can type dim a as string and VS will automatically change it to the correctly-cased Dim a As String In practice, this means you almost never have to hit the Shift key, because you can type in all lowercase and let the IDE correct for you. But C# is not so bad about this as it used to be. Intellisense in C# is much more aggressive than it was in VS 2002 and 2003, so that the keystroke count falls quite a bit. A: There's a lot of answers here, but I'm surprised no one pointed out the obvious example that also makes fun of a stackoverflow competitor: expertSexChange != expertsExchange Case is very important when you use camel case variable names. A: I believe it enforces consistency, which improves the readability of code, and lets your eye parse out the pieces better. 
class Doohickey { public void doSomethingWith(string things) { print(things); } } Using casing conventions makes that code appear very standardized to any programmer. You can pick out classes, types, methods easily. It would be much harder to do if anyone could capitalize it in any way: Class DOOHICKEY { Public Void dosomethingwith(string Things) { Print(things); } } Not to say that people would write ugly code, but much in the way capitalization and punctuation rules make writing easier to read, case sensitivity or casing standards make code easier to read. A: Case sensitivity is madness! What sort of insane coder would use variables named foo, foO, fOo, and fOO all in the same scope? You'll never convince me that there is a reason for case sensitivity! A: I believe it is important that you understand the difference between what case sensitivity is and what readability is to properly answer this. While having different casing strategies is useful, you can have them within a language that isn't case sensitive. For example foo can be used for a variable and FOO as a constant in both Java and VB. There is the minor difference that VB will allow you to type fOo later on, but this is mostly a matter of readability and hopefully is fixed by some form of code completion. What can be extremely useful is when you want to have instances of your objects. If you use a consistent naming convention it can become very easy to see where your objects come from. For example: FooBar fooBar = new FooBar(); When only one object of a type is needed, readability is significantly increased as it is immediately apparent what the object is. When multiple instances are needed, you will obviously have to choose new (hopefully meaningful) names, but in small code sections it makes a lot of sense to use the class name with a lowercase first character rather than a system like myFooBar, x, or some other arbitrary name whose purpose you'll forget. 
Of course all of this is a matter of context, however in this context I'd say 9 times out of 10 it pays off. A: It gives you more options. Bell bell BEll are all different. Besides, it drives the newbies that were just hired nuts trying to find out why the totals aren't coming out right ;o))) A: Because now you actually have to type everything in a consistent way. And then things suddenly begin to make sense. If you have a decent editor - one that features IntelliSense or the same thing by another name - you shouldn't have any problems figuring out case-sensitive names. A: I think there is also an issue of psychology involved here. We are programmers, we distinguish minutely between things. a is not the same ASCII value as A, and I would feel odd when my compiler considers them the same. This is why, when I type (list 'a 'b 'c) in LISP (in the REPL), and it responds with (A B C) My mind immediately exclaims 'That's not what I said!'. When things are not the same, they are different and must be considered so. A: I usually spend some time with Delphi programming on vacation, and most of the other time I use only C++ and MASM. And one thing's odd: when I'm on Delphi, I don't like case sensitivity, but when I'm on C++ - I do. I like case sensitivity, because it makes similar words (functions, variables) look similar, and I like non-case sensitivity because it doesn't put excessive restrictions on syntax. A: From .NET Framework Developer's Guide Capitalization Conventions, Case-Sensitivity: The capitalization guidelines exist solely to make identifiers easier to read and recognize. Casing cannot be used as a means of avoiding name collisions between library elements. Do not assume that all programming languages are case-sensitive. They are not. Names cannot differ by case alone. A: It's useful for distinguishing between types in code. For example in Java: If it begins with a capital letter, then it's probably a class. If it's ALL_CAPS it's probably a constant. 
It gives more versatility. A: Feels like a more professional way of coding. Shouldn't need the compiler to figure out what you meant. A: I felt the same way as you a long time ago when I used VB3/4 a lot more. Now I work in mainly C#. But now I find the IDE's do a great job of finding the symbols, and giving good intellisense on the different cases. It also gives me more flexibility in my own code as I can have different meanings for items with different cases, which I do a lot now. A: IMHO it's entirely a question of habit. Whichever one you're used to will seem natural and right. You can come up with plenty of justifications as to why it's good or bad, but none of them hold much water. Eg: * *You get more possible identifiers, eg. foo vs Foo vs FOO. *But having identifiers that differ only in case is not a good idea *You can encode type-info into a name (eg. FooBar=typename, fooBar=function, foo_bar=variable, FOO_BAR=macro) *But you can do that anyway with Hungarian notation A: Also a good habit if you're working in Linux where referencing file names is case sensitive. I had to port a Windows ColdFusion application to work in Linux and it was an utter nightmare. Also some databases have case sensitivity turned on, imagine the joy there. It is a good habit though regardless of platform and certainly leads to a more consistent development style. A: Because it's how natural language works, too. A: In programming there's something to be said for case sensitivity, for instance having a public property Foo and a corresponding private/protected field foo. With IntelliSense it's not very hard not to make mistakes. However in an OS, case sensitivity is just crazy. I really don't want to have a file Foo and foo and fOO in the same directory. This drives me crazy every time I'm doing *nix stuff. A: For me case sensitivity is just a play on scopes like thisValue for an argument and ThisValue for a public property or function. 
More often than not you need to use the same variable name (as it represents the same thing) in different scopes and case sensitivity helps you do this without resorting to prefixes. Whew, at least we are no longer using Hungarian notation. A: After working many years with legacy VBScript ASP code, when we moved to .NET we chose C#, and one of the main reasons was case sensitivity. The old code was unreadable because people didn't follow any convention: code was an unreadable mess (well, poor VBScript IDEs helped on that). In C# we can define naming conventions and everybody must follow them. If something is not correctly cased, you can rename it (with refactoring, but that's an IDE feature) and there won't be any problem because the class or variable will be named the same way all across the code. Finally, I think it's much more readable if everything is correctly cased. Maybe it's faster to write without case sensitivity, but from a code reviewing and maintaining point of view, it's not the best thing because skipping through the code looking for something is easier. For example it's easier to find all the foo strings at a glance than looking for foo, Foo, FOO, FOo... A: Case-insensitive languages don't easily generalize to non-ASCII character sets. Proper case conversion for a language other than English is not a straightforward task, and depends on system locale, among other things. A: Case insensitivity is very difficult, unless you restrict the syntax to ASCII (or only apply the case insensitivity to the ASCII characters), and also you need to restrict the locale. The rules for determining upper and lower case of characters are neither well defined, nor lossless. Some characters can have more than one 'lowercase' form. Some people will disagree as to what the correct uppercase form should be. (Some languages and environments allow almost any character to be used in names.) Consider: in C#, what is the difference between ToUpper() and ToUpperInvariant()?
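The point in the last answer, that case conversion is neither well defined nor lossless, is easy to demonstrate concretely. Below is a small sketch in Python (the thread is language-agnostic here; Python 3 is used only because its str type is Unicode-aware):

```python
# German "sharp s" has no single-character uppercase form under the
# default (full) case mapping, so upper-casing expands it to "SS":
assert "straße".upper() == "STRASSE"

# A different word upper-cases to the very same string...
assert "strasse".upper() == "STRASSE"

# ...and lower-casing does not round-trip back to the original:
assert "STRASSE".lower() == "strasse"  # not "straße"

# A case-insensitive compiler that folded identifiers this way could not
# distinguish `straße` from `strasse`, even though they differ as written.
print("case folding is lossy")
```

The same kind of trouble shows up with the Turkish dotted/dotless i, which is exactly why C# offers the locale-independent ToUpperInvariant() alongside the locale-sensitive ToUpper().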
{ "language": "en", "url": "https://stackoverflow.com/questions/165455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Which is the correct way to check for a null exception? Which is the most correct code? if (HttpContext.Current.Response.Cookies[authCookieName] != null) { HttpContext.Current.Response.Cookies[authCookieName].Value = "New Value"; } or if (HttpContext.Current != null) if (HttpContext.Current.Response != null) if (HttpContext.Current.Response.Cookies != null) if (HttpContext.Current.Response.Cookies[authCookieName] != null) HttpContext.Current.Response.Cookies[authCookieName].Value = "New Value"; A: Both are good. Assuming that you have already checked everything else that needs to be checked first. E.g.: private bool CheckSuspendersAndBelt() { try { //ensure that true is true... if (true == true) { //...and that false is false... if (false == false) { //...and that true and false are not equal... if (false != true) { //don't proceed if we don't have at least one processor if (System.Environment.ProcessorCount > 0) { //and if there is no system directory then something is wrong if (System.Environment.SystemDirectory != null) { //hopefully the code is running under some version of the CLR... if (System.Environment.Version != null) { //we don't want to proceed if we're not in a process... if (System.Diagnostics.Process.GetCurrentProcess() != null) { //and code running without a thread would not be good... if (System.Threading.Thread.CurrentThread != null) { //finally, make sure instantiating an object really results in an object... if (typeof(System.Object) == (new System.Object()).GetType()) { //good to go return true; } } } } } } } } } return false; } catch { return false; } } (sorry, couldn't resist... :) ) A: You could try: if(HttpContext.Current != null && HttpContext.Current.Response != null && HttpContext.Current.Response.Cookies != null && HttpContext.Current.Response.Cookies[authCookieName] != null) { // do your thing } A: If any one of HttpContext, HttpContext.Current, HttpContext.Current.Response, or HttpContext.Current.Response.Cookies is null, you're already in trouble. 
Let the exception happen and fix your web server. A: HttpContext.Current.Response.Cookies will never be null. The only thing that can cause a null is if the cookie you are expecting doesn't exist, so the first is correct. HttpContext.Current would be null if you weren't accepting a web request though :) A: The first example you gave is more than enough. As mentioned, if any of the other objects are null there is a problem with ASP.NET. if (HttpContext.Current.Response.Cookies[authCookieName] != null) { HttpContext.Current.Response.Cookies[authCookieName].Value = "New Value"; } But rather than littering your code with these checks, you should create some generic functions like SetCookie, GetCookie, GetQueryString, and GetForm, etc. which accept the name and value (for Set functions) as parameters, handle the null check, and return the value or an empty string (for Get functions). This will make your code much easier to maintain and possibly improve, and if you decide to use something other than Cookies to store/retrieve options in the future, you'll only have to change the functions. A: Neither is really more correct, though I would avoid the second, as deeply nested conditionals tend to be hard to understand and maintain. If you would prefer to get a null pointer exception, use the first. If you want to deal with nulls in another way or silently, use the second (or a refactored version of the second). A: If you think there's a chance that Current, Response, Cookies, or Cookies[authCookieName] could be null, and you have a reasonable thing to do if any of them are, then the latter's the way to go. If the chances are low, and/or there's nothing you can do if the intermediates are null, go for the former, as it's more concise - the best you could do is to get better logging if you use the expanded example.
{ "language": "en", "url": "https://stackoverflow.com/questions/165458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Avoiding TSQL Data-conversion errors I think this is best asked in the form of a simple example. The following chunk of SQL causes a "DB-Library Error:20049 Severity:4 Message:Data-conversion resulted in overflow" message, but how come? declare @a numeric(18,6), @b numeric(18,6), @c numeric(18,6) select @a = 1.000000, @b = 1.000000, @c = 1.000000 select @a/(@b/@c) go How is this any different to: select 1.000000/(1.000000/1.000000) go which works fine? A: I ran into the same problem the last time I tried to use Sybase (many years ago). Coming from a SQL Server mindset, I didn't realize that Sybase would attempt to coerce the decimals out -- which, mathematically, is what it should do. :) From the Sybase manual: Arithmetic overflow errors occur when the new type has too few decimal places to accommodate the results. And further down: During implicit conversions to numeric or decimal types, loss of scale generates a scale error. Use the arithabort numeric_truncation option to determine how serious such an error is considered. The default setting, arithabort numeric_truncation on, aborts the statement that causes the error but continues to process other statements in the transaction or batch. If you set arithabort numeric_truncation off, Adaptive Server truncates the query results and continues processing. So assuming that the loss of precision is acceptable in your scenario, you probably want the following at the beginning of your transaction: SET ARITHABORT NUMERIC_TRUNCATION OFF And then at the end of your transaction: SET ARITHABORT NUMERIC_TRUNCATION ON This is what solved it for me those many years ago ... A: This is just speculation, but could it be that the DBMS doesn't look at the dynamic value of your variables but only the potential values? Thus, a six-decimal numeric divided by a six-decimal numeric could result in a twelve-decimal numeric; in the literal division, the DBMS knows there is no overflow. 
Still not sure why the DBMS would care, though--shouldn't it return the result of two six-decimal divisions as up to an 18-decimal numeric? A: Because you have declared the variables in the first example, the result is expected to be of the same declaration (i.e. numeric(18,6)), but it is not. I have to say that the first one worked in SQL2005 though (returned 1.000000 [The same declared type]) while the second one returned (1.00000000000000000000000 [A totally different declaration]). A: Not directly related, but could possibly save someone some time with the Arithmetic overflow errors using Sybase ASE (12.5.0.3). I was setting a few default values in a temporary table which I intended to update later on, and stumbled onto an Arithmetic overflow error. declare @a numeric(6,3) select 0.000 as thenumber into #test --indirect declare select @a = ( select thenumber + 100 from #test ) update #test set thenumber = @a select * from #test Shows the error: Arithmetic overflow during implicit conversion of NUMERIC value '100.000' to a NUMERIC field. Which in my head should work, but doesn't, as the 'thenumber' column wasn't declared (or, rather, was indirectly declared as decimal(4,3)). So you would have to indirectly declare the temp table column with the scale and precision of the format you want, which in my case was 000.000. select 000.000 as thenumber into #test --this solved it Hopefully that saves someone some time :)
{ "language": "en", "url": "https://stackoverflow.com/questions/165466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is better practice when programming a member function? I have seen member functions programmed both inside of the class they belong to and outside of the class with a function prototype inside of the class. I have only ever programmed using the first method, but was wondering if it is better practice to use the other or just personal preference? A: Assuming you mean C++, it is always better to define functions outside of the class, because if you put it inside the class, the compiler may try to inline it, which is not always desirable: * *Increase in code size (every object file that includes this header might end up with a copy of the function in their code). *Breaking binary compatibility when function definition changes. Even with inline functions, it is usually better to put definitions outside the class to improve readability of the class's public interface, unless the function is a trivial accessor or some other one-liner. A: For C++, putting method definitions in the header file means that everything that includes a given header must be recompiled when the header changes - even if it's just an implementation detail. Moving definitions out of the header means that files which include the header will need to be recompiled only when the header itself changes (functions added/removed, or declarations changed). This can have a big impact on compile times for complex projects. A: There are advantages to both techniques. If you place only prototypes in the class definition, that makes it easier for someone who is using your class to see what methods are available. They aren't distracted by implementation details. Putting the code directly in the class definition makes it simpler to use the class, you only have to #include a header. This is especially useful (necessary) with templated classes. A: Presuming the language is C++: The bottom line is that it is personal preference. 
Inside the class is shorter overall and more direct, especially for the int getFoo() const { return _foo; } type of function. Outside the class, it can remove "clutter" from the class definition. I have seen both in use... Of course, non-inlined functions are always outside the class. A: It is also common to mix both styles when defining a class. For simple methods consisting of 1 or 2 lines it is common and convenient to define the method body within the class definition. For more lengthy methods it is better to define these externally. You will have more readable class definitions without cluttering them up with the method body. Hiding the implementation of a method is beneficial in that the user of the class will not be distracted by the actual implementation, or make assumptions about the implementation that might change at a later time. A: I assume you are talking about C++. Having a nice and clean interface is certainly a good idea. Having a separate implementation file helps to keep your interface clean. It also reduces compilation time, especially if you are using an opaque pointer. A: If you implement the function inside the class, you cannot #include the class in multiple .cpp files or the linker will complain about multiple definitions of the function. Thus, usual practice is to have the class definition in a .h file and the member implementations in a .cpp file (usually with the same name). A: Again, assuming C++, I usually restrict this to placeholders on virtual functions, e.g. virtual int MyFunc() {} // Does nothing in base class, override if needed Anything else, and Andrew Medico's point kicks in too easily and hurts compile times.
{ "language": "en", "url": "https://stackoverflow.com/questions/165474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Installing starling on Windows I am trying to install the starling gem on my Windows machine. But, whenever I try to install it I get this error: Building native extensions. This could take a while... ERROR: Error installing starling: ERROR: Failed to build gem native extension. c:/ruby/bin/ruby.exe extconf.rb install starling -- --srcdir= c:\ruby-1.8.7-p72 checking for windows.h... no *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --srcdir=. --curdir --ruby=c:/ruby/bin/ruby Gem files will remain installed in c:/ruby/lib/ruby/gems/1.8/gems/eventmachine-0 .12.2 for inspection. Results logged to c:/ruby/lib/ruby/gems/1.8/gems/eventmachine-0.12.2/ext/gem_mak e.out What do I need to install to provide the windows.h header? A: Gems was somewhat broken on Windows at the time, but it's fixed now. The following workaround applies to the old One-Click Installer version of Ruby; you should really update to the new MinGW-based RubyInstaller and the DevKit, with which the workaround still works but which is more future-proof. * *Locate a version of the problem gem (in this case it's eventmachine) that has a win32 binary. If you look on RubyForge, you'll see that the last eventmachine gem to possess a win32 binary is version 0.12.0 *Force that version of eventmachine to install: $ gem install eventmachine --version=0.12.0 Successfully installed eventmachine-0.12.0-x86-mswin32 1 gem installed Installing ri documentation for eventmachine-0.12.0-x86-mswin32... Installing RDoc documentation for eventmachine-0.12.0-x86-mswin32... 
*Now try installing your original gem again: $ gem install starling Successfully installed ZenTest-3.10.0 Successfully installed memcache-client-1.5.0 Successfully installed SyslogLogger-1.4.0 Successfully installed starling-0.9.8 4 gems installed Installing ri documentation for ZenTest-3.10.0... Installing ri documentation for memcache-client-1.5.0... Installing ri documentation for SyslogLogger-1.4.0... Installing ri documentation for starling-0.9.8... Installing RDoc documentation for ZenTest-3.10.0... Installing RDoc documentation for memcache-client-1.5.0... Installing RDoc documentation for SyslogLogger-1.4.0... Installing RDoc documentation for starling-0.9.8... Be warned though, if you now run gem update gems will stupidly try and install the latest version of eventmachine which, as we already know, won't build on Windows. This causes gem update to stop completely. See this question to find out how to work around this particular annoyance. A: The install seems to be stuck on installing the eventmachine gem. The easiest approach here may be to download and install the eventmachine binary gem for Windows here. Otherwise you will need a compiler (which I assume you don't have). A: I don't know if this will work but someone is working on a one-click installer of Ruby under Windows that comes with a C compiler. See http://github.com/luislavena/rubyinstaller/tree/master A: Now that everything is installed, is it possible to get it working under Windows? I'm getting a 'fork() function unimplemented on this machine' error, because Windows doesn't have fork().
{ "language": "en", "url": "https://stackoverflow.com/questions/165488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Detecting mouse clicks in Windows using Python How can I detect mouse clicks regardless of the window the mouse is in? Preferably in Python, but if someone can explain it in any language I might be able to figure it out. I found this on Microsoft's site: http://msdn.microsoft.com/en-us/library/ms645533(VS.85).aspx But I don't see how I can detect or pick up the notifications listed. Tried using pygame's pygame.mouse.get_pos() function as follows: import pygame pygame.init() while True: print pygame.mouse.get_pos() This just returns 0,0. I'm not familiar with pygame, is something missing? In any case I'd prefer a method without the need to install a 3rd party module. (other than pywin32 http://sourceforge.net/projects/pywin32/ ) A: It's been a hot minute since this question was asked, but I thought I'd share my solution: I just used the built-in module ctypes. (I'm using Python 3.3 btw) import ctypes import time def DetectClick(button, watchtime = 5): '''Waits watchtime seconds. Returns True on click, False otherwise''' if button in (1, '1', 'l', 'L', 'left', 'Left', 'LEFT'): bnum = 0x01 elif button in (2, '2', 'r', 'R', 'right', 'Right', 'RIGHT'): bnum = 0x02 start = time.time() while 1: if ctypes.windll.user32.GetKeyState(bnum) not in [0, 1]: # ^ this returns either 0 or 1 when button is not being held down return True elif time.time() - start >= watchtime: break time.sleep(0.001) return False A: Windows MFC, including GUI programming, is accessible with Python using the Python for Windows extensions by Mark Hammond. An O'Reilly Book Excerpt from Hammond's and Robinson's book shows how to hook mouse messages, e.g.: self.HookMessage(self.OnMouseMove,win32con.WM_MOUSEMOVE) Raw MFC is not easy or obvious, but searching the web for Python examples may yield some usable examples. A: The only way to detect mouse events outside your program is to install a Windows hook using SetWindowsHookEx. The pyHook module encapsulates the nitty-gritty details. 
Here's a sample that will print the location of every mouse click: import pyHook import pythoncom def onclick(event): print event.Position return True hm = pyHook.HookManager() hm.SubscribeMouseAllButtonsDown(onclick) hm.HookMouse() pythoncom.PumpMessages() hm.UnhookMouse() You can check the example.py script that is installed with the module for more info about the event parameter. pyHook might be tricky to use in a pure Python script, because it requires an active message pump. From the tutorial: Any application that wishes to receive notifications of global input events must have a Windows message pump. The easiest way to get one of these is to use the PumpMessages method in the Win32 Extensions package for Python. [...] When run, this program just sits idle and waits for Windows events. If you are using a GUI toolkit (e.g. wxPython), this loop is unnecessary since the toolkit provides its own. A: I use win32api. It works when clicking on any window. # Code to check if left or right mouse buttons were pressed import win32api import time state_left = win32api.GetKeyState(0x01) # Left button up = 0 or 1. Button down = -127 or -128 state_right = win32api.GetKeyState(0x02) # Right button up = 0 or 1. Button down = -127 or -128 while True: a = win32api.GetKeyState(0x01) b = win32api.GetKeyState(0x02) if a != state_left: # Button state changed state_left = a print(a) if a < 0: print('Left Button Pressed') else: print('Left Button Released') if b != state_right: # Button state changed state_right = b print(b) if b < 0: print('Right Button Pressed') else: print('Right Button Released') time.sleep(0.001) A: The Windows way of doing it is to handle the WM_LBUTTONDBLCLK message. For this to be sent, your window class needs to be created with the CS_DBLCLKS class style. I'm afraid I don't know how to apply this in Python, but hopefully it might give you some hints.
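The win32api answer above is, at its core, an edge detector: poll a state function and report transitions. That polling logic can be pulled out into a plain function and exercised without Windows at all. In the sketch below, get_state is a stand-in for win32api.GetKeyState(0x01) (a hypothetical substitution for illustration), and the helper name watch_button is my own, not part of any library:

```python
def watch_button(get_state, samples):
    """Poll get_state() `samples` times and collect press/release events.

    get_state() mimics GetKeyState: it returns a negative value while the
    button is held down, and 0 or 1 while it is up.
    """
    events = []
    last = get_state()
    for _ in range(samples - 1):
        current = get_state()
        if current != last:  # button state changed, as in the answer above
            events.append("pressed" if current < 0 else "released")
            last = current
    return events

# Simulate a button that is up, goes down for two polls, then comes back up:
fake_states = iter([0, 0, -127, -127, 0])
events = watch_button(lambda: next(fake_states), samples=5)
print(events)  # ['pressed', 'released']
```

On Windows you would pass lambda: win32api.GetKeyState(0x01) as get_state and add a time.sleep(0.001) inside the loop, exactly as the answer does.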
{ "language": "en", "url": "https://stackoverflow.com/questions/165495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: Is it possible to convince GCC to mimic the fastcall calling convention? So I have a piece of assembly that needs to call a function with the fastcall calling convention on Windows, but gcc doesn't (afaict) support it. GCC does provide the regparm attribute but that expects the first 3 parameters to be passed in eax, edx and ecx, whereas fastcall expects the first two parameters to be passed in ecx and edx. I'm merely trying to avoid effectively duplicating a few code paths, so this isn't exactly critical, but it would be great if it were avoidable. A: I know I'm kind of late to the party on this one but if any of you stumble across this, just remember that you can define a macro to mimic it. For example: #if defined(_MSC_VER) #define MSFASTCALL __fastcall #define GCCFASTCALL #elif defined(__GNUC__) #define MSFASTCALL #define GCCFASTCALL __attribute__((fastcall)) #endif int MSFASTCALL magic() GCCFASTCALL; Obviously, this looks kind of ugly, so you could just define 2 prototypes (which is what I do) to make it a little easier to read but there are people who prefer the route that requires less typing. I generally don't use calling conventions except for special cases. I just let the compiler optimize the rest away. Now, there are some quirks to remember. For example, when you're targeting 64-bit platforms, all functions utilize the fastcall convention in order to take advantage of the extra registers and improve speed and reduce indirection cost. Likewise, fastcall is implemented differently by different platforms as it is not standardized. There are some others but that's all I can pull off the top of my head. A: GCC does support fastcall, via __attribute__((fastcall)). It appears to have been introduced in GCC 3.4. A: If you're calling the function from asm then surely you have complete control over how you call the function. What's stopping you from just loading up the registers and issuing a CALL?
{ "language": "en", "url": "https://stackoverflow.com/questions/165496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Setting PDQ inside an SPL - local scope? In order to fine tune allocation of PDQ resources depending on the time of day that batch jobs run, we have a utility that sets PDQPRIORITY based on some day of week / hour of day rules, eg: PDQPRIORITY=$(throttle); export PDQPRIORITY However, this is fixed at the time the script starts, so long running jobs never get throttled up or down as they progress. To rectify this, we've tried the following: CREATE PROCEDURE informix.set_pdq() RETURNING VARCHAR(50); DEFINE pdq, dow SMALLINT; DEFINE hr SMALLINT; LET dow = WEEKDAY(CURRENT); LET hr = TO_CHAR(CURRENT, '%H'); IF (dow == 0 OR dow == 6 OR hr < 8 OR hr > 14) THEN LET pdq = 100; SET PDQPRIORITY 100; -- SET PDQ does not accept a variable name arg. ELIF (hr >= 8 AND hr <= 10) THEN LET pdq = 40; SET PDQPRIORITY 40; ELIF (hr >= 11 AND hr <= 12) THEN LET pdq = 60; SET PDQPRIORITY 60; ELIF (hr >= 13 AND hr <= 14) THEN LET pdq = 80; SET PDQPRIORITY 80; END IF; RETURN "PDQPriority set to " || pdq; END PROCEDURE; At various intervals throughout the SQL, we've added: EXECUTE PROCEDURE set_pdq(); However, although it doesn't fail, the scope of the SET PDQ seems to be local to the SPL. onstat -g mgm doesn't report any change to the original resources allocated. So adding these set_pdq() calls doesn't seem to have had any effect - the resources allocated at the program start remain fixed. The code is embedded SQL in shell, ie: dbaccess -e $DBNAME << EOSQL SELECT .. INTO TEMP ..; EXECUTE PROCEDURE set_pdq(); SELECT .. INTO TEMP ..; --etc EOSQL So backticks or $( ) interpolation occurs at the start of the script, when the here document gets passed to dbaccess. (That eliminated the obvious: SET PDQPRIORITY $(throttle);) Wow, that got wordy quickly. Can anyone suggest any way of achieving this that doesn't involve rewriting these batch jobs completely? Breaking the SQL down into smaller pieces is not an option because of the heavy reliance on temp tables. 
A: As you will have deduced from the inordinate delay between the time when you asked the question and the first attempted answer, this is not trivial. Part of the problem is, I think, that PDQPRIORITY is captured when a stored procedure is created or its statistics are updated. Indeed, that may be all of the problem. Now, temporary tables cause another set of problems with stored procedures - stored procedures often need reoptimizing when temporary tables are involved (unless, possibly, the SP itself creates the temporary table).
{ "language": "en", "url": "https://stackoverflow.com/questions/165511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I track what registry changes are made to the system by an InstallShield installer? I need something like i6comp but for a list of reg changes rather than a list of files. Does such a thing exist? EDIT: I know there are ways to monitor changes to the registry, but are there ways to do it by examining the setup files? A: Process Monitor A: RegMon A: You can check the registry tables in the MSI using something like Orca, but that's not guaranteed to catch all possible changes to the registry. The only sure way is comparing pre- and post-install changes using something like RegMon. A: * *Export the registry. * *Run the installer. *Export the registry to a different file. *Compare the two files with your favorite comparison program.
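The last step of the export/compare approach (compare the two export files) needs nothing more than a plain text diff, since regedit exports are text. Here is a minimal sketch using Python's difflib; the two .reg fragments below are invented for illustration, not output from any real installer:

```python
import difflib

before = '''[HKEY_LOCAL_MACHINE\\SOFTWARE\\Example]
"Version"="1.0"
'''.splitlines()

after = '''[HKEY_LOCAL_MACHINE\\SOFTWARE\\Example]
"Version"="2.0"
"Installed"=dword:00000001
'''.splitlines()

# Keep only added/removed lines, skipping the file headers ("---"/"+++"):
changes = [line
           for line in difflib.unified_diff(before, after, lineterm="")
           if line[:1] in "+-" and not line.startswith(("+++", "---"))]

for line in changes:
    print(line)
# -"Version"="1.0"
# +"Version"="2.0"
# +"Installed"=dword:00000001
```

For real exports you would read the two files with open(...).read().splitlines() instead of inline strings; everything else stays the same.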
{ "language": "en", "url": "https://stackoverflow.com/questions/165515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Collection Editor at runtime I'm working on an application to edit name/value pairs using a property grid. Some of the properties in my class file are ListDictionary collections. Is there an Editor attribute that I can apply at the property declaration that will make the Collection Editor work at runtime? If not, is it possible to inherit from ComponentModel.Design.CollectionEditor for use at runtime? I need to be able to add, delete and edit the collection values. Thanks a lot, Terry

A: From the CodeProject article http://www.codeproject.com/KB/cs/dzcollectioneditor.aspx

There are three requirements that a collection should meet in order to be successfully persisted with the CollectionEditor:

* First, the collection must implement the IList interface (inheriting from System.Collections.CollectionBase is in most cases the best option).
* Second, it must have an Indexer (Item in VB.NET) property. The type of this property is used by the CollectionEditor to determine the default type of the instances that it will add to the collection. To better understand how this works, take a look at the GetItemType() function of the CustomCollectionEditorForm:

protected virtual Type GetItemType(IList coll)
{
    PropertyInfo pi = coll.GetType().GetProperty("Item", new Type[] { typeof(int) });
    return pi.PropertyType;
}

* Third, the collection class must implement one or both of the following methods: Add and AddRange. Although the IList interface has an Add member and CollectionBase implements IList, you still have to implement an Add method for your collection, given that CollectionBase declares an explicit member implementation of the IList's Add member. The designer serializes the collection according to which method you have implemented. If you have implemented both, AddRange is preferred.

In this article you'll find everything you need to implement your collection on the property grid.
{ "language": "en", "url": "https://stackoverflow.com/questions/165525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: iPhone Proximity Sensor Can the iPhone SDK take advantage of the iPhone's proximity sensors? If so, why hasn't anyone taken advantage of them? I could picture a few decent uses. For example, in a racing game, you could put your finger on the proximity sensor to go instead of taking up screen real-estate with your thumb. Of course though, if this was your only option, then iPod touch users wouldn't be able to use the application. Does the proximity sensor tell how close you are, or just that something is in front of it?

A: Evidently the proximity sensor will never turn on if the status bar is in landscape orientation, i.e., if you call:

[UIApplication sharedApplication].statusBarOrientation = UIInterfaceOrientationLandscapeLeft;

You will no longer get the proximity:ON notifications. This definitely happens on OS 3.0; I can't test it on a 2.X device since I don't have one with a proximity sensor. This seems like a bug.

A: The proximity sensor works via measuring IR reflectance. If you hold the iPhone up to a webcam, you can see a small, pulsing IR LED.

A: There's a lot of confusion between the proximity sensor and the ambient light sensor. The iPhone has both. The Touch does not have a proximity sensor, making it a poor choice for user input. It would be a bad idea anyway since Apple isn't obligated to locate it in the same place in future devices; you aren't supposed to know or care where it is. The proximity sensor works by pulsing an infrared LED and measuring the amount of reflectance. You can see this using your iSight camera (most digital cameras are sensitive to IR). Just launch Photo Booth, initiate a call (or play a voicemail) on the phone and point it at your iSight camera. Note the flashing light next to the earpiece; cover it with your finger and the screen will go black. The ambient light sensor's API is evidently private at this point.

A: Just to update, this is possible.
device = [UIDevice currentDevice];

// Turn on proximity monitoring
[device setProximityMonitoringEnabled:YES];

// To determine if proximity monitoring is available, attempt to enable it.
// If the value of the proximityMonitoringEnabled property remains NO, proximity
// monitoring is not available.

// Detect whether device supports proximity monitoring
proxySupported = [device isProximityMonitoringEnabled];

// Register for proximity notifications
[notificationCenter addObserver:self selector:@selector(proximityChanged:) name:UIDeviceProximityStateDidChangeNotification object:device];

As benzado points out, you can use:

// Returns a BOOL, YES if device is proximate
[device proximityState];

A: There is a public API for this. -[UIApplication setProximitySensingEnabled:(BOOL)] will turn the feature on. BTW, it doesn't seem to be using the light sensor, because proximity sensing would tweak out in a dark room. However, the API call basically blanks the screen when you hold the phone up to your face. Not useful for interaction, sadly.

A: There is no public API for this.

A: In iPhone 3.0 there is official support for the proximity sensor. Have a look at UIDevice proximityMonitoringEnabled in the docs.

A: If you aren't aiming for the AppStore, you can read my articles here on getting access to those:

Proximity Sensor: http://iphonedevwiki.net/index.php/AppleProxShim
Ambient Light Sensor: http://iphonedevwiki.net/index.php/AppleISL29003

A: Evidently the proximity sensor will never turn on if the status bar is in landscape orientation, i.e. if you call:

[UIApplication sharedApplication].statusBarOrientation = UIInterfaceOrientationLandscapeLeft;

You will no longer get proximity:ON notifications. This definitely happens on OS 3.0; I can't test it on a 2.X device since I don't have one with a proximity sensor. This seems like a bug. answered Jul 22 '09 at 5:49 Kevin Lambert

I've encountered this problem too.
It took me a long time to figure out the real reason why the proximity sensor was not working. When the orientation is UIInterfaceOrientationLandscapeLeft or UIInterfaceOrientationLandscapeRight, the proximity sensor does not work, while in portrait mode it works well. My iPhone is an iPhone 4S (iOS SDK 5.0).

A: Assuming you mean the sensor that shuts off the screen when you hold it to your ear, I'm pretty sure that is just an infrared sensor inside the ear speaker. If you start the phone app (you don't have to be making a call) and hold something to cast a shadow over the ear speaker, you can make the display shut off. When you asked this question it was not accessible via the public API. You can now access the sensor's state via UIDevice's proximityState property. However, it wouldn't be that useful for games, since it is only an on/off thing, not a near/far measure. Plus, it's only available on the iPhone and not the iPod touch.

A: Those proximity sensors are basically a matrix of conductors. The vertical "wires" are tracks on one side of a thin sheet of insulator, the horizontal ones are on the other side. The intersections function as capacitors. Your finger carries an electrostatic charge, so the capacitance of each junction varies with proximity. FETs amplify the signal and biasing sets a threshold. In practice the circuit is more complex than that because it has to detect a relative change and reject noise. But anyway, what the sensor grid tells you is that a field effect has been sensed, and that field effect is characteristic of an object about the size of a fingertip resting on the surface of the display. The centroid of the capacitive disturbance is computed (probably by hardware) and the coordinates are (presumably) reported as numbers on a port, most likely brought to the attention of the device OS by an interrupt. In something as sexy as an iPhone there's probably a buffer of the last dozen or so positions so it can work out direction and speed.
Probably these are also computed by hardware and presented as numbers on the same port.

A: @Dipak Patel & @Coderer You can download working code at http://spazout.com/google_cheats_independent_iphone_developers_screwed It has a working implementation of proximityStateChanged, an undocumented method in UIApplication. Hope this helps.

A: To turn the screen off, it's conceivable that more than one sensor is used to figure out if the screen should be turned off or not. The IR proximity sensor described by Cryptognome, in conjunction with the touch screen sensor described by Peter Wone, could work out if the iPhone is being held close to your face (or something else with a slight electric charge) or if it's just very close to something inanimate.
{ "language": "en", "url": "https://stackoverflow.com/questions/165539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Can I show a Generalization relationship in a Domain Model in UML I would like to ask if it is possible to show a Generalization relationship in UML for a Domain Model, although I understand I can do that in a Class Diagram? I have not seen many examples of Domain Models displaying a Generalization relationship except in Class Diagrams. If not possible, what is the best way to display that an Entity in a Domain Model is either Entity A or Entity B or Entity C or Entity D, etc.?

A: Yes, generalization between entities in your domain model is allowed. Basically, the domain model is a class diagram, where classes show the types of entities in your conceptual design, and not concrete programming language classes that you show in your typical class diagram. To better find what you can or can't do, you can read the UML 2.x Superstructure specification, but it is quite complex to understand. I generally tend to freely use whatever communicates my design, but if you are constrained to some specific modeling tool that is somewhat strict, you should become familiar with the UML specification.

A: You need to clarify why you're distinguishing between a domain model and a class diagram. On the one hand, your domain model could simply be a class diagram of everything that could map to your database, and consume it. Hence a generalization relationship could simply depict implementation inheritance or interface inheritance. On the other hand, your domain model could simply express how you expect your classes to work. In which case, it could be any of the standard UML diagrams: class, sequence, collaboration, component, activity, etc.

ADD: are you talking about finding different ways to categorize your persistent entities, like a tagging system? Or you could make it possible to have a persistent entity have many nodes in a category tree?
{ "language": "en", "url": "https://stackoverflow.com/questions/165542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to tell if text on the windows clipboard is ISO 8859 or UTF-8 in C++? I would like to know if there is an easy way to detect if the text on the clipboard is in ISO 8859 or UTF-8? Here is my current code:

COleDataObject obj;
if (obj.AttachClipboard())
{
    if (obj.IsDataAvailable(CF_TEXT))
    {
        HGLOBAL hmem = obj.GetGlobalData(CF_TEXT);
        CMemFile sf((BYTE*) ::GlobalLock(hmem), (UINT) ::GlobalSize(hmem));

        CString buffer;
        LPSTR str = buffer.GetBufferSetLength((int) ::GlobalSize(hmem));
        sf.Read(str, (UINT) ::GlobalSize(hmem));
        ::GlobalUnlock(hmem);

        // this is my string class
        s->SetEncoding(ENCODING_8BIT);
        s->SetString(buffer);
    }
}

A: Check out the definition of CF_LOCALE at this Microsoft page. It tells you the locale of the text in the clipboard. Better yet, if you use CF_UNICODETEXT instead, Windows will convert to UTF-16 for you.

A: UTF-8 has a defined structure for non-ASCII bytes. You can scan for bytes >= 128, and if any are detected, check if they form a valid UTF-8 string. The valid UTF-8 byte formats can be found on Wikipedia:

Unicode              Byte1     Byte2     Byte3     Byte4
U+000000-U+00007F    0xxxxxxx
U+000080-U+0007FF    110xxxxx  10xxxxxx
U+000800-U+00FFFF    1110xxxx  10xxxxxx  10xxxxxx
U+010000-U+10FFFF    11110xxx  10xxxxxx  10xxxxxx  10xxxxxx

old answer: You don't have to -- all ASCII text is valid UTF-8, so you can just decode it as UTF-8 and it will work as expected. To test if it contains non-ASCII characters, you can scan for bytes >= 128.

A: I may be mistaken, but I think you cannot: if I open a UTF-8 file without a BOM in my editor, it is displayed by default as ISO-8859-1 (my locale), and besides some strange use of foreign (for me) accented chars, I have no strong visual hint that it is UTF-8 (unless it is encoded in another way elsewhere, e.g. a charset declaration in HTML or XML): it is perfectly valid Ansi text. John wrote "all ASCII text is valid UTF-8" but the reverse is true.
Windows XP+ uses UTF-16 natively, and has a clipboard format for it, but AFAIK it just ignores UTF-8, with no special treatment for it. (Well, there is an API to convert UTF-8 to UTF-16 (or Ansi, etc.), actually.)

A: You could check obj.IsDataAvailable(CF_UNICODETEXT) to see if a Unicode version of what's on the clipboard is available. -Adam
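Once the raw clipboard bytes are in hand, the scan-and-validate approach is easy to express in other languages too. A Python sketch of just the detection step (getting the bytes is the Win32 part shown in the question): a strict UTF-8 decode enforces exactly the multi-byte patterns in the table above, and pure ASCII passes as well, matching the observation that all ASCII text is valid UTF-8.

```python
# Heuristic: if the bytes decode as strict UTF-8, treat them as UTF-8;
# otherwise assume a single-byte encoding such as ISO 8859-1.
def looks_like_utf8(raw: bytes) -> bool:
    try:
        raw.decode("utf-8", errors="strict")
        return True
    except UnicodeDecodeError:
        return False
```

Note the caveat from the answers above still applies: a byte string that happens to decode as UTF-8 could in principle also be intended as 8859-1, so this is a good guess rather than a proof.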
{ "language": "en", "url": "https://stackoverflow.com/questions/165551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do I get the filename of the currently playing file in Vista Media Center? I have a Windows Vista MCML app, and I need to figure out the name of the file currently playing. The Media Center SDK alludes to using MediaMetadata["Title"] to get this information; unfortunately this does not work with playlist (.wpl) files, as there is no method for getting the position in the playlist.

A: Turns out this cannot be easily done. There are 4 options.

1. Never use playlists; in that case MediaMetadata["Title"] is good enough.
2. Examine remote file handles in ehshell.exe.
3. Inject a remote thread into ehshell.exe, establish communication and use reflection to read it.
4. Write a DirectShow filter and communicate with it.

Update: This is fixed in Windows 7. It is unclear if it's going to be back-ported to Vista MCE yet.

Second Update: Looks like Microsoft has changed the behavior of MediaMetadata["Title"] in a recent hotfix; it now returns both the filename without an extension and the playlist name.

A: Have you tried:

MediaContext.GetProperty(TrackTitle)

I've also seen samples where, in the markup for the media display layout file, they specify an element such as:

<music-title duration="2000" x="69" y="29" width="187" height="20"/>

Good Luck!
{ "language": "en", "url": "https://stackoverflow.com/questions/165556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you control printer tray selection for printer in Windows We need to be able to change the default selected print tray of a given printer. Does anyone have VC++/win32 code for doing this? In case it matters, I believe we have to change the default setting for the printer. Our print jobs are executed by an application other than ours, so we can't make these kinds of changes in the context of a print operation originating from inside our application. Unless there is some way to modify the default print settings in a different application, I think we are stuck changing the user's defaults for the printer, initiating our print job, then setting the defaults back to the original values. We'd really prefer to have the defaults change for the current user only, and not require any special UAC elevation, etc... I suspect that it will use something similar to what is shown in this MSDN article, and involve setting fields in the DEVMODE structure (either dmDefaultSource or dmFormName or both). Any takers? Or does anyone have any gotchas they'd like to share?

EDIT: Here is a link to the DEVMODE documentation.

EDIT: I should also point out that we are looking for a general solution - not something specific to a particular printer (we deploy in many, many environments)

A: FYI - the solution we wound up using was to capture the DEVMODE structure. We have a small win32 app that presents the printer settings dialog (via DocumentProperties with fMode set to DM_IN_PROMPT). The resultant DEVMODE is then saved to disk. When we do our printing, we capture the current DEVMODE, set the stored DEVMODE, initiate the print, then restore the original DEVMODE. This actually works quite well. Occasionally, the print drivers will update and cause the stored DEVMODE to break, but that doesn't happen very often and it's easy enough for users to fix.
As an extra bonus, this approach allows us to capture ALL of the printer settings (not just the output tray) - so we were able to support advanced settings like stapling, collating, etc...

Tip: If you try this, be sure to write to disk as a binary output stream. In my initial evaluation of this approach, I accidentally set the output stream up as a text output stream. Things would work fine for many cases, then suddenly break for some printers (that used high order bytes in their DEVMODE private data). A dumb, but easy, mistake to make - and one that took a very nice solution off the table for a while.

A: Setting features like this can be tricky, especially if the driver doesn't follow Microsoft's print guidelines. That being said, we've had some success with System.Drawing.Printing.PrinterSettings. You can set PaperSource but I'm not sure you can set the defaults. If you haven't seen this example you may want to look further at it. It describes a method to store and reload printer settings. One of my guys pointed it to me: PrinterSettings - Changing, Storing and Loading Printer Settings

Another method, which could work but might not work for you, is to determine the handful of setups you need. Install a printer with each of these setups (i.e. Tray 1, Tray 2). Then simply switch the default printer when printing. Not what you are looking for, but it may help.

What we typically do in these situations is have the 3rd party app write the data to a folder that we are monitoring; we then pick up the file, parse the Postscript or PCL ourselves, change the paper tray and send it on to the destination device. A lot simpler than it may sound.

A: dmDefaultSource controls the tray. Unfortunately the values you'll want to set this to differ depending on your driver, as this is a bin number and not necessarily the same number as the tray # printed on your printer. The following link provides some VB6 code for gathering information about your printer's tray/bin assignments.
You can use that information to programmatically assign dmDefaultSource to the appropriate bin # for a tray. You basically need to use DeviceCapabilities to return information about your printers and then search for a string (like "Tray 1") to get the associated bin number.

http://support.microsoft.com/kb/194789

A: I had to do something very similar recently on a specific printer driver and it required a vendor-specific SDK. The tray doesn't seem to appear in DEVMODE or any of the other PRINTINFO_* structures, so I guess I'd drop an email to the printer vendor. As a last resort, I can think of two possible hacks. One is to automate the driver at GUI level using a scripted tool such as AutoIT. Second is to dump the registry to file, change the driver setting, dump the registry again, and compare the differences (may or may not work).

A: As far as I know, printers are controlled by the printer driver by sending them SNMP or PJL commands. But not all printers implement these sets of commands completely. For HP printers I found some PJL commands (there are some related to the tray too) at: http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&objectID=bpl07282&jumpid=reg_R1002_USEN I'm not sure this helps, but take it as a hint for future searches...
{ "language": "en", "url": "https://stackoverflow.com/questions/165567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: SQL query for cumulative frequency of list of datetimes I have a list of times in a database column (representing visits to a website). I need to group them in intervals and then get a 'cumulative frequency' table of those dates. For instance I might have:

9:01
9:04
9:11
9:13
9:22
9:24
9:28

and I want to convert that into

9:05 - 2
9:15 - 4
9:25 - 6
9:30 - 7

How can I do that? Can I even easily achieve this in SQL? I can quite easily do it in C#

A: 

create table accu_times (
    time_val datetime not null,
    constraint pk_accu_times primary key (time_val)
);
go

insert into accu_times values ('9:01');
insert into accu_times values ('9:05');
insert into accu_times values ('9:11');
insert into accu_times values ('9:13');
insert into accu_times values ('9:22');
insert into accu_times values ('9:24');
insert into accu_times values ('9:28');
go

select rounded_time,
    (
        select count(*)
        from accu_times as at2
        where at2.time_val <= rt.rounded_time
    ) as accu_count
from (
    select distinct
        dateadd(minute,
            round((datepart(minute, at.time_val) + 2)*2, -1)/2,
            dateadd(hour, datepart(hour, at.time_val), 0)
        ) as rounded_time
    from accu_times as at
) as rt
go

drop table accu_times

Results in:

rounded_time            accu_count
----------------------- -----------
1900-01-01 09:05:00.000 2
1900-01-01 09:15:00.000 4
1900-01-01 09:25:00.000 6
1900-01-01 09:30:00.000 7

A: I should point out that based on the stated "intent" of the problem, to do analysis on visitor traffic - I wrote this statement to summarize the counts in uniform groups. To do otherwise (as in the "example" groups) would be comparing the counts during a 5 minute interval to counts in a 10 minute interval - which doesn't make sense. You have to grok the "intent" of the user requirement, not the literal "reading" of it.
:-)

create table #myDates ( myDate datetime );
go

insert into #myDates values ('10/02/2008 09:01:23');
insert into #myDates values ('10/02/2008 09:03:23');
insert into #myDates values ('10/02/2008 09:05:23');
insert into #myDates values ('10/02/2008 09:07:23');
insert into #myDates values ('10/02/2008 09:11:23');
insert into #myDates values ('10/02/2008 09:14:23');
insert into #myDates values ('10/02/2008 09:19:23');
insert into #myDates values ('10/02/2008 09:21:23');
insert into #myDates values ('10/02/2008 09:21:23');
insert into #myDates values ('10/02/2008 09:21:23');
insert into #myDates values ('10/02/2008 09:21:23');
insert into #myDates values ('10/02/2008 09:21:23');
insert into #myDates values ('10/02/2008 09:26:23');
insert into #myDates values ('10/02/2008 09:27:23');
insert into #myDates values ('10/02/2008 09:29:23');
go

declare @interval int;
set @interval = 10;

select convert(varchar(5), dateadd(minute, @interval - datepart(minute, myDate) % @interval, myDate), 108) timeGroup,
       count(*)
from #myDates
group by convert(varchar(5), dateadd(minute, @interval - datepart(minute, myDate) % @interval, myDate), 108)

returns:

timeGroup
--------- -----------
09:10     4
09:20     3
09:30     8

A: ooh, way too complicated all of that stuff. Normalise to seconds, divide by your bucket interval, truncate and remultiply:

select sec_to_time(floor(time_to_sec(d)/300)*300), count(*)
from d
group by sec_to_time(floor(time_to_sec(d)/300)*300)

Using Ron Savage's data, I get

+----------+----------+
| i        | count(*) |
+----------+----------+
| 09:00:00 |        1 |
| 09:05:00 |        3 |
| 09:10:00 |        1 |
| 09:15:00 |        1 |
| 09:20:00 |        6 |
| 09:25:00 |        2 |
| 09:30:00 |        1 |
+----------+----------+

You may wish to use ceil() or round() instead of floor().

Update: for a table created with

create table d ( d datetime );

A: Create a table periods describing the periods you wish to divide the day up into.
SELECT periods.name, count(time)
FROM periods, times
WHERE periods.start <= times.time
  AND times.time < periods.end
GROUP BY periods.name

A: Create a table containing the intervals you want totals at, then join the two tables together. Such as:

time_entry.time_entry
-----------------------
2008-10-02 09:01:00.000
2008-10-02 09:04:00.000
2008-10-02 09:11:00.000
2008-10-02 09:13:00.000
2008-10-02 09:22:00.000
2008-10-02 09:24:00.000
2008-10-02 09:28:00.000

time_interval.time_end
-----------------------
2008-10-02 09:05:00.000
2008-10-02 09:15:00.000
2008-10-02 09:25:00.000
2008-10-02 09:30:00.000

SELECT ti.time_end, COUNT(*) AS 'interval_total'
FROM time_interval ti
INNER JOIN time_entry te
    ON te.time_entry < ti.time_end
GROUP BY ti.time_end;

time_end                interval_total
----------------------- --------------
2008-10-02 09:05:00.000 2
2008-10-02 09:15:00.000 4
2008-10-02 09:25:00.000 6
2008-10-02 09:30:00.000 7

If instead of cumulative totals you wanted totals within a range, you would add a time_start column to the time_interval table and change the query to

SELECT ti.time_end, COUNT(*) AS 'interval_total'
FROM time_interval ti
INNER JOIN time_entry te
    ON te.time_entry >= ti.time_start
   AND te.time_entry < ti.time_end
GROUP BY ti.time_end;

A: This uses quite a few SQL tricks (SQL Server 2005):

CREATE TABLE [dbo].[stackoverflow_165571](
    [visit] [datetime] NOT NULL
) ON [PRIMARY]
GO

;WITH buckets AS (
    SELECT dateadd(mi, (1 + datediff(mi, 0, visit - 1 - dateadd(dd, 0, datediff(dd, 0, visit))) / 5) * 5, 0) AS visit_bucket
          ,COUNT(*) AS visit_count
    FROM stackoverflow_165571
    GROUP BY dateadd(mi, (1 + datediff(mi, 0, visit - 1 - dateadd(dd, 0, datediff(dd, 0, visit))) / 5) * 5, 0)
)
SELECT LEFT(CONVERT(varchar, l.visit_bucket, 8), 5) + ' - ' + CONVERT(varchar, SUM(r.visit_count))
FROM buckets l
LEFT JOIN buckets r
    ON r.visit_bucket <= l.visit_bucket
GROUP BY l.visit_bucket
ORDER BY l.visit_bucket

Note that it puts all the times on the same day, and assumes they
are in a datetime column. The only thing it doesn't do that your example does is strip the leading zeroes from the time representation.
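The bucketing arithmetic these answers share (floor each time to a multiple of the interval, then count) is easy to sanity-check outside the database. A small Python sketch, illustrative only and not part of any answer above:

```python
# Cross-check of the floor-to-bucket logic used by the SQL answers
# (e.g. sec_to_time(floor(time_to_sec(d)/300)*300)) in plain Python.
from collections import Counter

def bucket_counts(times, interval=300):
    """times: iterable of (hour, minute) tuples; returns sorted
    (bucket_start_seconds, count) pairs for interval-second buckets."""
    c = Counter((h * 3600 + m * 60) // interval * interval for h, m in times)
    return sorted(c.items())

def cumulative(buckets):
    """Running total over the sorted buckets - the 'cumulative frequency'."""
    total, out = 0, []
    for sec, n in buckets:
        total += n
        out.append((sec, total))
    return out
```

Fed the question's seven visit times with a 5-minute interval, this yields cumulative counts 2, 4, 6, 7, matching the expected output (the question just labels each bucket by its upper edge, i.e. bucket start plus interval).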
{ "language": "en", "url": "https://stackoverflow.com/questions/165571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Learning about Auto-Implemented Properties I have a simple class using auto-implemented properties:

public class foo
{
    public foo() { }

    public string BarName { get; set; }
}

I obviously use the variable BarName throughout my class and now need to add logic when the property value is set (it must be all upper case, go figure). Does this mean that I need to now create a private variable for BarName, e.g. _BarName, and change the current BarName variable used throughout my class to _BarName?

public class foo
{
    public foo() { }

    private string _BarName = "";

    public string BarName
    {
        get { return _BarName; }
        set { _BarName = value.ToUpper(); }
    }
}

I am trying to make sure I understand the implications of using auto-implemented properties, and what it will entail down the road when/if I need to change something. I am assuming that the refactoring, as shown above, is not a breaking change because the property is basically staying the same; it just took a little work inside the class to keep it that way and add the needed logic. Another example, which may be more meaningful, is that I need to call some method when a setter or getter is used; more than changing the value. This seems a fair trade-off for the lines and lines of code needed to set up properties.

A: Does this mean that I need to now create a private variable for BarName

Yes

and change the current BarName variable used throughout my class

Do not change the rest of the code in your class to use the new private variable you create. BarName, as a property, is intended to hide the private variable (among other things), for the purpose of avoiding the sweeping changes you contemplate to the rest of your code.

I am assuming that the refactoring, as shown above, is not a breaking change because the property is basically staying the same; it just took a little work to keep it that way and add the needed logic.

Correct.

A: You don't need to change anything. Auto-implemented properties are just syntactic sugar.
The compiler is generating the private variable and get/set logic for you, behind the scenes. If you add your own getter/setter logic, the compiler will use your code instead of its auto-generated code, but as far as the users of that property are concerned, nothing has changed; any code referencing your property will continue to work.

A: When using automatic properties you don't get direct access to the underlying "backing" variable, and you don't get access to the actual logic that gets implemented in the property getter and setter. You only have access to the property (hence using BarName throughout your code). If you now need to implement specific logic in the setter, you can no longer use automatic properties and need to implement the property in the "old-fashioned" way. In this case, you would need to implement your own private backing variable (the preferred method, at least for me, is to name the private backing variable the same as the property, but with an initial lowercase letter; in this case, the backing variable would be named barName). You would then implement the appropriate logic in the getter/setter. In your example, you are correct that it is not a breaking change. This type of refactoring (moving from automatic properties to "normal" properties) should never be a breaking change, as you aren't changing the public interface (the name or accessibility of the public property).

A: Don't use automatic properties if you know that you are going to validate that object. These objects can be domain objects, etc. For example, if you have a Customer class, use private variables because you might need to validate the name, birthdate, etc. But if you are using an Rss class, then it will be okay to just use the automatic properties, since there is no validation being performed and the class is just used to hold some data.

A: You are correct about the refactoring and it really shouldn't break anything.
Whether or not you actually need to go through the references within the class to the property name and change those to refer to the private field would depend on whether the internal code needed to access the underlying representation of the data rather than how it is presented to consumers of the class. In most cases you can leave well enough alone. In your simple example it would be wise to leave well enough alone and ensure that no code internal to the class could subvert the conversion/formatting being performed in the setter. If on the other hand the getter was doing some magic to change the internal representation of the field into the way consumers needed to view the data, then perhaps (in some cases) the internal code within the class would need to access the field. You would need to look at each occurrence of the access to the auto-property in the class and decide whether it should be touching the field or using the property.

A: Automatic properties are just syntactic sugar; the compiler in fact creates the private member for it, but since it's generated at compile time, you cannot access it. And later on, if you want to implement getters and setters for the property, only then do you create an explicit private member for it and add the logic.
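The point about the public name staying stable holds outside C# too. Here is the same refactoring sketched in Python (illustrative only; the question itself is about C# auto-properties): callers keep assigning to the same public name while a backing field and the upper-casing logic appear behind it.

```python
# Before the refactoring, bar_name could be a plain attribute; afterwards
# it is a property over a backing field, and callers are unaffected.
class Foo:
    def __init__(self):
        self._bar_name = ""

    @property
    def bar_name(self):
        return self._bar_name

    @bar_name.setter
    def bar_name(self, value):
        # Enforce upper-case on set, mirroring the C# setter in the question.
        self._bar_name = str(value).upper()
```

As in the C# case, code inside the class can choose to touch `_bar_name` directly only when it deliberately wants to bypass the setter logic.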
{ "language": "en", "url": "https://stackoverflow.com/questions/165575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Running a SWF from file:/// without having the user change their Flash Player security settings I have a Flex app that does a fair amount of network traffic; it uses ExternalInterface to make some javascript calls (for SCORM), it loads XML files, images, video, audio, and it has a series of modules that it could be loading at some point... So the problem is - we now have a requirement where the user needs to run this content locally on a machine that is not connected to the internet (which means they can't connect to Adobe's site to change their security settings). As you can imagine, when the user double-clicks on the html page to launch this thing, they are greeted with a security warning that the swf is trying to communicate with a domain other than the one it's in. We can't wrap it in an exe or an AIR app, so unless there is some way to tweak some obscure security settings, we may be hosed. Any ideas?

A: What you are trying to do is exactly the problem solved by AIR. You should really give it a try, it's not that hard to pick up. If you really really can't use AIR (you didn't specify why, so I assume it's just because you don't want to have to learn a new system), then modifying the security config file will solve the problem. Basically what you need to do is create a 'trust' file in the "Global FlashPlayerTrust" directory. This can be done by your installer (which installs all the javascript, SWF, html, etc files onto the local machine). You should create the directory if it does not exist. The directory for each OS is:

* Windows - %WINDIR%\System32\Macromed\Flash\FlashPlayerTrust
* Mac - /Library/Application Support/Macromedia/FlashPlayerTrust
* Linux - /etc/adobe/FlashPlayerTrust

Next, you need to create the trust file. You can name it anything, so pick a unique name that would be unlikely to conflict with others. Something like CompanyName.cfg. It's a text file, with one path per line.
You can trust either one SWF at a time, or an entire directory. Example:
C:\Program Files\MyCompany\CoolApp
C:\Program Files\MyCompany\OtherApp\Main.swf
To test that it's working, inside your flash movie you can check System.security.sandboxType (ActionScript 1 or 2) or Security.sandboxType (ActionScript 3). It should have the value of "localTrusted".
A: I hesitate to say "you can't do it", but in my experience, there's no way to do what you're describing. Anyone, if I'm wrong, I'd love to know the trick.
A: Sorry that I haven't actually tried this to see if it works or not ... but ... Page 20 (and/or 26) of this document may be of help. The document is referenced here. In a nutshell it describes directories which contain .cfg files which in turn contain lists of locations on disk which should be regarded as trusted. An installer for the application would then be responsible for creating appropriate .cfg files in the desired location (global or for the installing user).
A: The short answer is that if your swf is compiled with use-network set to true, it isn't going to work. Is it possible to compile a version with use-network set to false? Or is it running on an intranet that is closed off from the Internet and still communicating with the LMS?
A: It is possible. Please check whether the swfs you are calling from the main swf have the "Access local files only" property enabled.
A: Did you try to specify the authorized domain with: System.security.allowDomain("www.yourdomain.com");
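The installer step described in the answer above can be sketched in Python. This is a hypothetical helper (the function name, defaults, and expanded Windows path are mine, not from any Flash tooling); it just picks the per-OS global trust directory listed above and writes one trusted path per line:

```python
import sys
from pathlib import Path

# Per-OS "Global FlashPlayerTrust" directories, as listed in the answer.
# The Windows entry assumes %WINDIR% expands to C:\Windows.
TRUST_DIRS = {
    "win32": r"C:\Windows\System32\Macromed\Flash\FlashPlayerTrust",
    "darwin": "/Library/Application Support/Macromedia/FlashPlayerTrust",
    "linux": "/etc/adobe/FlashPlayerTrust",
}

def write_trust_file(trusted_paths, trust_dir=None, name="CompanyName.cfg"):
    """Write a trust .cfg file: a plain text file, one trusted path per line.

    trust_dir defaults to this platform's global directory; a real installer
    would need the rights to write there.
    """
    if trust_dir is None:
        key = "linux" if sys.platform.startswith("linux") else sys.platform
        trust_dir = TRUST_DIRS[key]
    cfg = Path(trust_dir) / name
    cfg.parent.mkdir(parents=True, exist_ok=True)
    cfg.write_text("\n".join(trusted_paths) + "\n")
    return cfg
```

An installer would call this with the install folder, e.g. `write_trust_file([r"C:\Program Files\MyCompany\CoolApp"])`, typically with elevated rights.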
{ "language": "en", "url": "https://stackoverflow.com/questions/165595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How can I access the raw HTTP request data with PHP/apache? I was wondering if there was a way to get at the raw HTTP request data in PHP running on apache that doesn't involve using any additional extensions. I've seen the HTTP functions in the manual, but I don't have the option of installing an extension in my environment. While I can access the information from $_SERVER, I would like to see the raw request exactly as it was sent to the server. PHP munges the header names to suit its own array key style, e.g. Some-Test-Header becomes HTTP_SOME_TEST_HEADER. This is not what I need.
A: Try this:
$request = $_SERVER['REQUEST_METHOD'] .' '. $_SERVER['REQUEST_URI'] .' '. $_SERVER['SERVER_PROTOCOL'] . PHP_EOL;
foreach (getallheaders() as $key => $value) {
    $request .= trim($key) .': '. trim($value) . PHP_EOL;
}
$request .= PHP_EOL . file_get_contents('php://input');
echo $request;
A: Use the following php wrapper:
$raw_post = file_get_contents("php://input");
A: Do you mean the information contained in $_SERVER?
print_r($_SERVER);
Edit: Would this do then?
foreach (getallheaders() as $key => $value) {
    print $key.': '.$value."<br />";
}
A: The request has the general shape:
GET /
Host: domain.com
<all other headers>: <their values>

<request content, as per content type>
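For comparison, here is a rough Python equivalent of what the first answer above does: rebuilding an approximation of the raw request from CGI-style server variables. The `HTTP_` handling also shows why the munging bothers the asker: the reconstruction has to guess at the original header capitalization. The function is illustrative only, not part of any framework:

```python
def raw_request_from_environ(environ, body=""):
    """Approximate the raw HTTP request from CGI/WSGI-style variables."""
    lines = ["%s %s %s" % (environ["REQUEST_METHOD"],
                           environ["REQUEST_URI"],
                           environ["SERVER_PROTOCOL"])]
    for key, value in sorted(environ.items()):
        if key.startswith("HTTP_"):
            # Undo the munging: HTTP_SOME_TEST_HEADER -> Some-Test-Header.
            # The original casing is lost, so this is only a guess.
            name = "-".join(w.capitalize() for w in key[5:].split("_"))
            lines.append("%s: %s" % (name, value))
    return "\r\n".join(lines) + "\r\n\r\n" + body
```

This is exactly the limitation PHP's `getallheaders()` avoids, since it reports the header names as they actually arrived.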
{ "language": "en", "url": "https://stackoverflow.com/questions/165603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: How do I create a custom slot in qt4 designer? Whenever I use the signal/slot editor dialog box, I have to choose from the existing list of slots. So the question is how do I create a custom named slot?
A: Unfortunately this is not possible in Qt4. In Qt3 you could create custom slots which were then implemented in the ui.h file. However, Qt4 does not use this file, so custom slots are not supported. There is some discussion of this issue over on QtForum
A: I am able to do it by: In MainWindow.h, add the line:
public slots:
    void example();
in the MainWindow class. In MainWindow.cpp
void MainWindow::example()
{
    <code>
}
A: This doesn't seem to be possible in a simple way. The designer only allows you to promote existing widgets to your own custom widgets, yet it doesn't allow you to connect the signals and slots of the class of promoted widgets. The way this is possible is creating a plugin for the designer, as is described here and in the pages that follow it. The normal course of action is to promote a widget to your own class and then to connect it manually in your own code. This process is described here
A: This does seem to be possible in Qt Designer 4.5.2, but it can't be done from the Signal/Slot Editor dock-widget in the main window. This is what worked for me:
* Switch to Edit Signals/Slots mode (F4)
* Drag and drop from the widget which is to emit the signal, to the widget which is to receive the signal.
* A Configure Connection dialog appears, showing the signals for the emitting widget, and the slots for the receiving widget. Click Edit... below the slots column on the right.
* A Signals/Slots of ReceivingWidget dialog appears. Here it is possible to click the plus icon beneath slots to add a new slot of any name.
* You can then go back and connect to your new slot in the Configure Connection dialog, or indeed in the Signal/Slot Editor dock-widget back in the main window. 
Caveat: I'm using PyQt, and I've only tried to use slots added in this way from Python, not from C++, so your mileage may vary...
A: Right-click on the main window and select "change signals and slots" and add a new slot. It will appear in your signal/slot editor.
A: It is not possible to do it, because it would mean adding a slot to an existing Qt class like QPushButton, which is not really the way to go. You should create your own QWidget, eventually by subclassing an existing one, then integrate it into Qt Designer as a plugin as suggested. Having your own class allows you to add/modify the signals/slots available as you want.
A: Don't forget about the slot auto-connection features. There are a few drawbacks, like having to rename your function if you rename your widget, but we use those a lot at my company.
A: You can use the magic slot format of
void on_objectName_signal()
{
    // slot code here, where objectName is the Qt Designer object name
    // and signal is the signal being emitted
}
The connection to this method is established by the method connectSlotsByName, and whenever the signal is emitted, this slot is invoked.
A: Maybe it'll help. By default you have to choose from the existing list of slots. But you can add a slot by right-clicking your object in the list at the right side of Designer and choosing "slot/signals" to add your custom slot/signal. After that, you can choose it in the signal/slot editor.
A: Right-click the widget and promote it to a class you defined; right-click the widget again and you will see that its signals and slots are editable.
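The `connectSlotsByName` convention mentioned above can be illustrated with a small Python mock. This is not Qt (no real signals are connected); it only shows how a method named `on_<objectName>_<signalName>` is matched up with a child widget's signal, which is what Qt/PyQt does for you during setup:

```python
class AutoConnectMock:
    """Toy stand-in for QMetaObject::connectSlotsByName().

    Scans the instance for methods named on_<objectName>_<signalName>
    and reports which (object, signal) pairs would be auto-connected.
    """

    def connect_slots_by_name(self, child_object_names):
        connections = []
        for obj_name in child_object_names:
            prefix = "on_%s_" % obj_name
            for attr in dir(self):
                if attr.startswith(prefix) and callable(getattr(self, attr)):
                    connections.append((obj_name, attr[len(prefix):]))
        return connections


class MyWindow(AutoConnectMock):
    # Would be auto-connected to the clicked() signal of a child
    # widget whose objectName is "okButton".
    def on_okButton_clicked(self):
        return "ok was clicked"
```

In real Qt the scan runs against the metaobject and the children's actual `objectName` values; the mock above only demonstrates the naming rule.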
{ "language": "en", "url": "https://stackoverflow.com/questions/165637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: How can I display a calendar control (date picker) in Oracle forms 9/10? How can I display a calendar control (date picker) in Oracle forms 9/10?
A: Most forms projects I've worked on already have a date picker, implemented as a separate form - e.g. I think one gets generated by HeadStart. I think it's also been implemented using a separate canvas in the standard forms template. However, this question was also asked on ittoolbox and answered here: Creating date picker calendar control in Oracle Forms
A: Here's a link to a Java Swing solution for a calendar: http://forms.pjc.bean.over-blog.com/article-14848846.html This site is great for all things Java and how to implement them in Oracle Forms. We at PITSS use this site often when dealing with our hundreds of customers. http://pitss.com
{ "language": "en", "url": "https://stackoverflow.com/questions/165648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to add a Tooltip to a "td" with jquery? I need to add a tooltip/alt to a "td" element inside of my tables with jquery. Can someone help me out? I tried:
var tTip = "Hello world";
$(this).attr("onmouseover", tip(tTip));
where I have verified that I am using the "td" as "this". **Edit:** I am able to capture the "td" element by using the "alert" command and it worked. So for some reason the "tip" function doesn't work. Anyone know why this would be?
A: $('#grdList tr td:nth-child(5)').each(function(i) {
    if (i > 0) { //skip header
        var sContent = $(this).text();
        $(this).attr("title", $(this).html());
        if (sContent.length > 20) {
            $(this).text(sContent.substring(0,20) + '...');
        }
    }
});
grdList - table id
td:nth-child(5) - column 5
A: $(this).mouseover(function() { tip(tTip); });
A better way might be to put title attributes in your HTML. That way, if someone has javascript turned off, they'll still get a tooltip (albeit not as pretty/flexible as you can do with jQuery).
<table id="myTable">
  <tbody>
    <tr>
      <td title="Tip 1">Cell 1</td>
      <td title="Tip 2">Cell 2</td>
    </tr>
  </tbody>
</table>
and then use this code:
$('#myTable td[title]')
    .hover(function() {
        showTooltip($(this));
    }, function() {
        hideTooltip();
    });
function showTooltip($el) {
    // insert code here to position your tooltip element (which i'll call $tip)
    $tip.html($el.attr('title'));
}
function hideTooltip() {
    $tip.hide();
}
A: You might want to have a look at http://bassistance.de/jquery-plugins/jquery-plugin-tooltip/
A: var tTip = "Hello world";
$(this).mouseover(function() { tip(tTip); });
A: If you really do want to put those tooltips on your table cells and not your table headers--where they'd make much more sense--please 
consider putting them on the content INSIDE the table cells, where it's much more meaningful.
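The truncate-and-title trick from the first answer is easy to state language-independently; here is a small Python sketch of the same rule (the function name is mine):

```python
def truncate_with_tooltip(text, limit=20):
    """Return (display_text, tooltip_text), mirroring the jQuery answer:
    the full text always goes into the title attribute, and text longer
    than `limit` characters is shortened with an ellipsis."""
    if len(text) > limit:
        return text[:limit] + "...", text
    return text, text
```

The pair maps directly onto `$(this).text(...)` and `$(this).attr("title", ...)` in the jQuery version.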
{ "language": "en", "url": "https://stackoverflow.com/questions/165650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Why is my Perl regex using so much memory? I'm running a regular expression against a large scalar. Though this match isn't capturing anything, my process grows by 30M after this match: # A if (${$c} =~ m/\G<<\s*/cgs) { #B ... } $c is a reference to a pretty big scalar (around 21M), but I've verified that pos(${$c}) is in the right place and the expression matches at the first character, with pos(${$c}) being updated to the correct place after the match. But as I mentioned, the process has grown by about 30M between #A and #B, even though I'm not capturing anything with this match. Where is my memory going? Edit: Yes, use of $& was to blame. We are using Perl 5.8.8, and my script was using Getopt::Declare, which uses the built-in Text::Balanced. The 1.95 version of this module was using $&. The 2.0.0 version that ships with Perl 5.10 has removed the reference to $& and alleviates the problem. A: Just a quick sanity check, are you mentioning $&, $` or $' (sometimes called $MATCH, $PREMATCH and $POSTMATCH) anywhere in your code? If so, Perl will copy your entire string for every regular expression match, just in case you want to inspect those variables. "In your code" in this case means indirectly, including using modules that reference these variables, or writing use English rather than use English qw( -no_match_vars ). If you're not sure, you can use the Devel::SawAmpersand module to determine if they have been used, and Devel::FindAmpersand to figure out where they are used. There may be other reasons for the increase in memory (which version of Perl are you using?), but the match variables will definitely blow your memory if they're used, and hence are a likely culprit. Cheerio, Paul
{ "language": "en", "url": "https://stackoverflow.com/questions/165660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Can J# code run on JVM If I create a J# Application is there any way I can execute it on JVM as well A: No, J# programs are designed to run on the .NET platform. See the J# FAQ. A: Yes - sort of. If the Java you write will compile using javac then you can have one source base and compile for both J# and Java. We do that for our reporting engine. If you do this on .net 40, you need Calling J# code from .NET 4.0. But you cannot run the J# binary on a JVM.
{ "language": "en", "url": "https://stackoverflow.com/questions/165690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: how do I specify the source code directory in VS when looking at the call stack of a memory dump? I am analyzing a .dmp file that was created and I have a call stack which gives me a lot of info. But I'd like to double click on the call stack and have it bring me to the source code. I can right click on the call stack and select symbol settings, where I can put the location to the PDB. But there is no option for the source code directory.
A: The source code directory is unfortunately hard-coded into the PDBs; however, if you know the folders required you can use Windows' concept of symbolic links, junctions. I use the tool Junction Link Magic.
A: Read this article about how to set up a Source Server (aka SrcSrv) integration at your site. I took the time to follow these steps for our codebase, and now we are able to take a .dmp file from any build of our software in the past 6 months... get a stack trace with symbols... and view the exact source code lines in the debugger. Since the steps are integrated into our automated builds, there's very little overhead now. I did need to write a custom indexer for ClearCase, but they have pre-existing ones for Perforce, TFS, and maybe others. It is worth noting that the .dmp support in VS2005 is a little shaky; it's quite a bit more stable in VS2008. You'll also need to configure Visual Studio to grab the symbols for the MS products, in addition to your own symbol server, from here: http://msdl.microsoft.com/download/symbols That is described in a few places such as on the Debugging Tools for Windows site.
A: Windbg allows you to set up source paths in the same way as PDB paths.
A PDB contains the path and filename of the source files that built its associated binary, and I suspect the debugger is smart enough to hook things up when it notices that the filename being displayed and the filename associated with with current binary location, match.
{ "language": "en", "url": "https://stackoverflow.com/questions/165699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to use data mining feature of SQL Server 2008 with ASP.Net How to use data mining feature of SQL Server 2008 with ASP.Net A: Take a look at SqlServerDataMining.com, a site run by Microsoft's SQL Server Data Mining team. A: In a nutshell, you want to: * *Build cubes to model your data *Build a prediction calculator (or whatever kind of calculator you're looking to use) *Expose that via a web service *Call the web service in your app For example, if you want to model whether or not a customer is likely to abandon their shopping card, you would figure out what characteristics of a shopper you want to capture and analyze. You set up your cubes to model what characteristics are indicative of a soon-to-be-bailing-out shopper. During the shopping process, your web app would send the shopper's characteristics to the SSAS server, which would return back a guess about whether or not the shopper is going to abandon the cart. Then your web app can take proactive measures before they leave. All of the steps in here are kinda complicated - your best bet is probably to refine your question to focus on the areas you're responsible for.
{ "language": "en", "url": "https://stackoverflow.com/questions/165703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I provide a suffix for days of the month? I need a function to return a suffix for days when displaying text like the "th" in "Wednesday June 5th, 2008". It only need work for the numbers 1 through 31 (no error checking required) and English.
A: Here is an alternative which should work for larger numbers too:
static const char *daySuffixLookup[] = {
    "th","st","nd","rd","th",
    "th","th","th","th","th"
};

const char *daySuffix(int n)
{
    if(n % 100 >= 11 && n % 100 <= 13)
        return "th";
    return daySuffixLookup[n % 10];
}
A: The following function works for C:
const char *makeDaySuffix(unsigned int day)
{
    //if ((day < 1) || (day > 31)) return "";
    switch (day) {
        case 1: case 21: case 31: return "st";
        case 2: case 22:          return "nd";
        case 3: case 23:          return "rd";
    }
    return "th";
}
As requested, it only works for the numbers 1 through 31 inclusive. If you want (possibly, but not necessarily) raw speed, you could try:
const char *makeDaySuffix(unsigned int day)
{
    static const char * const suffix[] = {
        "st","nd","rd","th","th","th","th","th","th","th",
        "th","th","th","th","th","th","th","th","th","th",
        "st","nd","rd","th","th","th","th","th","th","th",
        "st"
    };
    //if ((day < 1) || (day > 31)) return "";
    return suffix[day-1];
}
(Note the commas at the end of each row of the table; without them, adjacent string literals concatenate and the array ends up with the wrong entries.) You'll note that I have bounds checking in there, though commented out. If there's even the slightest possibility that an unexpected value will be passed in, you'll probably want to uncomment those lines. Just keep in mind that, with the compilers of today, naive assumptions about what is faster in a high-level language may not be correct: measure, don't guess.
A: const char *getDaySuffix(int day)
{
    if (day%100 > 10 && day%100 < 14)
        return "th";
    switch (day%10) {
        case 1: return "st";
        case 2: return "nd";
        case 3: return "rd";
        default: return "th";
    }
}
This one works for any number, not just 1-31.
A: See my question here: How to convert Cardinal numbers into Ordinal ones (it's not the C# one). 
Summary: looks like there's no way yet, with your limited requirements you can just use a simple function like the one posted.
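The same rule is trivial to port to other languages; here it is in Python, using the mod-100 check from the later answers so it also works beyond 1-31:

```python
def day_suffix(n):
    """English ordinal suffix for a day number (or any positive integer)."""
    if 11 <= n % 100 <= 13:   # 11th, 12th, 13th (and 111th, 212th, ...)
        return "th"
    return {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
```

For example, `day_suffix(21)` gives `"st"` while `day_suffix(11)` gives `"th"`.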
{ "language": "en", "url": "https://stackoverflow.com/questions/165713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Practical uses for the "internal" keyword in C# Could you please explain what the practical usage is for the internal keyword in C#? I know that the internal modifier limits access to the current assembly, but when and in which circumstance should I use it?
A: Saw an interesting one the other day, maybe week, on a blog that I can't remember. Basically I can't take credit for this but I thought it might have some useful application. Say you wanted an abstract class to be seen by another assembly but you don't want someone to be able to inherit from it. Sealed won't work because it's abstract for a reason; other classes in that assembly do inherit from it. Private won't work because you might want to declare a Parent class somewhere in the other assembly.
namespace Base.Assembly
{
    public abstract class Parent
    {
        internal abstract void SomeMethod();
    }

    //This works just fine since it's in the same assembly.
    public class ChildWithin : Parent
    {
        internal override void SomeMethod()
        {
        }
    }
}

namespace Another.Assembly
{
    //Kaboom, because you can't override an internal method
    public class ChildOutside : Parent
    {
    }

    public class Test
    {
        //Just fine
        private Parent _parent;

        public Test()
        {
            //Still fine
            _parent = new ChildWithin();
        }
    }
}
As you can see, it effectively allows someone to use the Parent class without being able to inherit from it.
A: When you have methods, classes, etc. which need to be accessible within the scope of the current assembly and never outside it. For example, a DAL may have an ORM, but the objects should not be exposed to the business layer; all interaction should be done through static methods, passing in the required parameters.
A: A very interesting use of internal - with internal members of course being limited only to the assembly in which they are declared - is getting "friend" functionality to some degree out of it. A friend member is something that is visible only to certain other assemblies outside of the assembly in which it's declared. 
C# has no built-in support for friend; however, the CLR does. You can use InternalsVisibleToAttribute to declare a friend assembly, and all references from within the friend assembly will treat the internal members of your declaring assembly as public within the scope of the friend assembly. A problem with this is that all internal members are visible; you cannot pick and choose. A good use for InternalsVisibleTo is to expose various internal members to a unit test assembly, thus eliminating the need for complex reflection workarounds to test those members. All internal members being visible isn't so much of a problem; however, taking this approach does muck up your class interfaces pretty heavily and can potentially ruin encapsulation within the declaring assembly.
A: Another reason to use internal is if you obfuscate your binaries. The obfuscator knows that it's safe to scramble the class name of any internal classes, while the name of public classes can't be scrambled, because that could break existing references.
A: As a rule of thumb there are two kinds of members:
* public surface: visible from an external assembly (public, protected, and protected internal): caller is not trusted, so parameter validation, method documentation, etc. is needed.
* private surface: not visible from an external assembly (private and internal, or internal classes): caller is generally trusted, so parameter validation, method documentation, etc. may be omitted.
A: Noise reduction: the fewer types you expose, the simpler your library is. Tamper-proofing / security is another (although reflection can win against it).
A: Utility or helper classes/methods that you would like to access from many other classes within the same assembly, but that you want to ensure code in other assemblies can't access. 
From MSDN (via archive.org): A common use of internal access is in component-based development because it enables a group of components to cooperate in a private manner without being exposed to the rest of the application code. For example, a framework for building graphical user interfaces could provide Control and Form classes that cooperate using members with internal access. Since these members are internal, they are not exposed to code that is using the framework. You can also use the internal modifier along with the InternalsVisibleTo assembly level attribute to create "friend" assemblies that are granted special access to the target assembly internal classes. This can be useful for creation of unit testing assemblies that are then allowed to call internal members of the assembly to be tested. Of course no other assemblies are granted this level of access, so when you release your system, encapsulation is maintained. A: Internal classes enable you to limit the API of your assembly. This has benefits, like making your API simpler to understand. Also, if a bug exists in your assembly, there is less of a chance of the fix introducing a breaking change. Without internal classes, you would have to assume that changing any class's public members would be a breaking change. With internal classes, you can assume that modifying their public members only breaks the internal API of the assembly (and any assemblies referenced in the InternalsVisibleTo attribute). I like having encapsulation at the class level and at the assembly level. There are some who disagree with this, but it's nice to know that the functionality is available. A: If you are writing a DLL that encapsulates a ton of complex functionality into a simple public API, then “internal” is used on the class members which are not to be exposed publicly. Hiding complexity (a.k.a. encapsulation) is the chief concept of quality software engineering. 
A: The internal keyword is heavily used when you are building a wrapper over non-managed code. When you have a C/C++ based library that you want to DllImport, you can import these functions as static functions of a class and make them internal, so your users only have access to your wrapper and not the original API, and can't mess with anything. The functions being static, you can use them everywhere in the assembly, for the multiple wrapper classes you need. You can take a look at Mono.Cairo; it's a wrapper around the cairo library that uses this approach.
A: One use of the internal keyword is to limit access to concrete implementations from the user of your assembly. If you have a factory or some other central location for constructing objects, the user of your assembly need only deal with the public interface or abstract base class. Also, internal constructors allow you to control where and when an otherwise public class is instantiated.
A: I have a project which uses LINQ-to-SQL for the data back-end. I have two main namespaces: Biz and Data. The LINQ data model lives in Data and is marked "internal"; the Biz namespace has public classes which wrap around the LINQ data classes. So there's Data.Client, and Biz.Client; the latter exposes all relevant properties of the data object, e.g.:
private Data.Client _client;

public int Id
{
    get { return _client.Id; }
    set { _client.Id = value; }
}
The Biz objects have a private constructor (to force the use of factory methods), and an internal constructor which looks like this:
internal Client(Data.Client client)
{
    this._client = client;
}
That can be used by any of the business classes in the library, but the front-end (UI) has no way of directly accessing the data model, ensuring that the business layer always acts as an intermediary. This is the first time I've really used internal much, and it's proving quite useful.
A: There are cases when it makes sense to make members of classes internal. 
One example could be if you want to control how the classes are instantiated; let's say you provide some sort of factory for creating instances of the class. You can make the constructor internal, so that the factory (which resides in the same assembly) can create instances of the class, but code outside of that assembly can't. However, I can't see any point in making classes or members internal without specific reasons, just as little as it makes sense to make them public or private without specific reasons.
A: Driven by the "use as strict a modifier as you can" rule, I use internal everywhere I need to access, say, a method from another class, until I explicitly need to access it from another assembly. As an assembly's interface is usually narrower than the sum of its classes' interfaces, there are quite a few places I use it.
A: I find internal to be far overused. You really should not be exposing certain functionality only to certain classes that you would not expose to other consumers. This, in my opinion, breaks the interface, breaks the abstraction. This is not to say it should never be used, but a better solution is to refactor to a different class or to use it in a different way if possible. However, this may not always be possible.
The reason it can cause issues is that another developer may be charged with building another class in the same assembly as yours. Having internals lessens the clarity of the abstraction, and can cause problems if misused. It would be the same issue as if you made it public. The other class that is being built by the other developer is still a consumer, just like any external class. Class abstraction and encapsulation isn't just for protection for/from external classes, but for any and all classes.
Another problem is that a lot of developers will think they may need to use it elsewhere in the assembly and mark it as internal anyway, even though they don't need it at the time. Another developer then may think it's there for the taking. 
Typically you want to mark private until you have a definitive need. But some of this can be subjective, and I am not saying it should never be used. Just use it when needed.
A: This example contains two files: Assembly1.cs and Assembly2.cs. The first file contains an internal base class, BaseClass. In the second file, an attempt to instantiate BaseClass will produce an error.
// Assembly1.cs
// compile with: /target:library
internal class BaseClass
{
   public static int intM = 0;
}

// Assembly1_a.cs
// compile with: /reference:Assembly1.dll
class TestAccess
{
   static void Main()
   {
      BaseClass myBase = new BaseClass();   // CS0122
   }
}
In this example, use the same files you used in example 1, and change the accessibility level of BaseClass to public. Also change the accessibility level of the member intM to internal. In this case, you can instantiate the class, but you cannot access the internal member.
// Assembly2.cs
// compile with: /target:library
public class BaseClass
{
   internal static int intM = 0;
}

// Assembly2_a.cs
// compile with: /reference:Assembly2.dll
public class TestAccess
{
   static void Main()
   {
      BaseClass myBase = new BaseClass();   // Ok.
      BaseClass.intM = 444;                 // CS0117
   }
}
source: http://msdn.microsoft.com/en-us/library/7c5ka91b(VS.80).aspx
A: If Bob needs BigImportantClass then Bob needs to get the people who own project A to sign up to guarantee that BigImportantClass will be written to meet his needs, tested to ensure that it meets his needs, is documented as meeting his needs, and that a process will be put in place to ensure that it will never be changed so as to no longer meet his needs. If a class is internal then it doesn't have to go through that process, which saves budget for Project A that they can spend on other things.
The point of internal is not that it makes life difficult for Bob. It's that it allows you to control what expensive promises Project A is making about features, lifetime, compatibility, and so on. 
A: The only thing I have ever used the internal keyword on is the license-checking code in my product ;-)
A: How about this one: typically it is recommended that you do not expose a List object to external users of an assembly; rather, expose an IEnumerable. But it is a lot easier to use a List object inside the assembly, because you get the array syntax and all the other List methods. So, I typically have an internal property exposing a List to be used inside the assembly. Comments are welcome about this approach.
A: Keep in mind that any class defined as public will automatically show up in the intellisense when someone looks at your project namespace. From an API perspective, it is important to only show users of your project the classes that they can use. Use the internal keyword to hide things they shouldn't see. If your Big_Important_Class for Project A is intended for use outside your project, then you should not mark it internal. However, in many projects, you'll often have classes that are really only intended for use inside a project. For example, you may have a class that holds the arguments to a parameterized thread invocation. In these cases, you should mark them as internal, if for no other reason than to protect yourself from an unintended API change down the road.
A: The idea is that when you are designing a library, only the classes that are intended for use from outside (by clients of your library) should be public. This way you can hide classes that
* are likely to change in future releases (if they were public you would break client code)
* are useless to the client and may cause confusion
* are not safe (so improper use could break your library pretty badly)
etc. If you are developing in-house solutions, then using internal elements is not that important I guess, because usually the clients will have constant contact with you and/or access to the code. They are fairly critical for library developers though. 
A: When you have classes or methods which don't fit cleanly into the Object-Oriented Paradigm, which do dangerous stuff, which need to be called from other classes and methods under your control, and which you don't want to let anyone else use. public class DangerousClass { public void SafeMethod() { } internal void UpdateGlobalStateInSomeBizarreWay() { } }
{ "language": "en", "url": "https://stackoverflow.com/questions/165719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "485" }
Q: How to debug RESTful services? I'm looking for an easy way to debug RESTful services. For example, most webapps can be debugged using your average web browser. Unfortunately that same browser won't allow me to test HTTP PUT, DELETE, and to a certain degree even HTTP POST. I am not looking to automate tests. I'd like to run new services through a quick sanity check, ideally without having to write my own client.

A: I've found RequestBin useful for debugging REST requests. Post to a unique URL and request data are updated/displayed. Can help in a pinch when other tools are not available. https://requestbin.com/

A: A tool I've found useful if you're running OS X Leopard: HTTP Client. It's a very simple GUI program that allows you to craft HTTP requests to a resource and view the response.

A: You can use Fiddler's Composer to debug RESTful services. Updated JD 12 Sep 2013: Rest Builder is now called Composer.

A: cURL works just fine.

A: Use an existing 'REST client' tool that makes it easy to inspect the requests and responses, like RESTClient.

A: I ended up settling on Postman. It supports all REST features I could think of, and the UI is absolutely excellent. The only downside is that it requires Chrome.

A: RESTTest for Firefox (an add-on). Fiddler for IE.

A: I'm using SoapUI to test my REST API. It is more complete than any other tool:

* fine debug requests and responses
* automated testing
* all GUI based
* properties and property transfer to parameterize your tests
* conditional testing
* performance testing

I'm not working for SmartBear. I was already a big fan of SoapUI while using it for SOAP web services.

A: At my firm we use a variety of different tools and approaches to testing RESTful services:

* We write cURL scripts - essentially a single command saved in a file. One file per resource per method. For PUT and POST, we'll usually have files containing the representations to send alongside the cURL script.
For example, for a mailbox resource, we might have a file named mailbox_post.cmd, which might contain the line:

curl -v -X POST -u username -H 'Content-Type:application/xml' -d @mailbox_post.xml http://service/mailbox

We like this approach because we end up building a collection of tests which can be run in a batch, or at least passed around between testers, and used for regression testing.

* We use cURL and RESTClient for ad-hoc tests.
* We have the service serve XHTML by default, so it's browsable, and add forms resources, so the service is actually partially or fully testable using a browser. This was partly inspired by some parts of RESTful Web Services, wherein the authors show that the line between web services and web applications may not need to be as solid and strict as is usually assumed.
* We write functional tests as Groovy closures, using the Restlet framework, and run the tests with a test runner Groovy script. This is useful because the tests can be stateful, build on each other, and share variables, when appropriate. We find Restlet's API to be simple and intuitive, and so easy to write quick HTTP requests and test the responses, and it's even easier when used in Groovy. (I hope to share this technique, including the test runner script, on our blog soon.)

A: Postman, a Google Chrome extension, may be helpful. Edit, years later: also the website URL, in case the Chrome extension link gets changed: www.postman.com

A: Aside from using one of the tools in Peter Hilton's response, I would have to say that scripting the tests with LWP or some similar tool may be your only option. You could bypass the use of LWP by just opening a socket, sending a raw HTTP request in, and examining what you get in return. But as far as I know, there is a dearth of testing tools for this sort of domain -- most look at this problem space primarily through the lens of a web-site developer, and for them the browser is enough of a testing platform.
A: I use restclient, available from Google Code. It's a simple Java Swing application which supports all HTTP methods, and allows you full control over the HTTP headers, conneg, etc.

A: I tend to write unit tests for RESTful resources using Jersey, which comes with a nice REST client. The nice thing is that if you implement your RESTful resources using JAX-RS, then the Jersey client can reuse the entity providers such as for JAXB/XML/JSON/Atom and so forth - so you can reuse the same objects on the server side as you use in the client-side unit test. For example, here is a unit test case from the Apache Camel project which looks up XML payloads from a RESTful resource (using the JAXB object Endpoints). The resource(uri) method is defined in this base class, which just uses the Jersey client API. e.g.

clientConfig = new DefaultClientConfig();
client = Client.create(clientConfig);
resource = client.resource("http://localhost:8080");

// lets get the XML as a String
String text = resource("foo").accept("application/xml").get(String.class);

A: If you want a free tool for the same purpose, with the additional feature of multipart form data submission, it is here: http://code.google.com/a/eclipselabs.org/p/restclient-tool/

A: Firefox has the RESTClient plug-in to send requests with different methods, parameters, headers, etc.

A: You guys should check out the Poster extension for Firefox; it's simple and useful enough :)

A: Because it's totally missing here: https://luckymarmot.com/paw. It's worth every penny...
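A quick, client-free alternative to the GUI tools above: any scripting language with an HTTP library can exercise the verbs a browser won't. Here is a minimal sketch using Python's standard library (the URL and payload are placeholders, not a real service); urllib.request.Request accepts a method argument on Python 3.3+:

```python
import json
import urllib.request

def build_request(url, method, payload=None):
    """Build a request for any HTTP verb; browsers only give you GET/POST."""
    data = None
    headers = {}
    if payload is not None:
        data = json.dumps(payload).encode("utf-8")
        headers["Content-Type"] = "application/json"
    return urllib.request.Request(url, data=data, headers=headers, method=method)

# Sanity-check the verbs a browser can't normally send:
put_req = build_request("http://localhost:8080/mailbox/1", "PUT", {"read": True})
del_req = build_request("http://localhost:8080/mailbox/1", "DELETE")

print(put_req.get_method())   # PUT
print(del_req.get_method())   # DELETE
```

urllib.request.urlopen(put_req) would actually send the request; building the Request first lets you sanity-check the verb, body, and headers without touching the network.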
{ "language": "en", "url": "https://stackoverflow.com/questions/165720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "59" }
Q: Do programmers of other languages, besides C++, use, know or understand RAII? I've noticed RAII has been getting lots of attention on Stack Overflow, but in my circles (mostly C++) RAII is so obvious it's like asking what's a class or a destructor. So I'm really curious if that's because I'm surrounded daily by hard-core C++ programmers, and RAII just isn't that well known in general (including C++), or if all this questioning on Stack Overflow is due to the fact that I'm now in contact with programmers that didn't grow up with C++, and in other languages people just don't use/know about RAII?

A: RAII. It starts with a constructor and destructor but it is more than that. It is all about safely controlling resources in the presence of exceptions. What makes RAII superior to finally and such mechanisms is that it makes code safer to use, because it moves responsibility for using an object correctly from the user of the object to the designer of the object. Read this example of using StdioFile correctly via RAII:

void someFunc()
{
    StdioFile file("Plop","r");
    // use file
}
// File closed automatically even if this function exits via an exception.

To get the same functionality with finally:

void someFunc()
{
    // Assuming Java-like syntax;
    StdioFile file = new StdioFile("Plop","r");
    try
    {
        // use file
    }
    finally
    {
        // close file.
        file.close();
        //
        // Using the finaliser is not enough as we can not guarantee when
        // it will be called.
    }
}

Because you have to explicitly add the try{} finally{} block, this method of coding is more error prone (i.e. it is the user of the object that needs to think about exceptions). By using RAII, exception safety has to be coded only once, when the object is implemented.

To the question: is this C++-specific? Short answer: no. Longer answer: it requires constructors/destructors/exceptions and objects that have a defined lifetime. Well, technically it does not need exceptions.
It just becomes much more useful when exceptions could potentially be used, as it makes controlling the resource in the presence of exceptions very easy. But it is useful in all situations where control can leave a function early and not execute all the code (e.g. an early return from a function; this is why multiple return points in C are a bad code smell while multiple return points in C++ are not a code smell [because we can clean up using RAII]). In C++ controlled lifetime is achieved by stack variables or smart pointers. But this is not the only time we can have a tightly controlled lifespan. For example, Perl objects are not stack based but have a very controlled lifespan because of reference counting.

A: The problem with RAII is the acronym. It has no obvious correlation to the concept. What does this have to do with stack allocation? That is what it comes down to. C++ gives you the ability to allocate objects on the stack and guarantee that their destructors are called when the stack is unwound. In light of that, does RAII sound like a meaningful way of encapsulating that? No. I never heard of RAII until I came here a few weeks ago, and I even had to laugh hard when I read someone had posted that they would never hire a C++ programmer who didn't know what RAII was. Surely the concept is well known to almost all competent professional C++ developers. It's just that the acronym is poorly conceived.

A: A modification of @Pierre's answer: In Python:

with open("foo.txt", "w") as f:
    f.write("abc")

f.close() is called automatically whether an exception was raised or not. In general it can be done using contextlib.closing; from the documentation: closing(thing): return a context manager that closes thing upon completion of the block.
This is basically equivalent to:

from contextlib import contextmanager

@contextmanager
def closing(thing):
    try:
        yield thing
    finally:
        thing.close()

And lets you write code like this:

from __future__ import with_statement # required for python version < 2.6
from contextlib import closing
import urllib

with closing(urllib.urlopen('http://www.python.org')) as page:
    for line in page:
        print line

without needing to explicitly close page. Even if an error occurs, page.close() will be called when the with block is exited.

A: Common Lisp has RAII:

(with-open-file (stream "file.ext" :direction :input)
  (do-something-with-stream stream))

See: http://www.psg.com/~dlamkins/sl/chapter09.html

A: There are plenty of reasons why RAII isn't better known. First, the name isn't particularly obvious. If I didn't already know what RAII was, I'd certainly never guess it from the name. (Resource acquisition is initialization? What does that have to do with the destructor or cleanup, which is what really characterizes RAII?) Another is that it doesn't work as well in languages without deterministic cleanup. In C++, we know exactly when the destructor is called, we know the order in which destructors are called, and we can define them to do anything we like. In most modern languages, everything is garbage-collected, which makes RAII trickier to implement. There's no reason why it wouldn't be possible to add RAII-extensions to, say, C#, but it's not as obvious as it is in C++. But as others have mentioned, Perl and other languages support RAII despite being garbage collected. That said, it is still possible to create your own RAII-styled wrapper in C# or other languages. I did it in C# a while ago. I had to write something to ensure that a database connection was closed immediately after use, a task which any C++ programmer would see as an obvious candidate for RAII. Of course we could wrap everything in using-statements whenever we used a db connection, but that's just messy and error-prone.
My solution was to write a helper function which took a delegate as argument, and then when called, opened a database connection and, inside a using-statement, passed it to the delegate function. Pseudocode:

T RAIIWrapper<T>(Func<DbConnection, T> f)
{
    using (var db = new DbConnection())
    {
        return f(db);
    }
}

Still not as nice or obvious as C++ RAII, but it achieved roughly the same thing. Whenever we need a DbConnection, we have to call this helper function, which guarantees that it'll be closed afterwards.

A: I use C++ RAII all the time, but I've also developed in Visual Basic 6 for a long time, and RAII has always been a widely-used concept there (although I've never heard anyone call it that). In fact, many VB6 programs rely on RAII quite heavily. One of the more curious uses that I've repeatedly seen is the following small class:

' WaitCursor.cls '
Private m_OldCursor As MousePointerConstants

Public Sub Class_Initialize()
    m_OldCursor = Screen.MousePointer
    Screen.MousePointer = vbHourGlass
End Sub

Public Sub Class_Terminate()
    Screen.MousePointer = m_OldCursor
End Sub

Usage:

Public Sub MyButton_Click()
    Dim WC As New WaitCursor
    ' … Time-consuming operation. '
End Sub

Once the time-consuming operation terminates, the original cursor gets restored automatically.

A: First of all, I'm very surprised it's not more well known! I totally thought RAII was, at least, obvious to C++ programmers. However, now I guess I can understand why people actually ask about it. I'm surrounded by, and must myself be one of, C++ freaks... So my secret... I guess that would be that I used to read Meyers, Sutter [EDIT:] and Andrei all the time years ago until I just grokked it.

A: RAII stands for Resource Acquisition Is Initialization. This is not language-agnostic at all. This mantra is here because C++ works the way it works. In C++ an object is not constructed until its constructor completes. A destructor will not be invoked if the object has not been successfully constructed.
Translated to practical language, a constructor should make sure it covers for the case it can't complete its job thoroughly. If, for example, an exception occurs during construction then the constructor must handle it gracefully, because the destructor will not be there to help. This is usually done by covering for the exceptions within the constructor or by forwarding this hassle to other objects. For example:

class OhMy {
public:
    OhMy() { p_ = new int[42]; jump(); }
    ~OhMy() { delete[] p_; }
private:
    int* p_;
    void jump();
};

If the jump() call in the constructor throws we're in trouble, because p_ will leak. We can fix this like this:

class Few {
public:
    Few() : v_(42) { jump(); }
    ~Few();
private:
    std::vector<int> v_;
    void jump();
};

If people are not aware of this then it's because of one of two things:

* They don't know C++ well. In this case they should open TCPPPL again before they write their next class. Specifically, section 14.4.1 in the third edition of the book talks about this technique.
* They don't know C++ at all. That's fine. This idiom is very C++y. Either learn C++ or forget all about this and carry on with your lives. Preferably learn C++. ;)

A: For people who are commenting in this thread about RAII (resource acquisition is initialisation), here's a motivational example.
class StdioFile
{
    FILE* file_;
    std::string mode_;

    static FILE* fcheck(FILE* stream)
    {
        if (!stream)
            throw std::runtime_error("Cannot open file");
        return stream;
    }

    FILE* fdup() const
    {
        int dupfd(dup(fileno(file_)));
        if (dupfd == -1)
            throw std::runtime_error("Cannot dup file descriptor");
        return fdopen(dupfd, mode_.c_str());
    }

public:
    StdioFile(char const* name, char const* mode)
      : file_(fcheck(fopen(name, mode))), mode_(mode)
    {
    }

    StdioFile(StdioFile const& rhs)
      : file_(fcheck(rhs.fdup())), mode_(rhs.mode_)
    {
    }

    ~StdioFile()
    {
        fclose(file_);
    }

    StdioFile& operator=(StdioFile const& rhs)
    {
        FILE* dupstr = fcheck(rhs.fdup());
        if (fclose(file_) == EOF)
        {
            fclose(dupstr); // XXX ignore failed close
            throw std::runtime_error("Cannot close stream");
        }
        file_ = dupstr;
        return *this;
    }

    int read(std::vector<char>& buffer)
    {
        int result(fread(&buffer[0], 1, buffer.size(), file_));
        if (ferror(file_))
            throw std::runtime_error(strerror(errno));
        return result;
    }

    int write(std::vector<char> const& buffer)
    {
        int result(fwrite(&buffer[0], 1, buffer.size(), file_));
        if (ferror(file_))
            throw std::runtime_error(strerror(errno));
        return result;
    }
};

int main(int argc, char** argv)
{
    StdioFile file(argv[1], "r");
    std::vector<char> buffer(1024);
    while (int hasRead = file.read(buffer))
    {
        // process hasRead bytes, then shift them off the buffer
    }
}

Here, when a StdioFile instance is created, the resource (a file stream, in this case) is acquired; when it's destroyed, the resource is released. There is no try or finally block required; if the reading causes an exception, fclose is called automatically, because it's in the destructor. The destructor is guaranteed to be called when the function leaves main, whether normally or by exception. In this case, the file stream is cleaned up. The world is safe once again. :-D

A: The thing with RAII is that it requires deterministic finalization, something that is guaranteed for stack-based objects in C++.
Languages like C# and Java that rely on garbage collection don't have this guarantee, so it has to be "bolted" on somehow. In C# this is done by implementing IDisposable, and much the same usage pattern then crops up. Basically, that's one of the motivators for the "using" statement: it ensures disposal and is very well known and used. So basically the idiom is there, it just doesn't have a fancy name.

A: RAII is a way in C++ to make sure a cleanup procedure is executed after a block of code regardless of what happens in the code: the code executes till the end properly or raises an exception. An already cited example is automatically closing a file after its processing, see answer here. In other languages you use other mechanisms to achieve that.

In Java you have try { } finally {} constructs:

BufferedReader file = new BufferedReader(new FileReader("infilename"));
try {
    // do something with file
} finally {
    file.close();
}

In Ruby you have the automatic block argument:

File.open("foo.txt") do | file |
    # do something with file
end

In Lisp you have unwind-protect and the predefined with-XXX:

(with-open-file (file "foo.txt")
  ;; do something with file
)

In Scheme you have dynamic-wind and the predefined with-XXXXX:

(with-input-from-file "foo.txt"
  (lambda ()
    ;; do something
  ))

In Python you have try finally:

try:
    file = open("foo.txt")
    # do something with file
finally:
    file.close()

The C++ solution as RAII is rather clumsy in that it forces you to create one class for all kinds of cleanup you have to do. This may force you to write a lot of small silly classes. Other examples of RAII are:

* unlocking a mutex after acquisition
* closing a database connection after opening
* freeing memory after allocation
* logging on entry and exit of a block of code
* ...

A: It's sort of tied to knowing when your destructor will be called though, right? So it's not entirely language-agnostic, given that that's not a given in many GC'd languages.
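The helper-function shape of the C# RAIIWrapper shown earlier in this thread ports to any language with first-class functions. Here is a hedged Python sketch of the same pattern; FakeDbConnection is invented purely for illustration, and a real connection class would be swapped in:

```python
class FakeDbConnection:
    """Stand-in for a real connection class; invented for this example."""
    def __init__(self):
        self.closed = False

    def query(self, sql):
        return f"rows for: {sql}"

    def close(self):
        self.closed = True

def raii_wrapper(f):
    """Open a connection, pass it to f, and guarantee close() runs,
    whether f returns normally or raises."""
    db = FakeDbConnection()
    try:
        return f(db)
    finally:
        db.close()

result = raii_wrapper(lambda db: db.query("select 1"))
print(result)  # rows for: select 1
```

As with the C# version, the caller never sees the connection outside the wrapper, so forgetting to close it is impossible by construction.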
A: I think a lot of other languages (ones that don't have delete, for example) don't give the programmer quite the same control over object lifetimes, and so there must be other means to provide for deterministic disposal of resources. In C#, for example, using using with IDisposable is common.

A: RAII is popular in C++ because it's one of the few (only?) languages that can allocate complex scope-local variables but does not have a finally clause. C#, Java, Python, Ruby all have finally or an equivalent. C has no finally, and also can't execute code when a variable drops out of scope.

A: I have colleagues who are hard-core, "read the spec" C++ types. Many of them know RAII but I have never really heard it used outside of that scene.

A: RAII is specific to C++. C++ has the requisite combination of stack-allocated objects, unmanaged object lifetimes, and exception handling.

A: CPython (the official Python written in C) supports RAII because of its use of reference-counted objects with immediate scope-based destruction (rather than when garbage is collected). Unfortunately, Jython (Python in Java) and PyPy do not support this very useful RAII idiom, and it breaks a lot of legacy Python code. So for portable Python you have to handle all the exceptions manually, just like Java.
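To make the Python side of this thread concrete: any class with __enter__/__exit__ gets deterministic cleanup under the with statement, regardless of when the garbage collector runs, on any Python implementation. A small sketch (the Resource class is invented for illustration):

```python
class Resource:
    """Invented stand-in for a file/lock/socket; records its lifecycle."""
    def __init__(self, log):
        self.log = log

    def __enter__(self):
        self.log.append("acquired")
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Runs on normal exit, early return, or exception, like a C++ destructor.
        self.log.append("released")
        return False  # don't swallow the exception

log = []
try:
    with Resource(log):
        raise RuntimeError("something went wrong mid-block")
except RuntimeError:
    pass

print(log)  # ['acquired', 'released']
```

The "released" entry is appended even though the block raised, which is exactly the guarantee RAII gives in C++.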
{ "language": "en", "url": "https://stackoverflow.com/questions/165723", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: end() function The end() function in jQuery reverts the element set back to what it was before the last destructive change, so I can see how it's supposed to be used, but I've seen some code examples, e.g. on A List Apart (which were probably from older versions of jQuery - the article is from 2006), which finished every statement off with .end(), e.g.: $( 'form.cmxform' ).hide().end();

* Does this have any effect?
* Is it something I should also be doing?
* What does the above code even return?

A: That end() doesn't do anything. There's no point to coding like that. It will return $('#myBox') -- the example is pretty poor. More interesting is something like this:

$('#myBox').show().children('.myClass').hide().end().blink();

Which will show myBox, hide the specified children, and then blink the box. There are more interesting examples here: http://simonwillison.net/2007/Aug/15/jquery/ such as:

$('form#login')
  // hide all the labels inside the form with the 'optional' class
  .find('label.optional').hide().end()
  // add a red border to any password fields in the form
  .find('input:password').css('border', '1px solid red').end()
  // add a submit handler to the form
  .submit(function(){ return confirm('Are you sure you want to submit?'); });

A: From the jQuery doc there is an example:

$('ul.first').find('.foo')
    .css('background-color', 'red')
  .end().find('.bar')
    .css('background-color', 'green')
  .end();

and after it a clarification: The last end() is unnecessary, as we are discarding the jQuery object immediately thereafter. However, when the code is written in this form, the end() provides visual symmetry and a sense of completion, making the program, at least to the eyes of some developers, more readable, at the cost of a slight hit to performance as it is an additional function call.
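The mechanism behind end() is just a stack of previous matched sets: each destructive operation produces a new set that remembers the old one, and end() pops back. A toy Python sketch of that bookkeeping (not jQuery's actual implementation; names are invented for illustration):

```python
class Selection:
    """Toy chainable wrapper: find() narrows the set, end() reverts it."""
    def __init__(self, items, prev=None):
        self.items = items
        self.prev = prev  # the selection before the last destructive change

    def find(self, predicate):
        # Destructive change: build a new, narrower set that remembers this one.
        return Selection([x for x in self.items if predicate(x)], prev=self)

    def end(self):
        # Revert to the set before the last destructive change.
        return self.prev if self.prev is not None else self

sel = Selection(["label.optional", "input.password", "input.text"])
narrowed = sel.find(lambda s: s.startswith("input"))
print(narrowed.items)        # ['input.password', 'input.text']
print(narrowed.end().items)  # ['label.optional', 'input.password', 'input.text']
```

This is why a trailing .end() on a discarded chain has no visible effect: it pops back to the previous set, which is then thrown away anyway.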
{ "language": "en", "url": "https://stackoverflow.com/questions/165729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do you show animated GIFs on a Windows Form (C#)? I have a form showing progress messages as a fairly long process runs. It's a call to a web service, so I can't really show a percentage-complete figure on a progress bar meaningfully. (I don't particularly like the Marquee property of the progress bar.) I would like to show an animated GIF to give the process the feel of some activity (e.g. files flying from one computer to another like the Windows copy process). How do you do this?

A: Note that in Windows, you traditionally don't use animated GIFs, but little AVI animations: there is a Windows native control just to display them. There are even tools to convert animated GIFs to AVI (and vice versa).

A: If you put it in a PictureBox control, it should just work.

A: It's not too hard.

* Drop a PictureBox onto your form.
* Add the .gif file as the image in the PictureBox.
* Show the PictureBox when you are loading.

Things to take into consideration:

* Disabling the PictureBox will prevent the GIF from being animated.

Another way of doing it: Another way that I have found that works quite well is the async dialog control that I found on The Code Project.

A: I had the same problem. The whole form (including the GIF) stopped redrawing itself because of a long operation working in the background. Here is how I solved this.

private void MyThreadRoutine()
{
    this.Invoke(this.ShowProgressGifDelegate);
    // your long running process
    System.Threading.Thread.Sleep(5000);
    this.Invoke(this.HideProgressGifDelegate);
}

private void button1_Click(object sender, EventArgs e)
{
    ThreadStart myThreadStart = new ThreadStart(MyThreadRoutine);
    Thread myThread = new Thread(myThreadStart);
    myThread.Start();
}

I simply created another thread to be responsible for this operation. Thanks to this, the initial form continues redrawing without problems (including my GIF working). ShowProgressGifDelegate and HideProgressGifDelegate are delegates in the form that set the Visible property of the PictureBox with the GIF to true/false.
A: It doesn't when you start a long operation behind it, because everything STOPS since you're in the same thread.

A:

Public Class Form1
    Private animatedimage As New Bitmap("C:\MyData\Search.gif")
    Private currentlyanimating As Boolean = False

    Private Sub OnFrameChanged(ByVal sender As System.Object, ByVal e As System.EventArgs)
        Me.Invalidate()
    End Sub

    Private Sub AnimateImage()
        If currentlyanimating = True Then
            ImageAnimator.Animate(animatedimage, AddressOf Me.OnFrameChanged)
            currentlyanimating = False
        End If
    End Sub

    Protected Overrides Sub OnPaint(ByVal e As System.Windows.Forms.PaintEventArgs)
        AnimateImage()
        ImageAnimator.UpdateFrames(animatedimage)
        e.Graphics.DrawImage(animatedimage, New Point((Me.Width / 4) + 40, (Me.Height / 4) + 40))
    End Sub

    Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
        BtnStop.Enabled = False
    End Sub

    Private Sub BtnStop_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles BtnStop.Click
        currentlyanimating = False
        ImageAnimator.StopAnimate(animatedimage, AddressOf Me.OnFrameChanged)
        BtnStart.Enabled = True
        BtnStop.Enabled = False
    End Sub

    Private Sub BtnStart_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles BtnStart.Click
        currentlyanimating = True
        AnimateImage()
        BtnStart.Enabled = False
        BtnStop.Enabled = True
    End Sub
End Class

A: Too late, but setting the image to PictureBox.Image and setting PictureBox.SizeMode = PictureBoxSizeMode.Zoom does the trick (.NET Framework 4.8):

PictureBox.Image = Image.FromFile("location"); // OR from base64
PictureBox.SizeMode = PictureBoxSizeMode.Zoom;

A: I had the same issue and came across different solutions, each of which brought its own problems. Finally, below is what I put together from pieces of different posts, which worked for me as expected.
private void btnCompare_Click(object sender, EventArgs e)
{
    ThreadStart threadStart = new ThreadStart(Execution);
    Thread thread = new Thread(threadStart);
    thread.SetApartmentState(ApartmentState.STA);
    thread.Start();
}

Here is the Execution method, which also handles invoking the PictureBox control:

private void Execution()
{
    btnCompare.Invoke((MethodInvoker)delegate { pictureBox1.Visible = true; });
    Application.DoEvents();

    // Your main code comes here . . .

    btnCompare.Invoke((MethodInvoker)delegate { pictureBox1.Visible = false; });
}

Keep in mind, the PictureBox is made invisible from the Properties window, or do it as below:

private void ComparerForm_Load(object sender, EventArgs e)
{
    pictureBox1.Visible = false;
}
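The pattern in the answers above, in any language, is: run the long operation on a worker thread so the thread that draws the GIF stays free to repaint. A hedged, GUI-free Python sketch of just the threading part (the sleep stands in for the web-service call; no WinForms equivalent is implied):

```python
import threading
import time

done = threading.Event()

def long_running_process():
    """Stand-in for the slow web-service call / comparison work."""
    time.sleep(0.2)   # pretend this takes a long time
    done.set()        # analogous to Invoke(...) hiding the progress GIF

worker = threading.Thread(target=long_running_process)
worker.start()

frames = 0
while not done.is_set():
    # The "UI" thread stays free, so an animation would keep updating here.
    frames += 1
    time.sleep(0.01)

worker.join()
print(done.is_set(), frames > 0)  # True True
```

The key point carried over from the C#/VB answers: the drawing thread never blocks on the work, it only polls (or is notified of) completion.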
{ "language": "en", "url": "https://stackoverflow.com/questions/165735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "154" }
Q: Is it advisable to build a web service over other web services? I've inherited this really weird codebase where they've built an external web service over a bunch of internal web services just to add authentication/authorization using WS-Security, WS-Encryption, et al. Less than a month into this engagement, I'm already feeling the pain of coupling volatile components through rigid WSDL, especially considering some of them use WCF and others choose to go WSDL-first. Managing various versions of generated proxies and wrappers at various levels is a nightmare! I'll admit the design is over-complicated and could have been much better, but my question essentially is:

* Would you ever build a web service just to provide a cross-cutting concern over a bunch of services?
* Would this be better implemented as web service handlers?

and lastly...

* Would you categorize this under the Web Service Gateway pattern?

A: I saw that very thing being built one year ago. I almost cried when the team took months to build 4 web services, 2 of which simply wrapped other internal ones, using WCF and some serious encryption. The only reason they wrapped the internal ones was to change the potential error numbers coming back. So, would I ever intentionally do that? Nope. Would it be better implemented as almost anything else? Yep. Would I categorize it under the WTF pattern? Absolutely.

UPDATE: One thing I just remembered is that there is an architecture called "Enterprise Service Bus". Its purpose is to provide a common interface into other SOA systems. This way it doesn't matter what the different applications use for their endpoint mechanisms (WCF, WSE 1/2/3, RESTful, etc). BizTalk is one example of an ESB, and there are many other off-the-shelf programs that can be used. Basically, your app passes a message to the ESB and it handles sending that message, in a reliable way, to the other systems, as well as marshalling any responses back.
This also means that you could insulate other applications from many types of changes to the endpoints. Of course, if the new endpoints require additional information, then you'd have to modify the callers. However, if all they are changing is the mechanism, then a good ESB would be able to handle those changes without impacting your app.

A: I have seen similar implementations if you are exposing the services to the outside world and if you need to tighten down the security. Check this MSDN column.
{ "language": "en", "url": "https://stackoverflow.com/questions/165747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: For your complicated algorithms, how do you measure their performance? Let's just assume for now that you have narrowed down where the typical bottlenecks in your app are. For all you know, it might be the batch process you run to reindex your tables; it could be the SQL queries that run over your effective-dated trees; it could be the XML marshalling of a few hundred composite objects. In other words, you might have something like this:

public Result takeAnAnnoyingLongTime(Input in) {
    // impl of above
}

Unfortunately, even after you've identified your bottleneck, all you can do is chip away at it. No simple solution is available. How do you measure the performance of your bottleneck so that you know your fixes are headed in the right direction?

A: Two points:

* Beware of the infamous "optimizing the idle loop" problem. (E.g. see the optimization story under the heading "Porsche-in-the-parking-lot".) That is, just because a routine is taking a significant amount of time (as shown by your profiling), don't assume that it's responsible for slow performance as perceived by the user.
* The biggest performance gains often come not from that clever tweak or optimization to the implementation of the algorithm, but from realising that there's a better algorithm altogether. Some improvements are relatively obvious, while others require more detailed analysis of the algorithms, and possibly a major change to the data structures involved. This may include trading off processor time for I/O time, in which case you need to make sure that you're not optimizing only one of those measures.

Bringing it back to the question asked, make sure that whatever you're measuring represents what the user actually experiences, otherwise your efforts could be a complete waste of time.

A:

1. Profile it
2. Find the top line in the profiler, attempt to make it faster.
3. Profile it
4. If it worked, go to 1. If it didn't work, go to 2.
A: I'd measure them using the same tools / methods that allowed me to find them in the first place. Namely, sticking timing and logging calls all over the place. If the numbers start going down, then you just might be doing the right thing.

A: As mentioned in this MSDN column, performance tuning is compared to the job of painting the Golden Gate Bridge: once you finish painting the entire thing, it's time to go back to the beginning and start again.

A: This is not a hard problem. The first thing you need to understand is that measuring performance is not how you find performance problems. Knowing how slow something is doesn't help you find out why. You need a diagnostic tool, and a good one. I've had a lot of experience doing this, and this is the best method. It is not automatic, but it runs rings around most profilers.

A: It's an interesting question. I don't think anyone knows the answer. I believe that a significant part of the problem is that for more complicated programs, no one can predict their complexity. Therefore, even if you have profiler results, it's very complicated to interpret them in terms of changes that should be made to the program, because you have no theoretical basis for what the optimal solution is. I think this is a reason why we have such bloated software. We optimize only so that quite simple cases work on our fast machines. But once you put such pieces together into a large system, or you use an order of magnitude larger input, the wrong algorithms used (which were until then invisible both theoretically and practically) will start showing their true complexity. Example: you create a string class which handles Unicode. You use it somewhere like computer-generated XML processing, where it really doesn't matter. But Unicode processing is in there, taking part of the resources. By itself, the string class can be very fast, but call it a million times, and the program will be slow. I believe that most of the current software bloat is of this nature.
There is a way to reduce it, but it contradicts OOP. There is an interesting book about various techniques; it's memory oriented, but most of them could be repurposed to get more speed. A: I'd identify two things: 1) what complexity is it? The easiest way is to graph time-taken versus size of input. 2) how is it bound? Is it memory, or disk, or IPC with other processes or machines, or.. Now point (2) is the easier to tackle and explain: Lots of things go faster if you have more RAM or a faster machine or faster disks or move over to gig ethernet or such. If you identify your pain, you can put some money into hardware to make it tolerable.
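The "graph time-taken versus size of input" advice in the last answer is easy to sketch. The snippet below is illustrative and not from the thread (the thread's own example is Java-flavored pseudocode); it uses Node's `process.hrtime.bigint`, and absolute numbers will be noisy — the growth ratio between successive sizes is what hints at the complexity (~2x per doubling suggests O(n), ~4x suggests O(n^2)).

```javascript
// Sketch: time a function at doubling input sizes to see how its cost grows.
function timeAtSizes(fn, makeInput, sizes) {
  return sizes.map((n) => {
    const input = makeInput(n);
    const start = process.hrtime.bigint(); // Node-specific high-res clock
    fn(input);
    const end = process.hrtime.bigint();
    return { n, ms: Number(end - start) / 1e6 };
  });
}

// Example workload: deliberately quadratic in the input length.
function quadratic(arr) {
  let count = 0;
  for (let i = 0; i < arr.length; i++)
    for (let j = 0; j < arr.length; j++) count++;
  return count;
}

const results = timeAtSizes(
  quadratic,
  (n) => new Array(n).fill(0),
  [1000, 2000, 4000]
);
results.forEach((r) => console.log(`n=${r.n}: ${r.ms.toFixed(2)} ms`));
```

In practice you would run each size several times and take the median, since a single timing is dominated by warm-up and scheduler noise.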
{ "language": "en", "url": "https://stackoverflow.com/questions/165751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is the command line syntax for custom stacks for the OSX Leopard dock? I know there is a command line syntax for adding custom stacks to the OSX Leopard dock besides the "Downloads" which comes by default. What is it? A: http://code.google.com/p/dockutil/ A: While this doesn't completely answer your question, this page has some info on how to add a "recent items" stack to the Dock. Maybe that will help you to figure out what the generic syntax for adding Stacks is?
{ "language": "en", "url": "https://stackoverflow.com/questions/165765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Does Common Lisp have something like java's Set Interface/implementing classes? I need something like this, a collection of elements which contains no duplicates of any element. Does Common Lisp, specifically SBCL, have anything like this? A: For a quick solution, just use hash tables, as has been mentioned before. However, if you prefer a more principled approach, you can take a look at FSet, which is “a functional set-theoretic collections library”. Among others, it contains classes and operations for sets and bags. (EDIT:) The cleanest way would probably be to define your set-oriented operations as generic functions. A set of generic functions is basically equivalent to a Java interface, after all. You can simply implement methods on the standard HASH-TABLE class as a first prototype and allow other implementations as well. A: Look at cl-containers. There is a set-container class. A: You could use lists, though they can prove to be inefficient for representing large sets. This is done using ADJOIN or PUSHNEW to add a new element to a list, and DELETE or REMOVE to do the opposite. (let ((set (list))) (pushnew 11 set) (pushnew 42 set) (pushnew 11 set) (print set) ; set={42,11} (setq set (delete 42 set)) (print set)) ; set={11} One thing to watch out for is that all these operators use EQL by default to test for potential duplicates in the set (much as Java uses the equals method). That's OK for sets holding numbers or characters, but for sets of other objects, a `deeper' equality test such as EQUAL should be specified as a :TEST keyword parameter, e.g.
for a set of strings :- (let ((set (list))) (pushnew "foo" set :test #'equal) (pushnew "bar" set :test #'equal) (pushnew "foo" set :test #'equal) ; EQUAL decides that "foo"="foo" (print set)) ; set={"bar","foo"} Lisp's counterparts to some of Java's Set operations are: * *addAll -> UNION or NUNION *containsAll -> SUBSETP *removeAll -> SET-DIFFERENCE or NSET-DIFFERENCE *retainAll -> INTERSECTION or NINTERSECTION A: Yes, it has sets. See this section on "Sets" from Practical Common Lisp. Basically, you can create a set with pushnew and adjoin, query it with member, member-if and member-if-not, and combine it with other sets with functions like intersection, union, set-difference, set-exclusive-or and subsetp. A: Easily solvable using a hash table. (let ((h (make-hash-table :test 'equalp))) ; if you're storing symbols (loop for i from 0 upto 20 do (setf (gethash i h) (format nil "Value ~A" i))) (loop for i from 10 upto 30 do (setf (gethash i h) (format nil "~A eulaV" i))) (loop for k being the hash-keys of h using (hash-value v) do (format t "~A => ~A~%" k v))) outputs 0 => Value 0 1 => Value 1 ... 9 => Value 9 10 => 10 eulaV 11 => 11 eulaV ... 29 => 29 eulaV 30 => 30 eulaV A: Not that I'm aware of, but you can use hash tables for something quite similar. A: Lisp hashtables are CLOS based. Specs here. A: Personally, I would just implement a function which takes a list and return a unique set. I've drafted something together which works for me: (defun make-set (list-in &optional (list-out '())) (if (endp list-in) (nreverse list-out) (make-set (cdr list-in) (adjoin (car list-in) list-out :test 'equal)))) Basically, the adjoin function prepends an item to a list non-destructively if and only if the item is not already present in the list, accepting an optional test function (one of the Common Lisp "equal" functions). You can also use pushnew to do so destructively, but I find the tail-recursive implementation to be far more elegant. 
So, Lisp does export several basic functions that allow you to use a list as a set; no built-in datatype is needed because you can just use different functions for prepending things to a list. My data source for all of this (not the function, but the info) has been a combination of the Common Lisp HyperSpec and Common Lisp the Language (2nd Edition).
{ "language": "en", "url": "https://stackoverflow.com/questions/165767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Assembly dependencies with .Net projects I have an assembly (A) which references another assembly (B). When I want to reference A in a project, I add the reference and it copies A into my BIN directory. It does not copy B as well, even though A depends on it, so the code doesn't compile. How can I set things up so that whenever I reference A, both A and B get copied to my bin directory? A: In Visual Studio, add each project to the same solution. Ensure you use Project References instead of direct file references (ie browsing for the assembly). A: I don't think there is any way around what you ask other than to explicitly add both. I don't think, however, that adding projects for the sake of getting references copied is a viable solution to the issue. Not all projects that a solution depends on should necessarily be added to the solution. This would completely depend on your overall project structure, processes, source control, division of labour, etc. A: Reference both A and B. A: Unfortunately you'll have to manually add both. This is what happens to me as well whenever I use pre-3.5 versions of NHibernate: it requires both log4net and Iesi.Collections assemblies. So I have no choice but to manually include a reference to both in all my solutions that implement NHibernate. This is more of an issue, of course, if you only have the DLLs. If it's a project that you have the codebase for, Visual Studio itself will warn you beforehand that the references are missing. A: How about adding them to the Global Assembly Cache?
{ "language": "en", "url": "https://stackoverflow.com/questions/165771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Are the PUT, DELETE, HEAD, etc methods available in most web browsers? I've seen a couple questions around here like How to debug RESTful services, which mentions: Unfortunately that same browser won't allow me to test HTTP PUT, DELETE, and to a certain degree even HTTP POST. I've also heard that browsers support only GET and POST, from some other sources like: * *http://www.packetizer.com/ws/rest.html *http://www.mail-archive.com/jmeter-user@jakarta.apache.org/msg13518.html *http://www.xml.com/cs/user/view/cs_msg/1098 However, a few quick tests in Firefox show that sending PUT and DELETE requests works as expected -- the XMLHttpRequest completes successfully, and the request shows up in the server logs with the right method. Is there some aspect to this I'm missing, such as cross-browser compatibility or non-obvious limitations? A: HTML forms support GET and POST. (HTML5 at one point added PUT/DELETE, but those were dropped.) XMLHttpRequest supports every method, including CHICKEN, though some method names are matched against case-insensitively (methods are case-sensitive per HTTP) and some method names are not supported at all for security reasons (e.g. CONNECT). Fetch API also supports any method except for CONNECT, TRACE, and TRACK, which are forbidden for security reasons. Browsers are slowly converging on the rules specified by XMLHttpRequest, but as the other comment pointed out there are still some differences. A: Just to add - Safari 2 and earlier definitely didn't support PUT and DELETE. I get the impression 3 did, but I don't have it around to test anymore. Safari 4 definitely does support PUT and DELETE. A: No. The HTML 5 spec mentions: The method and formmethod content attributes are enumerated attributes with the following keywords and states: The keyword get, mapping to the state GET, indicating the HTTP GET method. The GET method should only request and retrieve data and should have no other effect. 
The keyword post, mapping to the state POST, indicating the HTTP POST method. The POST method requests that the server accept the submitted form's data to be processed, which may result in an item being added to a database, the creation of a new web page resource, the updating of the existing page, or all of the mentioned outcomes. The keyword dialog, mapping to the state dialog, indicating that submitting the form is intended to close the dialog box in which the form finds itself, if any, and otherwise not submit. The invalid value default for these attributes is the GET state I.e. HTML forms only support GET and POST as HTTP request methods. A workaround for this is to tunnel other methods through POST by using a hidden form field which is read by the server and the request dispatched accordingly. However, GET, POST, PUT and DELETE are supported by the implementations of XMLHttpRequest (i.e. AJAX calls) in all the major web browsers (IE, Firefox, Safari, Chrome, Opera). A: XMLHttpRequest is a standard object in the JavaScript Object model. According to Wikipedia, XMLHttpRequest first appeared in Internet Explorer 5 as an ActiveX object, but has since been made into a standard and has been included for use in JavaScript in the Mozilla family since 1.0, Apple Safari 1.2, Opera 7.60-p1, and IE 7.0. The open() method on the object takes the HTTP Method as an argument - and is specified as taking any valid HTTP method (see the item number 5 of the link) - including GET, POST, HEAD, PUT and DELETE, as specified by RFC 2616. As a side note IE 7–8 only permit the following HTTP methods: "GET", "POST", "HEAD", "PUT", "DELETE", "MOVE", "PROPFIND", "PROPPATCH", "MKCOL", "COPY", "LOCK", "UNLOCK", and "OPTIONS". 
A: _method hidden field workaround Used in Rails and could be adapted to any framework: * *add a hidden _method parameter to any form that is not GET or POST: <input type="hidden" name="_method" value="DELETE"> This can be done automatically in frameworks through the HTML creation helper method (e.g. Rails form_tag) *fix the actual form method to POST (<form method="post") *process _method on the server and do exactly as if that method had been sent instead of the actual POST Rationale / history of why it is not possible: https://softwareengineering.stackexchange.com/questions/114156/why-there-are-no-put-and-delete-methods-in-html-forms A: I believe those comments refer specifically to the browsers, i.e., clicking links and submitting forms, not XMLHttpRequest. XMLHttpRequest is just a custom client that you wrote in JavaScript that uses the browser as a runtime. UPDATE: To clarify, I did not mean (though I did write) that you wrote XMLHttpRequest; I meant that you wrote the code that uses XMLHttpRequest. The browsers do not natively support XMLHttpRequest. XMLHttpRequest comes from the JavaScript runtime, which may be hosted by a browser, although it isn't required to be (see Rhino). That's why people say browsers don't support PUT and DELETE—because it's actually JavaScript that is supporting them. A: YES, PUT, DELETE, HEAD etc HTTP methods are available in all modern browsers. To be compliant with XMLHttpRequest Level 2, browsers must support these methods. To check which browsers support XMLHttpRequest Level 2 I recommend CanIUse: http://caniuse.com/#feat=xhr2 Only Opera Mini is lacking support atm (July '15), but Opera Mini lacks support for everything. :)
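The `_method` workaround above also needs a server half: the form arrives as a real POST, and the server rewrites the effective method from the hidden field before routing. Here is a minimal, framework-neutral sketch in JavaScript — the function name and field shapes are illustrative, not Rails' (or any framework's) actual API.

```javascript
// Sketch: compute the effective HTTP method for a request, honoring the
// hidden _method override only on POST (as Rails-style frameworks do).
function effectiveMethod(requestMethod, formFields) {
  const overridable = ["PUT", "DELETE", "PATCH"];
  if (requestMethod === "POST" && formFields._method) {
    const wanted = formFields._method.toUpperCase();
    if (overridable.includes(wanted)) return wanted;
  }
  return requestMethod;
}

console.log(effectiveMethod("POST", { _method: "DELETE" })); // "DELETE"
console.log(effectiveMethod("POST", {}));                    // "POST"
console.log(effectiveMethod("GET", { _method: "PUT" }));     // "GET" (ignored)
```

Restricting the override to POST matters: letting a GET (a link, a prefetch) masquerade as DELETE would reintroduce exactly the safe-method guarantees the spec is protecting.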
{ "language": "en", "url": "https://stackoverflow.com/questions/165779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "633" }
Q: Opinion: in HTML, Possible Duplicate IDs or Non-Standard Attributes? It seems pretty common to want to let your javascript know a particular dom node corresponds to a record in the database. So, how do you do it? One way I've seen that's pretty common is to use a class for the type and an id for the id: <div class="thing" id="5"> <script> myThing = select(".thing#5") </script> There's a slight html standards issue with this though -- if you have more than one type of record on the page, you may end up duplicating IDs. But that doesn't do anything bad, does it? An alternative is to use data attributes: <div data-thing-id="5"> <script> myThing = select("[data-thing-id=5]") </script> This gets around the duplicate IDs problem, but it does mean you have to deal with attributes instead of IDs, which is sometimes more difficult. What do you guys think? A: IDs should be unique according to the standards and whilst most browsers don't barf when handed duplicate IDs it would not be a good idea to rely on that always being the case. Making the ID unique by adding a type name to the ID would work but you need to ask why you need it. Giving an element an id is very useful when the element needs to be found, getElementById is very fast. The reason it's fast is that most browsers will build an index of IDs as they load the DOM. However if you have zillions of IDs that you never actually need to use in something like getElementById then you've incurred a cost that is never paid back. I think you may find most of the time you want the object ID in an event fired by the element or one of its children. In which case I would use an additional attribute on the element and not the ID attribute. I would leave the class attribute to do what it's meant to do and not overload it with identification duties. A: Considering the fact that you can have multiple classes per element, couldn't you create a unique identifier as an additional class per element?
That way, there could be more than one element with the same "id" without HTML ID attribute collisions. <div class="thing myapp-thing-5" /> <div class="thing myapp-thing-668" /> <div class="thing myapp-thing-5" /> It would be easy to then find these nodes, and find their corresponding DB record with a little string manipulation. A: Note that an ID cannot start with a digit, so: <div class="thing" id="5"> is invalid HTML. See What are valid values for the id attribute in HTML? In your case, I would use ID's like thing5 or thing.5. A: <div class="thing" id="myapp-thing-5"/> // Get thing on the page for a particular ID var myThing = select("#myapp-thing-5"); // Get ID for the first thing on the page var thing_id = /myapp-thing-(\d+)/.exec($('.thing')[0].id)[1]; A: You'll be giving up some control of the DOM True, nothing will explode, but it's bad practice. If you put duplicate ids on the page you'll basically lose the ability to be sure about what you're getting when you try to access an element by its id. var whoKnows = document.getElementById('duplicateId'); The behavior is actually different, depending on the browser. In any case, you can use classNames for duplicate values, and you'll be avoiding the problem altogether. The browser will try to overlook faults in your markup, but things become messy and more difficult. The best thing to do is keep your markup valid. You can describe both the type of the element and its unique database id in a className. You could even use multiple classNames to differentiate between them. There are a lot of valid possibilities: <div class="friend04"/> <div class="featuredFriend04" /> or <div class="friend friend04" /> <div class="featuredFriend friend04" /> or <div class="friend objectId04" /> <div class="groupMember objectId04" /> or <div class="friend objectId04" /> <div class="friend objectId04" id="featured" /> These are all completely legitimate & valid snippets of XHTML.
Notice how, in the last snippet, I've still got an id working for me, which is nice. Accessing elements by their id is very quick and easy, so you definitely want to be able to leverage it when you can. You'll already spend enough of your time in javascript making sure that you've got the right values and types. Putting duplicate ids on the page will just make things harder for you. If you can find ways to write standards-compliant markup, it has many practical benefits. A: In HTML5, you could do it like this: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title></title> <script> window.addEventListener("DOMContentLoaded", function() { var thing5 = document.evaluate('//*[@data-thing="5"]', document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null); alert(thing5.singleNodeValue.textContent); }, false); </script> </head> <body> <div data-thing="5">test</div> </body> </html> A: If you set non-standard properties, be sure to either set them programmatically (as everything will be legal that way) or go through the trouble of revising the DTD !-) But I would use an ID with a meaningful word prepended to the DB-id and then use .getElementById, as all the necessary information is at hand ... A: Non-standard attributes are fine, if you're using XHTML and take the time to extend the DTD you're using to cover the new attributes. Personally, I'd just use a more unique id, like some of the other people have suggested. A: I don't like John Millikin's solution. It's gonna be performance-intensive on large datasets. An optimization on his code could be replacing the regular expression with a call to substring() since the first few characters of the id-property are constant. I'd go with matching class and then a specific id though. A: Keeping track of your data via the DOM seems shaky to me; remember, those IDs are global variables, so if there's any chance somebody else's script can find its way onto your page, it's vulnerable.
For best results, load your data into an object within an anonymous function and write the table (or the big nested list of DIVs) afterwards.
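As a rough sketch of the "little string manipulation" mentioned in the answers above, this pulls the record id back out of a className that follows the `myapp-thing-N` convention used in the examples. It operates on plain strings, so it needs no DOM; the `myapp-thing-` prefix is just the thread's own illustrative convention, not a standard of any kind.

```javascript
// Sketch: recover a database record id from a className like
// "thing myapp-thing-5", per the class-encoding idea discussed above.
function recordIdFromClassName(className) {
  // Match the token at a word boundary so "myapp-thing-5" isn't confused
  // with a longer class that merely contains it.
  const match = /(?:^|\s)myapp-thing-(\d+)(?:\s|$)/.exec(className);
  return match ? Number(match[1]) : null;
}

console.log(recordIdFromClassName("thing myapp-thing-5"));   // 5
console.log(recordIdFromClassName("thing myapp-thing-668")); // 668
console.log(recordIdFromClassName("thing"));                 // null
```

On modern pages you would reach for `data-*` attributes and `element.dataset` instead, but the class-token approach above is what was portable at the time of these answers.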
{ "language": "en", "url": "https://stackoverflow.com/questions/165783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What's the nicest way to do observer/observable in objective-c (iphone version) I'm used to coding Java Swing UIs, and in those if you have some properties that change, and you want your UI to update, you would implement the observer/observable pattern. In Java you do this normally by having your class maintain a list of listeners that it notifies of different events. I've played with Objective-C on the Mac, and that has KVC and binding which seems to work very nicely, and requires less code. The iPhone SDK doesn't seem to have this functionality though, so my question is: If I have a class that holds data that changes, what's the best way for me to register a UI component with that class so that it can be notified of changes in the data that it needs to display? A: I also found that you can do: [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(_handleWhateverChange) name:@"whateverChange" object:nil]; To register for change events, and [[NSNotificationCenter defaultCenter] postNotificationName:@"whateverChange" object:nil]; To fire them. I might be a N00b but I just couldn't get the observer for key path thing to work for me. A: There are two built-in ways of doing observation in Cocoa: Key-Value Observing and notifications. In neither system do you need to maintain or notify a collection of observers yourself; the framework will handle that for you. Key-Value Observing (KVO) lets you observe a property of an object — including even a property that represents a collection — and be notified of changes to that property. You just need to send the object -addObserver:forKeyPath:options:context: passing the object you want to receive updates, the key path of the property (relative to the receiver) for which you want to receive updates, and the types of updates you want to receive. (There are similar methods you can use if you want to observe a property representing a collection.) Notifications are older and heavier-weight. 
You register with an NSNotificationCenter — usually the default center — an object and selector pair to be passed a notification when an event occurs. The notification object itself can contain arbitrary data via its userInfo property, and you can choose to observe all notifications of a specific name rather than those that apply to a particular object. Which should you use in any particular case? In general, if you care about changes to a specific property of a specific object, use Key-Value Observing. That's what it's designed for and it's intentionally lightweight. (Among other uses, it is the foundation on which Cocoa Bindings are built.) If you care about a change in state that isn't represented by a property, then notifications are more appropriate. For example, to stay in sync when the user changes the name of a model object, I'd use KVO. To know when an entire object graph was saved, I'd use notifications. A: That's not generally the way that it's done. Take a look at the discussion here, in particular the link to the Apple documentation. If you still want to do it the way you say you do, it's not particularly hard to implement something like bindings "by hand". You'd just create a "binding" object that knows how to subscribe to changes, and connects to a property of a view. To actually answer how it's done - normally, you have a controller object that monitors the state of the model (acting something like an Observer), and updates the view object(s) as necessary.
{ "language": "en", "url": "https://stackoverflow.com/questions/165790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Using ini-files with VB 6.0 I must be getting daft, but I can't seem to find how to read old-fashioned ini files with VB 6.0. All I can seem to find is about reading from and writing to the registry. Can someone push me in the right direction? Mind you, I am not a programmer, just a hobbyist trying to have some harmless fun with his computer, so please don't be too harsh when you point out the bleedin' obvious. A: See the top answer on this thread. Nope, it's no different in VB! :-) A: Use the GetPrivateProfile* functions. Some examples of how to do this with a Declare statement are here: * *codeguru *vbforums
{ "language": "en", "url": "https://stackoverflow.com/questions/165796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the best way for me to implement record locking? I have a question about locking. This doesn't have to be only about record locking, but anyway. Let's say I'm writing a web accessible CMS. I am struggling with some ideas. I could, at the moment when a user opens an article for editing, flag the article as being 'in use'. So far so good. But when do I remove the flag? When the user saves the article? But what if the user doesn't feel like typing anymore and decides to close his browser and go to bed? A time-out mechanism comes to mind, but how long does it take to write an article? 10 minutes too short, 30 minutes too long.. Maybe I am over-complicating this. I'd like to hear your thoughts on this subject. A: Why not use timestamps? Don't actually worry about locking anything, just react to the event where the record (article) has changed. Basically, before you save the article, check if your version (timestamp) is the same as what is on disk. If same, then you still have the latest copy so write it, if not then ... offer to merge, offer to save as new, discard it - it's application specific. A: My vote is for optimistic locking wherever possible. In one place, where I have implemented actual locks, I had an admin page to remove locks. There was also a service running on the server to unlock any locks which did not have a corresponding active session.. A: Use rowversion for mssql 2005 and up, timestamp for mssql 2000 and below. Use the hidden xmin field for postgresql. Let all other users open the record. Along with saving the record, tag who saved it, and with the aid of rowversion, on catch(DbConcurrencyException) re-throw an error which indicates to other users who saved the record before they did, and request them to re-open the record to see the changes made by the user who saved the record first.
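The timestamp-based optimistic approach in the first answer condenses to one comparison. A minimal sketch, assuming (hypothetically) that each article carries an integer version counter — in SQL Server this role would be played by a rowversion column, in PostgreSQL by xmin, as the last answer notes:

```javascript
// Sketch of optimistic concurrency: no lock is held while the user edits;
// the save is accepted only if the stored version hasn't moved since the
// article was opened. The article shape here is illustrative.
function trySave(stored, edited) {
  if (edited.baseVersion !== stored.version) {
    // Someone else saved first — the caller can offer merge / save-as-new.
    return { ok: false, reason: "conflict", currentVersion: stored.version };
  }
  return {
    ok: true,
    saved: { body: edited.body, version: stored.version + 1 },
  };
}

const stored = { body: "old text", version: 3 };
console.log(trySave(stored, { baseVersion: 3, body: "new text" }).ok); // true
console.log(trySave(stored, { baseVersion: 2, body: "stale" }).ok);    // false
```

In a real database the check-and-increment must happen atomically (e.g. `UPDATE ... WHERE id = ? AND version = ?` and inspecting the affected-row count), otherwise two savers can pass the check concurrently.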
{ "language": "en", "url": "https://stackoverflow.com/questions/165800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Simple insecure two-way data "obfuscation"? I'm looking for very simple obfuscation (like encrypt and decrypt but not necessarily secure) functionality for some data. It's not mission critical. I need something to keep honest people honest, but something a little stronger than ROT13 or Base64. I'd prefer something that is already included in the .NET framework 2.0, so I don't have to worry about any external dependencies. I really don't want to have to mess around with public/private keys, etc. I don't know much about encryption, but I do know enough to know that anything I wrote would be less than worthless... In fact, I'd probably screw up the math and make it trivial to crack. A: Using TripleDESCryptoServiceProvider in System.Security.Cryptography: public static class CryptoHelper { private const string Key = "MyHashString"; private static TripleDESCryptoServiceProvider GetCryptoProvider() { var md5 = new MD5CryptoServiceProvider(); var key = md5.ComputeHash(Encoding.UTF8.GetBytes(Key)); return new TripleDESCryptoServiceProvider() { Key = key, Mode = CipherMode.ECB, Padding = PaddingMode.PKCS7 }; } public static string Encrypt(string plainString) { var data = Encoding.UTF8.GetBytes(plainString); var tripleDes = GetCryptoProvider(); var transform = tripleDes.CreateEncryptor(); var resultsByteArray = transform.TransformFinalBlock(data, 0, data.Length); return Convert.ToBase64String(resultsByteArray); } public static string Decrypt(string encryptedString) { var data = Convert.FromBase64String(encryptedString); var tripleDes = GetCryptoProvider(); var transform = tripleDes.CreateDecryptor(); var resultsByteArray = transform.TransformFinalBlock(data, 0, data.Length); return Encoding.UTF8.GetString(resultsByteArray); } } A: [EDIT] Years later, I've come back to say: don't do this! See What's wrong with XOR encryption? for details. A very simple, easy two-way encryption is XOR encryption. * *Come up with a password. Let's have it be mypass.
*Convert the password into binary (according to ASCII). The password becomes 01101101 01111001 01110000 01100001 01110011 01110011. *Take the message you want to encode. Convert that into binary, also. *Look at the length of the message. If the message length is 400 bytes, turn the password into a 400 byte string by repeating it over and over again. It would become 01101101 01111001 01110000 01100001 01110011 01110011 01101101 01111001 01110000 01100001 01110011 01110011 01101101 01111001 01110000 01100001 01110011 01110011... (or mypassmypassmypass...) *XOR the message with the long password. *Send the result. *Another time, XOR the encrypted message with the same password (mypassmypassmypass...). *There's your message! A: If you just want simple encryption (i.e., possible for a determined cracker to break, but locking out most casual users), just pick two passphrases of equal length, say: deoxyribonucleicacid while (x>0) { x-- }; and xor your data with both of them (looping the passphrases if necessary)(a). For example: 1111-2222-3333-4444-5555-6666-7777 deoxyribonucleicaciddeoxyribonucle while (x>0) { x-- };while (x>0) { Someone searching your binary may well think the DNA string is a key, but they're unlikely to think the C code is anything other than uninitialized memory saved with your binary. (a) Keep in mind this is very simple encryption and, by some definitions, may not be considered encryption at all (since the intent of encryption is to prevent unauthorised access rather than just make it more difficult). Although, of course, even the strongest encryption is insecure when someone's standing over the key-holders with a steel pipe. As stated in the first sentence, this is a means to make it difficult enough for the casual attacker that they'll move on. 
It's similar to preventing burglaries on your home - you don't need to make it impregnable, you just need to make it less pregnable than the house next door :-) A: Encryption is easy: as others have pointed out, there are classes in the System.Security.Cryptography namespace that do all the work for you. Use them rather than any home-grown solution. But decryption is easy too. The issue you have is not the encryption algorithm, but protecting access to the key used for decryption. I would use one of the following solutions: * *DPAPI using the ProtectedData class with CurrentUser scope. This is easy as you don't need to worry about a key. Data can only be decrypted by the same user, so no good for sharing data between users or machines. *DPAPI using the ProtectedData class with LocalMachine scope. Good for e.g. protecting configuration data on a single secure server. But anyone who can log into the machine can decrypt it, so no good unless the server is secure. *Any symmetric algorithm. I typically use the static SymmetricAlgorithm.Create() method if I don't care what algorithm is used (in fact it's Rijndael by default). In this case you need to protect your key somehow. E.g. you can obfuscate it in some way and hide it in your code. But be aware that anyone who is smart enough to decompile your code will likely be able to find the key. A: I wanted to post my solution since none of the above solutions are as simple as mine.
Let me know what you think: // This will return an encrypted string based on the unencrypted parameter public static string Encrypt(this string DecryptedValue) { return HttpServerUtility.UrlTokenEncode(MachineKey.Protect(Encoding.UTF8.GetBytes(DecryptedValue.Trim()))); } // This will return an unencrypted string based on the parameter public static string Decrypt(this string EncryptedValue) { return Encoding.UTF8.GetString(MachineKey.Unprotect(HttpServerUtility.UrlTokenDecode(EncryptedValue))); } Optional This assumes that the MachineKey of the server used to encrypt the value is the same as the one used to decrypt the value. If desired, you can specify a static MachineKey in the Web.config so that your application can decrypt/encrypt data regardless of where it is run (e.g. development vs. production server). You can generate a static machine key following these instructions. A: Other answers here work fine, but AES is a more secure and up-to-date encryption algorithm. This is a class that I obtained a few years ago to perform AES encryption that I have modified over time to be more friendly for web applications (e.g. I've built Encrypt/Decrypt methods that work with URL-friendly strings). It also has the methods that work with byte arrays. NOTE: you should use different values in the Key (32 bytes) and Vector (16 bytes) arrays! You wouldn't want someone to figure out your keys by just assuming that you used this code as-is! All you have to do is change some of the numbers (must be <= 255) in the Key and Vector arrays (I left one invalid value in the Vector array to make sure you do this...). You can use https://www.random.org/bytes/ to generate a new set easily: * *generate Key *generate Vector Using it is easy: just instantiate the class and then call (usually) EncryptToString(string StringToEncrypt) and DecryptString(string StringToDecrypt) as methods. It couldn't be any easier (or more secure) once you have this class in place.
using System; using System.Data; using System.Security.Cryptography; using System.IO; public class SimpleAES { // Change these keys private byte[] Key = __Replace_Me__({ 123, 217, 19, 11, 24, 26, 85, 45, 114, 184, 27, 162, 37, 112, 222, 209, 241, 24, 175, 144, 173, 53, 196, 29, 24, 26, 17, 218, 131, 236, 53, 209 }); // a hardcoded IV should not be used for production AES-CBC code // IVs should be unpredictable per ciphertext private byte[] Vector = __Replace_Me__({ 146, 64, 191, 111, 23, 3, 113, 119, 231, 121, 2521, 112, 79, 32, 114, 156 }); private ICryptoTransform EncryptorTransform, DecryptorTransform; private System.Text.UTF8Encoding UTFEncoder; public SimpleAES() { //This is our encryption method RijndaelManaged rm = new RijndaelManaged(); //Create an encryptor and a decryptor using our encryption method, key, and vector. EncryptorTransform = rm.CreateEncryptor(this.Key, this.Vector); DecryptorTransform = rm.CreateDecryptor(this.Key, this.Vector); //Used to translate bytes to text and vice versa UTFEncoder = new System.Text.UTF8Encoding(); } /// -------------- Two Utility Methods (not used but may be useful) ----------- /// Generates an encryption key. static public byte[] GenerateEncryptionKey() { //Generate a Key. RijndaelManaged rm = new RijndaelManaged(); rm.GenerateKey(); return rm.Key; } /// Generates a unique encryption vector static public byte[] GenerateEncryptionVector() { //Generate a Vector RijndaelManaged rm = new RijndaelManaged(); rm.GenerateIV(); return rm.IV; } /// ----------- The commonly used methods ------------------------------ /// Encrypt some text and return a string suitable for passing in a URL. public string EncryptToString(string TextValue) { return ByteArrToString(Encrypt(TextValue)); } /// Encrypt some text and return an encrypted byte array. public byte[] Encrypt(string TextValue) { //Translates our text value into a byte array. 
Byte[] bytes = UTFEncoder.GetBytes(TextValue); //Used to stream the data in and out of the CryptoStream. MemoryStream memoryStream = new MemoryStream(); /* * We will have to write the unencrypted bytes to the stream, * then read the encrypted result back from the stream. */ #region Write the decrypted value to the encryption stream CryptoStream cs = new CryptoStream(memoryStream, EncryptorTransform, CryptoStreamMode.Write); cs.Write(bytes, 0, bytes.Length); cs.FlushFinalBlock(); #endregion #region Read encrypted value back out of the stream memoryStream.Position = 0; byte[] encrypted = new byte[memoryStream.Length]; memoryStream.Read(encrypted, 0, encrypted.Length); #endregion //Clean up. cs.Close(); memoryStream.Close(); return encrypted; } /// The other side: Decryption methods public string DecryptString(string EncryptedString) { return Decrypt(StrToByteArray(EncryptedString)); } /// Decryption when working with byte arrays. public string Decrypt(byte[] EncryptedValue) { #region Write the encrypted value to the decryption stream MemoryStream encryptedStream = new MemoryStream(); CryptoStream decryptStream = new CryptoStream(encryptedStream, DecryptorTransform, CryptoStreamMode.Write); decryptStream.Write(EncryptedValue, 0, EncryptedValue.Length); decryptStream.FlushFinalBlock(); #endregion #region Read the decrypted value from the stream. encryptedStream.Position = 0; Byte[] decryptedBytes = new Byte[encryptedStream.Length]; encryptedStream.Read(decryptedBytes, 0, decryptedBytes.Length); encryptedStream.Close(); #endregion return UTFEncoder.GetString(decryptedBytes); } /// Convert a string to a byte array. NOTE: Normally we'd create a Byte Array from a string using an ASCII encoding (like so). // System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding(); // return encoding.GetBytes(str); // However, this results in character values that cannot be passed in a URL. 
So, instead, I just // lay out all of the byte values in a long string of numbers (three per - must pad numbers less than 100). public byte[] StrToByteArray(string str) { if (str.Length == 0) throw new Exception("Invalid string value in StrToByteArray"); byte val; byte[] byteArr = new byte[str.Length / 3]; int i = 0; int j = 0; do { val = byte.Parse(str.Substring(i, 3)); byteArr[j++] = val; i += 3; } while (i < str.Length); return byteArr; } // Same comment as above. Normally the conversion would use an ASCII encoding in the other direction: // System.Text.ASCIIEncoding enc = new System.Text.ASCIIEncoding(); // return enc.GetString(byteArr); public string ByteArrToString(byte[] byteArr) { byte val; string tempStr = ""; for (int i = 0; i <= byteArr.GetUpperBound(0); i++) { val = byteArr[i]; if (val < (byte)10) tempStr += "00" + val.ToString(); else if (val < (byte)100) tempStr += "0" + val.ToString(); else tempStr += val.ToString(); } return tempStr; } } A: Yes, add the System.Security assembly, import the System.Security.Cryptography namespace. Here's a simple example of a symmetric (DES) algorithm encryption: DESCryptoServiceProvider des = new DESCryptoServiceProvider(); des.GenerateKey(); byte[] key = des.Key; // save this! ICryptoTransform encryptor = des.CreateEncryptor(); // encrypt byte[] enc = encryptor.TransformFinalBlock(new byte[] { 1, 2, 3, 4 }, 0, 4); ICryptoTransform decryptor = des.CreateDecryptor(); // decrypt byte[] originalAgain = decryptor.TransformFinalBlock(enc, 0, enc.Length); Debug.Assert(originalAgain[0] == 1); A: Just thought I'd add that I've improved Mud's SimplerAES by adding a random IV that's passed back inside the encrypted string. This improves the encryption as encrypting the same string will result in a different output each time. 
public class StringEncryption { private readonly Random random; private readonly byte[] key; private readonly RijndaelManaged rm; private readonly UTF8Encoding encoder; public StringEncryption() { this.random = new Random(); this.rm = new RijndaelManaged(); this.encoder = new UTF8Encoding(); this.key = Convert.FromBase64String("Your+Secret+Static+Encryption+Key+Goes+Here="); } public string Encrypt(string unencrypted) { var vector = new byte[16]; this.random.NextBytes(vector); var cryptogram = vector.Concat(this.Encrypt(this.encoder.GetBytes(unencrypted), vector)); return Convert.ToBase64String(cryptogram.ToArray()); } public string Decrypt(string encrypted) { var cryptogram = Convert.FromBase64String(encrypted); if (cryptogram.Length < 17) { throw new ArgumentException("Not a valid encrypted string", "encrypted"); } var vector = cryptogram.Take(16).ToArray(); var buffer = cryptogram.Skip(16).ToArray(); return this.encoder.GetString(this.Decrypt(buffer, vector)); } private byte[] Encrypt(byte[] buffer, byte[] vector) { var encryptor = this.rm.CreateEncryptor(this.key, vector); return this.Transform(buffer, encryptor); } private byte[] Decrypt(byte[] buffer, byte[] vector) { var decryptor = this.rm.CreateDecryptor(this.key, vector); return this.Transform(buffer, decryptor); } private byte[] Transform(byte[] buffer, ICryptoTransform transform) { var stream = new MemoryStream(); using (var cs = new CryptoStream(stream, transform, CryptoStreamMode.Write)) { cs.Write(buffer, 0, buffer.Length); } return stream.ToArray(); } } And bonus unit test [Test] public void EncryptDecrypt() { // Arrange var subject = new StringEncryption(); var originalString = "Testing123!£$"; // Act var encryptedString1 = subject.Encrypt(originalString); var encryptedString2 = subject.Encrypt(originalString); var decryptedString1 = subject.Decrypt(encryptedString1); var decryptedString2 = subject.Decrypt(encryptedString2); // Assert Assert.AreEqual(originalString, decryptedString1, "Decrypted 
string should match original string"); Assert.AreEqual(originalString, decryptedString2, "Decrypted string should match original string"); Assert.AreNotEqual(originalString, encryptedString1, "Encrypted string should not match original string"); Assert.AreNotEqual(encryptedString1, encryptedString2, "String should never be encrypted the same twice"); } A: I cleaned up SimpleAES (above) for my use. Fixed convoluted encrypt/decrypt methods; separated methods for encoding byte buffers, strings, and URL-friendly strings; made use of existing libraries for URL encoding. The code is small, simpler, faster and the output is more concise. For instance, johnsmith@gmail.com produces: SimpleAES: "096114178117140150104121138042115022037019164188092040214235183167012211175176167001017163166152" SimplerAES: "YHKydYyWaHmKKnMWJROkvFwo1uu3pwzTr7CnARGjppg%3d" Code: public class SimplerAES { private static byte[] key = __Replace_Me__({ 123, 217, 19, 11, 24, 26, 85, 45, 114, 184, 27, 162, 37, 112, 222, 209, 241, 24, 175, 144, 173, 53, 196, 29, 24, 26, 17, 218, 131, 236, 53, 209 }); // a hardcoded IV should not be used for production AES-CBC code // IVs should be unpredictable per ciphertext private static byte[] vector = __Replace_Me_({ 146, 64, 191, 111, 23, 3, 113, 119, 231, 121, 221, 112, 79, 32, 114, 156 }); private ICryptoTransform encryptor, decryptor; private UTF8Encoding encoder; public SimplerAES() { RijndaelManaged rm = new RijndaelManaged(); encryptor = rm.CreateEncryptor(key, vector); decryptor = rm.CreateDecryptor(key, vector); encoder = new UTF8Encoding(); } public string Encrypt(string unencrypted) { return Convert.ToBase64String(Encrypt(encoder.GetBytes(unencrypted))); } public string Decrypt(string encrypted) { return encoder.GetString(Decrypt(Convert.FromBase64String(encrypted))); } public byte[] Encrypt(byte[] buffer) { return Transform(buffer, encryptor); } public byte[] Decrypt(byte[] buffer) { return Transform(buffer, decryptor); } protected byte[] 
Transform(byte[] buffer, ICryptoTransform transform) { MemoryStream stream = new MemoryStream(); using (CryptoStream cs = new CryptoStream(stream, transform, CryptoStreamMode.Write)) { cs.Write(buffer, 0, buffer.Length); } return stream.ToArray(); } } A: A variant of Marks (excellent) answer * *Add "using"s *Make the class IDisposable *Remove the URL encoding code to make the example simpler. *Add a simple test fixture to demonstrate usage Hope this helps [TestFixture] public class RijndaelHelperTests { [Test] public void UseCase() { //These two values should not be hard coded in your code. byte[] key = {251, 9, 67, 117, 237, 158, 138, 150, 255, 97, 103, 128, 183, 65, 76, 161, 7, 79, 244, 225, 146, 180, 51, 123, 118, 167, 45, 10, 184, 181, 202, 190}; byte[] vector = {214, 11, 221, 108, 210, 71, 14, 15, 151, 57, 241, 174, 177, 142, 115, 137}; using (var rijndaelHelper = new RijndaelHelper(key, vector)) { var encrypt = rijndaelHelper.Encrypt("StringToEncrypt"); var decrypt = rijndaelHelper.Decrypt(encrypt); Assert.AreEqual("StringToEncrypt", decrypt); } } } public class RijndaelHelper : IDisposable { Rijndael rijndael; UTF8Encoding encoding; public RijndaelHelper(byte[] key, byte[] vector) { encoding = new UTF8Encoding(); rijndael = Rijndael.Create(); rijndael.Key = key; rijndael.IV = vector; } public byte[] Encrypt(string valueToEncrypt) { var bytes = encoding.GetBytes(valueToEncrypt); using (var encryptor = rijndael.CreateEncryptor()) using (var stream = new MemoryStream()) using (var crypto = new CryptoStream(stream, encryptor, CryptoStreamMode.Write)) { crypto.Write(bytes, 0, bytes.Length); crypto.FlushFinalBlock(); stream.Position = 0; var encrypted = new byte[stream.Length]; stream.Read(encrypted, 0, encrypted.Length); return encrypted; } } public string Decrypt(byte[] encryptedValue) { using (var decryptor = rijndael.CreateDecryptor()) using (var stream = new MemoryStream()) using (var crypto = new CryptoStream(stream, decryptor, CryptoStreamMode.Write)) { 
crypto.Write(encryptedValue, 0, encryptedValue.Length); crypto.FlushFinalBlock(); stream.Position = 0; var decryptedBytes = new Byte[stream.Length]; stream.Read(decryptedBytes, 0, decryptedBytes.Length); return encoding.GetString(decryptedBytes); } } public void Dispose() { if (rijndael != null) { rijndael.Dispose(); } } } A: I combined what I found the best from several answers and comments. * *Random initialization vector prepended to crypto text (@jbtule) *Use TransformFinalBlock() instead of MemoryStream (@RenniePet) *No pre-filled keys to avoid anyone copy & pasting a disaster *Proper dispose and using patterns Code: /// <summary> /// Simple encryption/decryption using a random initialization vector /// and prepending it to the crypto text. /// </summary> /// <remarks>Based on multiple answers in http://stackoverflow.com/questions/165808/simple-two-way-encryption-for-c-sharp </remarks> public class SimpleAes : IDisposable { /// <summary> /// Initialization vector length in bytes. /// </summary> private const int IvBytes = 16; /// <summary> /// Must be exactly 16, 24 or 32 bytes long. 
/// </summary> private static readonly byte[] Key = Convert.FromBase64String("FILL ME WITH 24 (2 pad chars), 32 OR 44 (1 pad char) RANDOM CHARS"); // Base64 has a blowup of four-thirds (33%) private readonly UTF8Encoding _encoder; private readonly ICryptoTransform _encryptor; private readonly RijndaelManaged _rijndael; public SimpleAes() { _rijndael = new RijndaelManaged {Key = Key}; _rijndael.GenerateIV(); _encryptor = _rijndael.CreateEncryptor(); _encoder = new UTF8Encoding(); } public string Decrypt(string encrypted) { return _encoder.GetString(Decrypt(Convert.FromBase64String(encrypted))); } public void Dispose() { _rijndael.Dispose(); _encryptor.Dispose(); } public string Encrypt(string unencrypted) { return Convert.ToBase64String(Encrypt(_encoder.GetBytes(unencrypted))); } private byte[] Decrypt(byte[] buffer) { // IV is prepended to cryptotext byte[] iv = buffer.Take(IvBytes).ToArray(); using (ICryptoTransform decryptor = _rijndael.CreateDecryptor(_rijndael.Key, iv)) { return decryptor.TransformFinalBlock(buffer, IvBytes, buffer.Length - IvBytes); } } private byte[] Encrypt(byte[] buffer) { // Prepend cryptotext with IV byte [] inputBuffer = _encryptor.TransformFinalBlock(buffer, 0, buffer.Length); return _rijndael.IV.Concat(inputBuffer).ToArray(); } } Update 2015-07-18: Fixed mistake in private Encrypt() method by comments of @bpsilver and @Evereq. IV was accidentally encrypted, is now prepended in clear text as expected by Decrypt(). A: The namespace System.Security.Cryptography contains the TripleDESCryptoServiceProvider and RijndaelManaged classes Don't forget to add a reference to the System.Security assembly. 
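The prepend-the-IV convention used by the last few answers is cipher-agnostic, so it can be illustrated without any crypto at all. The sketch below (Python, names hypothetical) shows only the byte framing that both sides must agree on; the actual AES transform is deliberately left out:

```python
import os

IV_BYTES = 16  # AES block size; the IV always occupies the first 16 bytes

def frame(iv, ciphertext):
    # Prepend the IV so the decrypting side can recover it from the blob itself.
    return iv + ciphertext

def unframe(blob):
    if len(blob) < IV_BYTES + 1:
        raise ValueError("Not a valid encrypted blob")
    return blob[:IV_BYTES], blob[IV_BYTES:]

iv = os.urandom(IV_BYTES)          # fresh random IV per message
payload = b"<AES ciphertext would go here>"
blob = frame(iv, payload)
recovered_iv, recovered_payload = unframe(blob)
assert (recovered_iv, recovered_payload) == (iv, payload)
```

This is exactly what the Take(16)/Skip(16) calls in the C# versions above are doing.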
A: I changed this: public string ByteArrToString(byte[] byteArr) { byte val; string tempStr = ""; for (int i = 0; i <= byteArr.GetUpperBound(0); i++) { val = byteArr[i]; if (val < (byte)10) tempStr += "00" + val.ToString(); else if (val < (byte)100) tempStr += "0" + val.ToString(); else tempStr += val.ToString(); } return tempStr; } to this: public string ByteArrToString(byte[] byteArr) { string temp = ""; foreach (byte b in byteArr) temp += b.ToString().PadLeft(3, '0'); return temp; } A: Using the builtin .Net Cryptography library, this example shows how to use the Advanced Encryption Standard (AES). using System; using System.IO; using System.Security.Cryptography; namespace Aes_Example { class AesExample { public static void Main() { try { string original = "Here is some data to encrypt!"; // Create a new instance of the Aes // class. This generates a new key and initialization // vector (IV). using (Aes myAes = Aes.Create()) { // Encrypt the string to an array of bytes. byte[] encrypted = EncryptStringToBytes_Aes(original, myAes.Key, myAes.IV); // Decrypt the bytes to a string. string roundtrip = DecryptStringFromBytes_Aes(encrypted, myAes.Key, myAes.IV); //Display the original data and the decrypted data. Console.WriteLine("Original: {0}", original); Console.WriteLine("Round Trip: {0}", roundtrip); } } catch (Exception e) { Console.WriteLine("Error: {0}", e.Message); } } static byte[] EncryptStringToBytes_Aes(string plainText, byte[] Key,byte[] IV) { // Check arguments. if (plainText == null || plainText.Length <= 0) throw new ArgumentNullException("plainText"); if (Key == null || Key.Length <= 0) throw new ArgumentNullException("Key"); if (IV == null || IV.Length <= 0) throw new ArgumentNullException("Key"); byte[] encrypted; // Create an Aes object // with the specified key and IV. using (Aes aesAlg = Aes.Create()) { aesAlg.Key = Key; aesAlg.IV = IV; // Create a decrytor to perform the stream transform. 
ICryptoTransform encryptor = aesAlg.CreateEncryptor(aesAlg.Key, aesAlg.IV); // Create the streams used for encryption. using (MemoryStream msEncrypt = new MemoryStream()) { using (CryptoStream csEncrypt = new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write)) { using (StreamWriter swEncrypt = new StreamWriter(csEncrypt)) { //Write all data to the stream. swEncrypt.Write(plainText); } encrypted = msEncrypt.ToArray(); } } } // Return the encrypted bytes from the memory stream. return encrypted; } static string DecryptStringFromBytes_Aes(byte[] cipherText, byte[] Key, byte[] IV) { // Check arguments. if (cipherText == null || cipherText.Length <= 0) throw new ArgumentNullException("cipherText"); if (Key == null || Key.Length <= 0) throw new ArgumentNullException("Key"); if (IV == null || IV.Length <= 0) throw new ArgumentNullException("Key"); // Declare the string used to hold // the decrypted text. string plaintext = null; // Create an Aes object // with the specified key and IV. using (Aes aesAlg = Aes.Create()) { aesAlg.Key = Key; aesAlg.IV = IV; // Create a decrytor to perform the stream transform. ICryptoTransform decryptor = aesAlg.CreateDecryptor(aesAlg.Key, aesAlg.IV); // Create the streams used for decryption. using (MemoryStream msDecrypt = new MemoryStream(cipherText)) { using (CryptoStream csDecrypt = new CryptoStream(msDecrypt, decryptor, CryptoStreamMode.Read)) { using (StreamReader srDecrypt = new StreamReader(csDecrypt)) { // Read the decrypted bytes from the decrypting stream // and place them in a string. plaintext = srDecrypt.ReadToEnd(); } } } } return plaintext; } } } A: I know you said you don't care about how secure it is, but if you chose DES you might as well take AES it is the more up-to-date encryption method. A: I've been using the accepted answer by Mark Brittingham and its has helped me a lot. Recently I had to send encrypted text to a different organization and that's where some issues came up. 
The OP does not require these options but since this is a popular question I'm posting my modification (Encrypt and Decrypt functions borrowed from here): * *Different IV for every message - Concatenates IV bytes to the cipher bytes before obtaining the hex. Of course this is a convention that needs to be conveyed to the parties receiving the cipher text. *Allows two constructors - one for default RijndaelManaged values, and one where property values can be specified (based on mutual agreement between encrypting and decrypting parties) Here is the class (test sample at the end): /// <summary> /// Based on https://msdn.microsoft.com/en-us/library/system.security.cryptography.rijndaelmanaged(v=vs.110).aspx /// Uses UTF8 Encoding /// http://security.stackexchange.com/a/90850 /// </summary> public class AnotherAES : IDisposable { private RijndaelManaged rijn; /// <summary> /// Initialize algo with key, block size, key size, padding mode and cipher mode to be known. /// </summary> /// <param name="key">ASCII key to be used for encryption or decryption</param> /// <param name="blockSize">block size to use for AES algorithm. 128, 192 or 256 bits</param> /// <param name="keySize">key length to use for AES algorithm. 
128, 192, or 256 bits</param> /// <param name="paddingMode"></param> /// <param name="cipherMode"></param> public AnotherAES(string key, int blockSize, int keySize, PaddingMode paddingMode, CipherMode cipherMode) { rijn = new RijndaelManaged(); rijn.Key = Encoding.UTF8.GetBytes(key); rijn.BlockSize = blockSize; rijn.KeySize = keySize; rijn.Padding = paddingMode; rijn.Mode = cipherMode; } /// <summary> /// Initialize algo just with key /// Defaults for RijndaelManaged class: /// Block Size: 256 bits (32 bytes) /// Key Size: 128 bits (16 bytes) /// Padding Mode: PKCS7 /// Cipher Mode: CBC /// </summary> /// <param name="key"></param> public AnotherAES(string key) { rijn = new RijndaelManaged(); byte[] keyArray = Encoding.UTF8.GetBytes(key); rijn.Key = keyArray; } /// <summary> /// Based on https://msdn.microsoft.com/en-us/library/system.security.cryptography.rijndaelmanaged(v=vs.110).aspx /// Encrypt a string using RijndaelManaged encryptor. /// </summary> /// <param name="plainText">string to be encrypted</param> /// <param name="IV">initialization vector to be used by crypto algorithm</param> /// <returns></returns> public byte[] Encrypt(string plainText, byte[] IV) { if (rijn == null) throw new ArgumentNullException("Provider not initialized"); // Check arguments. if (plainText == null || plainText.Length <= 0) throw new ArgumentNullException("plainText cannot be null or empty"); if (IV == null || IV.Length <= 0) throw new ArgumentNullException("IV cannot be null or empty"); byte[] encrypted; // Create a decrytor to perform the stream transform. using (ICryptoTransform encryptor = rijn.CreateEncryptor(rijn.Key, IV)) { // Create the streams used for encryption. using (MemoryStream msEncrypt = new MemoryStream()) { using (CryptoStream csEncrypt = new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write)) { using (StreamWriter swEncrypt = new StreamWriter(csEncrypt)) { //Write all data to the stream. 
swEncrypt.Write(plainText); } encrypted = msEncrypt.ToArray(); } } } // Return the encrypted bytes from the memory stream. return encrypted; }//end EncryptStringToBytes /// <summary> /// Based on https://msdn.microsoft.com/en-us/library/system.security.cryptography.rijndaelmanaged(v=vs.110).aspx /// </summary> /// <param name="cipherText">bytes to be decrypted back to plaintext</param> /// <param name="IV">initialization vector used to encrypt the bytes</param> /// <returns></returns> public string Decrypt(byte[] cipherText, byte[] IV) { if (rijn == null) throw new ArgumentNullException("Provider not initialized"); // Check arguments. if (cipherText == null || cipherText.Length <= 0) throw new ArgumentNullException("cipherText cannot be null or empty"); if (IV == null || IV.Length <= 0) throw new ArgumentNullException("IV cannot be null or empty"); // Declare the string used to hold the decrypted text. string plaintext = null; // Create a decrytor to perform the stream transform. using (ICryptoTransform decryptor = rijn.CreateDecryptor(rijn.Key, IV)) { // Create the streams used for decryption. using (MemoryStream msDecrypt = new MemoryStream(cipherText)) { using (CryptoStream csDecrypt = new CryptoStream(msDecrypt, decryptor, CryptoStreamMode.Read)) { using (StreamReader srDecrypt = new StreamReader(csDecrypt)) { // Read the decrypted bytes from the decrypting stream and place them in a string. plaintext = srDecrypt.ReadToEnd(); } } } } return plaintext; }//end DecryptStringFromBytes /// <summary> /// Generates a unique encryption vector using RijndaelManaged.GenerateIV() method /// </summary> /// <returns></returns> public byte[] GenerateEncryptionVector() { if (rijn == null) throw new ArgumentNullException("Provider not initialized"); //Generate a Vector rijn.GenerateIV(); return rijn.IV; }//end GenerateEncryptionVector /// <summary> /// Based on https://stackoverflow.com/a/1344255 /// Generate a unique string given number of bytes required. 
/// This string can be used as IV. IV byte size should be equal to cipher-block byte size. /// Allows seeing IV in plaintext so it can be passed along a url or some message. /// </summary> /// <param name="numBytes"></param> /// <returns></returns> public static string GetUniqueString(int numBytes) { char[] chars = new char[62]; chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890".ToCharArray(); byte[] data = new byte[1]; using (RNGCryptoServiceProvider crypto = new RNGCryptoServiceProvider()) { data = new byte[numBytes]; crypto.GetBytes(data); } StringBuilder result = new StringBuilder(numBytes); foreach (byte b in data) { result.Append(chars[b % (chars.Length)]); } return result.ToString(); }//end GetUniqueKey() /// <summary> /// Converts a string to byte array. Useful when converting back hex string which was originally formed from bytes. /// </summary> /// <param name="hex"></param> /// <returns></returns> public static byte[] StringToByteArray(String hex) { int NumberChars = hex.Length; byte[] bytes = new byte[NumberChars / 2]; for (int i = 0; i < NumberChars; i += 2) bytes[i / 2] = Convert.ToByte(hex.Substring(i, 2), 16); return bytes; }//end StringToByteArray /// <summary> /// Dispose RijndaelManaged object initialized in the constructor /// </summary> public void Dispose() { if (rijn != null) rijn.Dispose(); }//end Dispose() }//end class and.. 
Here is the test sample: class Program { string key; static void Main(string[] args) { Program p = new Program(); //get 16 byte key (just demo - typically you will have a predetermined key) p.key = AnotherAES.GetUniqueString(16); string plainText = "Hello World!"; //encrypt string hex = p.Encrypt(plainText); //decrypt string roundTrip = p.Decrypt(hex); Console.WriteLine("Round Trip: {0}", roundTrip); } string Encrypt(string plainText) { Console.WriteLine("\nSending (encrypt side)..."); Console.WriteLine("Plain Text: {0}", plainText); Console.WriteLine("Key: {0}", key); string hex = string.Empty; string ivString = AnotherAES.GetUniqueString(16); Console.WriteLine("IV: {0}", ivString); using (AnotherAES aes = new AnotherAES(key)) { //encrypting side byte[] IV = Encoding.UTF8.GetBytes(ivString); //get encrypted bytes (IV bytes prepended to cipher bytes) byte[] encryptedBytes = aes.Encrypt(plainText, IV); byte[] encryptedBytesWithIV = IV.Concat(encryptedBytes).ToArray(); //get hex string to send with url //this hex has both IV and ciphertext hex = BitConverter.ToString(encryptedBytesWithIV).Replace("-", ""); Console.WriteLine("sending hex: {0}", hex); } return hex; } string Decrypt(string hex) { Console.WriteLine("\nReceiving (decrypt side)..."); Console.WriteLine("received hex: {0}", hex); string roundTrip = string.Empty; Console.WriteLine("Key " + key); using (AnotherAES aes = new AnotherAES(key)) { //get bytes from url byte[] encryptedBytesWithIV = AnotherAES.StringToByteArray(hex); byte[] IV = encryptedBytesWithIV.Take(16).ToArray(); Console.WriteLine("IV: {0}", System.Text.Encoding.Default.GetString(IV)); byte[] cipher = encryptedBytesWithIV.Skip(16).ToArray(); roundTrip = aes.Decrypt(cipher, IV); } return roundTrip; } } A: I think this is the worlds simplest one ! 
string encrypted = "Text".Aggregate("", (c, a) => c + (char)(a + 1)); Test Console.WriteLine(("Hello").Aggregate("", (c, a) => c + (char)(a + 1))); //Output is Ifmmp Console.WriteLine(("Ifmmp").Aggregate("", (c, a) => c + (char)(a - 1))); //Output is Hello
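For comparison, the same character-shift trick is a one-liner in most languages. This Python sketch does the identical shift; like the C# version, it is a Caesar-style shift, i.e. trivially reversible obfuscation rather than real encryption:

```python
def shift(s, k):
    # Shift every character's code point by k; shift(result, -k) undoes it.
    return "".join(chr(ord(c) + k) for c in s)

print(shift("Hello", 1))   # Ifmmp
print(shift("Ifmmp", -1))  # Hello
```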
{ "language": "en", "url": "https://stackoverflow.com/questions/165808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "441" }
Q: Is there a way to count the number of IL instructions executed? I want to do some benchmarking of a C# process, but I don't want to use time as my vector - I want to count the number of IL instructions that get executed in a particular method call. Is this possible? Edit I don't mean static analysis of a method body - I'm referring to the actual number of instructions that are executed - so if, for example, the method body includes a loop, the count would be increased by however many instructions make up the loop * the number of times the loop is iterated. A: I don't think it's possible to do what you want. This is because the IL is only used during JIT (Just-In-Time) compilation. By the time the method is running, the IL has been translated into native machine code. So, while it might be possible to count the number of IL instructions in a given method/type/assembly statically, there's no concept of this at runtime. You haven't stated your intention in knowing the number of IL instructions that would be executed. Given that there's only a loose correlation between the IL count of a method body and the actual number of machine code instructions, I fail to see what knowing this number would achieve (other than satisfying your curiosity). A: Well, it won't be easy. I think you could instrument your assembly post-compile with performance counter code that executes after blocks of IL. For example, if you had a section of a method that loaded an int onto the stack then executed a static method using that int under optimized code, you could record a count of 2 for the int load and call. Even using existing IL/managed assembly reading/writing projects, this would be a pretty daunting task to pull off. Of course, some instructions that your counter recorded might get optimized away during just-in-time compiling to x86/ia64/x64, but that is a risk you'd have to take to try to profile based on an abstract language like IL.
A: You could use ICorDebug, which has a managed interface. Set a breakpoint at the beginning of the method and programmatically step through the code till it leaves the method. However, I am not sure how useful the metric will be; people tend to use time for this kind of stuff, and some IL instructions are more expensive than others. A: I use the Code Metrics add-in for Reflector. The CodeMetrics add-in analyses and computes several code quality metrics on your assemblies. This add-in uses Reflector to compute classic metrics such as cyclomatic complexity or more straightforward ones such as the number of local variables in a method. All results can be saved to a file. Install the plugin. Select an assembly and load method metrics. It will show you a grid with CodeSize, CyclomaticComplexity, # of Instructions, etc. A: I know you don't want the static count. However, the static IL count per arc, combined with the number of times each arc was executed, gives you the dynamic IL count. You'd need to instrument each arc for this, which requires inserting a counter. (An arc is a sequence of instructions which you can't jump into or out of. If you execute the first instruction, you'll always execute the last, etc.)
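The arc-counting idea in the last answer reduces to simple arithmetic once each arc carries a counter: the dynamic IL count is the sum over all arcs of the arc's static instruction count times the number of times it ran. A sketch with made-up numbers:

```python
# Each arc maps to (static IL instruction count, execution count from the
# inserted counters). The numbers here are purely illustrative.
arcs = {
    "method_entry": (5, 1),
    "loop_body":    (12, 1000),  # 12 IL instructions, iterated 1000 times
    "loop_exit":    (3, 1),
}

dynamic_il_count = sum(static * executed for static, executed in arcs.values())
print(dynamic_il_count)  # 12008
```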
{ "language": "en", "url": "https://stackoverflow.com/questions/165809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Design of an performance assertion checking system What are opinions on the design of a "performance assertion checking" system? The idea is that a developer makes some assertions about his/her code and use these to test the evolution of the performance of the code. What is the experience with such a system? My current block is "What's a better way to translate these assertions, written in a specified language (that are to be checked against specified logs or runtime instrumentation) into, say, CLR, or assembly or bytecode that could be executed?" Currently I have written a parser that parses the specification and holds it in a data structure. A: Do we embed performance checks in our application? No. The reason is that the performance checks themselves take time and our application is very sensitive to performance. Instead, we make our performance checks a test. And for that we use NUnit. For our nightly builds, we run the test, we generate a log with detailed timing data as well as a pass/fail indication given our requirements. Since we keep our logs around for some time -- forever for beta and production releases -- we can track performance over time as well. A: Similarly to Kevin, I put performance logs in my automated regression testing, so that I am effectively regression testing performance as well as functionality. We use TestComplete for the automated regression and it does alot of this automatically. The principal reason for adding it manually is to compare results of this run versus last run at each checkpoint. This works something like StartTest InitialiseCounter ' ' Do some testing ' ' CheckPoint GetElapsedTime Compare ElapsedTime with stored elapsed time from last run If difference is outside tolerence log an error (Excuse the dodgy highlighting on my pseudocode) A: Many languages nowadays have assert statements. Can they be leveraged to validate your generated assertion? They're easy to write and easy to find. 
The issue is that an assertion failure means your program stops. If you want to provide a warning or a log entry that an assertion has failed at run-time, you could try an if-statement. For this kind of code generation, people often use simple template tools to generate the appropriate source that can be inserted into the application. You could look at Java's Velocity or Python's Mako to generate source for your assertion condition.
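The timing-check idea in these answers can be sketched in a few lines; this is Python rather than NUnit, and `assert_max_duration`, the workload, and the 5-second budget are all hypothetical stand-ins for whatever an actual performance specification would define:

```python
import time

def assert_max_duration(func, max_seconds, *args, **kwargs):
    """Run func and raise AssertionError if it exceeds the time budget."""
    start = time.perf_counter()
    result = func(*args, **kwargs)
    elapsed = time.perf_counter() - start
    assert elapsed <= max_seconds, (
        f"{func.__name__} took {elapsed:.3f}s, budget was {max_seconds:.3f}s"
    )
    return result, elapsed

# Example: a trivial workload that should easily meet a generous budget.
def build_list(n):
    return [i * i for i in range(n)]

result, elapsed = assert_max_duration(build_list, 5.0, 10_000)
```

A test runner treats the AssertionError like any other test failure, which matches the "make performance checks a test" approach rather than embedding them in the application.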
{ "language": "en", "url": "https://stackoverflow.com/questions/165811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Code Coverage for Mono? Mono creates its own debug targets, called .mdb files, when you use the mcs compiler. Is there a way of using NCover or another code coverage tool with Mono? A command-line tool would be better, so I can add it to our continuous integration server. A: Have you looked into monocov?
{ "language": "en", "url": "https://stackoverflow.com/questions/165814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Transactional queueing/dequeueing I need to queue events and tasks for external systems in a reliable/transactional way. Using things like MSMQ or ActiveMQ looks very seductive, but the transactional part becomes complicated (MSDTC, etc). We could use the database (SQL Server 2005+, Oracle 9+) and achieve easier transactional support, but the queuing part becomes uglier. Neither route seems all that great, and both are filled with nasty gotchas and edge cases. Can someone offer some practical guidance in this matter? Think: E/C/A, or a scheduled task engine that wakes up every so often and sees if there are any scheduled tasks that need running at this time (i.e. next-run-date has passed, but expiration-date has not yet been reached). A: Our system has 60 computers, each running 12 tasks (threads) which need to "get next job". All in all, it comes to 50K "jobs" per day. Do the math on how many transactions per minute that is, and realize that task time is variable, so it is possible to get multiple "pop" events at the exact same time. We had our first version using MSMQ. Conclusion: stay away. While it did just fine with the load and synchronization issues, it had two problems: one annoying and one a deal breaker. Annoying: as enterprise software, MSMQ has security needs that just make it one more thing to set up and fight with the customer's network admin over. Deal breaker: then came the time we wanted to take the next job, but not with a simple pop -- something like "get next BLUE job" or "get next YELLOW job". Can't do it! We went to plan B: implemented our own queue with a single SQL 2005 table. Could not be happier. I stress-tested it with 200K messages per day, and it worked. We can make the "next" logic as complicated as we want. The catch: you need to be very careful with the SQL that takes the next item, since you want it to be fast and non-locking. There are two very important SQL hints we used, based on some research.
The magic goes something like this:

SELECT TOP 1 @Id = callid
FROM callqtbl WITH (READPAST, XLOCK)
WHERE 1=1
ORDER BY xx, yy

A: I've seen MSMQ used transactionally and it didn't seem particularly complicated - a TransactionScope wrapped the enqueue or dequeue calls along with database access, and all was well as long as the queue was defined as transactional when it was created. I don't think this is true with ActiveMQ, which is a message broker, but MSMQ is installed locally on each endpoint machine, so getting an item transactionally into the queue doesn't require a fancy distributed transaction. You are probably already aware of this, but on .NET there are a few lightweight libraries that provide some nice abstractions over MSMQ (and theoretically other transports as well): nServiceBus : www.nservicebus.com Mass Transit : http://code.google.com/p/masstransit/ Also, Oren Eini has an interesting, if experimental, file-system-based transactional queue. The benefit of this library is that, unlike MSMQ, it can be deployed as a library and does not require the maintenance headache of deploying MSMQ. You can read about that here: http://ayende.com/Blog/archive/2008/08/01/Rhino.Queues.Storage.Disk.aspx Also, SQL Server 2005 does handle queuing fairly elegantly, using SQL Server Service Broker, but you'll need SQL Server installed at each endpoint, and I don't know if SSB crosses the firewall. Finally, if you don't get the answer you're looking for here, I highly recommend the nServiceBus discussion forum. Udi Dahan answers these kinds of questions along with his small band of message-oriented followers, and it is the best resource I have found so far to get my queue-oriented questions answered quickly and competently. That forum is here: http://tech.groups.yahoo.com/group/nservicebus/ A: Quartz.Net is an open source job scheduling system. A: This is what MSMQ is designed for - queuing with transactions.
If that doesn't work for you, check out the "Service Broker" feature of SQL Server - it's the "queue in a SQL table" that 'csmba' describes in his answer, but it is an integrated SQL Server component, nicely packaged and exposed for your use. A: Is WebSphere MQ (MQ Series) an option? It supports transactional messaging. A: You can look at the Oracle feature named Advanced Queuing.
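The claim-next-job pattern 'csmba' describes can be sketched portably. The sketch below uses SQLite, which has no READPAST/XLOCK hints, so a status column updated inside a single transaction stands in for them; the table and column names are made up for illustration:

```python
import sqlite3

def make_queue(conn):
    conn.execute("""CREATE TABLE IF NOT EXISTS jobs (
        id INTEGER PRIMARY KEY,
        color TEXT,
        status TEXT DEFAULT 'ready')""")

def pop_next(conn, color=None):
    """Claim the next ready job, optionally filtered (e.g. 'next BLUE job').

    The SELECT and UPDATE run in one transaction, so two workers cannot
    claim the same row; this mimics what the SQL Server hints achieve."""
    with conn:  # one transaction, committed on exit
        row = conn.execute(
            "SELECT id FROM jobs WHERE status = 'ready' "
            "AND (? IS NULL OR color = ?) ORDER BY id LIMIT 1",
            (color, color)).fetchone()
        if row is None:
            return None
        conn.execute("UPDATE jobs SET status = 'taken' WHERE id = ?", (row[0],))
        return row[0]

conn = sqlite3.connect(":memory:")
make_queue(conn)
conn.executemany("INSERT INTO jobs (color) VALUES (?)",
                 [("BLUE",), ("YELLOW",), ("BLUE",)])
conn.commit()
first_blue = pop_next(conn, "BLUE")  # claims job 1
next_any = pop_next(conn)            # claims job 2 (the YELLOW one)
```

The point of the sketch is the shape of the "next" query: making it richer ("get next BLUE job") is just a WHERE clause, which is exactly what MSMQ could not do.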
{ "language": "en", "url": "https://stackoverflow.com/questions/165828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Daily Build vs. Zero Defect How do you go about doing a daily build and striving for a zero-defect environment? Does it mean I never get to go home until I've killed all the bugs in my new code? Or does it mean I just don't check my code back in until I've fully tested it, which leaves the code effectively branched for a much longer time? I'm working with a handful of programmers for the first time (as opposed to working on my own, or with just one other coder), so I'm wrestling with decisions like this for the first time. Should we adopt a software development process? A: Yes, please adopt a software development process. There are a variety out there, of which I'm sure more than one will fit your team. Even one that isn't a perfect match is much better than no process at all. So how does my company go about having daily builds and striving for zero defects? We run our test suite before we check in our code. The unique problem for us is that a full run of our test suite takes over 72 hours, so we run a limited set of unit tests before checking in code. For our nightly builds, we run a set of tests that take about 8 hours to run. Then on the weekends we run the full test suite. Each stage catches more and more problems, but over 90% are caught with the 5-minute developer tests and probably over 98% with the nightly tests. This still alerts us pretty early to problems before they get out to our customers and cost a lot to fix. A: It means make much smaller commits. The more often you commit working revisions, the less often your own working copy is ever broken. Iterative development starts with you. A: Integrate early, integrate often, integrate fast. Rather than a 'daily build', build every time someone commits, and commit often (at least once a day, preferably more than twice). Important: fast feedback is necessary for a low defect count.
If your build takes many minutes or even over an hour, eventually you will grow to hate the build, learn to avoid it, run it as little as possible, etc. Its value drops rapidly to the point of being worthless, and your defect count will start skyrocketing. Invest some time up front in getting your build running fast. If there is slow stuff, find out why it's slow and eliminate that. If you can't, then at least set up cascading builds so the rest of the build goes fast (think <2-5 minutes) and the long-running stuff can follow immediately after and take as long as it wants (though try to keep it under 10m tops). Can't say it enough: a fast feedback cycle on changes is extremely important! A: The trick is to check in as often as possible. Just made some tests pass? Nice, check it in! Fixed a single bug? Check it in! Try to find the smallest increment possible and check it in! This has the added benefit of actually making it possible and convenient to write check-in comments that are actually relevant, so that's a nice bonus. Of course that requires a CI environment that builds more often than nightly; as often as possible really is the best option there. Oh, and remember: if it never breaks, then you're doing it wrong. (I.e. you're being overly conservative; a little bit of breakage now and then only goes to show that you're hopefully pushing it.) A: Simple: never check in code with (known) bugs in it. This doesn't mean you check in once per day. Check in when you have a meaningful change implemented so the other developers can get access to it. We always integrate locally, run our tests against the code, and when all passes, we check in. I check in about 20-30 times a day when working. The build server picks up changes and runs builds against the system. Continuous Integration (CI) is a good thing. :D Continuous Integration - Automate Your Builds Start out with building successfully and keep it that way as much as possible.
It is essential in a team environment. Just remember that builds will break. It's expected that they break every once in a while. It is a sign that you just inadvertently checked in something bad, and you stop what you are doing to make the build green again. A build server that never has broken builds is a warning sign! I also agree with chadmyers' answer: whatever you decide, it needs to be automatic and automated. The best thing about automating tools to do this kind of stuff for you is that you no longer have to think about it or remember to do it. Or, like Chad said, you don't stop doing it. I could make a recommendation for CI tools, but have a look here: What tools do you use for Automated Builds / Automated Deployments? Why? Once you have CI, you can get higher quality if you can inject some humor (and shame) by introducing a broken build token! http://ferventcoder.com/archive/2008/08/20/continuous-integration-enhancement--the-broken-build-token.aspx Use a Good Tool for Automated Builds Most people in .NET land use NAnt or MSBuild scripts to have automated builds that they can later hook up to their CI server. If you are just starting out, my suggestion would be to use UppercuT; it is an insanely easy-to-use build framework that uses NAnt. Here is a second link with deeper explanations: UppercuT. Branches vs Trunk for Active Development You would not have to branch unless you leave trunk open only for releases (which means that everyone else is working in the same branch as you). But I would have CI on both the trunk and the active development branch. Software Development Process Also, to answer the question on a software development process: the answer is a resounding yes. But don't rush into anything unless a drastic change is required. Pick a process you want to migrate towards and slowly start adopting its practices. And evaluate, evaluate, evaluate.
If the particular process is not working for your group, figure out if you are doing something wrong or if you just need to eliminate it. And then do so. Whatever process you end up with needs to work for you, or it won't work. A: If you don't go home till all your defects are gone, then you will never go home. My thoughts on this are that the daily build should be automated at certain times. Any code not checked in before this doesn't get built, and if there are no check-ins from someone for 2 days (builds) in a row, then the build system should notify them and the tech lead, as this is a warning sign. A: One perhaps more pragmatic approach is to have zero defects on the trunk and branch for all development. Daily builds are then possible on both the trunk and branches, but zero defects does not apply to dev branches. While there may still be a certain level of stigma from having your branch break its builds, it is less of a problem than breaking the trunk. A: About the zero-defect strategy: you can go home if known bugs are in your code. It's more about defects being fixed before new features are implemented. That need not apply to the whole team, but if a developer has a bug assigned, that bug has priority over new features this developer has to create. A: Looking through the answers, I'm surprised that nobody has mentioned Test Driven Development. If your goal is zero defects, that's the best place to start. After that I would strongly recommend pair programming. Finally, understand that tools like CruiseControl are useful but, as Jim Shore said, continuous integration is an attitude, not a tool. It is the group commitment to keeping the code working that is key. A: Depending on what you're building, adopting an approach that defects are not allowed may not be appropriate. My personal opinion is that it rarely, if ever, is. The whole point of a defect management system is exactly that - to allow you to manage defects.
If the defect is a show-stopping one then sure, you probably don't want to check it in, but if it's something minor or an edge case then it may make sense to check it in with the known defect, as long as you're tracking it. Allowing defects to exist lets you focus on more important things - for example, if you only have a limited amount of time in a release you may not have time to fix everything as well as get all functionality in, so if it's a choice between fixing ten edge-case minor bugs or creating a piece of value-adding functionality, then the pragmatic choice may be to ship with known bugs. I'm not saying zero defects is a bad idea - in fact we strive for this by the end of each release cycle - but like many things in software development, pragmatism normally works better in the real world than puritanism. A: I'd go with @feverentcoder on the CI arguments. CI is your friend, let him help you! As for the Branch/Trunk point: everyone should always work on the trunk; branches are for spikes and POCs; tag all releases. When it comes to processes, it's usually beneficial to find practices that are relevant to you and then build micro-processes around them. Also, use only those practices/processes that you feel enhance productivity. Lastly, be courageous: try a practice for a week or two to see if it works for you; if it doesn't, throw it away. You just learned one more way not to make a light bulb!
{ "language": "en", "url": "https://stackoverflow.com/questions/165831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Site-Mining tools Many of the questions asked here are relevant to research I'm doing. These questions and answers are widely dispersed and not always easy to find by manual browsing, and sometimes an insightful answer or comment occurs in unrelated topics as well. I want to automate finding these relevant Q's & A's, based on sets of keywords, then use the information as pointers towards further in-depth research. What tools, preferably open source, are available that I can use for this type of site mining? I am not a web guru, and for me to try to develop them would take a long time and also cut into time I could have spent on my R&D. A: Another option would be using Yahoo! Pipes. (demo) You can build such a system visually online using a combination of feed URLs, filters, etc. Learning time is minimal compared to programming. A: It is not clear from your question whether you are a programmer or not, so I'm not sure whether you are after tools in the sense of apps or services that do what you want, or a library that makes site mining easier. If the latter is the case and you use Ruby, I can thoroughly recommend WWW::Mechanize. It provides a nice API for writing scripts to search web pages (by DOM or by text), follow links, and fill out forms. I've used it several times to organise information that's spread over several web pages within a site. I believe the Ruby version was based on an earlier library for Perl, but I can't vouch for the Perl version as I've not used it. A: Human interaction tools might be useful in such a case (no development cost, probably a more consistent outcome, and evolving requirements). A couple come to mind: * *Mechanical Turk. *Time Svr (more expensive) - experiment/review. A: All of the tags based on keywords have RSS feeds attached to them, so I'd start by subscribing to relevant keywords and searching the data. It seems like the simplest way to find related concepts and other related keywords.
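The RSS-plus-keywords approach in the last answer is easy to script once the feed entries are in hand; the entry dicts below are hypothetical stand-ins for whatever a feed library would return:

```python
def score_entry(entry, keywords):
    """Count keyword hits in an entry's title and summary (case-insensitive)."""
    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    return sum(text.count(k.lower()) for k in keywords)

def mine(entries, keywords, min_score=1):
    """Return entries mentioning the keywords, best matches first."""
    scored = [(score_entry(e, keywords), e) for e in entries]
    return [e for s, e in sorted(scored, key=lambda p: -p[0]) if s >= min_score]

entries = [
    {"title": "Mining Q&A sites", "summary": "keyword search over answers"},
    {"title": "Cooking tips", "summary": "nothing relevant here"},
]
hits = mine(entries, ["mining", "keyword"])
```

A real pipeline would feed this from the per-tag RSS URLs and keep the top-scoring entries as pointers for further reading.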
{ "language": "en", "url": "https://stackoverflow.com/questions/165840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Best enterprise repository tool for Maven 2? Some of the other questions and answers here on SO extol the virtues of using an enterprise repository tool like Archiva, Artifactory, or Nexus. What are the pros and cons of each? How do I choose between them? In case it helps: * *We use both Maven 1 and Maven 2 (at least for a while) *We want to store internally-generated artifacts, publicly-available ones (ibiblio, codehaus, etc.), and proprietary ones (e.g. Sun's licensed JARs like the Servlet API). *We would like something that runs on Windows, Linux, or both. *We use Luntbuild as our CI server (but intend to move to Hudson some time). N.B. this question is not a duplicate of either this one or this one. A: I have used Archiva for over a year and found it met all of the basic requirements; however, we were restricted to a Windows server, and as such found a few limitations and a large memory footprint. The main reason we started looking for an alternative, however, was the painful process of uploading artifacts to the repository that didn't exist in the Maven repositories on the web (such as some of the Sun or IBM jar files). We made the switch to Nexus about two months ago and have been very impressed by its clean interface, ease of use, and general non-invasiveness. Uploading new artifacts is a breeze, and we haven't had a single issue. We've been using Mule and CXF a bit, and so we've had to download from both legacy (Maven 1) and standard (Maven 2) repositories - these are straightforward to set up and require little (if any) administration. The documentation is excellent, with a free PDF on the Nexus site (you can also buy the hardcopy version if you like). I've used it on both Windows (at work) and Linux (at home) without any problems. A: We used to use Artifactory, but ended up switching to Nexus a while back. The main problem was that the disk space used by Artifactory kept growing, and we couldn't find a way to stop it.
We're now very happy with Nexus. It has a great UI, is easy to configure in settings.xml, and is easy to manage as a service. A: I have used Archiva for over a year now and have been very happy with its reliability and performance. Both Archiva and Artifactory are available as .war files, so you can deploy them on an application server. One advantage of Archiva over Artifactory is that it can share its user database with Continuum. A: In our company we chose Maven 2 and Nexus... it's awesome :) (same case as yours) A: We switched from Archiva to Nexus since we got too many problems with its SQL support. With MySQL we got DB corruption after a shutdown ;( As soon as Nexus OSS was available as a simple war (so usable on our Tomcat farms), we used it, and we are very happy with it. Reliable and faster than Archiva. A: We had been using Archiva for a while, and were happy with it. We recently switched hardware, and decided to try out Nexus because we had read some good things about it. We didn't know what we were missing in Archiva, but Nexus is far better. The repository aspect is easier because it "groups" all the repositories into one URL, for easier settings.xml configuration. Further, the web site rocks -- easy search for artifacts, and it even searches the global central repo without having downloaded it all to your proxy. I highly recommend Nexus!
{ "language": "en", "url": "https://stackoverflow.com/questions/165846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Dynamic Database Binding in C# I have a project here that connects to an in-production database and grabs tables and views. The code I've inherited connects to a SQL Server database via a SqlConnection, with the user's provided credentials and the database location they provide. When data is required, it uses the connection and a SQL string to create a SqlDataAdapter, and fills a brand new DataSet. That data then gets displayed after manipulating it, replacing table column names with appropriate display names, that sort of thing. Problem is, the entire process is slow, and the icing on the cake is that it displays large amounts of data in ListViews, which do not take kindly to being given ten thousand rows of data. For design reasons, we're not going to be breaking into pages - there is a search control - and I can implement a virtual ListView at great effort to simply get back to where I was. I simply think this is the wrong application for ListViews - I'm connecting to a database, and I'm displaying records. Sounds like a job for a DataGridView. Sadly, it simply won't work. I'm trying to bind the DataGridView to the DataSet I've gotten from the connection code via a DataBinder, but when I fire it up to have a look the data's sitting in the DataSet while the DataGridView is completely empty. However, if I use the GUI binding on my test database, taking the current database schema and my credentials, lo and behold it works a treat. But I can't use this stuff, because it's simply not flexible enough - I don't want to define a schema in the code that I have to change every time we update the database, and I need access to the connection string, and I don't seem to get that from the TableAdapter it creates. Am I missing something simple here to make my DataSet/BindingSource solution work? Am I barking up the wrong tree? Is it even worth fiddling around with binding anyway? 
All of the binding stuff I can see seems to get me 90% of the way there, but then I can't modify the connection string or sort particular columns the way I want, and it seems to want me to give it a defined schema which is going to break as soon as the database changes - whereas the handwritten code is at least defensively designed and quite flexible. I'm not cutting features, and the slow solution already works - if I have to give up on some of my requirements in order to get it to work, we'll just deal with what we've got. A: I am unsure if you cannot change the query at all, or just in the context of the situation you mentioned at the end of your post. But I would suggest implementing some sort of paging if you can, and only retrieving the rows of the particular page the user of the data is on. This would make a HUGE difference in performance, especially if the result set is as big as you say it is. This would probably be the biggest performance-increasing change you could make, in my opinion. And you can keep the currently working ListView implementation. Besides, even if you had a grid with your own query populating it, you would still need some sort of paging strategy, or that would be slow too. I just grabbed a random article about implementing it, but you can find many others. EDIT: In regards to your updated question, the whole point of my answer was: if it ain't broke, don't fix it. If the ListView works right now, why change it? A: It should work fine as long as you have specified the table name (else IIRC the first table is used). Of course, you can simplify things by giving the appropriate DataTable (rather than the DataSet) to the DGV. You might also want to check that auto column-generation is enabled on the DGV. But binding to an ad-hoc DataTable works fine; I use it all the time for examples etc.
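To make the paging suggestion concrete, here is a rough sketch of the kind of ROW_NUMBER()-based paging query SQL Server 2005 supports, built in Python for illustration; the table and column names are placeholders, and real code should parameterize values rather than interpolate them:

```python
def page_query(table, page, page_size, order_by="id"):
    """Build a SQL Server 2005-style paging query with ROW_NUMBER().

    Only rows for the requested page come back, so the UI never has to
    materialize the full ten-thousand-row result set."""
    first = (page - 1) * page_size + 1
    last = page * page_size
    return (
        f"SELECT * FROM ("
        f"SELECT ROW_NUMBER() OVER (ORDER BY {order_by}) AS rn, * "
        f"FROM {table}) AS numbered "
        f"WHERE rn BETWEEN {first} AND {last}"
    )

# Page 3 with 50 rows per page covers row numbers 101-150.
sql = page_query("Orders", 3, 50, order_by="OrderDate")
```

The search control the question mentions fits the same shape: add a WHERE clause to the inner query and the row numbering restarts over the filtered set.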
A: Open your dataset in the designer, click on the adapter, and change the connection to public (it defaults to internal); then you can access it. As for the rest of your issues, show us the codez - databinding on a DataGridView works fine AFAIK...
{ "language": "en", "url": "https://stackoverflow.com/questions/165848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Apache configuration help -- Why are different processes "in" different time zones? I have Apache 2 running on a VPS server (running Debian). I recently changed the timezone on the server (using dpkg-reconfigure tzdata) from America/New_York to America/Los_Angeles to match my move across country. I have also rebooted the virtual machine since making the change. However, the Apache processes seem to flitter between timezones. See this snippet from the access_log: 127.0.0.1 - - [02/Oct/2008:23:01:13 -0700] "GET /favicon.ico HTTP/1.0" 301 - "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.3) Gecko/2008092414 Firefox/3.0.3" 127.0.0.1 - - [03/Oct/2008:02:01:25 -0400] "GET /tag/wikipedia/?page=1 HTTP/1.0" 200 5984 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 127.0.0.1 - - [03/Oct/2008:02:01:36 -0400] "GET /index.atom HTTP/1.0" 200 7648 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.2) Gecko/2008091618 Firefox/3.0.2" 127.0.0.1 - - [03/Oct/2008:02:01:45 -0400] "GET /tag/moblog/ HTTP/1.0" 200 6563 "-" "msnbot/1.1 (+http://search.msn.com/msnbot.htm)" 127.0.0.1 - - [02/Oct/2008:23:01:46 -0700] "GET /tag/opensource/ HTTP/1.0" 200 5954 "-" "msnbot/1.1 (+http://search.msn.com/msnbot.htm)" 127.0.0.1 - - [03/Oct/2008:02:01:56 -0400] "GET /tag/dopplr/ HTTP/1.0" 200 3407 "-" "msnbot/1.1 (+http://search.msn.com/msnbot.htm)" It jumps from 23:01 to 02:01 and back. Any idea how I can keep it consistent? A: As it turns out, I had two Django projects running on this Apache instance, one of which I had fixed to point to America/Los_Angeles, but the other I had left behind. Depending on which app was accessed first when a new Apache process was created, it would muck up the time zone! A: Are you by any chance using ntpd and the peers against which you synchronize are flaky? A: Possibly some of the Apache worker processes were started before you changes the timezone, and some afterwards. 
Have you completely stopped and re-started Apache since changing the system timezone setting?
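The self-answer's diagnosis - each process carries its own timezone state, set by whichever app initialized it first - is easy to reproduce in miniature. This Python sketch uses time.tzset(), which is Unix-only, and a timestamp from around the date in those log lines; it assumes the system's tz database is installed:

```python
import os
import time

def local_hour(tz_name, epoch_seconds=1223020800):  # ~03 Oct 2008 08:00 UTC
    """Show how one process's TZ setting changes the rendered local time."""
    os.environ["TZ"] = tz_name
    time.tzset()  # Unix-only: re-read TZ for the current process
    return time.localtime(epoch_seconds).tm_hour

# Two "processes" (here, two calls) render the same instant differently.
ny = local_hour("America/New_York")      # EDT, UTC-4 in early October
la = local_hour("America/Los_Angeles")   # PDT, UTC-7
```

Two Apache workers with different TZ state do exactly this, which is why the same access log showed both -0400 and -0700 stamps.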
{ "language": "en", "url": "https://stackoverflow.com/questions/165866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Python object attributes - methodology for access Suppose I have a class with some attributes. How is it best (in the Pythonic OOP sense) to access these attributes? Just like obj.attr? Or perhaps write get accessors? What are the accepted naming styles for such things? Edit: Can you elaborate on the best practices of naming attributes with a single or double leading underscore? I see in most modules that a single underscore is used. If this question has already been asked (and I have a hunch it has, though searching didn't bring results), please point to it - and I will close this one. A: Edit: Can you elaborate on the best practices of naming attributes with a single or double leading underscore? I see in most modules that a single underscore is used. A single underscore doesn't mean anything special to Python; it is just best practice, to say "hey, you probably don't want to access this unless you know what you are doing". A double underscore, however, makes Python mangle the name internally, making it accessible only from the class where it is defined. A double leading AND trailing underscore denotes a special function, such as __add__, which is called when using the + operator. Read more in PEP 8, especially the "Naming Conventions" section. A: With regards to the single and double leading underscores: both indicate the same concept of 'privateness'. That is to say, people will know the attribute (be it a method or a 'normal' data attribute or anything else) is not part of the public API of the object. People will know that to touch it directly is to invite disaster. On top of that, the double-leading-underscore attributes (but not the single-leading-underscore attributes) are name-mangled to make accessing them by accident from subclasses or anywhere else outside the current class less likely. You can still access them, but not as trivially. For example:

>>> class ClassA:
...     def __init__(self):
...         self._single = "Single"
...         self.__double = "Double"
...     def getSingle(self):
...         return self._single
...     def getDouble(self):
...         return self.__double
...
>>> class ClassB(ClassA):
...     def getSingle_B(self):
...         return self._single
...     def getDouble_B(self):
...         return self.__double
...
>>> a = ClassA()
>>> b = ClassB()

You can now trivially access a._single and b._single and get the _single attribute created by ClassA:

>>> a._single, b._single
('Single', 'Single')
>>> a.getSingle(), b.getSingle(), b.getSingle_B()
('Single', 'Single', 'Single')

But trying to access the __double attribute on the a or b instance directly won't work:

>>> a.__double
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: ClassA instance has no attribute '__double'
>>> b.__double
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: ClassB instance has no attribute '__double'

And though methods defined in ClassA can get at it directly (when called on either instance):

>>> a.getDouble(), b.getDouble()
('Double', 'Double')

Methods defined on ClassB can not:

>>> b.getDouble_B()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 5, in getDouble_B
AttributeError: ClassB instance has no attribute '_ClassB__double'

And right in that error you get a hint about what's happening. The __double attribute name, when accessed inside a class, is being name-mangled to include the name of the class that it is being accessed in. When ClassA tries to access self.__double, it actually turns -- at compile time -- into an access of self._ClassA__double, and likewise for ClassB. (If a method in ClassB were to assign to __double, not included in the code for brevity, it would therefore not touch ClassA's __double but create a new attribute.) There is no other protection of this attribute, so you can still access it directly if you know the right name:

>>> a._ClassA__double, b._ClassA__double
('Double', 'Double')

So why is this a problem?
Well, it's a problem any time you want to inherit and change the behaviour of any code dealing with this attribute. You either have to reimplement everything that touches this double-underscore attribute directly, or you have to guess at the class name and mangle the name manually. The problem gets worse when this double-underscore attribute is actually a method: overriding the method or calling the method in a subclass means doing the name mangling manually, or reimplementing all the code that calls the method to not use the double-underscore name. Not to mention accessing the attribute dynamically, with getattr(): you will have to mangle manually there, too. On the other hand, because the attribute is only trivially rewritten, it offers only superficial 'protection'. Any piece of code can still get at the attribute by manually mangling, although that will make their code dependent on the name of your class, and efforts on your side to refactor your code or rename your class (while still keeping the same user-visible name, a common practice in Python) would needlessly break their code. They can also 'trick' Python into doing the name mangling for them by naming their class the same as yours: notice how there is no module name included in the mangled attribute name. And lastly, the double-underscore attribute is still visible in all attribute lists and all forms of introspection that don't take care to skip attributes starting with a (single) underscore. So, if you use double-underscore names, use them exceedingly sparingly, as they can turn out quite inconvenient, and never use them for methods or anything else a subclass may ever want to reimplement, override or access directly. And realize that double-leading-underscore name mangling offers no real protection. In the end, using a single leading underscore wins you just as much and gives you less (potential, future) pain. Use a single leading underscore.
A: I think most just access them directly, no need for get/set methods.

>>> class myclass:
...     x = 'hello'
...
>>>
>>> class_inst = myclass()
>>> class_inst.x
'hello'
>>> class_inst.x = 'world'
>>> class_inst.x
'world'

BTW, you can use the dir() function to see what attributes/methods are attached to your instance:

>>> dir(class_inst)
['__doc__', '__module__', 'x']

Two leading underbars, "__", are used to make an attribute or function private. For other conventions refer to PEP 8: http://www.python.org/dev/peps/pep-0008/
A: The generally accepted way of doing things is just using simple attributes, like so:

>>> class MyClass:
...     myAttribute = 0
...
>>> c = MyClass()
>>> c.myAttribute
0
>>> c.myAttribute = 1
>>> c.myAttribute
1

If you do find yourself needing to be able to write getters and setters, then what you want to look for is "python class properties" and Ryan Tomayko's article on Getters/Setters/Fuxors is a great place to start (albeit a little long)
A: Python does not need to define accessors right from the beginning, since converting attributes into properties is quick and painless. See the following for a vivid demonstration: Recovery from Addiction
A: There is no real point of doing getters/setters in Python; you can't protect stuff anyway, and if you need to execute some extra code when getting/setting the property, look at the property() builtin (python -c 'help(property)')
A: Some people use getters and setters. Depending on which coding style you use you can name them getSpam and seteggs. But you can also make your attributes read-only or assign-only. That's a bit awkward to do. One way is overriding the __getattr__ and __setattr__ methods.
Edit: While my answer is still true, it's not right, as I came to realize. There are better ways to make accessors in Python, and they are not very awkward.
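Several answers above point out that a plain attribute can later be upgraded to a property() without changing any calling code. A minimal sketch of that upgrade path (the class and attribute names here are illustrative, not taken from the answers):

```python
class Temperature:
    """Started life with a plain attribute; later grew validation via property()."""

    def __init__(self, celsius=0.0):
        self._celsius = celsius  # single leading underscore: internal by convention

    def _get_celsius(self):
        return self._celsius

    def _set_celsius(self, value):
        # validation added later, without breaking existing callers
        if value < -273.15:
            raise ValueError("temperature below absolute zero")
        self._celsius = value

    # callers keep writing t.celsius = ...; the setter now runs transparently
    celsius = property(_get_celsius, _set_celsius)
```

Existing code that did `t.celsius = 20` keeps working unmodified, which is exactly why these answers recommend starting with plain attributes rather than Java-style accessors.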
{ "language": "en", "url": "https://stackoverflow.com/questions/165883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: .NET Date Compare: Count the amount of working days since a date? What's the easiest way to compute the amount of working days since a date? VB.NET preferred, but C# is okay. And by "working days", I mean all days excluding Saturday and Sunday. If the algorithm can also take into account a list of specific 'exclusion' dates that shouldn't count as working days, that would be gravy. Thanks in advance for the contributed genius.
A: Here is a sample of Steve's formula in VB without the holiday subtraction:

Function CalcBusinessDays(ByVal DStart As Date, ByVal DEnd As Date) As Decimal
    Dim Days As Decimal = DateDiff(DateInterval.Day, DStart, DEnd)
    Dim Weeks As Integer = Days / 7
    Dim BusinessDays As Decimal = Days - (Weeks * 2)
    Return BusinessDays
    Days = Nothing
    Weeks = Nothing
    BusinessDays = Nothing
End Function

A: DateDiff along with a few other Date* functions are unique to VB.NET and often the subject of envy from C# developers. Not sure it'll be very helpful in this case, though.
A: in general (no code) -
* subtract the dates to get the number of days
* divide by 7 to get the number of weeks
* subtract number of weeks times 2
* count the number of holiday dates that fall within the date range
* subtract that count
* fiddle with the start/end dates so that they fall monday to monday, then add back the difference
[apologies for the no-code generalities, it's late] [c.f. endDate.Subtract(startDate).TotalDays]
A: This'll do what you want it to. It should be easy enough to convert to VB.NET, it's been too long for me to be able to do it though.

DateTime start = DateTime.Now;
DateTime end = start.AddDays(9);
IEnumerable<DateTime> holidays = new DateTime[0];

// basic data
int days = (int)(end - start).TotalDays;
int weeks = days / 7;

// check for a weekend in a partial week from start.
if (7 - (days % 7) <= (int)start.DayOfWeek) days--;
if (7 - (days % 7) <= (int)start.DayOfWeek) days--;

// lose the weekends
days -= weeks * 2;

foreach (DateTime dt in holidays)
{
    if (dt > start && dt < end) days--;
}

A: The easiest way is probably something like

DateTime start = new DateTime(2008, 10, 3);
DateTime end = new DateTime(2008, 12, 31);
int workingDays = 0;
while( start < end )
{
    if( start.DayOfWeek != DayOfWeek.Saturday && start.DayOfWeek != DayOfWeek.Sunday )
    {
        workingDays++;
    }
    start = start.AddDays(1);
}

It may not be the most efficient but it does allow for the easy checking of a list of holidays.
A: Here's a method for SQL Server. There's also a vbscript method on the page. Not exactly what you asked for, I know.
A: We combined two CodeProject articles to arrive at a complete solution. Our library is not concise enough to post as source code, but I can point you to the two projects we used to achieve what we needed. As always with CodeProject articles, read the comments, there may be important info in them. Calculating business days: http://www.codeproject.com/KB/cs/busdatescalculation.aspx An alternative business day calc: http://www.codeproject.com/KB/cs/datetimelib.aspx Calculating Holidays: http://www.codeproject.com/KB/dotnet/HolidayCalculator.aspx
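The brute-force loop from the answers above (walk the days, skip Saturday/Sunday and any exclusion dates) is language-neutral; here is a sketch of the same idea in Python, with the half-open range [start, end) as an assumption of this sketch:

```python
from datetime import date, timedelta

def business_days(start, end, holidays=()):
    """Count weekdays in the half-open range [start, end), skipping holidays."""
    holidays = set(holidays)
    count = 0
    current = start
    while current < end:
        # weekday() is 0 for Monday .. 6 for Sunday, so < 5 means Mon-Fri
        if current.weekday() < 5 and current not in holidays:
            count += 1
        current += timedelta(days=1)
    return count
```

Like the C# loop in the last answer, this is O(days) rather than O(1), but it makes checking an exclusion list trivial.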
{ "language": "en", "url": "https://stackoverflow.com/questions/165887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Thread-safe atomic operations in gcc In a program I work on, I have a lot of code as follows:

pthread_mutex_lock( &frame->mutex );
frame->variable = variable;
pthread_mutex_unlock( &frame->mutex );

This is clearly a waste of CPU cycles if the middle instruction can just be replaced with an atomic store. I know that gcc is quite capable of this, but I haven't been able to find much documentation on such simple thread-safe atomic operations. How would I replace this set of code with an atomic operation? (I know that simple stores should theoretically be atomic, but I don't want to have to hope that the optimizer isn't screwing up their atomic-ness at some point in the process.) Clarification: I do not need them to be strictly atomic; these variables are solely used for thread synchronization. That is, Thread B reads the value, checks if it's correct, and if it's not correct, it sleeps. So even if Thread A updates the value and Thread B doesn't realize it's updated, that isn't a problem, since that just means Thread B sleeps when it didn't really need to, and when it wakes up, the value will be correct.
A: On x86 and most other architectures, aligned 4-byte reads and writes are always atomic. The optimizer may skip/reorder reads and writes within a single thread, though. What you want to do is inform the compiler that other threads may have touched this memory location. (A side effect of pthread_mutex_lock is telling the compiler that other threads may have touched any part of memory.) You may see volatile recommended, but this is not in the C specification, and GCC doesn't interpret volatile that way.

asm("" : "=m" (variable));
frame->variable = variable;

is a GCC-specific mechanism to say that "variable has been written to, reload it".
A: You could check the gcc documentation. For the current gcc version (4.3.2) it would be chapter 5.47, Built-in functions for atomic memory access - for other gcc versions please check your docs. 
It should be in chapter 5 - Extensions to the C Language Family. Incidentally, the C compiler makes absolutely no guarantee as to simple store operations being atomic. You cannot rely on that assumption. In order for a machine opcode to be executed atomically, it needs the LOCK prefix.
A: Up to a certain point, atomic operations in C were provided straight from the kernel sources via the atomic.h header. However, having kernel headers used directly in user-space code is a very bad practice, so the atomic.h header file was removed some time ago. Instead we can now make use of the "GCC Atomic Builtins", which are a far better and more reliable approach. There is a very good explanation provided by Tudor Golubenco on his blog. He even provides a drop-in replacement for the initial atomic.h file, in case you have some code that needs it. Unfortunately I'm new to stackoverflow, so I can only use one link in my comments, so check Tudor's post and get enlightened.
A: AFAIK, you can't prefix MOV instructions with LOCK; this is allowed only for RMW operations. But if he does use a simple store, he might also need a memory barrier, which is implicit with a mutex, as well as with instructions that allow LOCK.
A: As I can see, you're using the GNU platform for development, so it's safe to say that glibc provides an integer data type with atomic semantics, 'sig_atomic_t'. So this approach can assure you atomic operations at the kernel level, not the gcc level.
{ "language": "en", "url": "https://stackoverflow.com/questions/165931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Retrieving session information with ASP.NET MVC Let's say you have an ASP.NET MVC web application with a page that is accessible to all registered users of your application. Now this page has mostly general information, but there is also one panel that shows the current logged in user's personal information. How do you go about implementing this? My initial thought would be to have the controller read the current logged in user's ID from the session and pull the information necessary and pass that into the view. But that seems somewhat awkward to me for some reason, to pull session information from within the controller. Is this the only way to do it?
A: With MVC Preview 5 you get an authorization controller. It sets the user name or id in a cookie when the user logs in, and you can read it with User.Identity.Name (if I remember correctly). Now, in one component (or, in previous versions, a user control) you can read that id or username, or get data from the session or db, and display the rest of the user info.
{ "language": "en", "url": "https://stackoverflow.com/questions/165937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I debug a .BAT script? Is there a way to step through a .bat script? The thing is, I have a build script, which calls a lot of other scripts, and I would like to see what is the order in which they are called, so that I may know where exactly I have to go about and add my modifications.
A: I found 'running steps' (win32) software doing exactly what I was looking for: http://www.steppingsoftware.com/ You can load a bat file, place breakpoints / start stepping through it while seeing the output and environment variables. The evaluation version only allows stepping through 50 lines... Does anyone have a free alternative with similar functionality?
A: I don't know of any way to step through the execution of a .bat file, but you can use echo and pause to help with debugging.
ECHO Will echo a message in the batch file. Such as ECHO Hello World will print Hello World on the screen when executed. However, without @ECHO OFF at the beginning of the batch file you'll also get "ECHO Hello World" and "Hello World." Finally, if you'd just like to create a blank line, type ECHO. Adding the period at the end creates an empty line.
PAUSE Prompt the user to press any key to continue.
Source: Batch File Help
@workmad3: answer has more good tips for working with the echo command. Another helpful resource... DDB: DOS Batch File Tips
A: rem out the @ECHO OFF and call your batch file redirecting ALL output to a log file:

c:> yourbatch.bat (optional parameters) > yourlogfile.txt 2>&1

found at http://www.robvanderwoude.com/battech_debugging.php IT WORKS!! don't forget the 2>&1... WIZ
A: Did you try to reroute the result to a file? Like whatever.bat >log.txt You have to make sure that in this case every other called script is also logging to the file, like >>log.txt Also, if you put a date /T and time /T in the beginning and in the end of that batch file, you will get the times it was at that point and you can map your script running time and order. 
A: The only way I can think of is to sprinkle the code with echos and pauses.
A: A quite frequent issue is that a batch script is run by double-clicking its icon. Since the hosting Command Prompt (cmd.exe) instance also terminates as soon as the batch script is finished, it is not possible to read potential output and error messages. To read such messages, it is very important that you explicitly open a Command Prompt window, manoeuvre to the applicable working directory and run the batch script by typing its path/name.
A: You can use cmd /k at the end of your script to see the error. It won't close your command prompt after the execution is done.
A: Make sure there are no 'echo off' statements in the scripts and call 'echo on' after calling each script to reset any you have missed. The reason is that if echo is left on, then the command interpreter will output each command (after parameter processing) before executing it. Makes it look really bad for use in production, but very useful for debugging purposes as you can see where output has gone wrong. Also, make sure you are checking the ErrorLevels set by the called batch scripts and programs. Remember that there are 2 different methods used in .bat files for this. If you called a program, the Error level is in %ERRORLEVEL%, while from batch files the error level is returned in the ErrorLevel variable and doesn't need %'s around it.
A: Facing a similar concern, I found the following tool with a trivial Google search: JPSoft's "Take Command" includes a batch file IDE/debugger. Their short presentation video demonstrates it nicely. I've been using the trial version for a few hours. Here is my first humble opinion:
* On one side, it indeed allows debugging .bat and .cmd scripts and I'm now convinced it can help in quite some cases
* On the other hand, it sometimes blocks and I had to kill it... especially when debugging subscripts (not always systematically).. it doesn't show a "call stack" nor a "step out" button.
It deserves a try.
A: Or.... Call your main .bat file from another .bat file and output the result to a result file, i.e. runner.bat > mainresults.txt, where runner.bat calls the main .bat file. You should see all the actions performed in the main .bat file now.
A: Or, open a cmd window, then call the batch from there; the output will be on the screen.
{ "language": "en", "url": "https://stackoverflow.com/questions/165938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "107" }
Q: What is the best approach for IPC between Java and C++? I would like to implement a robust IPC solution between a single JVM app (one process, potentially multiple threads) and a native C++ application that is linked to a C++ dll. The dll may or may not be on the same physical machine. What is the best approach for doing so? Any suggestions will be greatly appreciated! Thanks! A: I'd use a standard TCP/IP socket, where the app listens on some port and the library connects to it to report what it has to report and expect the answers. The abstraction is robust, well supported and will have no interop issues. A: Have you considered Facebook's Thrift framework? Thrift is a software framework for scalable cross-language services development. It combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, Smalltalk, and OCaml. Thrift allows you to define data types and service interfaces in a simple definition file. Taking that file as input, the compiler generates code to be used to easily build RPC clients and servers that communicate seamlessly across programming languages. It can work over TCP sockets and the serialization/deserialization is already built-in. Read the whitepaper for details. A: Google protocol buffer can help you serialize data in a language and platform neutral way. It will also generate code in Java and C++ to handle reading and writing the serialized data. You can then use any communication mechanism you wish to send the data. For example, you could send it over a TCP socket or via shared memory IPC. A: mmm - DLLs are not processes, so I'm assuming you mean IPC between your Java app, and some other native application that is linked to the DLL. Sockets, for certain, are the way to go here. It will make everything easier for you. 
Another option would be to use JNI to talk to a DCOM implementation, but I don't think you'll gain much (other than having to deal with the headaches of COM and JNI :-) ).
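Whichever serialization you pick (Thrift and protocol buffers both handle this for you), a raw socket between the Java and C++ ends still needs message framing, since TCP is a byte stream. A common language-neutral convention is a 4-byte big-endian length prefix; a sketch in Python for illustration only (the function names are mine, not from any of the libraries mentioned above):

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix the payload with its length as a 4-byte big-endian integer."""
    return struct.pack(">I", len(payload)) + payload

def deframe(buffer: bytes):
    """Return (payload, rest) once a whole frame is buffered, else (None, buffer)."""
    if len(buffer) < 4:
        return None, buffer
    (length,) = struct.unpack(">I", buffer[:4])
    if len(buffer) < 4 + length:
        return None, buffer
    return buffer[4:4 + length], buffer[4 + length:]
```

On the C++ side the same prefix can be read and converted with ntohl(); on the Java side, DataInputStream.readInt() already reads a big-endian int, which is one reason network byte order is the convenient choice here.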
{ "language": "en", "url": "https://stackoverflow.com/questions/165945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Mono created features available on .NET? I noticed the following today: Mono at the PDC 2008? My talk will cover new technologies that we have created as part of Mono. Some of them are reusable on .NET (we try to make our code cross platform) and some other are features that specific to Mono's implementation of the CLI. Posted by Miguel de Icaza on 01 Oct 2008 Does anybody know what type of new technologies he is referring to? Sounds like a great talk
[UPDATE] Here is the video of Miguel's talk
* Mono's SIMD Support: Making Mono safe for Gaming
* Static Compilation in Mono
* Unity on Linux, First Screenshots
A: These are some of the major libraries that you can use:
* Gtk#, the cross platform GUI API (Unix, Windows, MacOS X) - this is an entire stack of libraries and includes widgets (with Gtk+), Accessibility and international text rendering (with PangoSharp).
* Mono.DataConvert - System.BitConverter implemented correctly, and well designed.
* Mono.Addins - Extensibility Framework, similar to MEF.
* Mono.Cairo - Cairo Graphics Binding.
* Mono.Cecil - ECMA CIL Image Manipulation.
* Xml.Relaxng - RelaxNG parsing and validation.
* Novell.Directory.Ldap - LDAP libraries.
* Daap.Sharp - An implementation of the DAAP protocol (music exchange protocol; you can consume or expose music sources)
* Mono.Upnp - Universal Plug and Play implementation in managed code.
* Mono.ZeroConf - Cross platform ZeroConf/Bonjour API for .NET apps.
* BitSharp - Bittorrent client/server library, now called MonoTorrent
* Mono.Nat - Network Address Translation.
* Mono.Rocks - Useful Extension methods/Functional features for C#, now superseded by Cadenza
* SmugMugSharp - Bindings to talk to SmugMug
* Crimson - Crypto libraries beyond what is available in .NET
* Mono.WebBrowser - Wrapper for Firefox or WebKit.
* WebkitSharp - Bindings to use WebKit from C#
* GtkSharpRibbon - The Ribbon, implemented in Gtk# (cross platform)
* IPodSharp - Library to communicate and manipulate iPods. 
* TagLibSharp - Library to annotate multimedia files (tagging).
* Exiv2Sharp - EXIF reading/writing library.
Linux Specific:
* Mono.Posix/Mono.Unix.
* NDesk.DBus
* Mono.Fuse - User-space file systems.
I am sure I am missing a bunch of other libraries. Most of these (and many more) are linked to via the Libraries page.
A: Maybe things like Cecil and Monovation and the interactive shell?
A: Looking at the roadmap, maybe the new JIT/IL implementation that they're quite proud of; could be the C# Evaluation API / C# Shell. However, I suspect we'll have to wait for PDC to find out... Many of the roadmap items are (quite reasonably) like-for-like comparable with MS equivalents - but maybe they've sneaked in a few extras on the quiet ;-p
A: there's also the C# eval and C# scripting shell that works only on Mono 2.2 at present...
A: Miguel himself has been spotted on stack overflow: maybe you'll get an answer straight from him.
A: Don't forget Mono.Options, a very useful command-line options parsing library.
A: Here are more details about Mono 2.0
A: If you are still targeting 1.1, then Mono.Data is an excellent abstraction similar to what DbProvider does in 2.0 ADO.NET
A: Telerik announced it will support Mono in upcoming versions, perhaps making it the first commercial third-party component company to support Mono. This is great. MonoDevelop is now supported on Windows. I see a great future for Mono.
{ "language": "en", "url": "https://stackoverflow.com/questions/165949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How can I run a CLI Application as a Windows Service? Say I have a third party Application that does background work, but prints out all errors and messages to the console. This means that currently, we have to keep a user logged on to the server, and restart the application (double-click) every time we reboot. Not so very cool. I was kind of sure that there was an easy way to do this - a generic service wrapper, that can be configured with a log file for stdout and stderr. I did check svchost.exe, but according to this site, it's only for DLL stuff. Pity. EDIT: The application needs to be started from a batch file. FireDaemon seems to do the trick, but I think it is a bit overkill, for something that can be done in <10 lines of python code... Oh well, Not Invented Here...
A: I'd recommend NSSM: The Non-Sucking Service Manager.
* 32/64-bit EXEs
* Public Domain (!)
* Properly implements service stop messages, and sends the proper signal to your applications for graceful shutdown.
A: Why not simply implement a very thin service wrapper? Here's a quickstart guide for writing a Service in .NET: Writing a Useful Windows Service in .NET in Five Minutes. When you've got that running you can use the Process class to start the application and configure it so that you can handle stdout/stderr yourself (ProcessStartInfo is your friend).
A: Check out srvany.exe from the Resource Kit. This will let you run anything as a service. You can pass parameters in the service definition to your executable via srvany.exe, so you could run a batch file as a service by setting the registry as follows:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MyService\Parameters]
"Application"="C:\\Windows\\System32\\cmd.exe"
"AppParameters"="/C C:\\My\\Batch\\Script.cmd"
"AppDirectory"="C:\\My\\Batch"

Note: if you set up these keys in RegEdit rather than using a file you only need single backslashes in the values.
A: Check out FireDaemon. 
There is a free version (FireDaemon Lite, I think) that only allows 1 service installed at a time, but this is a very useful tool for setting up services. It also wraps around batch files correctly, if this is required.
A: I second the FireDaemon option. You may also want to set the option to allow the service to interact with the desktop, to allow it to display the CLI output window. They no longer offer a free version, but if you search around the web for FireDaemon Lite you can find the older free lite version, or maybe go the for-pay route.
A: NSSM is long dead. It's recommended to use WinSW on Windows 10 or Windows 11.
{ "language": "en", "url": "https://stackoverflow.com/questions/165951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: What is Code compiled unit in ASP.net I need to know what a code compile unit is in .NET. It's located in this path: C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\App_folders I need to know why it creates a .ccu file as well as a compiled file with the .dll extension. Does it take any reference from here? Thanks
A: I suggest you take a look at the article at: http://msdn.microsoft.com/en-us/magazine/cc163496.aspx
"By default, when you compile a Web application the compiled code is placed in the Temporary ASP.NET Files folder. This folder is a subdirectory of the location where you installed the .NET framework. Typically, the location is the following %FrameworkInstallLocation%\Temporary ASP.NET Files".
http://msdn.microsoft.com/en-us/library/ms366723.aspx
"CCU stands for Code Compile Unit and refers to the CodeDOM tree created to generate the source code for the dynamic page class. The CCU file is a binary file that contains the serialized version of the CodeDOM tree for the page."
"The CCU file maintains an up-to-date copy of the CodeDOM structure of the page ready to service these requests."
http://msdn.microsoft.com/en-us/magazine/cc163496.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/165971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Determining Referer in PHP What is the most reliable and secure way to determine what page either sent, or called (via AJAX), the current page? I don't want to use $_SERVER['HTTP_REFERER'], because of the (lack of) reliability, and I need the page being called to only come from requests originating on my site. Edit: I am looking to verify that a script that performs a series of actions is being called from a page on my website.
A: The REFERER is sent by the client's browser as part of the HTTP protocol, and is therefore unreliable indeed. It might not be there, it might be forged, you just can't trust it if it's for security reasons. If you want to verify if a request is coming from your site, well you can't, but you can verify the user has been to your site and/or is authenticated. Cookies are sent in AJAX requests so you can rely on that.
A: What I have found best is to use a CSRF token and save it in the session for links where you need to verify the referrer. So if you are generating a FB callback then it would look something like this:

$token = uniqid(mt_rand(), TRUE);
$_SESSION['token'] = $token;
$url = "http://example.com/index.php?token={$token}";

Then the index.php will look like this:

if(empty($_GET['token']) || $_GET['token'] !== $_SESSION['token']) {
    show_404();
}
//Continue with the rest of code

I do know of secure sites that do the equivalent of this for all their secure pages.
A: Using $_SERVER['HTTP_REFERER']: The address of the page (if any) which referred the user agent to the current page. This is set by the user agent. Not all user agents will set this, and some provide the ability to modify HTTP_REFERER as a feature. In short, it cannot really be trusted.

if (!empty($_SERVER['HTTP_REFERER'])) {
    header("Location: " . $_SERVER['HTTP_REFERER']);
} else {
    header("Location: index.php");
}
exit;

A: There is no reliable way to check this. It's really in the client's hands to tell you where it came from. 
You could imagine using cookies or session information set only on some pages of your website, but doing so would break the user experience with bookmarks.
A: After all the fake-referrer problems, we have only a single option left: the page we want to track as the referrer should be kept in the session. When the AJAX call is made, check whether the session holds the referrer page value; if it does, perform the action, otherwise do nothing. On the other hand, as soon as the user requests any different page, set the referrer session value to null. Remember that the session variable is set only on a request for the desired page.
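The token scheme from the PHP answer above translates to any server language. A sketch in Python (the session is modelled as a plain dict and the function names are illustrative), using a constant-time comparison so the check does not leak timing information:

```python
import hmac
import secrets

def issue_token(session):
    """Generate a fresh random token and remember it in the user's session."""
    token = secrets.token_hex(16)
    session["token"] = token
    return token

def verify_token(session, submitted):
    """Accept the request only if the submitted token matches the session's."""
    expected = session.get("token")
    if expected is None or submitted is None:
        return False
    # hmac.compare_digest avoids short-circuiting on the first mismatched byte
    return hmac.compare_digest(expected, submitted)
```

Note that uniqid(mt_rand(), TRUE) in the PHP answer is fairly predictable by comparison; a CSPRNG such as Python's secrets module (or random_bytes() in modern PHP) is the safer choice for the token itself.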
{ "language": "en", "url": "https://stackoverflow.com/questions/165975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "106" }
Q: Moving between dialog controls in Windows Mobile without the tab key I have a windows mobile 5.0 app, written in C++ MFC, with lots of dialogs. One of the devices I'm currently targetting does not have a tab key, so I would like to use another key to move between controls. This is fine for buttons but not edit controls or combo boxes. I have looked at a similar question but the answer does not really suit. I've tried overriding the CDialog::OnKeyDown to no avail, and would rather not have to override the keystroke functionality for every control in every dialog. My thoughts so far are to write new classes replacing CEdit and CComboBox, but as always am just checking if there is an easier way, such as temporarily re-programming another key. A: I don't know MFC that good, but maybe you could pull it off by subclassing window procedures of all those controls with a single class, which would only handle cases of pressing cursor keys and pass the rest of events to the original procedures. You would have to provide your own mechanism of moving to an appropriate control, depending on which cursor key was pressed but it may be worth the usability gains. If that worked, you could enumerate all dialog controls and subclass them automatically. Windows Mobile 6 allows switching between dialog controls using cursors by default - it's a new, more "smartphoney" way of moving around the UI and it's incredibly convenient. A: Can you not use the D-Pad to navigate between fields?
{ "language": "en", "url": "https://stackoverflow.com/questions/165984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Automatically starting Windows Sound Recorder My division has been tasked with recording the morning presentation audio for future use, using the built-in Windows Sound Recorder. Because of human nature, we don't always remember to start it on time. Windows doesn't have a built-in equivalent to the Unix cron function. Besides installing a new software program (which will take time, possibly cost money, and require IA certification), is there an easy way to automate the recording? I'm not averse to writing a simple Python script for it, but I haven't programmed for Windows before; I don't know the APIs or anything required for this type of program. Edit: Thanks for the responses. I feel like an imbecile. I don't normally use Windows computers so I wasn't aware that Windows had the Task Scheduler. However, when I tested it with the recorder program, all it did was open the program; it didn't actually start recording. How do I get it to actually start recording when it is opened?
A:

set WshShell = WScript.CreateObject("WScript.Shell")
WScript.Sleep(100)
WshShell.Run "%SystemRoot%\system32\sndrec32.exe"
WScript.Sleep(100)
WshShell.AppActivate "Sound - Sound Recorder"
WScript.Sleep(100)
WshShell.SendKeys " "
WScript.Sleep(100)

Save the above text as RunSoundRecorder.vbs. This will start the sound recorder application and start it recording. Just point the task scheduler at this file. In case you want to make changes: the third line is the exe to run; the fifth line is what is in the application title bar.
A: Use AutoIt3

Run ( @SystemDir + "\sndrec32.exe", "workingdir" )
Sleep(5000) ;five seconds
WinActivate( "Sound - Sound Recorder" )
Sleep(100)
Send( " " )

Note: I have not tested this, because I don't use Windows very often anymore. Definitely worth checking out if you want to automate any Win32 GUI. It actually seems like it has received even more features since I last used it. 
Features: (taken from www.autoitscript.com/autoit3/)
* Easy to learn BASIC-like syntax
* Simulate keystrokes and mouse movements
* Manipulate windows and processes
* Interact with all standard windows controls
* Scripts can be compiled into standalone executables
* Create Graphical User Interfaces (GUIs)
* COM support
* Regular expressions
* Directly call external DLL and Windows API functions
* Scriptable RunAs functions
* Detailed helpfile and large community-based support forums
* Compatible with Windows 95 / 98 / ME / NT4 / 2000 / XP / 2003 / Vista / 2008
* Unicode and x64 support
* Digitally signed for peace of mind
* Works with Windows Vista's User Account Control (UAC)
A: There is no command line parameter to start in recording mode. You have to start recording manually!
A: Start - Programs - Accessories - System Tools - Scheduled Tasks
{ "language": "en", "url": "https://stackoverflow.com/questions/165987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Tool or code for Cache and Memory Bus performances I am facing a performance issue on a multi-core (8+) architecture with software written in C++ / Visual Studio / Windows XP. Suddenly I realized that I have no idea of the performance of my L1 and L2 caches and CPU-to-memory bandwidth. I have tested several tools (including VTune, Glowcode, etc.) but all of them fail when tested under load on a multicore architecture (which is the very reason why I need them!). Can you suggest any other tool which is not so fancy in doing graphs but can give me at least a few indications of my cache/memory performance, or can you suggest snippets of code to manually instrument my application? Thanks!
A: Maybe Memtest86+ is what you are looking for.
A: I believe the EVEREST test suite will check your memory/cpu/cache performance. You may want to look into this site: Lavalys' Everest Page
A: You can use hardware counters to measure the L1 and L2 miss rate; however, I am not too sure which library package to use with a Windows platform.
A: Did you try cachegrind? It's a simulator, of course, but still, it will let you catch most of the real problems.
{ "language": "en", "url": "https://stackoverflow.com/questions/166004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Need assistance with accelerating a GOval object (making a Breakout game) Hey peoples, I've been studying Java for a couple of weeks, and have decided to produce my own version of Breakout. The game is working fine, apart from the method which is supposed to modify the ball (GOval) speed based on the speed of the paddle (GRect). Anyway, this is the method: private double paddleVelocity(){ double paddleTracker=(paddleXTracker+paddleXTrackerTwo+paddleXTrackerThree)/3; //get the average x-coordinate for the paddle if(loopCounter>3){ //if the game has been through the loop more than three times (to ensure three x-coordinates have been stored) paddleVelocity=(paddle.getX()-paddleTracker)/50; //set paddleVelocity to the current paddle x-coordinate minus the average of the previous three paddle x-coordinates (which therefore finds the velocity and direction) }else{ paddleVelocity=(paddle.getX()-paddleXTracker)/50; //takes the previous paddle x-coordinate and subtracts it from the current x-coordinate (which therefore finds the velocity and direction) } return (paddleVelocity); } And this is the method that calls it: private void detectCollisionPaddle(){; double ballY = ball.getY()+(BALL_RADIUS*2); if(paddle.getY()<=ballY && ball.getX()+BALL_RADIUS>=paddle.getX() && ball.getX()+BALL_RADIUS/2<=(paddle.getX()+PADDLE_WIDTH)){ //a mess of code which ensures the ball is within the bounds of the paddle yVel=-yVel; //make the ball bounce off bounceClip.play(); //play the bounce clip xVel=xVel+paddleVelocity(); //increase the velocity of the ball in the direction and speed of the paddle } } The instance variables include: private int loopCounter; //how many loops of the game code have been executed private double paddleVelocity; //velocity of the paddle private double paddleXTracker = 1; //value of the x-coordinate of the paddle one loop previous private double paddleXTrackerTwo; //value of the x-coordinate of the paddle two loops previous private double paddleXTrackerThree; 
//value of the x-coordinate of the paddle three loops previous private double yVel = -1; //y velocity of the ball private double xVel = 1; //x velocity of the ball Basically I want paddleVelocity to grab three consecutive x-coordinates of the paddle, and compare them to the current position (paddle.getX()). This allows me to determine the speed and direction that the user has moved the paddle. I then return the result (paddleVelocity) which is supposed to add to the current velocity of the ball (xVel). However, this doesn't seem to function properly. When the call for paddleVelocity() is removed, the code works perfectly (albeit without acceleration). So I know there's an issue with the paddleVelocity method. It seems to be biased to the right side. Perhaps you guys can help, afterall, this is my first serious attempt at programming. Here's the game atm: http://cyb3rglitch.com/games/Glitchout_0.1/ Yes, the bricks don't break yet. :P Cheers! EDIT: To make it easier, here's all the code in its messy glory: /* * File: Breakout.java * ------------------- * Name: Vito Cassisi */ import acm.graphics.*; import acm.program.*; import acm.util.*; import java.applet.*; import java.awt.*; import java.awt.event.*; public class Breakout extends GraphicsProgram { /** Width and height of application window in pixels */ public static final int APPLICATION_WIDTH = 400; public static final int APPLICATION_HEIGHT = 600; /** Dimensions of game board (usually the same) */ private static final int WIDTH = APPLICATION_WIDTH; private static final int HEIGHT = APPLICATION_HEIGHT; /** Dimensions of the paddle */ private static final int PADDLE_WIDTH = 60; private static final int PADDLE_HEIGHT = 10; /** Offset of the paddle up from the bottom */ private static final int PADDLE_Y_OFFSET = 30; /** Number of bricks per row */ private static final int NBRICKS_PER_ROW = 10; /** Number of rows of bricks */ private static final int NBRICK_ROWS = 10; /** Separation between bricks */ private static 
final int BRICK_SEP = 4; /** Width of a brick */ private static final int BRICK_WIDTH = (WIDTH - (NBRICKS_PER_ROW - 1) * BRICK_SEP) / NBRICKS_PER_ROW; /** Height of a brick */ private static final int BRICK_HEIGHT = 8; /** Radius of the ball in pixels */ private static final int BALL_RADIUS = 10; /**Ball speed in milliseonds*/ private static final int DELAY = 10; /** Offset of the top brick row from the top */ private static final int BRICK_Y_OFFSET = 70; /** Number of turns */ private static final int NTURNS = 3; private static final int COUNTDOWN = 200; private GOval ball; private GRect paddle; private GRect block; private GLabel life; private GLabel countDown; private GObject getObject; AudioClip bounceClip = MediaTools.loadAudioClip("bounce.au"); /* Method: run() */ /** Runs the Breakout program. */ public void run() { drawBlocks(); drawPaddle(); drawBall(); drawLife(); addMouseListeners(); while(!lose) { detectCollisionWall(); detectCollisionBlock(); detectCollisionPaddle(); decelerate(); assignPaddleTrackers(); pause(DELAY); loopCounter++; } GLabel youLose = new GLabel("You lose!",APPLICATION_WIDTH/2,APPLICATION_HEIGHT/2); youLose.setLocation((getWidth()-youLose.getWidth())/2, (getHeight()-youLose.getAscent())/2); add(youLose); } private void drawBlocks(){ /**Draw as many rows as defined in the constant 'NBRICKS_PER_ROW'*/ for(int i = 0;i<NBRICK_ROWS;i++){ blockX=(APPLICATION_WIDTH-((BRICK_WIDTH+BRICK_SEP)*NBRICKS_PER_ROW))/2; //center the blocks horizontally drawRow(); //draw the row blockY+=BRICK_HEIGHT+BRICK_SEP; //move the row down } } private void drawRow(){ /**Draw as many bricks as defined in the constant 'NBRICKS_PER_ROW'*/ for(int i = 0;i<NBRICKS_PER_ROW;i++){ block = new GRect(blockX,blockY,BRICK_WIDTH,BRICK_HEIGHT); //initialise the blocks add(block); //draw the blocks blockX+=BRICK_WIDTH+BRICK_SEP; //move x co-ordinate across } } private void drawPaddle(){ paddle = new 
GRect((APPLICATION_WIDTH-PADDLE_WIDTH)/2,(APPLICATION_HEIGHT-PADDLE_Y_OFFSET-PADDLE_HEIGHT),PADDLE_WIDTH, PADDLE_HEIGHT); add(paddle); } private void drawBall(){ double diameter = BALL_RADIUS*2; ball = new GOval((APPLICATION_WIDTH-diameter)/2,APPLICATION_HEIGHT-PADDLE_Y_OFFSET-PADDLE_HEIGHT*2-diameter,diameter,diameter); add(ball); } private void moveBall(){ ball.move(xVel, yVel); } private void detectCollisionWall(){ if(ball.getY()+(BALL_RADIUS*2)<getHeight() && ball.getY()>0 && ball.getX()+(BALL_RADIUS*2)<getWidth() && ball.getX()>0){ moveBall(); } if(ball.getY()+(BALL_RADIUS*2)>getHeight()){ yVel=-yVel; turnsLeft--; testForLose(); bounceClip.play(); } if(ball.getY()<0){ yVel=-yVel; bounceClip.play(); } if(ball.getX()+(BALL_RADIUS*2)>=getWidth()){ xVel=-xVel; bounceClip.play(); } if(ball.getX()<=0){ xVel=-xVel; bounceClip.play(); } moveBall(); } private void testForLose(){ if(turnsLeft==0){ lose=true; life.setLabel("You have "+turnsLeft+" lives!"); return; } life.setLabel("You have "+turnsLeft+" lives!"); remove(ball); drawBall(); xVel=1; yVel=-1; drawCountDown(); } private void drawCountDown(){ GLabel countDown = new GLabel("",APPLICATION_WIDTH/2,APPLICATION_HEIGHT/2); for(int i=3;i>0;i--){ countDown.setLabel("Respawn in: "+i); countDown.setLocation((getWidth()-countDown.getWidth())/2, (getHeight()-countDown.getAscent())/2); add(countDown); pause(COUNTDOWN); } countDown.setLabel("Go"); countDown.setLocation((getWidth()-countDown.getWidth())/2, (getHeight()-countDown.getAscent())/2); pause(COUNTDOWN); remove(countDown); } private void drawLife(){ life = new GLabel("You have "+turnsLeft+" lives!", APPLICATION_WIDTH/2,15); life.setLocation((getWidth()-life.getWidth())/2, 20); add(life); } private void detectCollisionBlock(){ GObject collisionObject = detectObject(ball, BALL_RADIUS*2, BALL_RADIUS*2); if(collisionObject==block){ remove(collisionObject); yVel=-yVel; } } private double paddleVelocity(){ double 
paddleTracker=(paddleXTracker+paddleXTrackerTwo+paddleXTrackerThree)/3; if(loopCounter>3){ paddleVelocity=(paddle.getX()-paddleTracker)/50; //paddleXTracker=paddle.getX(); }else{ paddleVelocity=(paddle.getX()-paddleXTracker)/50; } return (paddleVelocity); } private void detectCollisionPaddle(){; double ballY = ball.getY()+(BALL_RADIUS*2); //double ballX = ball.getX()+(BALL_RADIUS*2); /*if(ball.getY()+BALL_RADIUS*2>=paddle.getY() && ball.getX()+BALL_RADIUS*2>=paddle.getX() && ball.getX()<=paddle.getX()){ ball.setLocation(ball.getX(),paddle.getY()-BALL_RADIUS*2-1); }*/ if(paddle.getY()<=ballY && ball.getX()+BALL_RADIUS>=paddle.getX() && ball.getX()+BALL_RADIUS/2<=(paddle.getX()+PADDLE_WIDTH)){ yVel=-yVel; bounceClip.play(); xVel=xVel+paddleVelocity(); /* int x = 4; int center = PADDLE_WIDTH/2; double parts=center/x; int location=0; if(ball.getX()+BALL_RADIUS*2>paddle.getX()&& ball.getX()<paddle.getX()+PADDLE_WIDTH){ for(int i = 0;i<x;i++){ if (ball.getX()<center+paddle.getX() && ball.getX()>(parts*i)+paddle.getX()){ }else{ location = i; break; } } xVel=x/location; } } // if(ball.getX()+BALL_RADIUS<paddle.getX()+PADDLE_WIDTH/4){ if(xVel!=1){ if(xVel<0) { xVel=xVel-1; }else{ xVel=-xVel-1; } } } if (ball.getX()>paddle.getX()+(PADDLE_WIDTH-PADDLE_WIDTH/4)){ if(xVel>0){ xVel=xVel+1; }else{ xVel=-xVel+1; } } if (ball.getX()<paddle.getX()+(PADDLE_WIDTH-PADDLE_WIDTH/4) && ball.getX()+BALL_RADIUS>paddle.getX()+PADDLE_WIDTH/4){ if(xVel>1){ xVel=xVel-1; } if(xVel<-1){ xVel=xVel+1; } } */ } } public void mouseMoved(MouseEvent e){ paddle.move((e.getX()-paddle.getX()-PADDLE_WIDTH/2),0); } /*Passes in an object to be checked, i.e. the ball, and asks for it's height and width. 
It returns the object if one is present, otherwise it retuns null.**/ private GObject detectObject(GObject object, int width, int height){ double right = object.getX()+width; double bottom = object.getY()+height; getObject = getElementAt(object.getX(),object.getY()); if(getObject==null){ getObject = getElementAt(right,object.getY()); }else{ return getObject; } if(getObject==null){ getObject = getElementAt(object.getX(),bottom); }else{ return getObject; } if(getObject==null){ getObject = getElementAt(right,bottom); }else{ return getObject; } if(getObject==null){ return null; }else{ return getObject; } } private void decelerate(){ if(xVel<-1){ xVel=xVel+0.01; } if(xVel>1){ xVel=xVel-0.01; } } private void assignPaddleTrackers(){ paddleXTrackerThree=paddleXTrackerTwo; paddleXTrackerTwo=paddleXTrackerThree; paddleXTracker=paddle.getX(); } private int loopCounter; private double paddleVelocity; private double paddleXTracker = 1; private double paddleXTrackerTwo; private double paddleXTrackerThree; private double yVel = -1; private double xVel = 1; private int turnsLeft = NTURNS; private int blockY = BRICK_Y_OFFSET; private int blockX = 0; private static boolean lose = false; private RandomGenerator rgen = RandomGenerator.getInstance(); } A: I can't see where you are setting paddleXTracker and friends when paddle velocity is updated. So here are some possible fixes: * *You are not updating the past values continuously, only once at the start. *you never updated the paddleXTrackerTwo and paddleXTrackerThree values so the average is to the left (due to divide by 3), if origin is on the left (likely). Both of these would account for the right bias you are seeing. Further, you appear to be doing an adhoc version of moving average. There are other variations which maybe more suited to what you are trying to do. edit Now that your full code is available, I can't see what is wrong. 
I can only suggest you employ debugging techniques, such as printing out the paddle*Tracker* while the game is running to see what values are coming out, and whether or not they are as expected. edit 2 Just to make this answer an answer, as per our OOB discussion, your assign tracker function is wrong, and is not updating paddleXTrackerTwo with paddleXTracker.
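For reference, the shift the answer calls for can be sketched as a tiny standalone model. This is illustrative Python, not the game's actual code (the class and names are stand-ins): the oldest slot takes the older value, the middle slot takes the newest, and only then is the current x recorded, which keeps the three-sample history honest.

```python
# Hypothetical stand-in for the game-loop state; fields mirror the question's trackers.
class PaddleTracker:
    def __init__(self, scale=50.0):
        self.history = [0.0, 0.0, 0.0]   # x one, two, three loops ago
        self.scale = scale

    def update(self, paddle_x):
        # Shift oldest <- older <- newest, THEN record the current x.
        # (The question's bug copies paddleXTrackerTwo back into itself via
        #  paddleXTrackerThree instead of shifting paddleXTracker forward.)
        self.history[2] = self.history[1]
        self.history[1] = self.history[0]
        self.history[0] = paddle_x

    def velocity(self, paddle_x):
        avg = sum(self.history) / 3
        return (paddle_x - avg) / self.scale

tracker = PaddleTracker()
for x in (100.0, 110.0, 120.0):
    tracker.update(x)
print(tracker.velocity(130.0))   # 0.4: positive, paddle moving right
```

With the buggy shift, two history slots end up holding the same stale value, dragging the average toward one side, which is exactly the rightward bias the question reports.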
{ "language": "en", "url": "https://stackoverflow.com/questions/166015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Does ASP Classic have its own security framework or does it use that of IIS? I'm supporting a site that still uses mixed ASP.NET and ASP Classic. The user receives a 'You are not authorized' error page while accessing a certain ASP Classic page. I've checked her Active Directory account, and she could access other pages on the site. Can it be attributed to classic ASP or to IIS? A: ASP Classic is a very simple framework. I cannot imagine it has its own security framework (I assume that means user authentication, etc.) unless it was programmed into the application itself. A: ASP is entirely dependent on the underlying IIS and the OS for security. It has none of its own. In ASP you access Request.ServerVariables("AUTH_USER"), etc. when the connection is authenticated, but this is done by IIS. A: You can force ASP to use ASP.NET authentication by making a few changes in IIS so ASP files use aspnet_isapi.dll just like the ASP.NET pages. Scott Guthrie published an article about this Tip/Trick: Integrating ASP.NET Security with Classic ASP and Non-ASP.NET URLs Once you make this change, classic ASP pages can be protected just like ASP.NET pages using the standard ASP.NET security features.
{ "language": "en", "url": "https://stackoverflow.com/questions/166023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Find a running ClickOnce-deployed single-instance app? I'm currently facing a problem with my single-instance ClickOnce-deployed app, and maybe someone has a good idea about it. My app is called by a web application, which passes parameters to the exe. The problem is that if the application is already running, it takes quite a long time until the new instance is started, the parameters are handed over to the running instance, and the new instance is closed (with opening the URL, checking for updates, ...). So is there a way to detect that there is a running instance, without introducing a new small app that does this detection? The goal is to decrease the time the second, third, ... call needs to get the parameters to the running instance. A: When you set up the deployment settings, you can tell VS to only let the application update every x amount of time (once a day, week, etc.). You can also tell it to launch the application and update in the background. Either of these would solve your problems on their own, I think. The settings are in the project settings, on the Publish tab. Click the "Update" button in the "Install mode and settings" section and set appropriate settings. A: No, it's not a background app. The web app and the WinForms app work with a similar subset of the database. I'm trying not to go into details, because it's not important for the question, but to make it clearer: with the web app the users create the metadata for our business case, and with the WinForms app the users do their concrete work. So with this link, it is possible to create a new set of metadata and cross-check the result in the "working" app. So there are 2 concrete scenarios: * *The WinForms app is not running on the client: When the user clicks on the ClickOnce start menu entry, or the link in the web app, everything should be done the way it is now (with update check, ...). So this scenario works for me.
*The WinForms app is running on the client: The running instance should display the new set of metadata as quickly as possible, without any ClickOnce update check or the like. In this scenario I'm trying to bypass the "ClickOnce starting app" dialog popping up, the new app instance starting, and the new instance passing the parameters to the running instance and closing itself. So I'm searching for a solution that achieves that without creating a new small exe, known to the web app, that does the work. A: I think I didn't make clear what I'm trying to achieve. What I'm trying to do is, if there is an instance running, access it directly, without starting the ClickOnce URL. I'm searching for a solution where I don't have to write a little program (which would have to be deployed as well, ...) that checks whether the app is running and, if so, hands over the params, and if not, starts the ClickOnce URL. The background update is not really an option, because the "connecting to app" screen is still there, consuming time, and it's a must that every user always runs the most recent version of the app. A: This seems an interesting use of ClickOnce technology. I was under the impression that ClickOnce is ideal for distributing a client application to multiple end-user machines within an enterprise. In the situation described here, this is a background application used by a web-server application, which I would expect to be installed on only a few servers in the enterprise. Questions I have are: * *How would your web application pass the parameters to the running instance if it could detect it? (e.g. .NET Remoting?) *What's your reason for distributing this background application via ClickOnce (as opposed to a Windows installer)? Knowing this might help to resolve your issue.
{ "language": "en", "url": "https://stackoverflow.com/questions/166025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Handling large databases I have been working on a web project (ASP.NET) for around six months. The final product is about to go live. The project uses SQL Server as the database. We have done performance testing with some large volumes of data; results show that performance degrades when the data becomes too large, say 2 million rows (timeout issues, delayed responses, etc.). At first we were using a fully normalized database, but now we have made it partially denormalized due to performance issues (to reduce joins). First of all, was this the right decision? Also, what are the possible solutions when the data size becomes very large, as the number of clients increases in future? I would like to add further: * *The 2 million rows are in entity tables; the tables resolving the relations have much larger row counts. *Performance degrades as the data and the number of users increase. *Denormalization was done after identifying the heavily used queries. *We are also making heavy use of XML columns and XQuery. Can this be the cause? *A bit off topic: some folks in my project say that a dynamic SQL query is faster than a stored procedure approach. They have done some kind of performance testing to prove their point. I think the opposite is true. Some of the heavily used queries are dynamically created, whereas most other queries are encapsulated in stored procedures. A: 2 million rows is normally not a Very Large Database; it depends on what kind of information you store. Usually when performance degrades you should verify your indexing strategy. The SQL Server Database Engine Tuning Advisor may be of help there. A: In the scheme of things, a few million rows is not a particularly large database. Assuming we are talking about an OLTP database, denormalising without first identifying the root cause of your bottlenecks is a very, very bad idea.
The first thing you need to do is profile your query workload over a representative time period to identify where most of the work is being done (for instance, using SQL Profiler, if you are using SQL Server). Look at the number of logical reads a query performs multiplied by the number of times it is executed. Once you have identified the top ten worst performing queries, you need to examine their query execution plans in detail. I'm going to go out on a limb here (because it is usually the case), but I would be surprised if your problem is not either * *Absence of the 'right' covering indexes for the costly queries *A poorly configured or under-specified disk subsystem This SO answer describes how to profile to find the worst performing queries in a workload. A: There can be a million reasons for that; use SQL Profiler and Query Analyzer to determine why your queries are getting slow before going down the "schema change" road. It is not unlikely that all you need to do is create a couple of indexes and schedule "update statistics"... ...but as I said, Profiler and Query Analyzer are the best tools for finding out what is going on... A: As the old saying goes, "normalize till it hurts, denormalise till it works". I love this one! This is typically the kind of thing that must not be accepted anymore. I can imagine that, back in the dBASE III days, when you could not open more than 4 tables at a time (unless you changed some of your AUTOEXEC.BAT parameters AND rebooted your computer, ahah! ...), there was some interest in denormalisation. But nowadays I see this solution as similar to a gardener waiting for a tsunami to water his lawn. Please use the available watering can (SQL Profiler). And don't forget that each time you denormalize part of your database, your capacity to further adapt it decreases, as the risk of bugs in code increases, making the whole system less and less sustainable.
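The "logical reads multiplied by times executed" ranking from the first answer is simple enough to script over an exported trace. A hypothetical sketch in Python (the row layout is an assumption, not SQL Profiler's actual export format):

```python
# Hypothetical trace rows: (query text, logical reads per run, times executed).
trace = [
    ("SELECT * FROM Orders WHERE CustomerId = ?", 1200, 5000),
    ("SELECT Name FROM Customers WHERE Id = ?",     40, 90000),
    ("UPDATE Stock SET Qty = Qty - 1 WHERE Id = ?", 15,  2000),
]

def total_cost(row):
    _, reads, executions = row
    return reads * executions

# Rank by total work done, not by cost per call.
worst_first = sorted(trace, key=total_cost, reverse=True)
for query, reads, executions in worst_first:
    print(f"{reads * executions:>10,}  {query}")
```

Note how the middle query, cheap per call, still outranks the expensive-but-rare update once executions are factored in; that is the whole point of ranking by the product rather than by per-call duration.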
A: "At first we were using a fully normalized database, but now we made it partially denormalized due to performance issues (to reduce joins)." As the old saying goes, "normalize till it hurts, denormalise till it works". It's fairly common in large, heavy-use DBs to see a degree of denormalisation to aid performance, so I wouldn't worry too much about it now, so long as your performance is still where you want it to be and your code to manage the "denormalised" fields doesn't become too onerous. "What are the possible solutions when data size becomes very large, as the no. of clients increases in future?" Not knowing too much about your application's domain, it's hard to say how you can future-proof it, but splitting out recently used and old data into separate tables is a fairly common approach in heavily-trafficked databases: if 95% of your users are querying their data from the last 30/45 days, having a "live_data" table containing, say, the last 60 days' worth of data and an "old_data" table for the older stuff can help your performance. A good idea would be to make sure you have extensive performance monitoring set up so that you can measure your db's performance as the data and load increase. If you find a noticeable drop in performance, it might be time to revisit your indexes! A: That may not be the right decision. Identify all your DB interactions and profile them independently, then find the offending ones and strategize to maximize performance there. Also, turning on the audit logs on your DB and mining them might provide better optimization points. A: * *First make sure your database is reasonably healthy: run DBCC DBREINDEX on it if possible, or DBCC INDEXDEFRAG and UPDATE STATISTICS if you can't afford the performance hit. *Run Profiler for a reasonable sample time, enough to capture most of the typical functions, but filter on duration greater than something like 10 seconds; you don't care about the things that only take a few milliseconds, so don't even look at those.
*Now that you have your longest running queries, tune the snot out of them; get the ones that show up the most, look at the execution plans in Query Analyzer, take some time to understand them, and add indexes where necessary to speed retrieval. *Look at creating covered indexes; change the app if needed if it's doing SELECT * FROM... when it only needs SELECT LASTNAME, FIRSTNAME.... *Repeat the profiler sampling, with durations of 5 seconds, 3 seconds, etc., until performance meets your expectations. A: I think it's best to keep your OLTP-type data normalized to prevent your core data from getting 'polluted'; denormalizing it will bite you down the road. If the bottleneck is because of reporting or read-only needs, I personally see no problem having denormalized reporting tables in addition to the normalized 'production' tables; create a process to roll up to whatever level you need to make queries snappy. A simple SP or nightly process that periodically rolls up and denormalizes tables used only in a read-only fashion can often make a huge difference in the users' experience. After all, what good is it to have a theoretically clean, perfectly normalized set of data if no one wants to use your system because it is too slow? A: We've always tried to develop using a database that is as close to the "real world" as possible. That way you avoid a lot of gotchas like this one, since any ol' developer would go mental if his connection kept timing out during debugging. The best way to debug SQL performance problems, IMO, is what Mitch Wheat suggests: profile to find the offending scripts and start with them. Optimizing scripts can take you far, and then you need to look at indexes. Also make sure that your SQL Server has enough horsepower; IO (disk) especially is important. And don't forget: cache is king. Memory is cheap; buy more. :) A: First off, as many others have said, a few million rows is not large.
The current application I'm working on has several tables, all with over a hundred million rows, all of which are normalised. We did suffer from some poor performance, but this was caused by using the default table statistics settings. Inserting small numbers of records relative to the total size of the table (i.e. inserting a million records into a table containing 100+ million records) wasn't causing an automatic update of the table stats, so we'd get poor query plans which manifested themselves as serial queries being produced instead of parallel ones. As to whether it's the right decision to denormalise, that depends on your schema. Do you have to perform deep queries regularly, i.e. loads of joins to get at data that you regularly need access to? If so, then partial denormalisation might be a way forward. BUT NOT BEFORE you've checked your indexing and table statistics strategies. Check that you're using sensible, well-structured queries and that your joins are well formed. Check your query plans to confirm that your queries are actually parsed the way you expect. As others have said, SQL Profiler and the Database Engine Tuning Advisor do a good job of it. For me, denormalisation is usually near the bottom of my list of things to do. If you're still having problems, then check your server software and hardware setup. * *Are your database and log files on separate physical disks using separate controllers? *Does it have enough memory? *Is the log file set to autogrow? If so, is the autogrow limit too low, i.e. is it growing too often? A: You are right to do whatever works. ... as long as you realise that there may be a price to pay later. It sounds like you are thinking about this anyway. Things to check: Deadlocks * *Are all processes accessing tables in the same order? Slowness * *Are any queries doing table scans? * *Check for large joins (more than 4 tables) *Check your indices See my other posts on general performance tips: * *How do you optimize tables for specific queries?
*Favourite performance tuning tricks A: After having analyzed indexes and queries, you might want to just buy more hardware. A few more gigs of RAM might do the trick. A: Interesting... a lot of answers on here. Is the RDBMS / OS version 64-bit? It appears to me that the performance degradation is severalfold. Part of the reason is certainly due to indexing. Have you considered partitioning some of the tables in a manner that's consistent with how the data is stored? Meaning, create partitions based on how the data goes in (based on order). This will give you a lot of performance increase, as the majority of the indexes are static. Another issue is the XML data. Are you utilizing XML indexes? From Books Online (2008): "Using the primary XML index, the following types of secondary indexes are supported: PATH, VALUE, and PROPERTY." Lastly, is the system currently designed to run/execute a lot of dynamic SQL? If so, you will have degradation from a memory perspective, as plans need to be generated, regenerated, and seldom reused. I call this memory churn or memory thrashing. HTH A: A few million records is a tiny database to SQL Server. It can handle terabytes of data with lots of joins, no sweat. You likely have a design problem or very poorly written queries. Kudos for performance testing before you go live. It is a lot harder to fix this stuff after you have been in production for months or years. What you did is probably a bad choice. If you denormalize, you need to set up triggers to make sure the data stays in sync. Did you do that? How much did it increase your insert and update time? My first guess would be that you didn't put indexes on the foreign keys. Other guesses as to what could be wrong include overuse of things such as: correlated subqueries, scalar functions, views calling views, cursors, EAV tables, lack of sargability, and use of SELECT *. Poor table design can also make it hard to have good performance.
For instance, if your tables are too wide, accessing them will be slower. If you are often converting data to another data type in order to use it, then you have it stored incorrectly, and this will always be a drag on the system. Dynamic SQL may be faster than a stored proc, or it may not. There is no one right answer here for performance. For internal security (you do not have to set rights at the table level) and ease of making changes to the database, stored procs are better. You need to run Profiler and determine what your slowest queries are. Also look at all the queries that are run very frequently. A small change can pay off big when the query is run thousands of times a day. You should also go get some books on performance tuning. These will help you through the process, as performance problems can be due to many things: database design, query design, hardware, indexing, etc. There is no one quick fix, and denormalizing randomly can get you in more trouble than not if you don't maintain data integrity.
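The stale-statistics trap described a few answers up comes down to arithmetic: SQL Server's classic auto-update rule fires only after roughly 20% of a table's rows plus 500 have changed (the commonly documented default for older versions; verify against your release), so a million inserts into a 100-million-row table never trip it. A quick Python check of that rule:

```python
def stats_auto_update_triggered(table_rows, changed_rows):
    """Classic SQL Server rule of thumb: ~20% of rows + 500 changes."""
    threshold = 0.20 * table_rows + 500
    return changed_rows > threshold

# 1M new rows in a 100M-row table: only 1% changed -> no auto-update,
# so the optimizer keeps planning against stale statistics.
print(stats_auto_update_triggered(100_000_000, 1_000_000))   # False
print(stats_auto_update_triggered(100_000_000, 25_000_000))  # True
```

That 1% change rate is why bulk loads into very large tables usually need an explicit UPDATE STATISTICS (or an index rebuild) afterwards, exactly as the answer above found.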
{ "language": "en", "url": "https://stackoverflow.com/questions/166028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: What do ‘value semantics’ and ‘pointer semantics’ mean? What is meant by ‘value semantics’, and what is meant by ‘implicit pointer semantics’? A: Java uses implicit pointer semantics for Object types and value semantics for primitives. Value semantics means that you deal directly with values and that you pass copies around. The point here is that when you have a value, you can trust it won't change behind your back. With pointer semantics, you don't have a value, you have an 'address'. Someone else could alter what is there; you can't know. Pointer semantics in C++: void foo(Bar * b) ... ... b->bar() ... You need an * to ask for pointer semantics and -> to call methods on the pointee. Implicit pointer semantics in Java: void foo(Bar b) ... ... b.bar() ... Since you don't have the choice of using value semantics, the * isn't needed, nor is the distinction between -> and ., hence the "implicit". A: Basically, value semantics means that assigning one value to another creates a copy: int x = 1; int y = x; x = 2; // y remains the same! A special case is a function call which gets passed an argument: void f(int x) { x = 5; } int a = 1; f(a); // a is still 1 This is actually the same for Java and C++. However, Java knows only a few primitive types, among them int, double, boolean and char, along with enums, which behave in this manner. All other types use reference semantics, which means that an assignment of one value to another actually redirects a pointer instead of copying the underlying value: class Foo { int x; public Foo(int x) { this.x = x; } } Foo a = new Foo(42); Foo b = a; // b and a share the same instance! a.x = 32; // b.x is now also changed. There are a few caveats, however. For example, many reference types (String, Integer …) are actually immutable. Their value cannot be changed, and any assignment to them overrides the old value. Also, arguments still get passed by value.
This means that the value of an object passed to a function can be changed but its reference can't: void f(Foo foo) { foo.x = 42; } void g(Foo foo) { foo = new Foo(42); } Foo a = new Foo(23); f(a); // a.x is now 42! Foo b = new Foo(1); g(b); // b remains unchanged! A: Java uses implicit pointer semantics on variable access (you cannot directly edit the reference; it automatically (implicitly) gets resolved to the Object on access) and also uses pass-by-value semantics when passing method parameters. Read Pass-by-value semantics in Java applications: In Java applications, when an object reference is a parameter to a method, you are passing a copy of the reference (pass by value), not the reference itself. Note that the calling method's object reference and the copy are pointing to the same object. This is an important distinction. A Java application does nothing differently when passing parameters of varying types like C++ does. Java applications pass all parameters by value, thus making copies of all parameters regardless of type. Short: All parameters in Java are passed by value. That doesn't mean an Object gets copied (as is the default in PHP4); rather, the reference to that object gets copied. You'll see all explanations and in-depth examples on Pass-by-value semantics in Java applications A: Java is pass by value. C++ can use both value and reference semantics. http://javadude.com/articles/passbyvalue.htm
{ "language": "en", "url": "https://stackoverflow.com/questions/166033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Sleeping in a batch file When writing a batch file to automate something on a Windows box, I've needed to pause its execution for several seconds (usually in a test/wait loop, waiting for a process to start). At the time, the best solution I could find uses ping (I kid you not) to achieve the desired effect. I've found a better write-up of it here, which describes a callable "wait.bat", implemented as follows: @ping 127.0.0.1 -n 2 -w 1000 > nul @ping 127.0.0.1 -n %1% -w 1000> nul You can then include calls to wait.bat in your own batch file, passing in the number of seconds to sleep. Apparently the Windows 2003 Resource Kit provides a Unix-like sleep command (at last!). In the meantime, for those of us still using Windows XP, Windows 2000 or (sadly) Windows NT, is there a better way? I modified the sleep.py script in the accepted answer, so that it defaults to one second if no arguments are passed on the command line: import time, sys time.sleep(float(sys.argv[1]) if len(sys.argv) > 1 else 1) A: You could use the Windows cscript WSH layer and this wait.js JavaScript file: if (WScript.Arguments.Count() == 1) WScript.Sleep(WScript.Arguments(0)*1000); else WScript.Echo("Usage: cscript wait.js seconds"); A: Depending on your compatibility needs, either use ping: ping -n <numberofseconds+1> localhost >nul 2>&1 e.g. to wait 5 seconds, use ping -n 6 localhost >nul 2>&1 or on Windows 7 or later use timeout: timeout 6 >nul A: There is a better way to sleep using ping. You'll want to ping an address that does not exist, so you can specify a timeout with millisecond precision. Luckily, such an address is defined in a standard (RFC 3330), and it is 192.0.2.x. This is not made-up, it really is an address with the sole purpose of not-existing. To be clear, this applies even in local networks. 192.0.2.0/24 - This block is assigned as "TEST-NET" for use in documentation and example code. 
It is often used in conjunction with domain names example.com or example.net in vendor and protocol documentation. Addresses within this block should not appear on the public Internet. To sleep for 123 milliseconds, use ping 192.0.2.1 -n 1 -w 123 >nul Update: As per the comments, there is also 127.255.255.255. A: If you've got PowerShell on your system, you can just execute this command: powershell -command "Start-Sleep -s 1" Edit: from my answer on a similar thread, people raised an issue where the amount of time PowerShell takes to start is significant compared to how long you're trying to wait for. If the accuracy of the wait time is important (i.e. a second or two extra delay is not acceptable), you can use this approach: powershell -command "$sleepUntil = [DateTime]::Parse('%date% %time%').AddSeconds(5); $sleepDuration = $sleepUntil.Subtract((get-date)).TotalMilliseconds; start-sleep -m $sleepDuration" This takes the time when the Windows command was issued, and the PowerShell script sleeps until 5 seconds after that time. So as long as PowerShell takes less time to start than your sleep duration, this approach will work (it's around 600ms on my machine). A: timeout /t <seconds> <options> For example, to make the script perform an uninterruptible 2-second wait: timeout /t 2 /nobreak >NUL Which means the script will wait 2 seconds before continuing. By default, a keystroke will interrupt the timeout, so use the /nobreak switch if you don't want the user to be able to interrupt (cancel) the wait. Furthermore, the timeout will provide per-second notifications to notify the user how long is left to wait; these can be removed by redirecting the command's output to NUL. edit: As @martineau points out in the comments, the timeout command is only available on Windows 7 and above. Furthermore, the ping command uses less processor time than timeout. I still believe in using timeout where possible, though, as it is more readable than the ping 'hack'. Read more here. 
A: Just put this in your batch file where you want the wait. @ping 127.0.0.1 -n 11 -w 1000 > nul A: The Resource Kit has always included this. At least since Windows 2000. Also, the Cygwin package has a sleep - plop that into your PATH and include the cygwin.dll (or whatever it's called) and away you go! A: In Notepad, write: @echo off set /a WAITTIME=%1+1 PING 127.0.0.1 -n %WAITTIME% > nul goto:eof Now save as wait.bat in the folder C:\WINDOWS\System32, then whenever you want to wait, use: CALL WAIT.bat <whole number of seconds without quotes> A: The usage of ping is good, as long as you just want to "wait for a bit". This is because you are dependent on other functions underneath, like your network working and the fact that there is nothing answering on 127.0.0.1. ;-) Maybe it is not very likely it fails, but it is not impossible... If you want to be sure that you are waiting exactly the specified time, you should use the sleep functionality (which also has the advantage that it doesn't use CPU power or wait for a network to become ready). Finding a ready-made executable for sleep is the most convenient way. Just drop it into your Windows folder or any other part of your standard path and it is always available. Otherwise, if you have a compiling environment you can easily make one yourself. The Sleep function is available in kernel32.dll, so you just need to use that one. :-) For VB / VBA declare the following in the beginning of your source to declare a sleep function: private Declare Sub Sleep Lib "kernel32" Alias "Sleep" (byval dwMilliseconds as Long) For C#: [DllImport("kernel32.dll")] static extern void Sleep(uint dwMilliseconds); You'll find here more about this functionality (available since Windows 2000) in Sleep function (MSDN). On POSIX systems, sleep() is available in the standard C library, and in Microsoft's Visual Studio C the function is named Sleep(), if memory serves me. 
;-) Those two take the argument in seconds, not in milliseconds as the two previous declarations. A: I have been using this C# sleep program. It might be more convenient for you if C# is your preferred language: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading; namespace sleep { class Program { static void Main(string[] args) { if (args.Length == 1) { double time = Double.Parse(args[0]); Thread.Sleep((int)(time*1000)); } else { Console.WriteLine("Usage: sleep <seconds>\nExample: sleep 10"); } } } } A: Even more lightweight than the Python solution is a Perl one-liner. To sleep for seven seconds put this in the BAT script: perl -e "sleep 7" This solution only provides a resolution of one second. If you need higher resolution then use the Time::HiRes module from CPAN. It provides usleep() which sleeps in microseconds and nanosleep() which sleeps in nanoseconds (both functions take only integer arguments). See the Stack Overflow question How do I sleep for a millisecond in Perl? for further details. I have used ActivePerl for many years. It is very easy to install. A: I like Aacini's response. I added to it to handle the day and also enable it to handle centiseconds (%TIME% outputs H:MM:SS.CC; note a day is 8640000 centiseconds): :delay SET DELAYINPUT=%1 SET /A DAYS=DELAYINPUT/8640000 SET /A DELAYINPUT=DELAYINPUT-(DAYS*8640000) ::Get ending centisecond (10 milliseconds) FOR /F "tokens=1-4 delims=:." %%A IN ("%TIME%") DO SET /A H=%%A, M=1%%B%%100, S=1%%C%%100, X=1%%D%%100, ENDING=((H*60+M)*60+S)*100+X+DELAYINPUT SET /A DAYS=DAYS+ENDING/8640000 SET /A ENDING=ENDING-(DAYS*8640000) ::Wait for such a centisecond :delay_wait FOR /F "tokens=1-4 delims=:." 
%%A IN ("%TIME%") DO SET /A H=%%A, M=1%%B%%100, S=1%%C%%100, X=1%%D%%100, CURRENT=((H*60+M)*60+S)*100+X IF DEFINED LASTCURRENT IF %CURRENT% LSS %LASTCURRENT% SET /A DAYS=DAYS-1 SET LASTCURRENT=%CURRENT% IF %CURRENT% LSS %ENDING% GOTO delay_wait IF %DAYS% GTR 0 GOTO delay_wait GOTO :EOF A: The timeout command is available from Windows Vista onwards: c:\> timeout /? TIMEOUT [/T] timeout [/NOBREAK] Description: This utility accepts a timeout parameter to wait for the specified time period (in seconds) or until any key is pressed. It also accepts a parameter to ignore the key press. Parameter List: /T timeout Specifies the number of seconds to wait. Valid range is -1 to 99999 seconds. /NOBREAK Ignore key presses and wait specified time. /? Displays this help message. NOTE: A timeout value of -1 means to wait indefinitely for a key press. Examples: TIMEOUT /? TIMEOUT /T 10 TIMEOUT /T 300 /NOBREAK TIMEOUT /T -1 Note: It does not work with input redirection - trivial example: C:\>echo 1 | timeout /t 1 /nobreak ERROR: Input redirection is not supported, exiting the process immediately. A: Using the ping method as outlined is how I do it when I can't (or don't want to) add more executables or install any other software. You should be pinging something that isn't there, and using the -w flag so that it fails after that amount of time, not pinging something that is there (like localhost) -n times. This allows you to handle time less than a second, and I think it's slightly more accurate. e.g. 
(test that 1.1.1.1 isn't taken) ECHO Waiting 15 seconds PING 1.1.1.1 -n 1 -w 15000 > NUL or PING -n 15 -w 1000 127.1 >NUL A: Or command line Python, for example, for 6 and a half seconds: python -c "import time;time.sleep(6.5)" A: The best solution that should work on all Windows versions after Windows 2000 would be: timeout numbersofseconds /nobreak > nul A: There are lots of ways to accomplish a 'sleep' in cmd/batch: My favourite one: TIMEOUT /NOBREAK 5 >NUL 2>NUL This will stop the console for 5 seconds, without any output. Most used: ping localhost -n 5 >NUL 2>NUL This will try to make a connection to localhost 5 times. Since it is hosted on your computer, it will always reach the host, and it issues a new ping attempt every second. The -n flag indicates how many times the script will try the connection. In this case it is 5, so it will last 5 seconds. Variants of the last one: ping 1.1.1.1 -n 5 >nul In this script there are some differences compared with the last one. This will not try to call localhost. Instead, it will try to connect to 1.1.1.1, a very fast website. The action will last 5 seconds only if you have an active internet connection. Else it will take approximately 15 seconds to complete the action. I do not recommend using this method. ping 127.0.0.1 -n 5 >nul This is exactly the same as example 2 (most used). You can also use: ping [::1] -n 5 >nul This instead uses IPv6's localhost version. There are lots of methods to perform this action. However, I prefer method 1 for Windows Vista and later versions and the most used method (method 2) for earlier versions of the OS. A: UPDATE The timeout command, available from Windows Vista and onwards, should be the command used, as described in another answer to this question. What follows here is an old answer. 
Old answer If you have Python installed, or don't mind installing it (it has other uses too :), just create the following sleep.py script and add it somewhere in your PATH: import time, sys time.sleep(float(sys.argv[1])) It will allow sub-second pauses (for example, 1.5 sec, 0.1, etc.), should you have such a need. If you want to call it as sleep rather than sleep.py, then you can add the .PY extension to your PATHEXT environment variable. On Windows XP, you can edit it in: My Computer → Properties (menu) → Advanced (tab) → Environment Variables (button) → System variables (frame) A: SLEEP.exe is included in most Resource Kits, e.g. the Windows Server 2003 Resource Kit, which can be installed on Windows XP too. Usage: sleep time-to-sleep-in-seconds sleep [-m] time-to-sleep-in-milliseconds sleep [-c] commited-memory ratio (1%-100%) A: I disagree with the answers I found here. I use the following method entirely based on Windows XP capabilities to do a delay in a batch file: DELAY.BAT: @ECHO OFF REM DELAY seconds REM GET ENDING SECOND FOR /F "TOKENS=1-3 DELIMS=:." %%A IN ("%TIME%") DO SET /A H=%%A, M=1%%B%%100, S=1%%C%%100, ENDING=(H*60+M)*60+S+%1 REM WAIT FOR SUCH A SECOND :WAIT FOR /F "TOKENS=1-3 DELIMS=:." %%A IN ("%TIME%") DO SET /A H=%%A, M=1%%B%%100, S=1%%C%%100, CURRENT=(H*60+M)*60+S IF %CURRENT% LSS %ENDING% GOTO WAIT You may also insert the day in the calculation so the method also works when the delay interval passes over midnight. A: I faced a similar problem, but I just knocked up a very short C++ console application to do the same thing. Just run MySleep.exe 1000 - perhaps easier than downloading/installing the whole resource kit. 
#include <tchar.h> #include <stdio.h> #include "Windows.h" int _tmain(int argc, _TCHAR* argv[]) { if (argc == 2) { _tprintf(_T("Sleeping for %s ms\n"), argv[1]); Sleep(_tstoi(argv[1])); } else { _tprintf(_T("Wrong number of arguments.\n")); } return 0; } A: You can use ping: ping 127.0.0.1 -n 11 -w 1000 >nul: 2>nul: It will wait 10 seconds. The reason you have to use 11 is because the first ping goes out immediately, not after one second. The number should always be one more than the number of seconds you want to wait. Keep in mind that the purpose of the -w is not to control how often packets are sent, it's to ensure that you wait no more than some time in the event that there are network problems. There are unlikely to be problems if you're pinging 127.0.0.1 so this is probably moot. The ping command on its own will normally send one packet per second. This is not actually documented in the Windows docs but it appears to follow the same rules as the Linux version (where it is documented). A: Over at Server Fault, a similar question was asked, and the solution there was: choice /d y /t 5 > nul A: I am impressed with this one: http://www.computerhope.com/batch.htm#02 choice /n /c y /d y /t 5 > NUL Technically, you're telling the choice command to accept only y. It defaults to y, to do so in 5 seconds, to draw no prompt, and to dump anything it does say to NUL (like the null device on Linux). A: You can also use a .vbs file to do specific timeouts: The code below creates the .vbs file. Put this near the top of your batch code: echo WScript.sleep WScript.Arguments(0) >"%cd%\sleeper.vbs" The code below then opens the .vbs and specifies how long to wait for: start /WAIT "" "%cd%\sleeper.vbs" "1000" In the above code, the "1000" is the value of the time delay to be sent to the .vbs file in milliseconds, for example, 1000 ms = 1 s. You can alter this part to be however long you want. The code below deletes the .vbs file after you are done with it. 
Put this at the end of your batch file: del /f /q "%cd%\sleeper.vbs" And here is the code all together so it's easy to copy: echo WScript.sleep WScript.Arguments(0) >"%cd%\sleeper.vbs" start /WAIT "" "%cd%\sleeper.vbs" "1000" del /f /q "%cd%\sleeper.vbs" A: pathping.exe can sleep for less than a second. @echo off setlocal EnableDelayedExpansion echo !TIME! & pathping localhost -n -q 1 -p %~1 2>&1 > nul & echo !TIME! . > sleep 10 17:01:33,57 17:01:33,60 > sleep 20 17:03:56,54 17:03:56,58 > sleep 50 17:04:30,80 17:04:30,87 > sleep 100 17:07:06,12 17:07:06,25 > sleep 200 17:07:08,42 17:07:08,64 > sleep 500 17:07:11,05 17:07:11,57 > sleep 800 17:07:18,98 17:07:19,81 > sleep 1000 17:07:22,61 17:07:23,62 > sleep 1500 17:07:27,55 17:07:29,06 A: Just for fun, if you have Node.js installed, you can use node -e 'setTimeout(a => a, 5000)' to sleep for 5 seconds. It works on a Mac with Node v12.14.0. A: You can get fancy by putting the PAUSE message in the title bar: @ECHO off SET TITLETEXT=Sleep TITLE %TITLETEXT% CALL :sleep 5 GOTO :END :: Function Section :sleep ARG ECHO Pausing... FOR /l %%a in (%~1,-1,1) DO (TITLE Script %TITLETEXT% -- time left^ %%as&PING.exe -n 2 -w 1000 127.1>NUL) EXIT /B 0 :: End of script :END pause ::this is EOF A: From Windows Vista on you have the TIMEOUT and SLEEP commands, but to use them on Windows XP or Windows Server 2003, you'll need the Windows Server 2003 resource tool kit. Here you have a good overview of sleep alternatives (the ping approach is the most popular as it will work on every Windows machine), but there's (at least) one not mentioned which (ab)uses the W32TM (Time Service) command: w32tm /stripchart /computer:localhost /period:1 /dataonly /samples:N >nul 2>&1 Where you should replace the N with the seconds you want to pause. Also, it will work on every Windows system without prerequisites. 
Typeperf can also be used: typeperf "\System\Processor Queue Length" -si N -sc 1 >nul With mshta and javascript (can be used for sleep under a second): start "" /wait /min /realtime mshta "javascript:setTimeout(function(){close();},5000)" This should be even more precise (for waiting under a second) - a self-compiling executable relying on .net: @if (@X)==(@Y) @end /* JScript comment @echo off setlocal ::del %~n0.exe /q /f :: :: For precision better call this like :: call waitMS 500 :: in order to skip compilation in case there's already built .exe :: as without pointed extension first the .exe will be called due to the ordering in PATHEXT variable :: :: for /f "tokens=* delims=" %%v in ('dir /b /s /a:-d /o:-n "%SystemRoot%\Microsoft.NET\Framework\*jsc.exe"') do ( set "jsc=%%v" ) if not exist "%~n0.exe" ( "%jsc%" /nologo /w:0 /out:"%~n0.exe" "%~dpsfnx0" ) %~n0.exe %* endlocal & exit /b %errorlevel% */ import System; import System.Threading; var arguments:String[] = Environment.GetCommandLineArgs(); function printHelp(){ Console.WriteLine(arguments[0]+" N"); Console.WriteLine(" N - milliseconds to wait"); Environment.Exit(0); } if(arguments.length<2){ printHelp(); } try{ var wait:Int32=Int32.Parse(arguments[1]); System.Threading.Thread.Sleep(wait); }catch(err){ Console.WriteLine('Invalid Number passed'); Environment.Exit(1); } A: This was tested on Windows XP SP3 and Windows 7 and uses CScript. I put in some safeguards to avoid del "" prompting. 
(/q would be dangerous) Wait one second: sleepOrDelayExecution 1000 Wait 500 ms and then run stuff after: sleepOrDelayExecution 500 dir \ /s sleepOrDelayExecution.bat: @echo off if "%1" == "" goto end if NOT %1 GTR 0 goto end setlocal set sleepfn="%temp%\sleep%random%.vbs" echo WScript.Sleep(%1) >%sleepfn% if NOT %sleepfn% == "" if NOT EXIST %sleepfn% goto end cscript %sleepfn% >nul if NOT %sleepfn% == "" if EXIST %sleepfn% del %sleepfn% for /f "usebackq tokens=1*" %%i in (`echo %*`) DO @ set params=%%j %params% :end A: Since others are suggesting 3rd party programs (Python, Perl, custom app, etc), another option is GNU CoreUtils for Windows available at http://gnuwin32.sourceforge.net/packages/coreutils.htm. 2 options for deployment: * *Install full package (which will include the full suite of CoreUtils, dependencies, documentation, etc). *Install only the 'sleep.exe' binary and necessary dependencies (use depends.exe to get dependencies). One benefit of deploying CoreUtils is that you'll additionally get a host of other programs that are helpful for scripting (Windows batch leaves a lot to be desired). A: use timeout, e.g. timeout 10 will wait for 10 seconds before executing the next command in cmd or PowerShell A: There are many answers on this issue noting the use of ping, but most of them point to loopback addresses or addresses that are now seen as valid addresses for DNS. Instead of that, you should use a TEST-NET IP address reserved for documentation use only, per the IETF. (more information on that here) Here is a fully commented batch script to demonstrate the use of a sleep function using ping: @echo off :: turns off command-echoing echo/Script will now wait for 2.5 seconds & echo/ :: prints a line followed by a linebreak call:sleep 2500 :: waits for two-and-a-half seconds (2500 milliseconds) echo/Done! Press any key to continue ... 
& pause >NUL :: prints a line and pauses goto:EOF :: prevents the batch file from executing functions beyond this point ::--FUNCTIONS--:: :SLEEP :: call this function with the time to wait (in milliseconds) ping 203.0.113.0 -n 1 -w "%~1" >NUL :: 203.0.113.0 = TEST-NET-3 reserved IP; -n = ping count; -w = timeout goto:EOF :: ends the call subroutine And of course, you can also just use the command directly if you don't want to make a function: ping 203.0.113.0 -n 1 -w timeInMilliseconds >NUL A: ping -n X 127.0.0.1 > nul Replace X with the number of seconds + 1. For example, if you were to wait 10 seconds, replace X with 11. To wait 5 seconds, use 6. Read earlier answers for milliseconds.
{ "language": "en", "url": "https://stackoverflow.com/questions/166044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "163" }
Q: How should I name packages if I don't have a domain associated with me? So most Java resources when speaking of packages mention a com.yourcompany.project setup. However, I do not work for a company, and don't have a website. Are there any naming conventions that are common? An email address, perhaps? A: If you're creating an open source project, you could register it with Sourceforge and use net.sourceforge.myproject. This is common with a lot of Java projects. An example is PMD http://pmd.sourceforge.net/. A: Note that the "reverse domain name" thing is just a convention: useful since it definitely avoids clashes if everyone adheres to it, but you don't have to follow it. Just choose a name that you can be reasonably sure nobody else will use and which is not registered as a trademark by anyone - because that's the one way you could actually get into legal trouble. And that means it's in fact a rather bad idea to use some sort of "subdomain" of a free service you're using, like deviantart or a dyndns or free mail service! Because most (if not all) of those domains are trademarked terms, and if your project ever gets widely distributed, it could be seen as violating the trademark. Just because they allow you to use that name as an email address (or whatever) doesn't mean you can use it for anything else - in fact, their EULA almost certainly restricts usage to exactly that one purpose. A: Good advice on this topic found on the web: "Start your package names with your email address, reversed.[...] Or, host your code at a site which will give you a slice of their domain." A: Why not register a domain? They're fairly cheap and doing so will guarantee that you don't clash with anybody else (or at least give you the satisfaction that if a clash does occur, it's the other person who will have to rewrite their code). Either register your own name, or try to make up a name that you may use as the basis for a business at a later date. 
* *bernard.surname.net *madeupname.net This will cost you less than 10GBP per year. Personally, I'd go for the made-up name approach, as it's likely to look more professional (unless you choose something really strange). An added advantage is that a lot of domains will come with email capabilities, giving you a better email address than bernard.surname@hotmail.com. A: What you can do also is register a domain (actually a sub-domain) through a service such as DynDns (or one of the equivalents) and then use that domain name. You will be the sole controller and it is free and easy to maintain. They have a choice of 88 top domains at the moment (October 2008). dyndns dynamic dns service A: Use a top-level domain like 'bernard' or something else unique. The important part is that the domain is unique so that you avoid clashes, and not that it starts with a real Internet top-level domain like org or com. E.g. import java.util.*; import bernard.myProject.*; import org.apache.commons.lang.*; A: For my own personal work when I don't have a namespace, I go for something simple like org.<myname>.* A: I've been in a couple of different companies that write in-house Java classes. Often they're just com.blah.blah.blah without regard to whether there's an actual domain name behind it. A: IMHO, it's best if it does not depend on any external information, such as hosting provider or company (it could be released to the open source community), since package-level refactoring is not quite desirable, especially in the case of frameworks and libraries. I suggest choosing your project name carefully and unambiguously, then using org.<project name> as the root package. A: Many people have their own websites and relatively unique names (or login names). If your name is Bernard Something, you may own BernardSomething.com, making com.bernardsomething.xxxx (or com.bsomething.xxx) a legitimate package name IMHO for personal code. 
That being said, if your project name is unique, you may want to name the package after that. And of course, get the domain after your name if you don't own it yet!
{ "language": "en", "url": "https://stackoverflow.com/questions/166051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: C# structural highlighting in Visual Studio 2008 I'm looking for an add-in to VS2008 that can enable structural highlighting (vertical lines in blocks of code) in the C# code editor. I tried out CodeRush, but while the structural highlighting was great, I was annoyed with all the other stuff in CodeRush. No matter how much I disabled in the options, I couldn't quite get rid of it. So I'm looking for another add-in that enables structural highlighting and (ideally) nothing else. Know of any? A: While browsing the ViEmu site, I saw the Codekana product which looks like it may do what you want. A: Is the Indentation Guide from the SlickEdit Free Gadgets what you're looking for? A: I'm using JetBrains ReSharper to handle this as well (http://www.jetbrains.com/resharper/) and found it to be a great tool for many more tasks. If you need just the highlighting, though, it may not be what you're after. A: ReSharper (www.jetbrains.com) can outline a scope. If the scope is too large for the editor it displays the first line (heading) of the scope when the cursor stays on the closing curly brace/parenthesis.
{ "language": "en", "url": "https://stackoverflow.com/questions/166067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How can I get data from a stored procedure into a temp table? I am working on Sybase ASE 15. Looking for something like this: Select * into #tmp exec my_stp; my_stp returns 10 data rows with two columns in each row. A: In ASE 15 I believe you can use functions, but they're not going to help with multirow datasets. If your stored proc is returning data with a "select col1,col2 from somewhere" then there's no way of grabbing that data, it just flows back to the client. What you can do is insert the data directly into the temp table. This can be a little tricky as if you create the temp table within the sproc it is deleted once the sproc finishes running and you don't get to see the contents. The trick for this is to create the temp table outside of the sproc, but to reference it from the sproc. The hard bit here is that every time you recreate the sproc you must create the temp table, or you'll get "table not found" errors. --You must use this whole script to recreate the sproc create table #mine (col1 varchar(3), col2 varchar(3)) go create procedure my_stp as insert into #mine values("aaa","aaa") insert into #mine values("bbb","bbb") insert into #mine values("ccc","ccc") insert into #mine values("ccc","ccc") go drop table #mine go Then to run the code: create table #mine (col1 varchar(3), col2 varchar(3)) go exec my_stp go select * from #mine drop table #mine go A: I've just faced this problem, and better late than never... It's doable, but a monstrous pain in the butt, involving a Sybase "proxy table" which is a stand-in for another local or remote object (table, procedure, view). The following works in 12.5; newer versions hopefully have a better way of doing it. 
Let's say you have a stored proc defined as: create procedure mydb.mylogin.sp_extractSomething ( @timestamp datetime) as select column_a, column_b from sometable where timestamp = @timestamp First switch to the tempdb: use tempdb Then create a proxy table where the columns match the result set: create existing table myproxy_extractSomething ( column_a int not null, -- make sure that the types match up exactly! column_b varchar(20) not null, _timestamp datetime null, primary key (column_a)) external procedure at "loopback.mydb.mylogin.sp_extractSomething" Points of note: * *"loopback" is the Sybase equivalent of localhost, but you can substitute it for any server registered in the server's sysservers table. *The _timestamp parameter gets translated to @timestamp when Sybase executes the stored proc, and all parameter columns declared like this must be defined as null. You can then select from the table like this from your own db: declare @myTimestamp datetime set @myTimestamp = getdate() select * from tempdb..myproxy_extractSomething where _timestamp = @myTimestamp Which is straightforward enough. To then insert into a temporary table, create it first: create table #myTempExtract ( column_a int not null, -- again, make sure that the types match up exactly column_b varchar(20) not null, primary key (column_a) ) and combine: insert into #myTempExtract (column_a, column_b) select column_a, column_b from tempdb..myproxy_extractSomething where _timestamp = @myTimestamp A: Not sure about Sybase, but in SQL Server the following should work: INSERT INTO #tmp (col1,col2,col3...) exec my_stp A: If my_stp is populating data by computing values from different tables, you can create an equivalent view which does exactly the same as my_stp. CREATE VIEW My_view AS /* My_stp body */ Then select data from view SELECT * INTO #x FROM my_view
{ "language": "en", "url": "https://stackoverflow.com/questions/166080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: #define TRACE(...) doesn't work in C++ I have the following preprocessor directive: #ifndef NDEBUG #define TRACE printf #else #define TRACE(...) #endif and an example of usage is: TRACE("TRACE: some parameter = %i\n", param); In C all works perfectly well when I build both debug and release versions, but in C++ the compiler emits the following: warning: invalid character in macro parameter name error: badly punctuated parameter list in `#define' and points both the warning and the error at the 'TRACE(...)' directive. How do I write this in C++ correctly? A: #define TRACE true || This turns TRACE(x,y,z) into true || (x,y,z). Since x, y and z will be expressions, (x,y,z) has the value of z (comma operator). z must be a built-in type (to be legally passed to a printf-style function) so it is valid on the right side of ||. It won't be evaluated, because || short-circuits on true, but it must be a legal expression (e.g. you can't reference class members which only exist in DEBUG builds). Vararg macros are a C99 invention; they're not in C++98 but might very well be in C++0x. [edit] - Using || to guarantee non-evaluation A: You could do: inline void TRACE(...) {}
{ "language": "en", "url": "https://stackoverflow.com/questions/166083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What's the best available online resource for Windows Mobile Development? What's the best available online resource. A: I have only played with the Mobile Framework briefly but a good start would be Windows Mobile Developer Center Also it is worth looking at some example open source projects on CodePlex. Microsoft on every training event is encouraging people to use it, and from my experience you can find some interesting projects there at least in standard .Net. I suppose the same applies to Mobile A: Coding4Fun has a few tutorials on Windows Mobile Device. Most of them are quite involved, spread over multiple posts, and covers different aspects of Windows Mobile development. As the name implies most applications are about small games and hacks rather than anything you could use in a pure business environment A: I've just got a new smart phone, (WinMo 6 Pro) and looking at developing for it. Unfortunately there really seems a lack of clear concise guides/documentation/tutorials/books for native Window Mobile development. Currently the best resource I've found is MSDN, pity it I find it so painful to trawl through. I can now see why Palm had such an active Dev community, good references, some free tools and simple code/APIs. A: It depends - do you intend to build software with the Compact Framework or using Embedded C++ (native applications)? There doesn't seem to be much in the way of online resources for writing unmanaged applications, but there is a bit of content on Compact Framework development. The first decent guidance I can recall was from 2006, from the Patterns and Practices team - the Mobile Client Software Factory. I haven't used it myself (all my mobile app work was done in Visual Studio without the use of additional software) but I know some folks who used it successfully. 
It seems Patterns and Practices moved their Smart Client Guidance to codeplex here where they have updated the Mobile Client Software Factory to an April 2008 release, supporting Visual Studio 2008. Of course there's the Windows Mobile 6 SDK.. I think the latest SDK is available from here.
{ "language": "en", "url": "https://stackoverflow.com/questions/166084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is C# analog of C++ std::pair? I'm interested: What is C#'s analog of std::pair in C++? I found System.Web.UI.Pair class, but I'd prefer something template-based. Thank you! A: System.Web.UI contained the Pair class because it was used heavily in ASP.NET 1.1 as an internal ViewState structure. Update Aug 2017: C# 7.0 / .NET Framework 4.7 provides a syntax to declare a Tuple with named items using the System.ValueTuple struct. //explicit Item typing (string Message, int SomeNumber) t = ("Hello", 4); //or using implicit typing var t = (Message:"Hello", SomeNumber:4); Console.WriteLine("{0} {1}", t.Message, t.SomeNumber); see MSDN for more syntax examples. Update Jun 2012: Tuples have been a part of .NET since version 4.0. Here is an earlier article describing inclusion in.NET4.0 and support for generics: Tuple<string, int> t = new Tuple<string, int>("Hello", 4); A: Apart from custom class or .Net 4.0 Tuples, since C# 7.0 there is a new feature called ValueTuple, which is a struct that can be used in this case. Instead of writing: Tuple<string, int> t = new Tuple<string, int>("Hello", 4); and access values through t.Item1 and t.Item2, you can simply do it like that: (string message, int count) = ("Hello", 4); or even: (var message, var count) = ("Hello", 4); A: If it's about dictionaries and the like, you're looking for System.Collections.Generic.KeyValuePair<TKey, TValue>. A: Unfortunately, there is none. You can use the System.Collections.Generic.KeyValuePair<K, V> in many situations. Alternatively, you can use anonymous types to handle tuples, at least locally: var x = new { First = "x", Second = 42 }; The last alternative is to create an own class. 
A: Tuples are available since .NET 4.0 and support generics:

Tuple<string, int> t = new Tuple<string, int>("Hello", 4);

In previous versions you can use System.Collections.Generic.KeyValuePair<K, V> or a solution like the following:

public class Pair<T, U>
{
    public Pair() { }

    public Pair(T first, U second)
    {
        this.First = first;
        this.Second = second;
    }

    public T First { get; set; }
    public U Second { get; set; }
};

And use it like this:

Pair<String, int> pair = new Pair<String, int>("test", 2);
Console.WriteLine(pair.First);
Console.WriteLine(pair.Second);

This outputs:

test
2

Or even chained pairs like this:

Pair<Pair<String, int>, bool> pair = new Pair<Pair<String, int>, bool>();
pair.First = new Pair<String, int>();
pair.First.First = "test";
pair.First.Second = 12;
pair.Second = true;

Console.WriteLine(pair.First.First);
Console.WriteLine(pair.First.Second);
Console.WriteLine(pair.Second);

That outputs:

test
12
true

A: I created a C# implementation of Tuples, which solves the problem generically for between two and five values - here's the blog post, which contains a link to the source.
A: I typically extend the Tuple class into my own generic wrapper as follows:

public class Statistic<T> : Tuple<string, T>
{
    public Statistic(string name, T value) : base(name, value) { }

    public string Name { get { return this.Item1; } }
    public T Value { get { return this.Item2; } }
}

and use it like so:

public class StatSummary
{
    public Statistic<double> NetProfit { get; set; }
    public Statistic<int> NumberOfTrades { get; set; }

    public StatSummary(double totalNetProfit, int numberOfTrades)
    {
        this.NetProfit = new Statistic<double>("Total Net Profit", totalNetProfit);
        this.NumberOfTrades = new Statistic<int>("Number of Trades", numberOfTrades);
    }
}

StatSummary summary = new StatSummary(750.50, 30);
Console.WriteLine("Name: " + summary.NetProfit.Name + " Value: " + summary.NetProfit.Value);
Console.WriteLine("Name: " + summary.NumberOfTrades.Name + " Value: " + summary.NumberOfTrades.Value);

A: C# has tuples as of version 4.0.

A: Depending on what you want to accomplish, you might want to try out KeyValuePair. The fact that you cannot change the key of an entry can of course be rectified by simply replacing the entire entry by a new instance of KeyValuePair.

A: I was asking the same question just now; after a quick google I found that there is a Pair class in .NET, except it's in System.Web.UI (http://msdn.microsoft.com/en-us/library/system.web.ui.pair.aspx). Goodness knows why they put it there instead of the collections framework.

A: Since .NET 4.0 you have the System.Tuple<T1, T2> class:

// pair is an implicitly typed local variable (method scope)
var pair = System.Tuple.Create("Current century", 21);

A: Some answers seem just wrong:
* You can't use a dictionary. How would you store the pairs (a,b) and (a,c)?
The pair concept should not be confused with an associative lookup of keys and values.
* A lot of the above code seems suspect.

Here is my pair class:

public class Pair<X, Y>
{
    private X _x;
    private Y _y;

    public Pair(X first, Y second)
    {
        _x = first;
        _y = second;
    }

    public X first { get { return _x; } }
    public Y second { get { return _y; } }

    public override bool Equals(object obj)
    {
        if (obj == null)
            return false;
        if (obj == this)
            return true;
        Pair<X, Y> other = obj as Pair<X, Y>;
        if (other == null)
            return false;
        return
            (((first == null) && (other.first == null))
                || ((first != null) && first.Equals(other.first)))
            &&
            (((second == null) && (other.second == null))
                || ((second != null) && second.Equals(other.second)));
    }

    public override int GetHashCode()
    {
        int hashcode = 0;
        if (first != null)
            hashcode += first.GetHashCode();
        if (second != null)
            hashcode += second.GetHashCode();
        return hashcode;
    }
}

Here is some test code:

[TestClass]
public class PairTest
{
    [TestMethod]
    public void pairTest()
    {
        string s = "abc";
        Pair<int, string> foo = new Pair<int, string>(10, s);
        Pair<int, string> bar = new Pair<int, string>(10, s);
        Pair<int, string> qux = new Pair<int, string>(20, s);
        Pair<int, int> aaa = new Pair<int, int>(10, 20);

        Assert.IsTrue(10 == foo.first);
        Assert.AreEqual(s, foo.second);
        Assert.AreEqual(foo, bar);
        Assert.IsTrue(foo.GetHashCode() == bar.GetHashCode());
        Assert.IsFalse(foo.Equals(qux));
        Assert.IsFalse(foo.Equals(null));
        Assert.IsFalse(foo.Equals(aaa));

        Pair<string, string> s1 = new Pair<string, string>("a", "b");
        Pair<string, string> s2 = new Pair<string, string>(null, "b");
        Pair<string, string> s3 = new Pair<string, string>("a", null);
        Pair<string, string> s4 = new Pair<string, string>(null, null);

        Assert.IsFalse(s1.Equals(s2));
        Assert.IsFalse(s1.Equals(s3));
        Assert.IsFalse(s1.Equals(s4));
        Assert.IsFalse(s2.Equals(s1));
        Assert.IsFalse(s3.Equals(s1));
        Assert.IsFalse(s2.Equals(s3));
        Assert.IsFalse(s4.Equals(s1));
        Assert.IsFalse(s1.Equals(s4));
    }
}

A: In order to get the above
to work (I needed a pair as the key of a dictionary), I had to add:

public override Boolean Equals(Object o)
{
    Pair<T, U> that = o as Pair<T, U>;
    if (that == null)
        return false;
    else
        return this.First.Equals(that.First) && this.Second.Equals(that.Second);
}

and once I did that I also added

public override Int32 GetHashCode()
{
    return First.GetHashCode() ^ Second.GetHashCode();
}

to suppress a compiler warning.

A: The PowerCollections library (formerly available from Wintellect but now hosted on Codeplex @ http://powercollections.codeplex.com) has a generic Pair structure.
{ "language": "en", "url": "https://stackoverflow.com/questions/166089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "336" }
Q: How can I clone a .NET solution? Starting new .NET projects always involves a bit of work. You have to create the solution, add projects for different tiers (Domain, DAL, Web, Test), set up references, solution structure, copy javascript files, css templates and master pages etc etc. What I'd like is an easy way of cloning any given solution. If you use copy/paste, the problem is that you need to then go through renaming namespaces, assembly names, solution names, GUIDs etc. Is there a way of automating this? Something like this would be great: solutionclone.exe --solution=c:\code\abc\template.sln --to=c:\code\xyz --newname=MySolution I'm aware that Visual Studio has project templates, but I've not seen solution templates. A: I have created a small application for this. It works just like the previously mentioned Solutionclone app, except that it is both a command line application as well as a WPF application. Cloney copies a source folder to a target one, without any Git or Svn integration. It will also replace the old namespace everywhere (in file names as well as within files) with the new namespace (the name of the target folder) and exclude certain files (e.g. *.suo, *.user, *.vssscc) and folders (e.g. .git, .svn). You can grab the source code or download an executable at https://github.com/danielsaidi/cloney. Cloney can also be added to the Windows Explorer context menu, which makes it possible to clone .NET solutions by just right-clicking the .sln file. A: As you already found out: Copy the .sln File and make sure the paths/guids match. Because the .sln are text/plain just use your favourite scripting language to script a cloner. Maybe this is a good time to learn Python/Ruby/Perl/Windows Script Host MSDN Solution (.sln) File Definition A: Look at Tree Surgeon on CodePlex, it creates a development tree for you. A: I believe the Guidance Automation Toolkit allows you to do this, but may not be an "easy" way. 
I have the same problem as you and intend to look at it in detail "real soon now".

A: Maybe you should check out the Warmup open source project. You can find a brief description at http://devlicious.com/blogs/rob_reynolds/archive/2010/02/01/warmup-getting-started.aspx. IMHO, the advantage of the Warmup approach is that it can clone the whole tree with the solution directly from SVN or Git. Note! I haven't used it personally, but I plan to give it a try in the next project. Please leave a comment if you use it.

A: Old post, I know, but I recently had a need to do this, and although there are several homegrown tools out there to do it, I wanted something more lightweight that I could toss around wherever I feel like it. Since I like Python for interpreted language scripting, I wrote a single-file tool in Python to do the job. Its only dependencies are Python 3 and chardet2. https://gist.github.com/4159114

A: You can use a multi-project template to get solution-like behavior. The folder structure will be slightly off, putting all the projects a level below the .sln file. You can also implement a custom IWizard to have complete control.
{ "language": "en", "url": "https://stackoverflow.com/questions/166099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Function returning the return of another function If I want to call Bar() instead of Foo(), does Bar() return me a copy (additional overhead) of what Foo() returns, or it returns the same object which Foo() places on the temporary stack? vector<int> Foo(){ vector<int> result; result.push_back(1); return result; } vector<int> Bar(){ return Foo(); } A: Normally it returns a copy of the returned vector<int>. However this highly depends on the optimization done by the compiler. See the following discussion. Debug Build vector<int> Foo(){ 004118D0 push ebp 004118D1 mov ebp,esp 004118D3 push 0FFFFFFFFh 004118D5 push offset __ehhandler$?Foo@@YA?AV?$vector@HV?$allocator@H@std@@@std@@XZ (419207h) 004118DA mov eax,dword ptr fs:[00000000h] 004118E0 push eax 004118E1 sub esp,0F4h 004118E7 push ebx 004118E8 push esi 004118E9 push edi 004118EA lea edi,[ebp-100h] 004118F0 mov ecx,3Dh 004118F5 mov eax,0CCCCCCCCh 004118FA rep stos dword ptr es:[edi] 004118FC mov eax,dword ptr [___security_cookie (41E098h)] 00411901 xor eax,ebp 00411903 push eax 00411904 lea eax,[ebp-0Ch] 00411907 mov dword ptr fs:[00000000h],eax 0041190D mov dword ptr [ebp-0F0h],0 vector<int> result; 00411917 lea ecx,[ebp-24h] 0041191A call std::vector<int,std::allocator<int> >::vector<int,std::allocator<int> > (411050h) 0041191F mov dword ptr [ebp-4],1 result.push_back(1); 00411926 mov dword ptr [ebp-0FCh],1 00411930 lea eax,[ebp-0FCh] 00411936 push eax 00411937 lea ecx,[ebp-24h] 0041193A call std::vector<int,std::allocator<int> >::push_back (41144Ch) return result; 0041193F lea eax,[ebp-24h] 00411942 push eax 00411943 mov ecx,dword ptr [ebp+8] 00411946 call std::vector<int,std::allocator<int> >::vector<int,std::allocator<int> > (41104Bh) 0041194B mov ecx,dword ptr [ebp-0F0h] 00411951 or ecx,1 00411954 mov dword ptr [ebp-0F0h],ecx 0041195A mov byte ptr [ebp-4],0 0041195E lea ecx,[ebp-24h] 00411961 call std::vector<int,std::allocator<int> >::~vector<int,std::allocator<int> > (411415h) 00411966 mov eax,dword ptr 
[ebp+8] } Here we can see that for vector<int> result; a new object is created on the stack at [ebp-24h] 00411917 lea ecx,[ebp-24h] 0041191A call std::vector<int,std::allocator<int> >::vector<int,std::allocator<int> > (411050h) When we get to return result; a new copy is created in storage allocated by the caller at [ebp+8] 00411943 mov ecx,dword ptr [ebp+8] 00411946 call std::vector<int,std::allocator<int> >::vector<int,std::allocator<int> > (41104Bh) And the destructor is called for the local parameter vector<int> result at [ebp-24h] 0041195E lea ecx,[ebp-24h] 00411961 call std::vector<int,std::allocator<int> >::~vector<int,std::allocator<int> > (411415h) Release Build vector<int> Foo(){ 00401110 push 0FFFFFFFFh 00401112 push offset __ehhandler$?Foo@@YA?AV?$vector@HV?$allocator@H@std@@@std@@XZ (401F89h) 00401117 mov eax,dword ptr fs:[00000000h] 0040111D push eax 0040111E sub esp,14h 00401121 push esi 00401122 mov eax,dword ptr [___security_cookie (403018h)] 00401127 xor eax,esp 00401129 push eax 0040112A lea eax,[esp+1Ch] 0040112E mov dword ptr fs:[00000000h],eax 00401134 mov esi,dword ptr [esp+2Ch] 00401138 xor eax,eax 0040113A mov dword ptr [esp+8],eax vector<int> result; 0040113E mov dword ptr [esi+4],eax 00401141 mov dword ptr [esi+8],eax 00401144 mov dword ptr [esi+0Ch],eax result.push_back(1); return result; 00401147 push eax 00401148 mov dword ptr [esp+28h],eax 0040114C mov ecx,1 00401151 push esi 00401152 lea eax,[esp+14h] 00401156 mov dword ptr [esp+10h],ecx 0040115A mov dword ptr [esp+14h],ecx 0040115E push eax 0040115F lea ecx,[esp+1Ch] 00401163 push ecx 00401164 mov eax,esi 00401166 call std::vector<int,std::allocator<int> >::insert (401200h) 0040116B mov eax,esi } 0040116D mov ecx,dword ptr [esp+1Ch] 00401171 mov dword ptr fs:[0],ecx 00401178 pop ecx 00401179 pop esi 0040117A add esp,20h 0040117D ret The line vector<int> result does not call the vector allocator because it is done at call site in Bar. 
The optimization makes no copy of the result from Foo.

A: This is a trivial case for NRVO, the named return value optimization (a misnomer in this case since there's no name). Stan Lippman has a blog entry with a nice explanation of the mechanism involved.

A: Both may happen. However, most compilers will not make a copy as soon as you optimize. Your code indicates there should be a copy; however, the compiler is allowed to remove any copy that does not change the semantics of the program.

Note: This is why you should NEVER have a copy constructor that does anything but copy correctly, as you can never be sure whether a copy will actually be made or not.
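A way to see this for yourself without reading assembly is to instrument the copy constructor and count how often it runs. A small sketch (the exact count is compiler- and flag-dependent, since copy elision is optional before C++17, so treat the observed number as an observation, not a guarantee):

```cpp
#include <vector>

// A payload whose copy constructor counts how often it actually runs,
// so the effect of (N)RVO can be observed directly.
struct Probe {
    static int copies;
    std::vector<int> data;
    Probe() {}
    Probe(const Probe& other) : data(other.data) { ++copies; }
};
int Probe::copies = 0;

Probe Foo() {
    Probe result;              // a single named local: the NRVO candidate
    result.data.push_back(1);
    return result;
}

Probe Bar() {
    return Foo();              // returning a temporary: plain RVO
}
```

With optimization enabled most compilers construct `result` directly in the caller's storage, so `Probe::copies` stays 0 after `Probe p = Bar();`; an unoptimized pre-C++17 build may legally report 1 or 2, which is exactly why the copy constructor must do nothing but copy.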
{ "language": "en", "url": "https://stackoverflow.com/questions/166113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: .NET design course? Is there a course that can help non OO programmers how to develop in .NET in a good way? I don't mean just the syntax and how to design a class and the relationship between class but how organize a solution into projects (naming, what to put on each one) what method is more suitable to access data (dataobjects, remoting, ORM) how to design forms with databinding, how to validate, why is important to use interfaces if we want to unit test later and so on. There are so many things that I would like to study! But I can only find reference books, or some generic XP or Agile practices. I have lots of really good books (head first design patterns, head first C#, The Art of Agile Development, Code Complete, The pragmatic programmer series (subversion, unit testing and interface programming), but they don't say a word about organizing programs in .NET I found http://www.learnvisualstudio.net, https://www.microsoftelearning.com and http://www.franklins.net but they don't really deliver a strategy to develop in a maintenable way, they just show me the tools I can use. I also follow some very good blogs and websites but I can only get bits and pieces. How did you learn how to develop mid-size applications? Can you recommend any good web course? video tutorials? blogs? ebooks? A: You could learn that by looking at the real code. One of the things I like doing is to read good open source code. Try looking into NUnit source code which is very well done. A: Have a look at the patterns & practices Application Architecture Guide 2.0 (Beta 2 Release) on Codeplex. The patterns & practices division of Microsoft also has guidelines for naming etc. A: Yes as mentioned go to CodePlex and look at a variety of open source projects there and see how they set up things. For LOB applications it might be worth checking out Rocky Lhokta's CSLA business object's framework. This has an accompanying book and project for you to look at. 
Alternatively you could see if there is anything done by the Alt.Net people.
{ "language": "en", "url": "https://stackoverflow.com/questions/166121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: C++: Multithreading and refcounted object I'm currently trying to port a single-threaded program to multithreaded. This software makes heavy use of "refcounted" objects, which leads to some issues in multithreading. I'm looking for a design pattern or something that might solve my problem.

The main problem is object deletion between threads: normally deletion only decrements the reference count, and when the refcount reaches zero the object is deleted. This works well in a single-threaded program, and allows a great performance improvement when copying big objects. However, in a multithreaded program two threads might want to delete the same object concurrently; as the object is protected by a mutex, only one thread deletes the object while blocking the other one. But when it releases the mutex, the other thread continues its execution with an invalid (freed) object, which leads to memory corruption.

Here is an example with this class RefCountedObject:

class RefCountedObject
{
public:
    RefCountedObject()
        : _refCount( new U32(1) )
    {}

    RefCountedObject( const RefCountedObject& obj )
        : _refCount( obj._refCount )
    {
        ACE_Guard< ACE_Mutex > guard( _refCountMutex );
        ++(*_refCount);
    }

    ~RefCountedObject()
    {
        Destroy();
    }

    RefCountedObject& operator=( const RefCountedObject& obj )
    {
        if( this != &obj )
        {
            Destroy();
            ACE_Guard< ACE_Mutex > guard( _refCountMutex );
            _refCount = obj._refCount;
            ++(*_refCount);
        }
        return *this;
    }

private:
    void Destroy()
    {
        ACE_Guard< ACE_Mutex > guard( _refCountMutex ); // thread 2 is waiting here
        --(*_refCount); // this causes a free memory write by thread 2
        if( 0 == *_refCount )
            delete _refCount;
    }

private:
    mutable U32* _refCount;
    mutable ACE_Mutex _refCountMutex; // BAD: this mutex only protects the refCount pointer, not the refCount itself
};

Suppose that two threads want to delete the same RefCountedObject; both are in ~RefCountedObject and call Destroy(), the first thread has locked the mutex and the other one is waiting.
After the deletion of the object by the first thread, the second will continue its execution and cause a free memory write. Does anyone have experience with a similar problem and a solution for it?

Thanks all for your help, I realize my mistake: the mutex was only protecting the refCount pointer, not the refCount itself! I've created a RefCount class which is mutex protected. The mutex is now shared between all refcounted objects. Now all works fine.

A: If the count is part of the object then you have an inherent problem if one thread can be trying to increase the reference count whilst another is trying to remove the last reference. There needs to be an extra value on the ref count for each globally accessible pointer to the object, so you can always safely increase the ref count if you've got a pointer. One option would be to use boost::shared_ptr (see the docs). You can use the free functions atomic_load, atomic_store, atomic_exchange and atomic_compare_exchange (which are conspicuously absent from the docs) to ensure suitable protection when accessing global pointers to shared objects. Once your thread has got a shared_ptr referring to a particular object you can use the normal non-atomic functions to access it. Another option is to use Joe Seigh's atomic ref-counted pointer from his atomic_ptr_plus project.

A: Surely each thread simply needs to manage the reference counts correctly... That is, if ThreadA and ThreadB are both working with Obj1 then BOTH ThreadA and ThreadB should own a reference to the object and BOTH should call release when they're done with the object. In a single-threaded application it's likely that you have a point where a reference counted object is created, you then do work on the object and eventually call release. In a multi-threaded program you would create the object and then pass it to your threads (however you do that). Before passing the object to the thread you should call AddRef() on your object to give the thread its own reference count.
The thread that allocated the object can then call release as it's done with the object. The threads that are working with the object will then call release when they're done and when the last reference is released the object will be cleaned up. Note that you dont want the code that's running on the threads themselves to call AddRef() on the object as you then have a race condition between the creating thread calling release on the object before the threads that you've dispatched to get a chance to run and call AddRef(). A: thinking about your issue a little... what you're saying is that you have 1 object (if the refcount is 1) and yet 2 threads both call delete() on it. I think this is where your problem truly lies. The other way round this issue, if you want a threaded object you can safely reuse between threads, is to check that the refcount is greater than 1 before freeing internal memory. Currently you free it and then check whether the refcount is 0. A: This isn't an answer, but just a bit of advice. In a situation like this, before you start fixing anything, please make sure you can reliably duplicate these problems. Sometimes this is a simple as running your unit tests in a loop for a while. Sometimes putting some clever Sleeps into your program to force race conditions is helpful. Ref counting problems tend to linger, so an investment in your test harness will pay off in the long run. A: Any object that you are sharing between threads should be protected with a mutex, and the same applies to refcount handles ! That means you will never be deleting the last one handle to an object from two threads. You might be concurrently deleting two distinct handles that happen to point to one object. In Windows, you could use InterlockedDecrement. This ensures that precisely one of the two decrements will return 0. Only that thread will delete the refcounted object. Any other thread cannot have been copying one of the two handles either. 
By common MT rules one thread may not delete an object still used by another thread, and this extends to refcount handles too. A: One solution is to make the reference counter an atomic value, so that each concurrent call to destroy can safely proceed with only 1 deletion actually occurring, the other merely decrementing the atomic reference count. The Intel Thread Building Blocks library (TBB) provides atomic values. Also, so does the ACE library in the ACE_Atomic_Op template. The Boost library provides a reference counting smart pointer library that already implements this. http://www.dre.vanderbilt.edu/Doxygen/Current/html/ace/a00029.html http://www.boost.org/doc/libs/release/libs/smart_ptr/shared_ptr.htm A: I believe something along this line would solve your problem: private: void Destroy() { ACE_Guard< ACE_Mutex > guard( _refCountMutex ); // thread2 are waiting here if (_refCount != 0) { --(*_refCount); // This cause a free memory write by the thread2 if( 0 == *_refCount ) { delete _refCount; _refcount = 0; } } } private: mutable U32* _refCount; mutable ACE_Mutex _refCountMutex;
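To illustrate that atomic-counter idea in portable terms (this sketch uses C++11's std::atomic rather than TBB, ACE_Atomic_Op or Boost, so take it as a demonstration of the shape of the fix, not a drop-in replacement): the counter object is shared by all handles, and the decrement-and-test is a single atomic operation, so exactly one thread observes the transition to zero.

```cpp
#include <atomic>

typedef unsigned int U32;

class RefCountedObject {
public:
    RefCountedObject() : _refCount(new std::atomic<U32>(1)) {}

    RefCountedObject(const RefCountedObject& obj) : _refCount(obj._refCount) {
        _refCount->fetch_add(1, std::memory_order_relaxed);
    }

    RefCountedObject& operator=(const RefCountedObject& obj) {
        if (this != &obj) {
            // Acquire the new reference before dropping the old one.
            obj._refCount->fetch_add(1, std::memory_order_relaxed);
            Release();
            _refCount = obj._refCount;
        }
        return *this;
    }

    ~RefCountedObject() { Release(); }

    U32 Count() const { return _refCount->load(); }

private:
    void Release() {
        // fetch_sub returns the value *before* the decrement, so exactly
        // one thread sees 1 here and performs the deletion.
        if (_refCount->fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete _refCount;
    }

    std::atomic<U32>* _refCount;
};
```

The counter is still heap-allocated and shared between handles, which is the same insight the questioner arrived at; the difference is that no mutex is needed for the count itself.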
{ "language": "en", "url": "https://stackoverflow.com/questions/166125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Prevent Default Behavior in Key Listeners in YUI I have a web page where I'd like to remap Ctrl+N to a different behavior. I followed YUI's example of registering Key Listeners and my function is called, but Firefox still creates a new browser window. Things seem to work fine on IE7. How do I stop the new window from showing up? Example:

var kl2 = new YAHOO.util.KeyListener(document, { ctrl:true, keys:78 },
    {fn:function(event) {
        YAHOO.util.Event.stopEvent(event); // Doesn't help
        alert('Click');
    }});
kl2.enable();

It is possible to remove default behavior. Google Docs overrides Ctrl+S to save your document instead of bringing up Firefox's save dialog. I tried the example above with Ctrl+S but Firefox's save dialog still pops up. Since Google can stop the save dialog from coming up, I'm sure there's a way to prevent most default keyboard shortcuts.

A: The trick is the 'fn' function is whack. Experimentally, you can see that the function type for fn takes two parameters. The first param actually contains the TYPE of event. The second one contains... and this is screwy: an array containing the codepoint at index 0 and the actual event object at index 1. So changing your code around a bit, it should look like this:

function callback(type, args) {
    var event = args[1]; // the actual event object
    alert('Click');
    // like stopEvent, but the event still propagates to other YUI handlers
    YAHOO.util.Event.preventDefault(event);
}

var kl2 = new YAHOO.util.KeyListener(document, { ctrl:true, keys:78 }, {fn:callback});
kl2.enable();

Also, for the love of lisp, don't use raw code points in your code. Use 'N'.charCodeAt(0) instead of "78". Or wrap it up as a function:

function ord(char) { return char.charCodeAt(0); }

A: I'm just guessing here but I don't think it can be done. If it's possible it definitely shouldn't be. Generic keyboard shortcuts are something you should not mess with. What's next? Hook the window close button to open a new window...
A: Using YUI's event util, you could try and use the stopEvent method: However, because most users are used to those keypresses doing a particular thing in the browser (new window in your example), I always avoid clashes, which in effect means I don't use any of the meta or control keys. I simply use letters, on their own, which is fine until you have text entry boxes, so this bit of advice might be less useful to you. A: Although overriding default browser shortcuts is not trivial, in some cases it is worth to do this since it gives a more professional look of the application. Take a look at this script: http://www.openjs.com/scripts/events/keyboard_shortcuts/index.php#disable_in_input It turns out to work fine for me..
{ "language": "en", "url": "https://stackoverflow.com/questions/166127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Maximum length of the textual representation of an IPv6 address? I want to store the data returned by $_SERVER["REMOTE_ADDR"] in PHP into a DB field, pretty simple task, really. The problem is that I can't find any proper information about the maximum length of the textual representation of an IPv6 address, which is what a webserver provides through $_SERVER["REMOTE_ADDR"]. I'm not interested in converting the textual representation into the 128 bits the address is usually encoded in, I just want to know how many characters maximum are needed to store any IPv6 address returned by $_SERVER["REMOTE_ADDR"].

A: 45 characters. You might expect an address to be

0000:0000:0000:0000:0000:0000:0000:0000

8 * 4 + 7 = 39

that is, 8 groups of 4 digits with 7 colons between them. But if you have an IPv4-mapped IPv6 address, the last two groups can be written in base 10 separated by '.', e.g. [::ffff:192.168.100.228]. Written out fully:

0000:0000:0000:0000:0000:ffff:192.168.100.228

(6 * 4 + 5) + 1 + (4 * 3 + 3) = 29 + 1 + 15 = 45

Note, this is an input/display convention - it's still a 128 bit address and for storage it would probably be best to standardise on the raw colon separated format, i.e. [0000:0000:0000:0000:0000:ffff:c0a8:64e4] for the address above.

A: I think @Deepak's answer in this link is closer to the correct answer: Max length for client ip address. So the correct size is 45, not 39. Sometimes we try to scrounge on field sizes, but it seems better to prepare enough storage up front.

A: Watch out for certain headers such as HTTP_X_FORWARDED_FOR that appear to contain a single IP address. They may actually contain multiple addresses (a chain of proxies I assume). They will appear to be comma delimited - and can be a lot longer than 45 characters total - so check before storing in DB.
A: As indicated, a standard IPv6 address is at most 45 chars, but an IPv6 address can also include a trailing % followed by a "scope" or "zone" string, which has no fixed length but is generally a small positive integer or a network interface name, so in reality it can be bigger than 45 characters. Network interface names are typically "eth0", "eth1", "wlan0": a small number of chars. The max interface name length on Linux is 15 chars, so choosing 61 bytes will cover all interface names on Linux.

A: Answered my own question: IPv6 addresses are normally written as eight groups of four hexadecimal digits, where each group is separated by a colon (:). So that's 39 characters max.

A: On Linux, see the constant INET6_ADDRSTRLEN (include <arpa/inet.h>, see man inet_ntop). On my system (header "in.h"):

#define INET6_ADDRSTRLEN 46

The last character is for the terminating NUL, I believe, so the max length is 45, as in the other answers.
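If you would rather not worry about which textual variant you were handed, one option on POSIX systems is to normalize the address through its binary form before storing it; a sketch using the standard inet_pton/inet_ntop pair (assumes <arpa/inet.h>; Windows would need the Winsock equivalents):

```cpp
#include <arpa/inet.h>
#include <string>

// Round-trip the text through the 128-bit binary form. inet_ntop emits a
// single canonical spelling, so "0:0:0:0:0:0:0:1" and "::1" both come
// back as "::1"; invalid input yields an empty string.
std::string NormalizeIPv6(const std::string& text) {
    unsigned char bytes[16];
    if (inet_pton(AF_INET6, text.c_str(), bytes) != 1)
        return "";                        // not a valid IPv6 address
    char out[INET6_ADDRSTRLEN];           // 46 bytes, including the NUL
    inet_ntop(AF_INET6, bytes, out, sizeof out);
    return out;
}
```

Storing the canonical form also means simple string equality works for lookups, which the raw REMOTE_ADDR text does not guarantee.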
{ "language": "en", "url": "https://stackoverflow.com/questions/166132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "532" }
Q: What's quicker: including another file or querying a MySQL database in PHP? In PHP, which is quicker: using include('somefile.php') or querying a MySQL database with a simple SELECT query to get the same information? For example, say you had a JavaScript autocomplete search field which needed 3,000 terms to match against. Is it quicker to read those terms in from another file using include or to read them from a MySQL database using a simple SELECT query? Edit: This is assuming that the database and the file I want to include are on the same local machine as my code.

A: Including a file should almost always be quicker. If your database is on another machine (e.g. in shared hosting) or in a multi-server setup, the lookup will have to make an extra hop. However, in practice the difference is probably not going to matter. If the list is dynamic then storing it in MySQL will make your life easier. Static lists (e.g. countries or states) can be stored in a PHP include. If the list is quite short (a few hundred entries) and often used, you could load it straight into JavaScript and do away with AJAX. If you are going the MySQL route and are worried about speed then use caching.

$query = $_GET['query'];
$key = 'query' . $query;
if (!$results = apc_fetch($key)) {
    $statement = $db->prepare("SELECT name FROM list WHERE name LIKE :query");
    $statement->bindValue(':query', "$query%");
    $statement->execute();
    $results = $statement->fetchAll();
    apc_store($key, $results);
}
echo json_encode($results);

A: The difference in time is more down to the system design than the underlying technique, I'd dare say. Both a MySQL result and a file can be cached in memory, and the performance difference there would be so small it is negligible. Instead I would ask myself what the difference in maintenance would be. Are you likely to ever change the data? If not, just pop it in a plain file. Are you likely to change bits of the content ever so often?
If so, a database is way easier to manipulate. Same thing for the structure of the data: if it needs "restructuring", maybe it is more efficient to put it in a database? So: do what you feel is most convenient for you and the future maintainer of the code and data. :-)

A: It's very hard/impossible to give an exact answer, as there are too many unknown variables - what if the filesystem is mounted on an NFS that resides on the other side of the world? Or you have the whole MySQL database in memory. The size of the database should be factored in too. But, on a more answer-y note, a safe guess would be that MySQL is faster, given good indexes, good database structure/normalization and not too fancy/complex queries. I/O operations are always expensive (read: slow), while, as previously mentioned, the whole dataset is already cached in memory by MySQL. Besides, I imagine you thought of doing further string manipulation with those included files, which makes things even more troublesome - I'm convinced MySQL's string searching algorithms are much better optimized than what you could come up with in PHP.

A: It depends. If your file is stored locally on your server and the database is installed on another machine, then it is faster to include the file. Buuuuut, because it depends on your system, that might not be true. I suggest you make a PHP test script, run it 100 times from the command line, and repeat the test through HTTP (using cURL). Example:

use_include.php

<?php
$start = microtime(true);
include('somefile.php');
echo microtime(true) - $start;
?>

use_myphp.php

<?php
$start = microtime(true);
__put_here_your_mysql_statements_to_retrieve_the_file__
echo microtime(true) - $start;
?>

A: If this is something you're going to be fetching on a regular basis, it might be worthwhile to prefetch the data (from disk or the database, doesn't matter) and have your script pull it from a RAM cache like memcached.

A: I recently had this issue.
I had some data in MySQL that I was querying on every page request. For my data set, it was faster to write a fixed record length file than to use MySQL. There were a few different factors that made a file faster than MySQL for me:

* File size was small -- under 100kb of text data
* I was randomly picking and not searching -- indexes made no difference
* Connection time -- opening the file and reading it in was faster than connecting to the database when the server load was high. This was especially true since the OS cached the file in memory

Bottom line was that I benchmarked it and compared results. For my workload, the file system was faster. I suspect if my data set ever grows, that will change. I'm going to be keeping an eye on performance and I'm ready to change how it works in the future.

A: Definitely include, as long as the file isn't too big; otherwise you end up using too much memory, in which case a database would be recommended.

A: Reading in raw data to a script from a file will generally be faster than from a database. However, it sounds like you are wanting to query that data in order to find a match to return to the JavaScript. You may find in that case that MySQL will be faster for the actual querying/searching of the data (especially if correctly indexed etc.) as this is something a database is good at. Reading in a big file is also less scalable, as you will be using lots of server memory while the script executes.

A: Why not do it both ways and see which is faster? Both solutions are pretty trivial.

A: If you expect the number of terms to become larger at a later date, you're better off using MySQL with a fulltext search field.

A: If you use a PHP bytecode cache like APC or Xcache, including the file is likely to be faster. If you're using PHP and you want performance, a bytecode cache is absolutely a requirement. It sounds like you're considering keeping static data around in a PHP script that you include, to avoid hitting the database.
You're basically doing a rudimentary cache. This can work okay, as long as you have some way to refresh that file if/when the data does change. You might also want to learn about the MySQL Query Cache to make SQL queries against static data faster, or Memcached for keeping static data in memory.

A: I don't know exactly, but in my opinion MySQL, even if it can be slower, should be used if the content is dynamic. But I'm pretty sure that, for big content, using include is faster.
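The "benchmark both and see" advice above translates to any stack. Below is a rough Python sketch of such a micro-benchmark, with a temp file and an in-memory SQLite table standing in for the PHP include and the MySQL table (hypothetical stand-ins, so the absolute numbers won't carry over to a real LAMP setup; only the measure-both method does):

```python
import os
import sqlite3
import tempfile
import time

# Stand-in for the 3,000 autocomplete terms from the question (hypothetical data).
terms = ["term%04d" % i for i in range(3000)]

# Variant 1: keep the list in a flat file and read it on each "request".
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("\n".join(terms))
    path = f.name

start = time.perf_counter()
with open(path) as f:
    from_file = [t for t in f.read().splitlines() if t.startswith("term00")]
file_time = time.perf_counter() - start
os.remove(path)

# Variant 2: keep the list in a database table and run a LIKE query
# (in-memory SQLite standing in for MySQL).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE list (name TEXT)")
db.executemany("INSERT INTO list VALUES (?)", [(t,) for t in terms])

start = time.perf_counter()
from_db = [row[0] for row in db.execute("SELECT name FROM list WHERE name LIKE 'term00%'")]
db_time = time.perf_counter() - start

assert sorted(from_file) == sorted(from_db)  # both variants find the same 100 terms
print("file: %.6fs  db: %.6fs" % (file_time, db_time))
```

As the answers suggest, run it many times and under realistic load before deciding; caching effects dominate, so a single run is misleading.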
{ "language": "en", "url": "https://stackoverflow.com/questions/166134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Learning VB6 I'm moving from Java development to a MSFT environment. The app is currently written in VB6 and, while it's going to go to VB.NET/C# in the future, I need to find a way to pick up VB6 now. I'm told it's old, and there will be no books on it available these days. Any tips? Sites?

A: There are tons of books and you can probably get them dirt cheap since the technology is so old. For example, I just picked up some extra copies of a 3" thick hardback book that I did some writing for back in 1998 for under $3 on Amazon. Also, given its longevity there is a ton of reference material out on the Net for it that has accumulated over time. Don't let people scare you about VB6; it is a bit primitive compared to modern development platforms, but it wouldn't be so widely used if it didn't get the job done. That said, go with a more modern development tool unless you don't have a choice, for the reasons given by the others on this post.

A: The best tip is to... RUN! ;-) No, there are still tons of VB sites out there, and you should still be able to pick up loads of second hand books for VB6 for next to nothing.

A: I'm sure you can get some books on it. If Amazon has none, try eBay? It's a simple language, though - you shouldn't have much trouble picking it up! There's always the MSDN documentation. I'm having the opposite problem: I've got a few old apps in VB and need to update one of them, but can't find the install media!

A: IMHO the step from Java to VB6 is not that big... If you install Visual Studio and the MSDN library that comes with it, you have a good starting point. Look at some code, put the cursor at a function and press F1. The "online" documentation that comes with VB6 is really helpful, unlike later versions. ;-) Also, the auto complete functionality in Visual Studio is really helpful. I find it more helpful than the in-line completion in Eclipse for Java.
One of the upsides of the Visual Basic design is that it is designed to be human readable (with if-then-else instead of brackets and so on). Of course, it comes down to the individual developer to write understandable and well commented code there as well... A good starting point would be to find a guide that explains how the different data types in VB6 work: the difference between simple data types and objects, and how these are passed in to a function as an argument, "ByVal" versus "ByRef". I think this is one of the big "dangers" for a beginner in VB6. Once you get your head around it, it is easy.

A: As previous posters said, there is an absolute ton of help available for VB6 online and very cheaply on Amazon. Francesco Balena's book "Programming Visual Basic 6.0" would be my recommendation as the best book to get. It's tough enough in parts but well worth the effort, as the information he's delivering will give you a far deeper understanding of the subject than "for dummies" types of book. He's also written what I believe is considered one of the best books on .NET Visual Basic, and he is probably the foremost expert on migrating from VB6 to VB.NET and the pitfalls therein.
{ "language": "en", "url": "https://stackoverflow.com/questions/166138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is there a difference between DataTable.Clear and DataTable.Rows.Clear? I recall there is a difference between some methods/properties called directly on the DataTable class, and the identically named methods/properties on the DataTable.Rows property. (Might have been the RowCount/Count property for which I read this.) The difference is one of them disregards DataRow.RowState, and the other respects/uses it. In this particular case I'm wondering about the difference between DataTable.Clear and DataTable.Rows.Clear. I can imagine one of them actually removes all rows, and the other one just marks them as deleted. So my question is, is there a difference between the two Clear methods, and if so what is the difference? (Oh, this is for .NET 1.1 btw, in case the semantics changed from one version to another.) A: In .Net 1.1, DataRowCollection.Clear calls DataTable.Clear However, in .Net 2.0, there is a difference. If I understand the source correctly, DataTable.Clear will clear unattached rows (created using DataTable.NewRow) whereas DataRowCollection.Clear won't. The difference is in RecordManager.Clear (source below, from the .Net Reference Source for v3.5 SP 0); clearAll is true only when called from DataTable.Clear. 
internal void Clear(bool clearAll) {
    if (clearAll) {
        for(int record = 0; record < recordCapacity; ++record) {
            rows[record] = null;
        }
        int count = table.columnCollection.Count;
        for(int i = 0; i < count; ++i) {
            DataColumn column = table.columnCollection[i];
            for(int record = 0; record < recordCapacity; ++record) {
                column.FreeRecord(record);
            }
        }
        lastFreeRecord = 0;
        freeRecordList.Clear();
    }
    else {
        // just clear attached rows
        freeRecordList.Capacity = freeRecordList.Count + table.Rows.Count;
        for(int record = 0; record < recordCapacity; ++record) {
            if (rows[record] != null && rows[record].rowID != -1) {
                int tempRecord = record;
                FreeRecord(ref tempRecord);
            }
        }
    }
}

A: I've been testing the different methods now in .NET 1.1/VS2003; it seems Matt Hamilton is right.

* DataTable.Clear and DataTable.Rows.Clear seem to behave identically with respect to the two things I tested: both remove all rows (they don't mark them as deleted, they really remove them from the table), and neither removes the columns of the table.
* DataTable.Reset clears rows and columns.
* DataTable.Rows.Count does include deleted rows. (This might be 1.1 specific)
* foreach iterates over deleted rows. (I'm pretty sure deleted rows are skipped in 2.0.)

A: AFAIK, the main difference between datatable.clear and datatable.rows.clear is that datatable.clear clears both rows and columns. So if you want to keep the table structure (i.e. columns), use datatable.rows.clear. And if you want to start from scratch, use datatable.clear, or even datatable.reset to go right back to the beginning. datatable.reset is effectively the next level up from datatable.clear. Using datatable.clear will fail if there are any constraints applied that would be violated, but using datatable.reset will get rid of anything and everything that has been put in place since the datatable was created.

A: I don't believe that DataTable.Clear does clear columns.
This code writes "1" to standard output: var d = new DataTable(); d.Columns.Add("Hello", typeof(string)); d.Clear(); Console.WriteLine(d.Columns.Count); A: There is no difference between them. DataRowCollection.Clear() calls Table.Clear() Table.Clear() checks that the table can be cleared (constraints can prevent this), removes the rows and rebuilds any indexes. A: The both do the same thing. One is just an inherited method from the Collections class. And the Table.Clear() just calls that method. A: Do Below and Its working absolutely fine.... DataRow[] d_row = dt_result.Select("isfor_report='True'"); DataTable dt = dt_result.Clone(); foreach (DataRow dr in d_row) { dt.ImportRow(dr); } gv_view_result.DataSource = dt; gv_view_result.DataBind();
{ "language": "en", "url": "https://stackoverflow.com/questions/166159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How can I scale the content of an iframe? How can I scale the content of an iframe (in my example it is an HTML page, and is not a popup) in a page of my web site? For example, I want to display the content that appears in the iframe at 80% of the original size.

A: Thought I'd share what I came up with, using much of what was given above. I haven't checked Chrome, but it works in IE, Firefox and Safari, so far as I can tell. The specific offsets and zoom factor in this example worked for shrinking and centering two websites in iframes for Facebook tabs (810px width). The two sites used were a WordPress site and a Ning network. I'm not very good with HTML, so this could probably have been done better, but the result seems good.

<style>
#wrap {
    width: 1620px;
    height: 3500px;
    padding: 0;
    position: relative;
    left: -100px;
    top: 0px;
    overflow: hidden;
}
#frame {
    width: 1620px;
    height: 3500px;
    position: relative;
    left: -65px;
    top: 0px;
}
#frame {
    -ms-zoom: 0.7;
    -moz-transform: scale(0.7);
    -moz-transform-origin: 0px 0;
    -o-transform: scale(0.7);
    -o-transform-origin: 0 0;
    -webkit-transform: scale(0.7);
    -webkit-transform-origin: 0 0;
}
</style>
<div id="wrap">
    <iframe id="frame" src="http://www.example.com"></iframe>
</div>

A: With CSS:

html {
    zoom: 0.4;
}

;-)

A: If you want the iframe and its contents to scale when the window resizes, you can attach the following to the window's resize event as well as the iframe's onload event.
function() {
    var _wrapWidth = $('#wrap').width();
    var _frameWidth = $($('#frame')[0].contentDocument).width();
    if (!this.contentLoaded)
        this.initialWidth = _frameWidth;
    this.contentLoaded = true;
    var frame = $('#frame')[0];
    var percent = _wrapWidth / this.initialWidth;
    frame.style.width = 100.0 / percent + "%";
    frame.style.height = 100.0 / percent + "%";
    frame.style.zoom = percent;
    frame.style.webkitTransform = 'scale(' + percent + ')';
    frame.style.webkitTransformOrigin = 'top left';
    frame.style.MozTransform = 'scale(' + percent + ')';
    frame.style.MozTransformOrigin = 'top left';
    frame.style.oTransform = 'scale(' + percent + ')';
    frame.style.oTransformOrigin = 'top left';
};

This will make the iframe and its content scale to 100% width of the wrap div (or whatever percent you want). As an added bonus, you don't have to set the css of the frame to hard coded values since they'll all be set dynamically; you'll just need to worry about how you want the wrap div to display. I've tested this and it works on Chrome, IE11, and Firefox.

A: I found a solution that works in IE and Firefox (at least on the current versions). On Safari/Chrome, the iframe is resized to 75% of its original size, but the content within the iframe is not scaled at all. In Opera, this doesn't seem to work. This feels a bit esoteric, so if there is a better way to do it I'd welcome suggestions.

<style>
#wrap {
    width: 600px;
    height: 390px;
    padding: 0;
    overflow: hidden;
}
#frame {
    width: 800px;
    height: 520px;
    border: 1px solid black;
}
#frame {
    zoom: 0.75;
    -moz-transform: scale(0.75);
    -moz-transform-origin: 0 0;
}
</style>
...
<p>Some text before the frame</p>
<div id="wrap">
    <iframe id="frame" src="test2.html"></iframe>
</div>
<p>Some text after the frame</p>
</body>

Note: I had to use the wrap element for Firefox. For some reason, in Firefox when you scale the object down by 75%, it still uses the original size of the image for layout reasons. (Try removing the div from the sample code above and you'll see what I mean.)
I found some of this from this question.

A: I think you can do this by calculating the height and width you want with javascript (via document.body.clientWidth etc.) and then injecting the iframe into your HTML like this:

var element = document.getElementById("myid");
element.innerHTML += "<iframe src='http://www.google.com' height='200' width='" + document.body.clientWidth * 0.8 + "'/>";

I didn't test this in IE6 but it seems to work with the good ones :)

A: After struggling with this for hours trying to get it to work in IE8, 9, and 10, here's what worked for me. This stripped-down CSS works in FF 26, Chrome 32, Opera 18, and IE9-11 as of 1/7/2014:

.wrap {
    width: 320px;
    height: 192px;
    padding: 0;
    overflow: hidden;
}
.frame {
    width: 1280px;
    height: 786px;
    border: 0;
    -ms-transform: scale(0.25);
    -moz-transform: scale(0.25);
    -o-transform: scale(0.25);
    -webkit-transform: scale(0.25);
    transform: scale(0.25);
    -ms-transform-origin: 0 0;
    -moz-transform-origin: 0 0;
    -o-transform-origin: 0 0;
    -webkit-transform-origin: 0 0;
    transform-origin: 0 0;
}

For IE8, set the width/height to match the iframe, and add -ms-zoom to the .wrap container div:

.wrap {
    width: 1280px; /* same size as frame */
    height: 768px;
    -ms-zoom: 0.25; /* for IE 8 ONLY */
}

Just use your favorite method for browser sniffing to conditionally include the appropriate CSS; see Is there a way to do browser specific conditional CSS inside a *.css file? for some ideas. IE7 was a lost cause since -ms-zoom did not exist until IE8. Here's the actual HTML I tested with:

<div class="wrap">
    <iframe class="frame" src="http://time.is"></iframe>
</div>
<div class="wrap">
    <iframe class="frame" src="http://apple.com"></iframe>
</div>

http://jsfiddle.net/esassaman/PnWFY/

A: I just tested and for me, none of the other solutions worked. I simply tried this and it worked perfectly on Firefox and Chrome.
<div class='wrap'>
    <iframe ...></iframe>
</div>

and the css:

.wrap {
    width: 640px;
    height: 480px;
    overflow: hidden;
}
iframe {
    width: 76.92% !important;
    height: 76.92% !important;
    -webkit-transform: scale(1.3);
    transform: scale(1.3);
    -webkit-transform-origin: 0 0;
    transform-origin: 0 0;
}

This scales all the content by 23.08%, the equivalent of the original being 30% larger than the embedded version. (The width/height percentages of course need to be adjusted accordingly (1/scale_factor)).

A: For those of you having trouble getting this to work in IE, it is helpful to use -ms-zoom as suggested below and use the zoom function on the #wrap div, not the iframe id. In my experience, with the zoom function trying to scale the iframe div of #frame, it would scale the iframe size and not the content within it (which is what you're going for). Looks like this. Works for me on IE8, Chrome and FF.

#wrap {
    overflow: hidden;
    position: relative;
    width: 800px;
    height: 850px;
    -ms-zoom: 0.75;
}

A: This was my solution on a page with 890px width

#frame {
    overflow: hidden;
    position: relative;
    width: 1044px;
    height: 1600px;
    -ms-zoom: 0.85;
    -moz-transform: scale(0.85);
    -moz-transform-origin: 0px 0;
    -o-transform: scale(0.85);
    -o-transform-origin: 0 0;
    -webkit-transform: scale(0.85);
    -webkit-transform-origin: 0 0;
}

A: Kip's solution should work on Opera and Safari if you change the CSS to:

<style>
#wrap {
    width: 600px;
    height: 390px;
    padding: 0;
    overflow: hidden;
}
#frame {
    width: 800px;
    height: 520px;
    border: 1px solid black;
}
#frame {
    -ms-zoom: 0.75;
    -moz-transform: scale(0.75);
    -moz-transform-origin: 0 0;
    -o-transform: scale(0.75);
    -o-transform-origin: 0 0;
    -webkit-transform: scale(0.75);
    -webkit-transform-origin: 0 0;
}
</style>

You might also want to specify overflow: hidden on #frame to prevent scrollbars.

A: The #wrap #frame solution works fine, as long as the numbers in #wrap are the #frame numbers times the scale factor. It shows only that part of the scaled down frame.
You can see it here scaling down websites and putting them into a Pinterest-like form (with the woodmark jQuery plugin): http://www.genautica.com/sandbox/woodmark-index.html

A: So probably not the best solution, but it seems to work OK.

<IFRAME ID=myframe SRC=.... ></IFRAME>
<SCRIPT>
window.onload = function(){document.getElementById('myframe').contentWindow.document.body.style = 'zoom:50%;';};
</SCRIPT>

Obviously not trying to fix the parent, just adding the "zoom:50%" style to the body of the child with a bit of JavaScript. Maybe I could set the style of the "HTML" element, but I didn't try that.

A: You don't need to wrap the iframe with an additional tag. Just make sure you increase the width and height of the iframe by the same amount you scale down the iframe. E.g. to scale the iframe content to 80%:

#frame {
    /* Example size! */
    height: 400px; /* original height */
    width: 100%; /* original width */
}
#frame {
    height: 500px; /* new height (400 * (1/0.8)) */
    width: 125%; /* new width (100 * (1/0.8)) */
    transform: scale(0.8);
    transform-origin: 0 0;
}

Basically, to get the same size iframe you need to scale up the dimensions.

A: Followup to lxs's answer: I noticed a problem where having both the zoom and -webkit-transform tags at the same time seems to confound Chrome (version 15.0.874.15) by doing a double-zoom sort of effect. I was able to work around the issue by replacing zoom with -ms-zoom (targeted only at IE), leaving Chrome to make use of just the -webkit-transform tag, and that cleared things up.

A: If your HTML is styled with CSS, you can probably link different style sheets for different sizes.

A: I do not think HTML has such functionality. The only thing I can imagine would do the trick is to do some server-side processing. Perhaps you could get an image snapshot of the webpage you want to serve, scale it on the server and serve it to the client. This would be a non-interactive page, however. (Maybe an imagemap could have the link, but still.)
Another idea would be to have a server-side component that would alter the HTML. Sort of like the Firefox 2.0 zoom feature. This, of course, is not perfect zooming, but it is better than nothing. Other than that, I am out of ideas.

A: These solutions don't work properly for me (blur) with a flexbox and an iframe at 100%, but if the iframe uses rem, em, or percent units then there is a solution that looks great:

window.onload = function(){
    let ifElem = document.getElementById("iframe-id");
    ifElem.contentWindow.document.documentElement.style.fontSize = "80%";
}

A: As said, I doubt you can do it. Maybe you can scale at least the text itself, by setting a style font-size: 80%;. Untested, not sure it works, and it won't resize boxes or images.
{ "language": "en", "url": "https://stackoverflow.com/questions/166160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "281" }
Q: How can I convert List to Hashtable in C#? I have a list of objects, each containing an Id, Code and Description. I need to convert this list into a Hashtable, using Description as the key and Id as the value. This is so the Hashtable can then be serialised to JSON. Is there a way to convert from List<Object> to Hashtable without writing a loop to go through each item in the list?

A: If you have access to Linq, you can use the ToDictionary function.

A: Let's assume that your List contains objects of type Foo (with an int Id and a string Description). You can use Linq to turn that list into a Dictionary like this:

var dict = myList.Cast<Foo>().ToDictionary(o => o.Description, o => o.Id);

A: theList.ForEach(delegate(theObject obj) { dic.Add(obj.Description, obj.Id); });

A: Also look at the System.Collections.ObjectModel.KeyedCollection<TKey, TItem>. It seems like a better match for what you want to do.
{ "language": "en", "url": "https://stackoverflow.com/questions/166174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Searching date meta tags in Sharepoint I'm currently looking at indexing an ASP website from Sharepoint and I need to replicate the old "advanced search" schema that the users are familiar with. In order to do this I need to index a few meta tags from the web pages. This is easily done and for the text fields I can use them in the search as well. However for date meta tags, like "expired" or "published" I'm having some problems. The problem is basically that the meta tags are crawled as "text", but I need Sharepoint to parse them as datetime. I've seen a few posts on TechNet asking for the same, but with no answer. 1: https://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=2614064&SiteID=17 TechNet A: You're not doing anything wrong, this is how the product works. To add to what was said earlier, it's not easy to customize. The proper way to approach this is to create a custom protocol handler for HTML. This is a custom COM Object that implements a few interfaces. The MOSS 2007 SDK has a protocol handler reference. When we did this, we created an ini file so we could define the type we wanted META fields crawled as (String, Int, DateTime). Then when you added the custom properties everything was properly parsed. Then you can use the custom properties like you would normally. A: The web crawler built into search is rudimentary and you won't be able to easily extend it to include meta tags. Allegedly you can write your own protocol handler and crawl the ASP pages in their own content source; allegedly that works. I don't think anyone actually writes their own protocol handlers though. You're going to be disappointed with what the SharePoint crawler offers, which is why there are no answers on the official forum either--because the real answer is "Can't do that easily, sorry." You may be able to hack something up by writing a custom web service (ASMX or WCF-based) that itself crawls the ASP pages' meta tags. 
From there, you could pull the web service results into the BDC which is searchable, and then in the search results/BDC data you can have a link to the original page. It's like a Rube Goldberg device, I know, but trust me when I say it will be easier than figuring out how to write a protocol handler.
{ "language": "en", "url": "https://stackoverflow.com/questions/166178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Better preverifiers than Sun WTK I'm currently using the Sun preverifier to preverify my MIDlet. I'm finding that it can be a bit slow, doesn't give any decent error messages, and is only available as a Windows exe. Are there any better preverifiers around that will work cross platform (Linux specifically)?

A: Proguard can preverify jar files. I guess you could disable the obfuscation if you didn't want that and just preverify with it. It's a Java application, so it should run on Linux too. http://proguard.sourceforge.net/manual/examples.html#microedition

A: Actually, there is also a Linux version of WTK (from Sun) and I am using it every day on Ubuntu. Can't really comment on the speed. About Proguard preverification - for some reason this does not work for me on some jars, and I have not been able to figure out why. Running a 'preverified' jar with the WTK emulator throws preverification errors.
{ "language": "en", "url": "https://stackoverflow.com/questions/166179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: SQL 2005 Foreign key between another base Is it possible to create, in a table of one database, a foreign key to a column of a table in another database in SQL 2005?

A: No. If you need cross-database referential integrity, the only way is to use triggers.
{ "language": "en", "url": "https://stackoverflow.com/questions/166185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }