Q: What is the best way to add options to a select from a JavaScript object with jQuery? What is the best method for adding options to a <select> from a JavaScript object using jQuery?
I'm looking for something that I don't need a plugin to do, but I would also be interested in the plugins that are out there.
This is what I did:
selectValues = { "1": "test 1", "2": "test 2" };
for (key in selectValues) {
if (typeof selectValues[key] === 'string') {
$('#mySelect').append('<option value="' + key + '">' + selectValues[key] + '</option>');
}
}
A: A clean/simple solution:
This is a cleaned up and simplified version of matdumsa's:
$.each(selectValues, function(key, value) {
$('#mySelect')
.append($('<option>', { value : key })
.text(value));
});
Changes from matdumsa's: (1) removed the close tag for the option inside append() and (2) moved the properties/attributes into a map as the second parameter of $().
A: There's a sorting problem with this solution in Chrome (jQuery 1.7.1): Chrome seems to sort object properties by name/number.
So to keep the order (yes, it's object abusing), I changed this:
optionValues0 = {"4321": "option 1", "1234": "option 2"};
to this
optionValues0 = {"1": {id: "4321", value: "option 1"}, "2": {id: "1234", value: "option 2"}};
and then the $.each will look like:
$.each(optionValues0, function(order, object) {
var key = object.id;
var value = object.value;
$('#mySelect').append($('<option>', { value : key }).text(value));
});
A: Rather than repeating the same code everywhere, I would suggest it is more desirable to write your own jQuery function like:
jQuery.fn.addOption = function (key, value) {
$(this).append($('<option>', { value: key }).text(value));
};
Then to add an option just do the following:
$('select').addOption('0', 'None');
A: You can iterate over your JSON array and append each option with code like the following:
$('<option/>').attr("value","someValue").text("Option1").appendTo("#my-select-id");
A: *
*$.each is slower than a for loop
*Re-selecting the DOM on every iteration, as in $("#mySelect").append(...), is not best practice inside a loop
So the best solution is the following
If JSON data resp is
[
{"id":"0001", "name":"Mr. P"},
{"id":"0003", "name":"Mr. Q"},
{"id":"0054", "name":"Mr. R"},
{"id":"0061", "name":"Mr. S"}
]
use it as
var option = "";
for (var i = 0; i < resp.length; i++) {
option += "<option value='" + resp[i].id + "'>" + resp[i].name + "</option>";
}
$('#mySelect').html(option);
A: A jQuery plugin could be found here: Auto-populating Select Boxes using jQuery & AJAX.
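The linked plugin aside, the basic auto-populate pattern is short enough to sketch here; the endpoint URL and the {key: value} JSON shape below are assumptions for illustration:
$.getJSON('options.json', function (data) { // hypothetical endpoint
    var $select = $('#mySelect').empty();
    $.each(data, function (key, value) {
        $select.append($('<option/>', { value: key, text: value }));
    });
});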
A: That's what I did with two-dimensional arrays: the first column is item i, which goes into the innerHTML of the <option>; the second column is record_id i, which goes into the value of the <option>:
*
*PHP
$items = $dal->get_new_items(); // Gets data from the database
$items_arr = array();
$i = 0;
foreach ($items as $item)
{
$first_name = $item->first_name;
$last_name = $item->last_name;
$date = $item->date;
$show = $first_name . " " . $last_name . ", " . $date;
$request_id = $item->request_id;
$items_arr[0][$i] = $show;
$items_arr[1][$i] = $request_id;
$i++;
}
echo json_encode($items_arr);
*JavaScript/Ajax
function ddl_items() {
if (window.XMLHttpRequest) {
// Code for Internet Explorer 7+, Firefox, Chrome, Opera, and Safari
xmlhttp=new XMLHttpRequest();
}
else{
// Code for Internet Explorer 6 and Internet Explorer 5
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.onreadystatechange=function() {
if (xmlhttp.readyState==4 && xmlhttp.status==200) {
var arr = JSON.parse(xmlhttp.responseText);
var lstbx = document.getElementById('my_listbox');
for (var i=0; i<arr.length; i++) {
var option = new Option(arr[0][i], arr[1][i]);
lstbx.options.add(option);
}
}
};
xmlhttp.open("GET", "Code/get_items.php?dummy_time=" + new Date().getTime() + "", true);
xmlhttp.send();
}
A: Although the previous answers are all valid - it might be advisable to append all these to a documentFragment first, then append that document fragment to the DOM afterwards...
See John Resig's thoughts on the matter...
Something along the lines of:
var frag = document.createDocumentFragment();
for(item in data.Events)
{
var option = document.createElement("option");
option.setAttribute("value", data.Events[item].Key);
option.innerText = data.Events[item].Value;
frag.appendChild(option);
}
eventDrop.empty();
eventDrop.append(frag);
A: Yet another way of doing it:
var options = [];
$.each(selectValues, function(key, value) {
options.push($("<option/>", {
value: key,
text: value
}));
});
$('#mySelect').append(options);
A: if (data.length != 0) {
var opts = "";
for (i in data)
opts += "<option value='" + data[i].value + "'>" + data[i].text + "</option>";
$("#myselect").empty().append(opts);
}
This manipulates the DOM only once after first building a giant string.
A: I found that this is simple and works great.
for (var i = 0; i < array.length; i++) {
$('#clientsList').append($("<option></option>").text(array[i].ClientName).val(array[i].ID));
}
A: The JSON format:
[{
"org_name": "Asset Management"
}, {
"org_name": "Debt Equity Foreign services"
}, {
"org_name": "Credit Services"
}]
And the jQuery code to populate the values to the Dropdown on Ajax success:
success: function(json) {
var options = [];
$('#org_category').html(''); // Set the Dropdown as Blank before new Data
options.push('<option>-- Select Category --</option>');
$.each(JSON.parse(json), function(i, item) {
options.push($('<option/>',
{
value: item.org_name, text: item.org_name
}));
});
$('#org_category').append(options); // Set the Values to Dropdown
}
A: If you don't have to support old IE versions, using the Option constructor is clearly the way to go, a readable and efficient solution:
$(new Option('myText', 'val')).appendTo('#mySelect');
It's equivalent in functionality to, but cleaner than:
$("<option></option>").attr("value", "val").text("myText")).appendTo('#mySelect');
A: Using the $.map() function, you can do this in a more elegant way:
$('#mySelect').html( $.map(selectValues, function(val, key){
return '<option value="' + key + '">' + val + '</option>';
}).join(''));
A: This looks nicer, provides readability, but is slower than other methods.
$.each(selectData, function(i, option)
{
$("<option/>").val(option.id).text(option.title).appendTo("#selectBox");
});
If you want speed, the fastest (tested!) way is this, using an array, not string concatenation, and using only one append call.
var auxArr = [];
$.each(selectData, function(i, option)
{
auxArr[i] = "<option value='" + option.id + "'>" + option.title + "</option>";
});
$('#selectBox').append(auxArr.join(''));
A: <!DOCTYPE html>
<html lang="en">
<head>
<title>append selectbox using jquery</title>
<meta charset="utf-8">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script type="text/javascript">
function setprice(){
var selectValues = { "1": "test 1", "2": "test 2" };
$.each(selectValues, function(key, value) {
$('#mySelect')
.append($("<option></option>")
.attr("value",key)
.text(value));
});
}
</script>
</head>
<body onload="setprice();">
<select class="form-control" id="mySelect">
<option>1</option>
<option>2</option>
<option>3</option>
<option>4</option>
</select>
</body>
</html>
A: var output = [];
$.each(selectValues, function(key, value)
{
output.push('<option value="'+ key +'">'+ value +'</option>');
});
$('#mySelect').get(0).innerHTML = output.join('');
In this way you "touch the DOM" only one time.
I'm not sure if the last line can be converted into $('#mySelect').html(output.join('')) because I don't know jQuery internals (maybe it does some parsing in the html() method)
A: A refinement of @joshperry's older answer:
It seems that plain .append also works as expected,
$("#mySelect").append(
$.map(selectValues, function(v,k){
return $("<option>").val(k).text(v);
})
);
or shorter,
$("#mySelect").append(
$.map(selectValues, (v,k) => $("<option>").val(k).text(v))
// $.map(selectValues, (v,k) => new Option(v, k)) // using plain JS
);
A: This is slightly faster and cleaner.
var selectValues = {
"1": "test 1",
"2": "test 2"
};
var $mySelect = $('#mySelect');
//
$.each(selectValues, function(key, value) {
var $option = $("<option/>", {
value: key,
text: value
});
$mySelect.append($option);
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<select id="mySelect"></select>
A: All of these answers seem unnecessarily complicated. All you need is:
var options = $('#mySelect').get(0).options;
$.each(selectValues, function(key, value) {
options[options.length] = new Option(value, key);
});
That is completely cross browser compatible.
A: I combined the two best answers into a great one.
var outputConcatenation = [];
$.each(selectValues, function(i, item) {
outputConcatenation.push($("<option></option>").attr("value", item.key).attr("data-customdata", item.customdata).text(item.text).prop("outerHTML"));
});
$("#myselect").html(outputConcatenation.join(''));
A: $.each(selectValues, function(key, value) {
$('#mySelect').append($("<option/>", {
value: key, text: value
}));
});
A: Set your HTML select id in the line below. Here, mySelect is used as the id of the select element.
var options = $("#mySelect");
Then take the object, which is selectValues in this scenario, and pass it to the jQuery each loop. It uses the key and text of each entry accordingly and appends them to the option list as follows.
$.each(selectValues, function(val, text) {
options.append(
$('<option></option>').val(val).html(text)
);
});
This will display the text in the drop-down list, and once an option is selected, the value of the selected text will be used.
E.g.:
"1": "test 1",
"2": "test 2",
Dropdown,
display name: test 1 -> value is 1
display name: test 2 -> value is 2
A: Actually, for improved performance, it's better to build the option list separately and append it to the select in one call.
var options = [];
$.each(selectValues, function(key, value) {
options.push ($('<option>', { value : key })
.text(value));
});
$('#mySelect').append(options);
http://learn.jquery.com/performance/append-outside-loop/
A: Pure JS
In pure JS, adding the next option to a select is easier and more direct:
mySelect.innerHTML+= `<option value="${key}">${value}</option>`;
let selectValues = { "1": "test 1", "2": "test 2" };
for(let key in selectValues) {
mySelect.innerHTML+= `<option value="${key}">${selectValues[key]}</option>`;
}
<select id="mySelect">
<option value="0" selected="selected">test 0</option>
</select>
A: $.each(response, function (index,value) {
$('#unit')
.append($("<option></option>")
.attr("value", value.id)
.text(value.title));
});
A: var output = [];
var length = data.length;
for(var i = 0; i < length; i++)
{
output[i] = '<option value="' + data[i].start + '">' + data[i].start + '</option>';
}
$('#choose_schedule').get(0).innerHTML = output.join('');
I've done a few tests and this, I believe, does the job the fastest. :P
A: Be forewarned... I am using jQuery Mobile 1.0b2 with PhoneGap 1.0.0 on an Android 2.2 (Cyanogen 7.0.1) phone (T-Mobile G2) and could not get the .append() method to work at all. I had to use .html() as follows:
var options = '';
$.each(data, function(index, object) {
options += '<option value="' + object.id + '">' + object.stop + '</option>';
});
$('#selectMenu').html(options);
A: There's an approach using the Microsoft templating approach that's currently under proposal for inclusion into jQuery core. There's more power in using templating, so for the simplest scenario it may not be the best option. For more details see Scott Gu's post outlining the features.
First include the templating js file, available from github.
<script src="Scripts/jquery.tmpl.js" type="text/javascript" />
Next set-up a template
<script id="templateOptionItem" type="text/html">
<option value='{{= Value}}'>{{= Text}}</option>
</script>
Then with your data call the .render() method
var someData = [
{ Text: "one", Value: "1" },
{ Text: "two", Value: "2" },
{ Text: "three", Value: "3"}];
$("#templateOptionItem").render(someData).appendTo("#mySelect");
I've blogged this approach in more detail.
A: The same as other answers, in a jQuery fashion:
$.each(selectValues, function(key, value) {
$('#mySelect')
.append($("<option></option>")
.attr("value", key)
.text(value));
});
A: I have made something like this, loading a dropdown item via Ajax. The response above is also acceptable, but it is always good to have as little DOM modification as possible for better performance.
So rather than add each item inside a loop it is better to collect items within a loop and append it once it's completed.
var items = '';
$(data).each(function(index, item){
// collect items (assuming each item has id and name fields)
items += '<option value="' + item.id + '">' + item.name + '</option>';
})
Append it,
$('#select_id').append(items);
or even better
$('#select_id').html(items);
A: jQuery
var list = $("#selectList");
$.each(items, function(index, item) {
list.append(new Option(item.text, item.value));
});
Vanilla JavaScript
var list = document.getElementById("selectList");
for(var i in items) {
list.add(new Option(items[i].text, items[i].value));
}
A: function populateDropdown(select, data) {
select.html('');
$.each(data, function(id, option) {
select.append($('<option></option>').val(option.value).html(option.name));
});
}
It works well with jQuery 1.4.1.
For complete article for using dynamic lists with ASP.NET MVC & jQuery visit:
Dynamic Select Lists with MVC and jQuery
A: The simple way is:
$('#SelectId').html("<option value='0'>select</option><option value='1'>Laguna</option>");
A: Most of the other answers use the each function to iterate over the selectValues. This requires that append be called for each element, and a reflow gets triggered as each is added individually.
Updating this answer to a more idiomatic functional method (using modern JS): append can be called only once, with an array of option elements created using map and the Option element constructor.
Using an Option DOM element should reduce function call overhead, as the option element doesn't need to be updated after creation and jQuery's parsing logic need not run.
$('#mySelect').append($.map(selectValues, (text, value) => new Option(text, value)))
This can be simplified further if you make a factory utility function that will new up an option object:
const newoption = (...args) => new Option(...args)
Then this can be provided directly to map:
$('#mySelect').append($.map(selectValues, newoption))
Previous Formulation
Because append also allows passing values as a variable number of arguments, we can precreate the list of option elements with map and append them as arguments in a single call by using apply.
$.fn.append.apply($('#mySelect'), $.map(selectValues, (text, value) => $("<option/>").val(value).text(text)));
It looks like in later versions of jQuery, append also accepts an array argument, and this can be simplified somewhat:
$('#mySelect').append($.map(selectValues, (text, value) => $("<option/>").val(value).text(text)))
A: I decided to chime in a bit.
*
*Deal with prior selected option; some browsers mess up when we append
*ONLY hit DOM once with the append
*Deal with the multiple attribute while adding more options
*Show how to use an object
*Show how to map using an array of objects
// objects as value/desc
let selectValues = {
"1": "test 1",
"2": "test 2",
"3": "test 3",
"4": "test Four"
};
//use div here as using "select" mucks up the original selected value in "mySelect"
let opts = $("<div />");
let opt = {};
$.each(selectValues, function(value, desc) {
opts.append($('<option />').prop("value", value).text(desc));
});
opts.find("option").appendTo('#mySelect');
// array of objects called "options" in an object
let selectValuesNew = {
options: [{
value: "1",
description: "2test 1"
},
{
value: "2",
description: "2test 2",
selected: true
},
{
value: "3",
description: "2test 3"
},
{
value: "4",
description: "2test Four"
}
]
};
//use div here as using "select" mucks up the original selected value
let opts2 = $("<div />");
let opt2 = {}; //only append after adding all options
$.map(selectValuesNew.options, function(val, index) {
opts2.append($('<option />')
.prop("value", val.value)
.prop("selected", val.selected)
.text(val.description));
});
opts2.find("option").appendTo('#mySelectNew');
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<select id="mySelect">
<option value="" selected="selected">empty</option>
</select>
<select id="mySelectNew" multiple="multiple">
<option value="" selected="selected">2empty</option>
</select>
A: Since jQuery's append can take an array as an argument, I'm surprised nobody suggested making this a one-liner with map:
$('#the_select').append(['a','b','c'].map(x => $('<option>').text(x)));
or reduce
['a','b','c'].reduce((s,x) => s.append($('<option>').text(x)), $('#the_select'));
A: Getting the object keys to get the object values.
Using map() to add new Options.
const selectValues = {
"1": "test 1",
"2": "test 2"
}
const selectTest = document.getElementById('selectTest')
Object.keys(selectValues).map(key => selectTest.add(new Option(selectValues[key], key)))
<select id="selectTest"></select>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/170986",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1475"
} |
Q: How to get the charset from an HTML page I'm trying to get the charset attribute in any HTML meta tag.
(i.e. <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">)
Is there any way to do that in C++ under Linux? I was using HTML Tidy as a parser, but I can't get that attribute to return anything different from us-ascii (even if the encoding is utf-8). This is the output I got:
*.*4 Node: meta
Name attr: http-equiv
Value attr: Content-Type
Name attr: content
Value attr: text/html; charset=us-ascii
A: As per the request of Vinko Vrsalovic, here is the code that gets that result:
void dumpNode( TidyNode tnod, int indent )
{
TidyNode child;
for ( child = tidyGetChild(tnod); child; child = tidyGetNext(child) )
{
ctmbstr name;
switch ( tidyNodeGetType(child) )
{
case TidyNode_Root: name = "Root"; break;
case TidyNode_DocType: name = "DOCTYPE"; break;
case TidyNode_Comment: name = "Comment"; break;
case TidyNode_ProcIns: name = "Processing Instruction"; break;
case TidyNode_Text: name = "Text"; break;
case TidyNode_CDATA: name = "CDATA"; break;
case TidyNode_Section: name = "XML Section"; break;
case TidyNode_Asp: name = "ASP"; break;
case TidyNode_Jste: name = "JSTE"; break;
case TidyNode_Php: name = "PHP"; break;
case TidyNode_XmlDecl: name = "XML Declaration"; break;
case TidyNode_Start:
case TidyNode_End:
case TidyNode_StartEnd:
default:
name = tidyNodeGetName( child );
TidyAttr att = tidyAttrFirst(child);
while (att)
{
std::cout << "Name attr: " << tidyAttrName(att) << std::endl;
std::cout << "Value attr: " << tidyAttrValue(att) << std::endl;
att = tidyAttrNext(att);
}
break;
}
assert( name != NULL );
printf( "%d*.*%d%sNode: %s\n", indent, indent, " ", name );
dumpNode( child, indent + 4 );
}
}
void dumpHtml( TidyDoc tdoc)
{
dumpNode( tidyGetHtml(tdoc),0 );
}
int main(int argc, char **argv) {
std::string toReturn("");
TidyBuffer output;
TidyBuffer errbuf;
int rc = -1;
Bool ok;
tidyBufInit(&output);
tidyBufInit(&errbuf);
TidyDoc tdoc = tidyCreate();
ok = tidyOptSetBool( tdoc, TidyXhtmlOut, yes ); // Convert to XHTML
if ( ok )
rc = tidySetErrorBuffer( tdoc, &errbuf ); // Capture diagnostics
if ( rc >= 0 )
rc = tidyParseFile(tdoc, "fuebuena.html"); // Parse the input
if ( rc >= 0 )
rc = tidyCleanAndRepair( tdoc ); // Tidy it up!
if (rc >= 0)
dumpHtml(tdoc);
return 0;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/170988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What is the best way to remove a table row with jQuery? What is the best method for removing a table row with jQuery?
A: Is the following acceptable:
$('#myTableRow').remove();
A: All you have to do is to remove the table row (<tr>) tag from your table. For example here is the code to remove the last row from the table:
$('#myTable tr:last').remove();
*Code above was taken from this jQuery Howto post.
A: function removeRow(row) {
$(row).remove();
}
<tr onmousedown="removeRow(this)"><td>Foo</td></tr>
Maybe something like this could work as well? I haven't tried doing something with "this", so I don't know if it works or not.
A: try this for size
$(this).parents('tr').first().remove();
full listing:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
<head>
<script type="text/javascript" src="http://code.jquery.com/jquery-1.4.3.min.js"></script>
<script type="text/javascript">
$(document).ready(function() {
$('.deleteRowButton').click(DeleteRow);
});
function DeleteRow()
{
$(this).parents('tr').first().remove();
}
</script>
</head>
<body>
<table>
<tr><td>foo</td>
<td><a class="deleteRowButton">delete row</a></td></tr>
<tr><td>bar bar</td>
<td><a class="deleteRowButton">delete row</a></td></tr>
<tr><td>bazmati</td>
<td><a class="deleteRowButton">delete row</a></td></tr>
</table>
</body>
</html>
see it in action
A: Assuming you have a button/link inside of a data cell in your table, something like this would do the trick...
$(".delete").live('click', function(event) {
$(this).parent().parent().remove();
});
This will remove the parent of the parent of the button/link that is clicked. You need to use parent() because it is a jQuery object, not a normal DOM object, and you need to use parent() twice, because the button lives inside a data cell, which lives inside a row....which is what you want to remove. $(this) is the button clicked, so simply having something like this will remove only the button:
$(this).remove();
While this will remove the data cell:
$(this).parent().remove();
If you want to simply click anywhere on the row to remove it something like this would work. You could easily modify this to prompt the user or work only on a double-click:
$(".delete").live('click', function(event) {
$(this).parent().remove();
});
A: You're right:
$('#myTableRow').remove();
This works fine if your row has an id, such as:
<tr id="myTableRow"><td>blah</td></tr>
If you don't have an id, you can use any of jQuery's plethora of selectors.
A: You can use:
$($(this).closest("tr"))
for finding the parent table row of an element.
It is more elegant than parent().parent() which is what I started out doing and soon learnt the error of my ways.
--Edit --
Someone pointed out that the question was about removing the row...
$($(this).closest("tr")).remove()
As pointed out below you can simply do:
$(this).closest('tr').remove();
A similar code snippet can be used for many operations such as firing events on multiple elements.
A: Another one, using empty() (note that this clears the row's contents rather than removing the row itself):
$(this).closest('tr').empty();
A: If the row you want to delete might change you can use this. Just pass this function the row # you wish to delete.
function removeMyRow(docRowCount){
$('table tr').eq(docRowCount).remove();
}
A: if you have HTML like this
<tr>
<td><span class="spanUser" userid="123"></span></td>
<td><span class="spanUser" userid="123"></span></td>
</tr>
where userid="123" is a custom attribute that you can populate dynamically when you build the table,
you can use something like
$(".spanUser").live("click", function () {
var span = $(this);
var userid = $(this).attr('userid');
var currentURL = window.location.protocol + '//' + window.location.host;
var url = currentURL + "/Account/DeleteUser/" + userid;
$.post(url, function (data) {
if (data) {
var tdTAG = span.parent(); // GET PARENT OF SPAN TAG
var trTAG = tdTAG.parent(); // GET PARENT OF TD TAG
trTAG.remove(); // DELETE TR TAG == DELETE AN ENTIRE TABLE ROW
} else {
alert('Sorry, there is some error.');
}
});
});
So in that case you don't know the class or id of the TR tag but anyway you are able to delete it.
A: Easy. Try this:
$("table td img.delete").click(function () {
$(this).parent().parent().parent().fadeTo(400, 0, function () {
$(this).remove();
});
return false;
});
A: $('#myTable tr').click(function(){
$(this).remove();
return false;
});
An even better one:
$("#MyTable").on("click", "#DeleteButton", function() {
$(this).closest("tr").remove();
});
A: $('tr').click(function()
{
$(this).remove();
});
I think you will try the above code, as it works, but I don't know why it works for some time and then the whole table is removed. I am also trying to remove the row by clicking the row, but could not find an exact answer.
A: I appreciate this is an old post, but I was looking to do the same, and found the accepted answer didn't work for me. Presumably jQuery has moved on since this was written.
Anyhow, I found the following worked for me:
$('#resultstbl tr[id=nameoftr]').remove();
Not sure if this helps anyone. My example above was part of a larger function so not wrapped it in an event listener.
A: id is not always a good selector. You can define custom attributes on the rows and use them as selectors.
<tr category="petshop" type="fish"><td>little fish</td></tr>
<tr category="petshop" type="dog"><td>little dog</td></tr>
<tr category="toys" type="lego"><td>lego starwars</td></tr>
and you can use a func to select the row like this (ES6):
const rowRemover = (category,type)=>{
$(`tr[category=${category}][type=${type}]`).remove();
}
rowRemover('petshop','fish');
A: If you are using Bootstrap Tables
add this code snippet to your bootstrap_table.js
BootstrapTable.prototype.removeRow = function (params) {
if (!params.hasOwnProperty('index')) {
return;
}
var len = this.options.data.length;
if ((params.index > len) || (params.index < 0)){
return;
}
this.options.data.splice(params.index, 1);
if (len === this.options.data.length) {
return;
}
this.initSearch();
this.initPagination();
this.initBody(true);
};
Then, in your allowedMethods array (var allowedMethods = [...]), add 'removeRow'.
Finally you can use $("#your-table").bootstrapTable('removeRow',{index:1});
Credits to this post
A: The easiest methods to remove rows from a table:
*
*Remove a row of the table using its unique ID.
*Remove based on the order/index of that row, e.g. remove the third or fifth row.
For example:
<table id='myTable' border='1'>
<tr id='tr1'><td>Row1</td></tr>
<tr id='tr2'><td>Row2</td></tr>
<tr id='tr3'><td>Row3</td></tr>
<tr id='tr4'><td>Row4</td></tr>
<tr id='tr5'><td>Row5</td></tr>
</table>
//======REMOVE TABLE ROW=========
//1. remove specific row using its ID
$('#tr1').remove();
//2. remove specific row using its order or index.
//row index starts from 0-n. Row1 index is 0, Row2 index is 1 and so on.
$('#myTable').find('tr:eq(2)').remove();//removing Row3
A: This is undoubtedly the easiest way:
$("#your_tbody_tag").empty();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/170997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "341"
} |
Q: How do I make an onclientclick post back using jQuery with ASP.NET? I want to recreate the update panel postback without using an update panel to do the postback. What is the generic method for doing this?
For example, on Stack Overflow, when you vote up or down on a question, it does a postback to update the database, and I would bet they didn't use an update panel.
What do I have?
I have a table with table data. When I click on the td item as a whole column, I want to do an update to the database and also update a gridview on the page itself. The gridview shows all the currently clicked items in the table because it was updated via "our method".
Looking for a good generic method I could use for a lot of async postbacks without update panel.
A: You can just use a standard AJAX call to accomplish this. Create a .aspx page which updates the database in its Page_Load method, and displays any desired information (like the current DB value after the update) as XML. Then make an AJAX call to that page using jQuery.
You can also return an HTML fragment (i.e. an updated GridView), and use jQuery to insert the updated HTML into the current page.
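For illustration, a minimal sketch of that pattern; the page name, query parameter, and target container are hypothetical:
$.get('UpdateVote.aspx', { questionId: 42 }, function (html) { // hypothetical page and parameter
    // Insert the returned HTML fragment (e.g. the re-rendered GridView) into the page
    $('#gridContainer').html(html);
});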
Edit:
Sample 2 on this page should be very close to what you want:
http://www.codeproject.com/KB/ajax/AjaxJQuerySample.aspx
A: The way that Stack Overflow works differs in two important ways from that CodeProject article.
*
*Stack Overflow is making its AJAX request against an ASP.NET MVC controller action, not a standalone ASPX page. You might consider this as the MVC analogue of an ASP.NET AJAX page method. In both cases, the ASPX method will lag behind in terms of performance.
*Stack Overflow's AJAX request returns a JSON serialized result, not arbitrary plaintext or HTML. This makes handling it on the client side more standardized and generally cleaner.
For example: when I voted this question up an XmlHttpRequest request was made to /questions/171000/vote, with a "voteTypeId" of 2 in the POST data.
The controller that handled the request added my vote to a table somewhere and then responded with this JSON:
{"Success":true,"NewScore":1,"Message":"","LastVoteTypeId":2}
Using that information, this JavaScript takes care of updating the client-side display:
var voteResult = function(jClicked, postId, data) {
if (data.Success) {
jClicked.parent().find("span.vote-count-post").text(data.NewScore);
if (data.Message)
showFadingNotification(jClicked, data.Message);
}
else {
showNotification(jClicked, data.Message);
reset(jClicked, jClicked);
if (data.LastVoteTypeId) {
selectPreviousVote(jClicked, data.LastVoteTypeId);
}
}
};
If you're using WebForms, the example of calling page methods that you found on my blog is definitely in the right ballpark.
However, I would suggest that you consider a web service for any centralized functionality (like this voting example), instead of page methods. Page methods seem to be slightly easier to write, but they also have some reuse drawbacks and tend to provide an illusion of added security that isn't really there.
This is an example of doing the same thing you found, but with web services (the comments on this post actually led to the post you found):
http://encosia.com/2008/03/27/using-jquery-to-consume-aspnet-json-web-services/
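As a rough sketch (not the post's exact code), calling an ASP.NET web service from jQuery generally looks like this; the service path, method name, and payload are hypothetical:
$.ajax({
    type: 'POST',
    url: 'VoteService.asmx/CastVote', // hypothetical service and method
    data: '{"voteTypeId": 2}',
    contentType: 'application/json; charset=utf-8',
    dataType: 'json',
    success: function (result) {
        // ASP.NET AJAX wraps the payload in a "d" property in later versions
        var data = result.d || result;
        if (data.Success) { /* update the vote count in the UI */ }
    }
});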
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: iphone <-> real world connection Any pointers on how to initiate serial communication with the iphone? Or any other idea to interact with external hardware?
A: The only supported way to connect external hardware to the iPhone is through the iPod accessory protocol, through the 30-pin connector. Details on that program are at http://developer.apple.com/ipod/accessories.html. It isn't a free program and the 30-pin connector only supports certain features, but it's the only option available today.
A: Apps compiled with the unofficial toolkit (and running on jailbroken iPhones) can supposedly access the serial port present in the dock connector.
See:
http://devdot.wikispaces.com/Iphone+Serial+Port+Tutorial
A: It depends on what you want to do. For an SSH terminal connection I recommend TouchTerm (search the App Store).
I have no experience with electrical connections, but you can find the pinout of the iPod/iPhone connector here:
http://pinouts.ru/PortableDevices/ipod_pinout.shtml
You can then download the iPhone developer kit here:
http://developer.apple.com/iphone/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171003",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Accepting client certificates from any CA I am setting up support for users to sign in with client certificates. Unfortunately IIS refuses to acknowledge any certificate not chained to an installed CA (see this article).
As the feature is implemented only for users' convenience, it would be great to allow any client certificate. Is there any way to accomplish this?
My server is running Windows Server 2003 and IIS 6, but the behaviour is no different on my IIS 7 running locally. If IIS 7 could be customized to support any client certificate, I would be able to switch, though (given that no solution for IIS 6 is available).
A: I think the normal way is for you to issue the certificates to them, and then for you to set up IIS to accept your cert as a root.
A: Implement this class:
public class TrustAllCertificatePolicy : System.Net.ICertificatePolicy
{
public TrustAllCertificatePolicy() {}
public bool CheckValidationResult(ServicePoint sp, X509Certificate cert,WebRequest req, int problem)
{
return true;
}
}
Set it using the following line of code. Afterward, any certificate (whether expired, name-mismatched, etc.) will be accepted.
System.Net.ServicePointManager.CertificatePolicy = new TrustAllCertificatePolicy();
A: I think you can add a new root CA cert via the certmgr command
certmgr --add -c -m Trust <CA_cert_DER_fmt>
Note: Unlike UNIXes, Windows manages certs for all applications simultaneously, which can have security implications, so beware of that
A: WCF allows you to write a custom X.509 certificate handler. In the code you can do some check like comparing the thumbprint against a known value in the database.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Close point of approach detection I have a large set of 3rd order polynomials in 3D.
in matrix form
Pn = [1, t, t², t³] * [An]
[Pn] and [An] are 1xN and 4xN matrices respectively.
Each function has a weight Wn. I want to, for some n, m, T and t0, find the first t where t > t0 such that
(Wn * Wm) * |Pn - Pm|⁻² > T
Aside from the O(n²) "try everything" approach, I'm not even sure where to start. For that matter, I'm not sure how to answer this even for known n & m.
Any ideas?
Edit:
*
*the set size is on the order of 10-1000
*the weights are distributed ~ logarithmically (very few large, many small)
*this test would be in an inner loop of an n-body simulator so it would get run a lot
*versions that do well (amortized) at finding a new answer after one path is altered are a good thing.
A: Not knowing if this is solvable through analytic means, there are many approaches to searching a space and trying to find any t that meets that criterion.
Genetic algorithms, simulated annealing and other algorithms for optimization spring to mind.
A: OK to seed the pot:
*
*Using some form of "close pair finder" algorithm seed a heap with those pairs at t0 and other times.
*Pull the closest pair found
*if close enough and sooner than the best so far, keep
*find if they are closer or further apart
*split the difference between the current pair and the next one in on the "closer" side and add that to the heap.
Thoughts?
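To make the "known n & m" sub-problem concrete, here is a minimal brute-force sketch that walks forward in time and reports the first t > t0 where the weighted condition holds; the coefficient layout and fixed step size are assumptions:
// Evaluate a cubic with coefficients [a0, a1, a2, a3] at time t
function evalCubic(a, t) {
    return a[0] + t * (a[1] + t * (a[2] + t * a[3]));
}
// First t in (t0, tMax], stepped by dt, where (Wn*Wm) * |Pn-Pm|^-2 > T.
// Pn and Pm are arrays of three cubics (x, y, z components).
function firstCrossing(Pn, Pm, Wn, Wm, T, t0, dt, tMax) {
    for (var t = t0 + dt; t <= tMax; t += dt) {
        var d2 = 0;
        for (var axis = 0; axis < 3; axis++) {
            var diff = evalCubic(Pn[axis], t) - evalCubic(Pm[axis], t);
            d2 += diff * diff;
        }
        if ((Wn * Wm) / d2 > T) return t;
    }
    return null; // no crossing found in the window
}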
A: How large is N? Is an exhaustive search even possible?
I'd ask the question on the numpy or scipy discussion boards and brush up on your python skills. My bet is you could probably turn this into a minimisation problem and use fmin or BFGS or some other bounded quasi-Newton algorithm to find a reasonable minimum. Perhaps minimise the difference between t and T. Unless there is something weird in your matrices it looks like your search space may at least be continuous.
Since you mention closest point of approach in your title check this post out on the numpy board.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Adding a table row in jQuery I'm using jQuery to add an additional row to a table as the last row.
I have done it this way:
$('#myTable').append('<tr><td>my data</td><td>more data</td></tr>');
Are there limitations to what you can add to a table like this (such as inputs, selects, number of rows)? Is there a different way to do it?
A: I found this AddRow plugin quite useful for managing table rows. Though, Luke's solution would be the best fit if you just need to add a new row.
A: Neil's answer is by far the best one. However things get messy really fast. My suggestion would be to use variables to store elements and append them to the DOM hierarchy.
HTML
<table id="tableID">
<tbody>
</tbody>
</table>
JAVASCRIPT
// Reference to the table body
var body = $("#tableID").find('tbody');
// Create a new row element
var row = $('<tr>');
// Create a new column element
var column = $('<td>');
// Create a new image element
var image = $('<img>');
image.attr('src', 'img.png');
image.text('Image cell');
// Append the image to the column element
column.append(image);
// Append the column to the row element
row.append(column);
// Append the row to the table body
body.append(row);
A: <table id="myTable">
<tbody>
<tr>...</tr>
<tr>...</tr>
</tbody>
<tr>...</tr>
</table>
Write it with a JavaScript function:
document.getElementById("myTable").insertRow(-1).innerHTML = '<tr>...</tr><tr>...</tr>';
A: jQuery has a built-in facility to manipulate DOM elements on the fly.
You can add anything to your table like this:
$("#tableID").find('tbody')
.append($('<tr>')
.append($('<td>')
.append($('<img>')
.attr('src', 'img.png')
.text('Image cell')
)
)
);
The $('<some-tag>') thing in jQuery is a tag object that can have several attr attributes that can be set and get, as well as text, which represents the text between the tag here: <tag>text</tag>.
This is some pretty weird indenting, but it's easier for you to see what's going on in this example.
A: I know you asked for a jQuery method. I looked a lot and found that we can do it in a better way by using JavaScript directly, with the following function.
tableObject.insertRow(index)
index is an integer that specifies the position of the row to insert (starts at 0). The value -1 can also be used, which results in the new row being inserted at the last position.
This parameter is required in Firefox and Opera, but it is optional in Internet Explorer, Chrome and Safari.
If this parameter is omitted, insertRow() inserts a new row at the last position in Internet Explorer and at the first position in Chrome and Safari.
It will work for every acceptable structure of HTML table.
The following example will insert a row in last (-1 is used as index):
<html>
<head>
<script type="text/javascript">
function displayResult()
{
document.getElementById("myTable").insertRow(-1).innerHTML = '<td>1</td><td>2</td>';
}
</script>
</head>
<body>
<table id="myTable" border="1">
<tr>
<td>cell 1</td>
<td>cell 2</td>
</tr>
<tr>
<td>cell 3</td>
<td>cell 4</td>
</tr>
</table>
<br />
<button type="button" onclick="displayResult()">Insert new row</button>
</body>
</html>
I hope it helps.
A: This is my solution
$('#myTable').append('<tr><td>' + data + '</td><td>' + otherData + '</td>...</tr>');
A: Here is some hacketi hack code. I wanted to maintain a row template in an HTML page. Table rows 0...n are rendered at request time, and this example has one hardcoded row and a simplified template row. The template table is hidden, and the row tag must be within a valid table or browsers may drop it from the DOM tree. Adding a row uses counter+1 identifier, and the current value is maintained in the data attribute. It guarantees each row gets unique URL parameters.
I have run tests on Internet Explorer 8, Internet Explorer 9, Firefox, Chrome, Opera, Nokia Lumia 800, Nokia C7 (with Symbian 3), Android stock and Firefox beta browsers.
<table id="properties">
<tbody>
<tr>
<th>Name</th>
<th>Value</th>
<th> </th>
</tr>
<tr>
<td nowrap>key1</td>
<td><input type="text" name="property_key1" value="value1" size="70"/></td>
<td class="data_item_options">
<a class="buttonicon" href="javascript:deleteRow()" title="Delete row" onClick="deleteRow(this); return false;"></a>
</td>
</tr>
</tbody>
</table>
<table id="properties_rowtemplate" style="display:none" data-counter="0">
<tr>
<td><input type="text" name="newproperty_name_\${counter}" value="" size="35"/></td>
<td><input type="text" name="newproperty_value_\${counter}" value="" size="70"/></td>
<td><a class="buttonicon" href="javascript:deleteRow()" title="Delete row" onClick="deleteRow(this); return false;"></a></td>
</tr>
</table>
<a class="action" href="javascript:addRow()" onclick="addRow('properties'); return false" title="Add new row">Add row</a><br/>
<br/>
- - - -
// add row to html table, read html from row template
function addRow(sTableId) {
// find destination and template tables, find first <tr>
// in template. Wrap inner html around <tr> tags.
// Keep track of counter to give unique field names.
var table = $("#"+sTableId);
var template = $("#"+sTableId+"_rowtemplate");
var htmlCode = "<tr>"+template.find("tr:first").html()+"</tr>";
var id = parseInt(template.data("counter"),10)+1;
template.data("counter", id);
htmlCode = htmlCode.replace(/\${counter}/g, id);
table.find("tbody:last").append(htmlCode);
}
// delete <TR> row, childElem is any element inside row
function deleteRow(childElem) {
var row = $(childElem).closest("tr"); // find <tr> parent
row.remove();
}
PS: I give all credits to the jQuery team; they deserve everything. JavaScript programming without jQuery - I don't even want to think about that nightmare.
A: <tr id="tablerow"></tr>
$('#tablerow').append('<tr>...</tr><tr>...</tr>');
A: I guess I have done this in my project; here it is:
html
<div class="container">
<div class = "row">
<div class = "span9">
<div class = "well">
<%= form_for (@replication) do |f| %>
<table>
<tr>
<td>
<%= f.label :SR_NO %>
</td>
<td>
<%= f.text_field :sr_no , :id => "txt_RegionName" %>
</td>
</tr>
<tr>
<td>
<%= f.label :Particular %>
</td>
<td>
<%= f.text_area :particular , :id => "txt_Region" %>
</td>
</tr>
<tr>
<td>
<%= f.label :Unit %>
</td>
<td>
<%= f.text_field :unit ,:id => "txt_Regio" %>
</td>
</tr>
<tr>
<td>
<%= f.label :Required_Quantity %>
</td>
<td>
<%= f.text_field :quantity ,:id => "txt_Regi" %>
</td>
</tr>
<tr>
<td></td>
<td>
<table>
<tr><td>
<input type="button" name="add" id="btn_AddToList" value="add" class="btn btn-primary" />
</td><td><input type="button" name="Done" id="btn_AddToList1" value="Done" class="btn btn-success" />
</td></tr>
</table>
</td>
</tr>
</table>
<% end %>
<table id="lst_Regions" style="width: 500px;" border= "2" class="table table-striped table-bordered table-condensed">
<tr>
<td>SR_NO</td>
<td>Item Name</td>
<td>Particular</td>
<td>Cost</td>
</tr>
</table>
<input type="button" id= "submit" value="Submit Repication" class="btn btn-success" />
</div>
</div>
</div>
</div>
js
$(document).ready(function() {
$('#submit').prop('disabled', true);
$('#btn_AddToList').click(function () {
$('#submit').prop('disabled', true);
var val = $('#txt_RegionName').val();
var val2 = $('#txt_Region').val();
var val3 = $('#txt_Regio').val();
var val4 = $('#txt_Regi').val();
$('#lst_Regions').append('<tr><td>' + val + '</td>' + '<td>' + val2 + '</td>' + '<td>' + val3 + '</td>' + '<td>' + val4 + '</td></tr>');
$('#txt_RegionName').val('').focus();
$('#txt_Region').val('');
$('#txt_Regio').val('');
$('#txt_Regi').val('');
$('#btn_AddToList1').click(function () {
$('#submit').prop('disabled', false).addClass('btn btn-warning');
});
});
});
A: If you have other variables that you need to insert into the <td> tags, try it like this.
I hope this is helpful:
var table = $('#yourTableId');
var text = 'My Data in td';
var image = 'your/image.jpg';
var tr = (
'<tr>' +
'<td>'+ text +'</td>'+
'<td>'+ text +'</td>'+
'<td>'+
'<img src="' + image + '" alt="yourImage">'+
'</td>'+
'</tr>'
);
$('#yourTableId').append(tr);
A: <table id=myTable>
<tr><td></td></tr>
<style="height=0px;" tfoot></tfoot>
</table>
You can cache the footer variable and reduce access to the DOM (note: maybe it is better to use a fake row instead of the footer).
var footer = $("#mytable tfoot")
footer.before("<tr><td></td></tr>")
A: As I also have a way to add a row at the end, or at any specific place, I think I should share it too:
First find out the number of rows:
var r = $("#content_table tr").length;
and then use below code to add your row:
$("#table_id").eq(r-1).after(row_html);
A: To add a good example on the topic, here is a working solution if you need to add a row at a specific position.
The extra row is added after the 5th row, or at the end of the table if there are less then 5 rows.
var ninja_row = $('#banner_holder').find('tr');
if( $('#my_table tbody tr').length > 5){
$('#my_table tbody tr').filter(':nth-child(5)').after(ninja_row);
}else{
$('#my_table tr:last').after(ninja_row);
}
I put the content in a ready (hidden) container below the table, so if you (or the designer) have to change it, editing the JS is not required.
<table id="banner_holder" style="display:none;">
<tr>
<td colspan="3">
<div class="wide-banner"></div>
</td>
</tr>
</table>
A: $('#myTable').append('<tr><td>my data</td><td>more data</td></tr>');
will add a new row to the first TBODY of the table, without depending on any THEAD or TFOOT present.
(I didn't find from which version of jQuery this .append() behavior is present.)
You may try it in these examples:
<table class="t"> <!-- table with THEAD, TBODY and TFOOT -->
<thead>
<tr><th>h1</th><th>h2</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>2</td></tr>
</tbody>
<tfoot>
<tr><th>f1</th><th>f2</th></tr>
</tfoot>
</table><br>
<table class="t"> <!-- table with two TBODYs -->
<thead>
<tr><th>h1</th><th>h2</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>2</td></tr>
</tbody>
<tbody>
<tr><td>3</td><td>4</td></tr>
</tbody>
<tfoot>
<tr><th>f1</th><th>f2</th></tr>
</tfoot>
</table><br>
<table class="t"> <!-- table without TBODY -->
<thead>
<tr><th>h1</th><th>h2</th></tr>
</thead>
</table><br>
<table class="t"> <!-- table with TR not in TBODY -->
<tr><td>1</td><td>2</td></tr>
</table>
<br>
<table class="t">
</table>
<script>
$('.t').append('<tr><td>a</td><td>a</td></tr>');
</script>
Note that in the second example the new row is inserted after 1 2, not after 3 4. If the table is empty, jQuery creates a TBODY for the new row.
A: If you are using the DataTables jQuery plugin, you can try:
oTable = $('#tblStateFeesSetup').dataTable({
"bScrollCollapse": true,
"bJQueryUI": true,
...
...
//Custom Initializations.
});
//Data Row Template of the table.
var dataRowTemplate = {};
dataRowTemplate.InvoiceID = '';
dataRowTemplate.InvoiceDate = '';
dataRowTemplate.IsOverRide = false;
dataRowTemplate.AmountOfInvoice = '';
dataRowTemplate.DateReceived = '';
dataRowTemplate.AmountReceived = '';
dataRowTemplate.CheckNumber = '';
//Add dataRow to the table.
oTable.fnAddData(dataRowTemplate);
Refer to the DataTables fnAddData API.
A: In a simple way:
$('#yourTableId').append('<tr><td>your data1</td><td>your data2</td><td>your data3</td></tr>');
A: Try this very simple way:
$('<tr><td>3</td></tr><tr><td>4</td></tr>').appendTo("#myTable tbody");
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.1.1/jquery.min.js"></script>
<table id="myTable">
<tbody>
<tr><td>1</td></tr>
<tr><td>2</td></tr>
</tbody>
</table>
A: The answers above are very helpful, but when students refer to this link to add data from a form, they often need a sample.
I want to contribute a sample that gets input from a form and uses .after() to insert a tr into the table using string interpolation.
function add(){
let studentname = $("input[name='studentname']").val();
let studentmark = $("input[name='studentmark']").val();
$('#student tr:last').after(`<tr><td>${studentname}</td><td>${studentmark}</td></tr>`);
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<!DOCTYPE html>
<html>
<head>
<style>
table {
font-family: arial, sans-serif;
border-collapse: collapse;
width: 100%;
}
td, th {
border: 1px solid #dddddd;
text-align: left;
padding: 8px;
}
tr:nth-child(even) {
background-color: #dddddd;
}
</style>
</head>
<body>
<form>
<input type='text' name='studentname' />
<input type='text' name='studentmark' />
<input type='button' onclick="add()" value="Add new" />
</form>
<table id='student'>
<thead>
<th>Name</th>
<th>Mark</th>
</thead>
</table>
</body>
</html>
A: If you want to add row before the <tr> first child.
$("#myTable > tbody").prepend("<tr><td>my data</td><td>more data</td></tr>");
If you want to add row after the <tr> last child.
$("#myTable > tbody").append("<tr><td>my data</td><td>more data</td></tr>");
A: I have tried the most upvoted one, but it did not work for me; the following works well.
$('#mytable tr').last().after('<tr><td></td></tr>');
This will work even when there is a tbody there.
A: @Daryl:
You can append it to the tbody using the appendTo method like this:
$(() => {
$("<tr><td>my data</td><td>more data</td></tr>").appendTo("tbody");
});
You'll probably want to use the latest JQuery and ECMAScript. Then you can use a back-end language to add your data to the table. You can also wrap it in a variable like so:
$(() => {
var t_data = $('<tr><td>my data</td><td>more data</td></tr>');
t_data.appendTo('tbody');
});
A: I recommend
$('#myTable > tbody:first').append('<tr>...</tr><tr>...</tr>');
as opposed to
$('#myTable > tbody:last').append('<tr>...</tr><tr>...</tr>');
The first and last keywords work on the first or last tag to be started, not closed. Therefore, this plays nicer with nested tables, if you don't want the nested table to be changed, but instead add to the overall table. At least, this is what I found.
<table id=myTable>
<tbody id=first>
<tr><td>
<table id=myNestedTable>
<tbody id=last>
</tbody>
</table>
</td></tr>
</tbody>
</table>
A: In my opinion, the fastest and clearest way is:
//Try to get tbody first with jquery children. works faster!
var tbody = $('#myTable').children('tbody');
//Then if no tbody just select your table
var table = tbody.length ? tbody : $('#myTable');
//Add row
table.append('<tr><td>hello</td></tr>');
here is demo Fiddle
Also I can recommend a small function to make more html changes
//Compose template string
String.prototype.compose = (function (){
var re = /\{{(.+?)\}}/g;
return function (o){
return this.replace(re, function (_, k){
return typeof o[k] != 'undefined' ? o[k] : '';
});
}
}());
If you use my string composer you can do this like
var tbody = $('#myTable').children('tbody');
var table = tbody.length ? tbody : $('#myTable');
var row = '<tr>'+
'<td>{{id}}</td>'+
'<td>{{name}}</td>'+
'<td>{{phone}}</td>'+
'</tr>';
//Add row
table.append(row.compose({
'id': 3,
'name': 'Lee',
'phone': '123 456 789'
}));
Here is demo
Fiddle
A: So things have changed ever since @Luke Bennett answered this question. Here is an update.
jQuery since version 1.4(?) automatically detects if the element you are trying to insert (using any of the append(), prepend(), before(), or after() methods) is a <tr> and inserts it into the first <tbody> in your table or wraps it into a new <tbody> if one doesn't exist.
So yes your example code is acceptable and will work fine with jQuery 1.4+. ;)
$('#myTable').append('<tr><td>my data</td><td>more data</td></tr>');
A: This can be done easily using the "last()" function of jQuery.
$("#tableId").last().append("<tr><td>New row</td></tr>");
A: The approach you suggest is not guaranteed to give you the result you're looking for - what if you had a tbody for example:
<table id="myTable">
<tbody>
<tr>...</tr>
<tr>...</tr>
</tbody>
</table>
You would end up with the following:
<table id="myTable">
<tbody>
<tr>...</tr>
<tr>...</tr>
</tbody>
<tr>...</tr>
</table>
I would therefore recommend this approach instead:
$('#myTable tr:last').after('<tr>...</tr><tr>...</tr>');
You can include anything within the after() method as long as it's valid HTML, including multiple rows as per the example above.
Update: Revisiting this answer following recent activity with this question. eyelidlessness makes a good comment that there will always be a tbody in the DOM; this is true, but only if there is at least one row. If you have no rows, there will be no tbody unless you have specified one yourself.
DaRKoN_ suggests appending to the tbody rather than adding content after the last tr. This gets around the issue of having no rows, but still isn't bulletproof as you could theoretically have multiple tbody elements and the row would get added to each of them.
Weighing everything up, I'm not sure there is a single one-line solution that accounts for every single possible scenario. You will need to make sure the jQuery code tallies with your markup.
I think the safest solution is probably to ensure your table always includes at least one tbody in your markup, even if it has no rows. On this basis, you can use the following which will work however many rows you have (and also account for multiple tbody elements):
$('#myTable > tbody:last-child').append('<tr>...</tr><tr>...</tr>');
A: I solved it this way.
Using jquery
$('#tab').append($('<tr>')
.append($('<td>').append("text1"))
.append($('<td>').append("text2"))
.append($('<td>').append("text3"))
.append($('<td>').append("text4"))
)
Snippet
$('#tab').append($('<tr>')
.append($('<td>').append("text1"))
.append($('<td>').append("text2"))
.append($('<td>').append("text3"))
.append($('<td>').append("text4"))
)
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<table id="tab">
<tr>
<th>Firstname</th>
<th>Lastname</th>
<th>Age</th>
<th>City</th>
</tr>
<tr>
<td>Jill</td>
<td>Smith</td>
<td>50</td>
<td>New York</td>
</tr>
</table>
A: Add a table row using jQuery:
If you want to add a row after the table's last row child, you can try this:
$('#myTable tr:last').after('<tr>...</tr><tr>...</tr>');
If you want to add a row as the first of the table's row children, you can try this:
$('#myTable tr').after('<tr>...</tr><tr>...</tr>');
A: Pure JS is quite short in your case
myTable.firstChild.innerHTML += '<tr><td>my data</td><td>more data</td></tr>'
function add() {
myTable.firstChild.innerHTML+=`<tr><td>date</td><td>${+new Date}</td></tr>`
}
td {border: 1px solid black;}
<button onclick="add()">Add</button><br>
<table id="myTable"><tbody></tbody> </table>
(If we remove <tbody> and firstChild it will also work, but every row will be wrapped with its own <tbody>.)
A: *
*Using jQuery .append()
*Using jQuery .appendTo()
*Using jQuery .after()
*Using Javascript .insertRow()
*Using jQuery - add html row
Try This:
// Using jQuery - append
$('#myTable > tbody').append('<tr><td>3</td><td>Smith Patel</td></tr>');
// Using jQuery - appendTo
$('<tr><td>4</td><td>J. Thomson</td></tr>').appendTo("#myTable > tbody");
// Using jQuery - add html row
let tBodyHtml = $('#myTable > tbody').html();
tBodyHtml += '<tr><td>5</td><td>Patel S.</td></tr>';
$('#myTable > tbody').html(tBodyHtml);
// Using jQuery - after
$('#myTable > tbody tr:last').after('<tr><td>6</td><td>Angel Bruice</td></tr>');
// Using JavaScript - insertRow
const tableBody = document.getElementById('myTable').getElementsByTagName('tbody')[0];
const newRow = tableBody.insertRow(tableBody.rows.length);
newRow.innerHTML = '<td>7</td><td>K. Ashwin</td>';
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<table id="myTable">
<thead>
<tr>
<th>Id</th>
<th>Name</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>John Smith</td>
</tr>
<tr>
<td>2</td>
<td>Tom Adam</td>
</tr>
</tbody>
</table>
A: What if you had a <tbody> and a <tfoot>?
Such as:
<table>
<tbody>
<tr><td>Foo</td></tr>
</tbody>
<tfoot>
<tr><td>footer information</td></tr>
</tfoot>
</table>
Then it would insert your new row in the footer - not to the body.
Hence the best solution is to include a <tbody> tag and use .append, rather than .after.
$("#myTable > tbody").append("<tr><td>row content</td></tr>");
A: I use this approach when there isn't any row in the table yet, and also when each row is quite complicated.
style.css:
...
#templateRow {
display:none;
}
...
xxx.html
...
<tr id="templateRow"> ... </tr>
...
$("#templateRow").clone().removeAttr("id").appendTo( $("#templateRow").parent() );
...
A: For the best solution posted here, if there's a nested table on the last row, the new row will be added to the nested table instead of the main table. A quick solution (considering tables with/without tbody and tables with nested tables):
function add_new_row(table, rowcontent) {
if ($(table).length > 0) {
if ($(table + ' > tbody').length == 0) $(table).append('<tbody />');
($(table + ' > tr').length > 0) ? $(table).children('tbody:last').children('tr:last').append(rowcontent): $(table).children('tbody:last').append(rowcontent);
}
}
Usage example:
add_new_row('#myTable','<tr><td>my new row</td></tr>');
A: I was having some related issues, trying to insert a table row after the clicked row. All is fine except the .after() call does not work for the last row.
$('#traffic tbody').find('tr.trafficBody').filter(':nth-child(' + (column + 1) + ')').after(insertedhtml);
I ended up with a very untidy solution:
create the table as follows (id for each row):
<tr id="row1"> ... </tr>
<tr id="row2"> ... </tr>
<tr id="row3"> ... </tr>
etc ...
and then:
$('#traffic tbody').find('tr.trafficBody' + idx).after(html);
A: You can use this great jQuery "add table row" function. It works great both with tables that have a <tbody> and with those that don't. It also takes into consideration the colspan of your last table row.
Here is an example usage:
// One table
addTableRow($('#myTable'));
// add table row to number of tables
addTableRow($('.myTables'));
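The linked function itself isn't reproduced in the answer; as a rough illustration only, a hypothetical helper with that behaviour might look like this (a sketch, not the actual plugin code):
// Hypothetical sketch: appends one empty row, creating a <tbody> if
// needed and matching the last row's column count (including colspan)
function addTableRow(jQtable) {
    jQtable.each(function () {
        var $table = $(this);
        if ($table.children('tbody').length === 0)
            $table.append('<tbody />');
        var cols = 0;
        $table.find('tr:last').children('td, th').each(function () {
            cols += parseInt($(this).attr('colspan'), 10) || 1;
        });
        if (cols === 0) cols = 1; // empty table: fall back to one cell
        var cells = new Array(cols + 1).join('<td></td>');
        $table.children('tbody:last').append('<tr>' + cells + '</tr>');
    });
}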
A: // Create a row and append to table
var row = $('<tr />', {})
.appendTo("#table_id");
// Add columns to the row. <td> properties can be given in the JSON
$('<td />', {
'text': 'column1'
}).appendTo(row);
$('<td />', {
'text': 'column2',
'style': 'min-width:100px;'
}).appendTo(row);
A: My solution:
//Adds a new table row
$.fn.addNewRow = function (rowId) {
$(this).find('tbody').append('<tr id="' + rowId + '"> </tr>');
};
usage:
$('#Table').addNewRow(id1);
A: This could also be done:
$("#myTable > tbody").html($("#myTable > tbody").html()+"<tr><td>my data</td><td>more data</td></tr>")
A: To add a new row after the current last row, you can use this:
$('#yourtableid tr:last').after('<tr>...</tr><tr>...</tr>');
You can append multiple rows as above. You can also add cell data, like this:
$('#yourtableid tr:last').after('<tr><td>your data</td></tr>');
Another way is to do it in plain JavaScript:
let table = document.getElementById("tableId");
let row = table.insertRow(1); // pass position where you want to add a new row
//then add cells as you want, by index
let cell0 = row.insertCell(0);
let cell1 = row.insertCell(1);
let cell2 = row.insertCell(2);
let cell3 = row.insertCell(3);
//add values to the added td cells
cell0.innerHTML = "your td content here";
cell1.innerHTML = "your td content here";
cell2.innerHTML = "your td content here";
cell3.innerHTML = "your td content here";
A: Here you can just click on the button to get the output. Each time you click the Add row button, one more row is added.
I hope it is helpful.
<html>
<head>
<script src=
"https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js">
</script>
<style>
table {
margin: 25px 0;
width: 200px;
}
table th, table td {
padding: 10px;
text-align: center;
}
table, th, td {
border: 1px solid;
}
</style>
</head>
<body>
<b>Add table row in jQuery</b>
<p>
Click on the button below to
add a row to the table
</p>
<button class="add-row">
Add row
</button>
<table>
<thead>
<tr>
<th>Rows</th>
</tr>
</thead>
<tbody>
<tr>
<td>This is row 0</td>
</tr>
</tbody>
</table>
<!-- Script to add table row -->
<script>
let rowno = 1;
$(document).ready(function () {
$(".add-row").click(function () {
rows = "<tr><td>This is row "
+ rowno + "</td></tr>";
tableBody = $("table tbody");
tableBody.append(rows);
rowno++;
});
});
</script>
</body>
</html>
A: var html = $('#myTableBody').html();
html += '<tr><td>my data</td><td>more data</td></tr>';
$('#myTableBody').html(html);
or
$('#myTableBody').html($('#myTableBody').html() + '<tr><td>my data</td><td>more data</td></tr>');
A: TIP: Inserting rows into an HTML table via innerHTML or .html() is not valid in some browsers (such as IE9), and using .append("<tr></tr>") is not a good approach in any browser. The best and fastest way is to use pure JavaScript DOM methods.
To combine this approach with jQuery, just add a new plugin like this to jQuery:
$.fn.addRow=function(index/*-1: add to end or any desired index*/, cellsCount/*optional*/){
if(this[0].tagName.toLowerCase()!="table") return null;
var i=0, c, r = this[0].insertRow((index<0||index>this[0].rows.length)?this[0].rows.length:index);
for(; i < (cellsCount || 0); i++) c = r.insertCell(); //you can use c to set the cell's content, etc.
return $(r);
};
Now use it throughout the project like this:
var addedRow = $("#myTable").addRow(-1/*add to end*/, 2);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2632"
} |
Q: Any Substitute API for Win32API? I need an API that can create menus like the Win32 API,
but something simpler and better than WinAPI.
It should be able to build a GUI with text zones, buttons, etc., like an average Windows program (Windows GUI style).
Can anyone recommend one? Thanks.
A: wxWidgets:
wxWidgets lets developers create applications for Win32, Mac OS X, GTK+, X11, Motif, WinCE, and more using one codebase. It can be used from languages such as C++, Python, Perl, and C#/.NET. Unlike other cross-platform toolkits, wxWidgets applications look and feel native. This is because wxWidgets uses the platform's own native controls rather than emulating them. It's also extensive, free, open-source, and mature.
A: Though I have never used it, Qt seems like a pretty decent framework. It is open source as well, multiplatform, has a mobile API, and can be coded with C++ or Java.
http://trolltech.com/downloads
A: Take a look at http://www.wxwidgets.org/ - check out some of their tutorials to get a feel for it.
See also
*
*GUI Programming APIs
*What is a good GUI / Widget toolkit for windows?
(EDIT: I wrote this answer in Oct 2008, since then I've become a convert to Qt - it makes high-quality cross platform development a breeze!)
A: GTK!
Though it will require Windows users to install a GTK library package in order to use your program (thumbs down on that), it's got a beautiful code structure, especially when paired with Python.
A: If you really are using Win32 directly, you might try MFC.
Or if you want something more modern, WTL
http://wtl.sourceforge.net/
Windows Template Library (WTL) is a
C++ library for developing Windows
applications and UI components. It
extends ATL (Active Template Library)
and provides a set of classes for
controls, dialogs, frame windows, GDI
objects, and more.
A: WinForms and WPF are alternatives for .NET programming. You won't be able to do everything the Win32 API can, but there is a substantial subset.
A: Delphi's VCL, if you are targeting the Win32 platform.
A: *
*SmartWin - Win32 Only
*Ultimate++ - Cross platform
*FLTK - Cross platform
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Creating a stage environment on network with port 80 blocked I currently use my local web server to allow customers to preview some applications and also to allow downloads of "nightly builds" of my open source library.
Problem is I changed my ISP and now my port 80 is blocked.
Although I know I could easily change the port on the Apache server, I'd like to avoid that unless there's no alternative.
Do you know any third party service (free or paid) that would do a port forward to my website, making it transparent to someone accessing it?
One other idea I heard about was using mod_rewrite from my current webhost to rewrite to my domain, but I'd also prefer not to go down this path. Besides that, do you know any .htaccess examples that actually work? I have tried this:
RewriteEngine on
RewriteRule ^/(.*) http://www.example.com:8080/$1
But it doesn't seem to be working.
A: "and back again to the costumer on a transparent way"....will be taken care of by NAT so that shouldn't be a problem.
To handle the request translation from one string to another, well that's an issue since you need to transform the request before it hits the server. Look into some kind of URL forwarding service
http://www.dnsexit.com/Direct.sv?cmd=webforward
Also you can setup a separate site on a provider server and have it forward requests to the specific address/link on your server.
Hope this helps!
A: With port 80 being blocked, routing through a dynamic DNS service won't help, unless the client specifies the new port in the URL.
Have your local router "port-forward" traffic from a new port (say 8080) to port 80. Leave everything the same on your end.
Create an account with DynDNS.org and set up your dynamic service. Then have your client do http://mydomain.com:8080
That should do the trick
Still, look into Rolf's suggestion, as they are not a real ISP... seriously.
Thanks
A: If you can't get your ISP to open up port 80 for you, and you can't switch ISPs, then use the .htaccess Redirect directive:
Redirect 301 / http://yourserver.com:8000/
The users may notice the redirect, but they probably won't care.
A:
What I'd like is for the customer to type
http://myaddress.com/hello/there?a=1&b=2
and have it translated to
http://mylocalserver.com:8080/hello/there?a=1&b=2
and back again to the customer in a transparent way.
I believe this is the Apache RewriteRule you're looking for to redirect any URL:
RewriteRule ^(.*)$ http://mylocalserver.com:8080$1 [R]
From then on the customer will be browsing mylocalserver.com:8080 and that's what they'll see in the address bar. If what you mean by "and back again" is that they still think they're browsing myaddress.com, then what you're talking about is a rewriting proxy server.
By this, I mean you would have to rewrite all URLs not only in HTTP headers but in your HTML content as well (i.e. do a regex search/replace on the HTML), and decode, rewrite and resend all GET, POST, PUT data, too. I once wrote such a proxy, and let me tell you it's not a trivial exercise, although the principle may seem simple.
I would say, just be happy if you can get the redirect to work and let them browse mylocalserver.com:8080 from that point on.
A: Well, even though I appreciated the answers, I wasn't quite satisfied with the final result. I wanted my ISP change to be transparent to my customers, and I think I managed to make it work.
Here's what I did:
I hired a cheap VPS server - VPSLink - and chose its cheapest plan: 64Mb RAM, 2Gb HD and 1Gb monthly traffic. After a lifetime 10% discount it was only US$ 7.16 per month, pretty affordable for the job, and you get a sandbox VPS server as a bonus. So far the hosting seems good - no problems. If you want to give it a shot you can either sign up from its site, indicated above, or through a referral code. There are a bunch available on the internet; you just need to search. Also, I can easily create one for you if you want, just leave a comment on this answer: you'll get 10% off and I get a month free. I won't post one directly here because it might seem that was the intention behind this post - which it was not.
This account is unmanaged, but it provides root access. I then configured Apache to act as a proxy for my port-80 requests, transparently forwarding them to my local website on port 8081.
Below are some snippets of my Apache's httpd.conf config files.
VPS Server configuration:
<VirtualHost *:80>
ServerName mydomain.com
ServerAlias www.mydomain.com *.mydomain.com
RewriteEngine On
RewriteCond %{HTTP_HOST} (.*)\.mydomain\.com [NC]
RewriteRule (.*) http://mylocalserverdns.mydomain.com:8081/%1$1 [P]
</VirtualHost>
This makes requests like http://subdomain1.mydomain.com/script?a=b be transparently forwarded on the server side to http://mylocalserverdns.mydomain.com:8081/subdomain1/script?a=b, so I can do whatever I want from there.
On my local server, I did the same thing to distribute requests to my subdomain handlers. I have, for instance, two Java server applications that run locally on ports 8088 and 8089. All I had to do was another proxy forward, this time internal.
Local Server configuration:
<VirtualHost *:8081>
ServerName mylocalserverdns.mydomain.com
ProxyPass /app1 http://127.0.0.1:8088
ProxyPassReverse /app1 http://127.0.0.1:8088
ProxyPassReverse /app1 http://mylocalserverdns.mydomain.com:8088/app1
ProxyPass /app2 http://127.0.0.1:8089
ProxyPassReverse /app2 http://127.0.0.1:8089
ProxyPassReverse /app2 http://mylocalserverdns.mydomain.com:8089/app2
</VirtualHost>
Hope this is worthwhile if someone else is looking for the same alternative.
A: I think most dynamic DNS services allow port forwarding.
A: Ask your ISP why this is, and if you don't get an answer, switch ISPs again.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Adobe Air and .NET Is Adobe AIR completely JavaScript? Can't I use a .NET language like VB.NET or C#?
Thanks
A: As the official answer states, Adobe AIR does not support .NET languages. If you are looking for something similar for Windows desktop programming that does support .NET, I would suggest WPF. The WPF Unleashed book is pretty good if you want to come up to speed quickly.
WPF is built into .NET 3.0 and 3.5, and runs on WinXP SP2, Vista, and Win2k3 and Win2k8 Servers.
Note: This was written when another answer was marked as the accepted "official" answer.
A: You can't use a .NET language directly in Adobe AIR apps (at least not yet). The best solution is to proxy calls between the AIR app and the .NET code:
http://www.mikechambers.com/blog/2008/01/17/commandproxy-net-air-integration-proof-of-concept/
mike chambers
mesh@adobe.com
A: No, you cannot use .NET languages for Adobe AIR.
A: Yes, you can use .NET with Adobe AIR;
visit: tutorial
Also stay tuned to Adobe's site, as they could announce a plug-in for Visual Studio 2008 (one that supports Flex) at any time;
meanwhile you can use this other supported plugin
A: No, you cannot use .NET to write Adobe AIR apps directly; you can only consume backend services (which provide data to AIR apps) written in .NET, PHP, Java, or ColdFusion. You can only create Adobe AIR apps with Flex/ActionScript in Flash/Flex Builder, or with JavaScript/HTML/CSS (Ajax) in Flash Pro.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: AJAX and Relative Path Scope I'm trying to lightbox a page containing a SWF via the nifty AJAX feature in Facebox (for jQuery). The trouble is that the paths now work relative to the main index page, not the directory that houses the flash page. Here's a directory breakdown:
./
- index.html (loads projects/projectName/index.html)
+ js/
+ jquery/
- facebox.js
- jquery.js
+ swfobject/
- swfobject.js
+ projects
+ projectName
- index.html (works when viewed by itself with relative paths to JS)
+ swf/
Could anyone tell me if there's some way of preserving the scope of relative paths via jQuery (or any JavaScript, really)?
Thanks!
A: You don't want to use relative paths the way that you are, which work their way back up the folder tree. Try using paths that begin at the Web root, and instead work their way down the folder tree. So instead of this:
../../images/image.gif
Or this:
./images/image.gif
You should try this:
/images/image.gif
A: I'm not sure I understand the question completely (do you have source code or a URL example?).
The base href tag can let you set the starting point for relative paths in your iframe, and you should be able to use relative paths from there.
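For example (a hypothetical snippet, using the project folder visible in the asker's follow-up), placing this in the <head> of the loaded page would anchor its relative paths:
<base href="http://www.kevinsweeneydesign.com/help/projects/propod/" />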
A: A bit more info might be helpful. Is the main index.html redirecting to the one under projectName, or opening it in a FRAME/IFRAME? Which one has the .facebox() call in it?
A: Sorry for not uploading! Had to clear out some projects that can't be shown yet.
http://www.kevinsweeneydesign.com/help/
You can see here that the paths are all correct when viewing the file by itself (it just gets screwy when I try to facebox the content):
http://www.kevinsweeneydesign.com/help/projects/propod/
The facebox call is from the ./help/index.html file. It tries to load ./help/projects/propod/index.html in the facebox call. However, the ./help/projects/propod/index.html file uses JS that's located in ./help/js/swfobject/
Update: I tried setting the base tag (set it to ./projects/propod)... that worked, but only for images. I'm still tinkering with the 'base' parameter to give the SWF, but to no avail... I don't see any "base_href" getting attached when I look at the object tag in Firebug like I did when linking to an image =/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: onmousemove in firefox How do you implement onmousemove in Firefox? I have it working in IE7, but no alert pops up in Firefox. Is it not supported, or is it done differently?
<esri:Map ID="Map1" runat="server" MapResourceManager="MapResourceManager1"
Height="100%" Width="100%" VirtualDirectory=""
PrimaryMapResource="ESRI_Imagery_World_2D" Extent="-130,37,-117,46"
onmousemove="alert()" >
</esri:Map>
A: I think the problem is that you need to put something inside your alert(), such as:
<esri:Map ID="Map1" runat="server" MapResourceManager="MapResourceManager1"
Height="100%" Width="100%" VirtualDirectory=""
PrimaryMapResource="ESRI_Imagery_World_2D" Extent="-130,37,-117,46"
onmousemove="alert('test');" >
</esri:Map>
I just tried this and it worked in FF:
<div onmousemove="alert('test');" style="height:100px; width:300px; background-color:#f00;"></div>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I configure Microsoft ADAM to be similar to Active Directory? I want to put users into an instance of ADAM so that ADAM looks similar to a typical, real, Active Directory server.
I'm developing an application that integrates with LDAP. I've tested with OpenLDAP and its core.schema. Now I'd like to test with with Active Directory, but the closest I can get to that using my equipment is by testing with Microsoft ADAM.
I don't know exactly how to begin with ADAM. Zero experience with it and Active Directory. I'm guessing I need to import the MS-AdamSchemaW2K3.LDF because I see "sAMAccountName" in there, and I think I want that to be like Active Directory?
Added after reading a couple answers...
The answers so far aren't specific enough for what I'm looking for. I did get ADAM to work and my app can talk to it, but what I want to do is to have ADAM working the way a typical (if there is such a thing) Active Directory installation would work, same schema, authentication, even though I'm just using ADAM in a workgroup network, on Windows XP.
A: ADAM isn't really a complete replacement for Active Directory. For example, ADAM doesn't understand different group types, and doesn't include a RootDSE by default. You could test against ADAM but you may run into slight differences in your query structures.
If you are developing an application that will depend on Active Directory then you really should be building your application against an Active Directory. I have been able to get several Domain Controllers running just fine in Virtual PC (free) using only 300mb of memory and a free evaluation version of Windows Server.
If, however, you are building an application that simply needs an LDAP directory and isn't going to be using Active Directory, then ADAM may work out just fine. The schema extension file you mentioned (MS-AdamSchemaW2K3.LDF) would work just fine, but you would want to set up the RootDSE for easier binds.
Lastly, Microsoft AD/AM isn't really admin-friendly, especially in terms of troubleshooting. I ended up writing an application to help troubleshoot AD/AM issues that you may find useful.
A: I am only aware of importing the MS-Users file. I see there is a step-by-step guide:
http://www.microsoft.com/downloads/details.aspx?FamilyID=5163b97a-7df3-4b41-954e-0f7c04893e83&DisplayLang=en
A: I'm not sure what you mean by not being able to use your equipment to run an Active Directory instance instead of mucking around with ADAM. I've run test AD servers in virtual machines with as little as 256 MB of RAM. Seems to me that ADAM is never going to be an adequate test, depending on what you're doing.
I'd spend time trying to get an proper AD up and running instead.
A: The best approach would be to install Windows 2003 server with Active Directory loaded as a domain controller. You cannot 100% duplicate AD characteristics using ADAM alone.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How do I change the "level" of validation that Ant's XMLValidate task provides? I am attempting to use Ant's XMLValidate task to validate an XML document against a DTD. The problem is not that it doesn't work, but that it works too well. My DTD contains an xref element with an "@linkend" attribute of type IDREF. Most of these reference IDs outside of the current document. Because of this, my build fails, since the parser complains that the ID that the IDREF is referencing doesn't exist. So, is there any way that I can validate my XML document against the DTD, but ignore errors of this type?
A few things I've tried: setting the "lenient" option on XMLValidate makes the task only check the document's well-formedness, not its validity against a DTD. The XMLValidate task in the Ant manual lists some JAXP and SAX options you can set, but none seem applicable.
Here's my code:
<target name="validate">
<echo message="Validating ${input}"/>
<xmlvalidate file="${input}" failonerror="yes"
classname="org.apache.xml.resolver.tools.ResolvingXMLReader">
<classpath refid="xslt.processor.classpath"/>
</xmlvalidate>
</target>
As you can see, I'm using ResolvingXMLReader to resolve the DTD against a catalog of public identifiers. However, I get the same behavior if I specify the DTD directly using a nested xmlcatalog element.
A: Your problem derives from the difference between two interpretations of the DTD: yours, and the spec's :-). IDREFs must refer to ids in the same document, whereas yours refer to elements across documents.
My suggestion is to create your own version of the DTD that specifies NMTOKEN instead of IDREF for that attribute, and use it to perform your validation. This will ensure that the contents are valid XML id values.
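For example, assuming a hypothetical declaration of the xref attribute (your actual DTD will differ), the change would look like this:
<!-- Before: IDREF forces the parser to resolve linkend within this document -->
<!ATTLIST xref linkend IDREF #REQUIRED>
<!-- After: NMTOKEN only checks that the value is a valid name token -->
<!ATTLIST xref linkend NMTOKEN #REQUIRED>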
A: Not sure if this helps, but could you try this workaround?
Create a temporary file, merge all your XMLs, and do the validation.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Sensible HTTP POST timeout values to use when programmatically issuing requests? When programmatically issuing HTTP POST requests, what timeout values would be sensible?
In my case, I'm looking to set 'sensible' timeout values when making POST requests in PHP, however this applies to any language.
I need to be able to issue a set of requests, each to a user-specified URL. If I do need to process requests consecutively instead of concurrently, I'd like to specify a sensible time beyond which a request is deemed to have timed out.
PHP's default socket timeout is 60 seconds. This seems an unnecessarily long time to wait before deciding a request is not going to be completed.
As these are POST requests they should complete quickly - there's no data to be retrieved and returned as with a GET request.
We should be able to assume, most of the time, that the failure to issue a response to a request within X seconds means the host is unlikely to issue a response within a reasonable time for values of X significantly less than 60.
Surely hosts rarely take more than 60 seconds to respond to a simple POST request. Do they even take more than 10 seconds? 5 seconds?
What might be sensible values for X in practice? Justifications accompanying suggestions would be extremely beneficial.
A: I would recommend setting up a test, as there are too many factors involved to give a value that will always be sensible.
A POST request sends data to be processed. How long will the processing take? This will be application/data specific.
Where is the host? The user is supplying the URL, so that will be unknown. We can't know what traffic is like between your application and the host. We can't know the server load of the host.
Essentially, there is no universal sensible timeout. You have to use your own best judgment based on your specific needs. Set up a test and use that to determine your limits.
A: Most libraries have a connect timeout and a read timeout. That is, a timeout for establishing the connection to the remote server, and a timeout for how long to wait for a response after sending the request.
If this is a local web service, I would set the connect timeout low - 1 second, or less if your library supports it. If the remote service you are connecting to is unavailable, IMHO it's better to return a response to the user immediately than to allow all your worker threads to block on that remote service, causing other upstream errors.
As for the read timeout, that is trickier: you need it to be low so you don't exhaust your pool of workers waiting for the remote service to return, but you also don't want it so low that it closes the connection before reading a response. That is something you'll have to test, then track as a metric when your system is in production.
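To make the two timeouts concrete, here is a minimal PHP cURL sketch; the URL and the 2-second/10-second values are purely illustrative, not recommendations:
<?php
$ch = curl_init('http://example.com/endpoint'); // placeholder URL
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, array('key' => 'value'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2); // seconds to establish the connection
curl_setopt($ch, CURLOPT_TIMEOUT, 10); // seconds for the entire request
$response = curl_exec($ch);
if ($response === false) {
    // Connect or read timed out; treat the host as unresponsive
    error_log('POST failed: ' . curl_error($ch));
}
curl_close($ch);
?>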
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Allowing users to post pictures to web application via mobile phone? There are several web applications which allow you to send photos from your mobile phone and post them to a web site/application. How do these work, and what sort of technologies would one use to implement such a feature? Is it an MMS server, or does one need to get into socket programming? For example, some applications provide you with an email address to email your photos to via mobile phone. I'm curious to know how these things work. I can somewhat figure it out on my own, but I would like to know how this feature is normally implemented.
Thanks
A: I think you have three options to support this:
*
*MMS - you would need an MMS gateway to receive MMS messages. This can be software/hardware that you run yourself, or there are services where you pay per message. You would likely need to write your own handling of the message at the application end.
*Email - you just provide users with an email address to send picture to and you either interface with the email server via POP/IMAP or with the message store directly.
*Web form - you implement a web form specifically designed for mobile devices that lets them upload pictures (see the sketch after this list). In truth I have no idea if and how many devices support <input type="file" />, so this may not actually be an option at all.
You can use whatever server-side technology you prefer for processing each of these.
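For the web form option, a minimal sketch would look like this (upload.php is a placeholder for whatever server-side handler you write):
<form action="upload.php" method="post" enctype="multipart/form-data">
  <input type="file" name="photo" accept="image/*" />
  <input type="submit" value="Upload photo" />
</form>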
A: many popular phones including the iPhone unfortunately do not allow from the browser due to sandboxing, so you would either have to use a native application or one of the other methods
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Standard connection libraries for MySQL, MSSQL, and Oracle in PHP I'm looking for a standard way to connect to databases in PHP. We've all been there - first start with some rudimentary code to connect/query/iterate/insert/disconnect, then the code grew as the program grew, and it ended up with a mess that's hardly reusable.
I know there are many PEAR, PECL, and other PHP libraries/classes out there that can fit my description - but which ones are maintained, used, and have proven to be bug-free and efficient?
A: If you're using PHP 5, try out PDO.
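A minimal PDO sketch (the DSN and credentials are placeholders; PDO drivers exist for MySQL, MSSQL, and Oracle):
<?php
// Placeholder connection details
$db = new PDO('mysql:host=localhost;dbname=test', 'user', 'password');
// Prepared statements work the same way across drivers
$stmt = $db->prepare('SELECT name FROM users WHERE id = ?');
$stmt->execute(array(42));
foreach ($stmt as $row) {
    echo $row['name'];
}
?>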
A: Try object-relational mapping libraries such as Propel and Doctrine; both use PDO as the database abstraction layer, so they work on pretty much every engine.
A: I'm surprised Zend_Db hasn't been mentioned yet...
*
*PEAR's MDB2: very stable, also provides a layer that implements all MDB2-supported features in all databases where it can at least be simulated. I've used this one for years with much success.
*Zend Framework's Zend_Db: I've just started using the higher levels of Zend's entire DB infrastructure, but it seems to be quite stable and extremely well thought out.
*PHP5's native PDO: I've not used it at all, but I believe it is the simplest of all of these. In fact, both MDB2 and Zend_Db can use PDO as an underlying layer.
All of the above implement prepare and execute. Of the above, MDB2 is the most mature, as it's been around for a long time and is based on DB and MDB. Zend_Db appears to be the most well thought out. I know there are others, but I don't have experience or any knowledge about any of them.
A: These two are the best in my opinion. I can't claim to have tried them all though :)
*
*MDB2 (PEAR DB's successor)
*AdoDB
If on Linux, you'll need FreeTDS to connect to MSSQL, regardless of the library you end up choosing.
A: When looking at new DB connection/query objects and trying to decide whether to write or use new libraries, we decided it was best to write our own. In the end it probably doesn't have the flexibility that many other libraries have, but we have added features such as GetAll(), which retrieves all of the rows in a keyed array, or GetAllKeyed(), which returns a keyed array with the ID as the key. Another great one is GetOne(), for use when your select only has one column. These have all reduced the amount of code greatly.
One other feature is that when you do a query, it determines what type of query it is (INSERT, UPDATE, SELECT, DELETE, etc.) and then returns appropriate information (such as the last insert id for an INSERT, or the number of rows deleted for a DELETE).
But we have also copied features such as Prepare & Execute from PEAR:DB.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Integrating AJAX and PHP I've been writing PHP web applications for some time, and have come across very nice Javascript frameworks, such as JQuery, ExtJS, Scriptaculous, etc. I can't say the same about the PHP side - I always coded that part of the client-server dialog from scratch.
I've used CodeIgniter (http://codeigniter.com/) and it is nice, but doesn't deal with AJAX as a whole - rather providing input checking, image manipulation, and some output helpers.
Is there a standard PHP library/class/framework out there that deals/integrates with Javascript frameworks? Something that can catch users' responses/requests, validate identity and input, provide progress status, keep track of sessions, be aware of asynchronous events, etc.
A: The Zend Framework is integrated with Dojo Toolkit. I haven't used the latest Zend Framework yet, but I do know that in the past, it has proven to be reliable.
A: There might be one but I can't imagine why. An AJAX request looks and acts just like an HTTP request from the perspective of the server. You can get and set cookies. All the environment variables that you would expect from an HTTP request are there. All of the HTTP verbs work as do any of the header fields.
A: In the next major release, 1.5, CakePHP will come with jQuery.
A: Sajax is one of a number of libraries that provide an easy way to link callbacks from client-side (JS) to server-side (PHP). Another library which does something similar is JPSpan however I am not sure if it is still actively supported. I have only done minor experiments with these two libraries so your mileage may vary.
A: Using a library is fine as a convenience once you understand the concept, and you probably do, but for others reading this I suggest doing it by hand a few times first and really understanding it. I also recommend the book Bulletproof AJAX. It's fairly short, well written, and describes not only how to use AJAX, using PHP as the programming language, but also how to create pages which take advantage of AJAX but still work OK if the user has JavaScript turned off.
A: The only difference in what I do when I'm returning JavaScript or HTML to a browser for AJAX is to not output the headers or any extra data. (The error handling I use outputs errors when in debug, so I have to disable this as well.)
A: Yes, PHP can output XML and JSON for Ajax but not all PHP frameworks support JSON/XML equally well.
For example: I ran into a problem in Drupal (4.7) where the PHP sessions would be deleted after outputting a JSON response. (The HTML output code was explicitly closing the session, which was required or the session would be erased.)
I would also love to know about PHP frameworks that make it easier to manage JavaScript code. Even something basic such as including jQuery only on the pages that require it. Or helping to manage minifying/packing JavaScript code.
A: Pardon for posting on the old question, but the relatively new framework Agile Toolkit is the perfect answer to the OP.
It allows you to create a fully object-oriented web GUI without going into HTML/JavaScript.
A: I would highly recommend the Cjax framework, a 100% PHP-side Ajax framework.
You will never see a line of JavaScript.
Cjax lets you do ajaxy stuff, most of the time with one single line of code.
Also Cjax integrates into CodeIgniter, like your finger integrates into a ring!
This is a thread in the CodeIgniter forums: http://forum.codeigniter.com/thread-65967.html
Cjax is not exclusive to CodeIgniter - any website or application can use it - but it has built-in support for CodeIgniter.
There is also a great deal of documentation (from the CodeIgniter wiki): https://github.com/bcit-ci/CodeIgniter/wiki/ajax-framework-for-codeigniter
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Autostart spring app So is there a way to initialize and start a command line Spring app without writing a main method? It seems like all such main methods have the same form:
public static void main(final String[] args) throws Exception {
ApplicationContext ctx = new ClassPathXmlApplicationContext("context.xml", Boot.class);
FooService fooService = (FooService) ctx.getBean("fooService");
fooService.bar();
}
I suppose that's not complicated, but has someone found a way to just specify the context.xml at the command line or, better yet, in a manifest file?
The goal here is to simplify the creation of Spring applications as executable jars. I hope that I can specify some utility class as the Main-Class in the manifest. I suppose I would also need to specify the starting point for the app: a bean, and a method on it where the process begins.
A: I'll try to answer the question as I understand it:
How do you package a jar containing a Spring configuration such that you just need to run java -jar myjar.jar?
The code snippet you have in your question simply works. You don't have to parameterise the context.xml. You just need to bundle your code and its dependencies (Spring, etc.) in a single jar with a proper manifest entry for the main class.
I personaly use maven 2 and here is a pom.xml I would use that do just that:
<project>
<modelVersion>4.0.0</modelVersion>
<groupId>com.stackoverflow</groupId>
<artifactId>stackoverflow-autostart-spring-app</artifactId>
<version>0.1</version>
<dependencies>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring</artifactId>
<version>2.5.2</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<artifactId>maven-jar-plugin</artifactId>
<configuration>
<archive>
<manifest>
<mainClass>com.stackoverflow.spring.autostart.Autostart</mainClass>
</manifest>
</archive>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
This is assuming some package name for the java code, the source code being in the src/main/java directory and the file context.xml in the src/main/resources directory.
So in this pom.xml there are several important points:
*
*the spring dependency (speaks for itself I believe)
*the configuration of the maven jar plugin, that adds the main class as a manifest entry
*the maven shade plugin, which is the plugin responsible for gathering all the dependencies/classes and packaging them into one single jar.
The executable jar will be available at target\stackoverflow-autostart-spring-app-0.1.jar when running mvn package.
I have this code all working on my box, but I just realised that I can't attach a zip file here. Anyone know of a place I could do so and link to here?
I created a git repository at github with the code related to this question if you want to check it out.
Hope this helps.
A: Yes. Write a simple SpringMain which takes an arbitrary number of xml and properties files as the arguments. You can then (in the main method) initialize an application from these files. Starting your program is then simply a matter of:
java -cp myapp.jar util.SpringMain context.xml
You then use the lifecycle attributes (init-method) on your relevant beans to kick-start the application.
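A minimal sketch of such a launcher (hypothetical and untested; it assumes the context files are on the classpath):
package util;

import org.springframework.context.support.ClassPathXmlApplicationContext;

public class SpringMain {
    public static void main(String[] args) {
        // Every command line argument is treated as a context file;
        // instantiating the context creates the beans and runs their init-methods.
        new ClassPathXmlApplicationContext(args);
    }
}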
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I flip a bit in SQL Server? I'm trying to perform a bitwise NOT in SQL Server. I'd like to do something like this:
update foo
set Sync = NOT @IsNew
Note: I started writing this and found out the answer to my own question before I finished. I still wanted to share with the community, since this piece of documentation was lacking on MSDN (until I added it to the Community Content there, too).
A: Bitwise NOT: ~
Bitwise AND: &
Bitwise OR: |
Bitwise XOR: ^
A: For the sake of completeness:
SELECT b, 1 - b
FROM
(SELECT cast(1 AS BIT) AS b
UNION ALL
SELECT cast(0 AS BIT) AS b) sampletable
A: ~ operator will work only with BIT,
try:
~ CAST(@IsNew AS BIT)
A: Yes, the ~ operator will work.
update foo
set Sync = ~@IsNew
A: Lacking on MSDN?
http://msdn.microsoft.com/en-us/library/ms173468(SQL.90).aspx
~: Performs a bitwise logical NOT operation on an integer value.
The ~ bitwise operator performs a bitwise logical NOT for the expression, taking each bit in turn. If expression has a value of 0, the bits in the result set are set to 1; otherwise, the bit in the result is cleared to a value of 0. In other words, ones are changed to zeros and zeros are changed to ones.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "87"
} |
Q: Has anyone here worked on KOFAX-SharePoint 2007 integration? We want to be able to advance our business by utilizing the power of both Kofax and SharePoint 2007. Any pointers, development-wise?
A: We looked into doing something like this a couple of years ago. We were thinking of having printed documents scanned in, converted with OCR, and organized digitally in a type of repository.
We actually did an analysis of OCR tools and repository / collaboration tools and which would be the easiest to integrate. We checked out Kofax, OCR for AnyDoc, and a couple of others on the OCR side, and SharePoint WSS, SharePoint MOSS, Hyland OnBase, SAP Collaboration Manager, and Documentum on the repository / collaboration side.
Your idea is good, and there are variants of it in use in banking and other industries. As for the integration, in my experience, if you're using SharePoint it should be pretty easy with Kofax if you leverage the APIs on both sides and get creative with simple web parts and iframes.
KA
A: We have done it by building a custom release script from Kofax.
Key question is how much flexibility you want in the integration for expanding in the future. It is easy to build a one-off release script that handles a single document library or list. It is a lot more work to build something that is configurable.
OnBase is a nice option if you are looking for workflow and other ECM benefits. You can scan to OnBase and then OnBase will push links directly into a library or list in SP automatically.
You also need to understand that searching and viewing a high volume of document images in SP can be cumbersome without some other tools.
E-mail me at sales@imagesoftinc.com if you'd like to discuss.
Scott
A: PSIGEN makes a great onramp for SharePoint
Scanning and Capture for SharePoint
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Looking for the suffix tree implementation in C#? I've implemented a basic search for a research project. I'm trying to make the search more efficient by building a suffix tree. I'm interested in a C# implementation of the Ukkonen algorithm. I don't want to waste time rolling my own if such an implementation exists.
A: Hi, I just finished implementing a .NET (C#) library containing different trie implementations. Among them:
*
*Classical trie
*Patricia trie
*Suffix trie
*A trie using Ukkonen's algorithm
I tried to make the source code easily readable. Usage is also very straightforward:
using Gma.DataStructures.StringSearch;
...
var trie = new UkkonenTrie<int>(3);
//var trie = new SuffixTrie<int>(3);
trie.Add("hello", 1);
trie.Add("world", 2);
trie.Add("hell", 3);
var result = trie.Retrieve("hel");
The library is well tested and also published as TrieNet NuGet package.
See github.com/gmamaladze/trienet
A: Here is an implementation of a suffix tree that is reasonably efficient. I haven't studied Ukkonen's implementation, but I believe the running time of this algorithm is quite reasonable, approximately O(N log N). Note that the number of internal nodes in the tree created is equal to the number of letters in the parent string.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using NUnit.Framework;
namespace FunStuff
{
public class SuffixTree
{
public class Node
{
public int Index = -1;
public Dictionary<char, Node> Children = new Dictionary<char, Node>();
}
public Node Root = new Node();
public String Text;
public void InsertSuffix(string s, int from)
{
var cur = Root;
for (int i = from; i < s.Length; ++i)
{
var c = s[i];
if (!cur.Children.ContainsKey(c))
{
var n = new Node() {Index = from};
cur.Children.Add(c, n);
// Very slow assertion.
Debug.Assert(Find(s.Substring(from)).Any());
return;
}
cur = cur.Children[c];
}
Debug.Assert(false, "It should never be possible to arrive at this case");
throw new Exception("Suffix tree corruption");
}
private static IEnumerable<Node> VisitTree(Node n)
{
foreach (var n1 in n.Children.Values)
foreach (var n2 in VisitTree(n1))
yield return n2;
yield return n;
}
public IEnumerable<int> Find(string s)
{
var n = FindNode(s);
if (n == null) yield break;
foreach (var n2 in VisitTree(n))
yield return n2.Index;
}
private Node FindNode(string s)
{
var cur = Root;
for (int i = 0; i < s.Length; ++i)
{
var c = s[i];
if (!cur.Children.ContainsKey(c))
{
// We are at a leaf-node.
// What we do here is check to see if the rest of the string is at this location.
for (var j=i; j < s.Length; ++j)
if (cur.Index + j >= Text.Length || Text[cur.Index + j] != s[j])
return null;
return cur;
}
cur = cur.Children[c];
}
return cur;
}
public SuffixTree(string s)
{
Text = s;
for (var i = s.Length - 1; i >= 0; --i)
InsertSuffix(s, i);
Debug.Assert(VisitTree(Root).Count() - 1 == s.Length);
}
}
[TestFixture]
public class TestSuffixTree
{
[Test]
public void TestBasics()
{
var s = "banana";
var t = new SuffixTree(s);
var results = t.Find("an").ToArray();
Assert.AreEqual(2, results.Length);
Assert.AreEqual(1, results[0]);
Assert.AreEqual(3, results[1]);
}
}
}
A: Hard question. Here's the closest match I could find: http://www.codeproject.com/KB/recipes/ahocorasick.aspx, which is an implementation of the Aho-Corasick string matching algorithm. Now, the algorithm uses a suffix-tree-like structure per: http://en.wikipedia.org/wiki/Aho-Corasick_algorithm
Now, if you want a prefix tree, this article claims to have an implementation for you: http://www.codeproject.com/KB/recipes/prefixtree.aspx
<HUMOR> Now that I did your homework, how about you mow my lawn. (Reference: http://flyingmoose.org/tolksarc/homework.htm) </HUMOR>
Edit: I found a C# suffix tree implementation that was a port of a C++ one posted on a blog: http://code.google.com/p/csharsuffixtree/source/browse/#svn/trunk/suffixtree
Edit: There is a new project at Codeplex that is focused on suffix trees: http://suffixtree.codeplex.com/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Java maximum memory on Windows XP I've always been able to allocate 1400 megabytes for Java SE running on 32-bit Windows XP (Java 1.4, 1.5 and 1.6).
java -Xmx1400m ...
Today I tried the same option on a new Windows XP machine using Java 1.5_16 and 1.6.0_07 and got the error:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
Through trial and error it seems 1200 megabytes is the most I can allocate on this machine.
Any ideas why one machine would allow 1400 and another only 1200?
Edit: The machine has 4GB of RAM with about 3.5GB that Windows can recognize.
A: The JVM needs contiguous memory, and depending on what else is running, what was running before, and how Windows has managed memory, you may be able to get up to 1.4GB of contiguous memory. I think 64-bit Windows will allow larger heaps.
A: Sun's JVM needs contiguous memory. So the maximal amount of available memory is dictated by memory fragmentation. Driver DLLs in particular tend to fragment the memory when loading at some predefined base address. So your hardware and its drivers determine how much memory you can get.
Two sources for this with statements from Sun engineers: forum blog
Maybe another JVM? Have you tried Harmony? I think they planned to allow non-contiguous memory.
A: This has to do with contiguous memory.
Here's some info I found online for somebody asking that before, supposedly from a "VM god":
The reason we need a contiguous memory
region for the heap is that we have a
bunch of side data structures that are
indexed by (scaled) offsets from the
start of the heap. For example, we
track object reference updates with a
"card mark array" that has one byte
for each 512 bytes of heap. When we
store a reference in the heap we have
to mark the corresponding byte in the
card mark array. We right shift the
destination address of the store and
use that to index the card mark array.
Fun addressing arithmetic games you
can't do in Java that you get to (have
to :-) play in C++.
Usually we don't have trouble getting
modest contiguous regions (up to about
1.5GB on Windohs, up to about 3.8GB on Solaris. YMMV.). On Windohs, the
problem is mostly that there are some
libraries that get loaded before the
JVM starts up that break up the
address space. Using the /3GB switch
won't rebase those libraries, so they
are still a problem for us.
We know how to make chunked heaps, but
there would be some overhead to using
them. We have more requests for faster
storage management than we do for
larger heaps in the 32-bit JVM. If you
really want large heaps, switch to the
64-bit JVM. We still need contiguous
memory, but it's much easier to get in
a 64-bit address space.
A: I think it has more to do with how Windows is configured as hinted by this response:
Java -Xmx Option
Some more testing: I was able to allocate 1300MB on an old Windows XP machine with only 768MB physical RAM (plus virtual memory). On my 2GB RAM machine I can only get 1220MB. On various other corporate machines (with older Windows XP) I was able to get 1400MB. The machine with a 1220MB limit is pretty new (just purchased from Dell), so maybe it has newer (and more bloated) Windows and DLLs (it's running Windows XP Pro Version 2002 SP2).
A: The Java heap size limits for Windows are:
*
*maximum possible heap size on 32-bit Java: 1.8 GB
*recommended heap size limit on 32-bit Java: 1.5 GB (or 1.8 GB with /3GB option)
This doesn't help you getting a bigger Java heap, but now you know you can't go beyond these values.
A: I got this error message when running a Java program from a (limited-memory) Virtuozzo VPS. I had not specified any memory arguments, and found I had to explicitly set a small amount, as the default must have been too high. E.g. -Xmx32m (obviously this needs to be tuned depending on the program you run).
Just putting this here in case anyone else gets the above error message without specifying a large amount of memory like the questioner did.
A: Keep in mind that Windows has virtual memory management and the JVM only needs memory that is contiguous in its address space. So, other programs running on the system shouldn't necessarily impact your heap size. What will get in your way are DLLs that get loaded into your address space. Unfortunately, optimizations in Windows that minimize the relocation of DLLs during linking make it more likely you'll have a fragmented address space. Things that are likely to cut into your address space aside from the usual stuff include security software, CBT software, spyware and other forms of malware. Likely causes of the variances are different security patches, C runtime versions, etc. Device drivers and other kernel bits have their own address space (the other 2GB of the 4GB 32-bit space).
You could try going through the DLL bindings in your JVM process and look at trying to rebase your DLLs into a more compact address space. Not fun, but if you are desperate...
Alternatively, you can just switch to 64-bit Windows and a 64-bit JVM. Despite what others have suggested, while it will chew up more RAM, you will have much more contiguous virtual address space, and allocating 2GB contiguously would be trivial.
A: Oracle JRockit, which can handle a non-contiguous heap, can have a Java heap size of 2.85 GB on Windows 2003/XP with the /3GB switch. It seems that fragmentation can have quite an impact on how large a Java heap can be.
A: Sun's JDK/JRE needs a contiguous amount of memory if you allocate a huge block.
The OS and initial apps tend to allocate bits and pieces during loading, which fragments the available RAM. If a contiguous block is NOT available, the Sun JDK cannot use it. JRockit from BEA (acquired by Oracle) can allocate memory from pieces.
A: Everyone seems to be answering about contiguous memory, but they have neglected to acknowledge a more pressing issue.
Even with 100% contiguous memory allocation, you can't have a 2 GiB heap size on a 32-bit Windows OS (*by default). This is because 32-bit Windows processes cannot address more than 2 GiB of space.
The Java process will contain perm gen (pre Java 8), stack size per thread, JVM / library overhead (which pretty much increases with each build) all in addition to the heap.
Furthermore, JVM flags and their default values change between versions. Just run the following and you'll get some idea:
java -XX:+PrintFlagsFinal
Lots of the options affect memory division in and out of the heap. Leaving you with more or less of that 2 GiB to play with...
To reuse portions of this answer of mine (about Tomcat, but applies to any Java process):
The Windows OS
limits the memory allocation of a 32-bit process to 2 GiB in total (by
default).
[You will only be able] to allocate around 1.5 GiB heap
space because there is also other memory allocated to the process
(the JVM / library overhead, perm gen space etc.).
Why does 32-bit Windows impose a 2 GB process address space limit, but
64-bit Windows impose a 4GB limit?
Other modern operating systems [cough Linux] allow 32-bit processes to
use all (or most) of the 4 GiB addressable space.
That said, 64-bit Windows OS's can be configured to increase the limit
of 32-bit processes to 4 GiB (3 GiB on 32-bit):
http://msdn.microsoft.com/en-us/library/windows/desktop/aa366778(v=vs.85).aspx
A: Here is how to increase the paging size:
*
*Right-click on My Computer ---> Properties ---> Advanced
*In the Performance section, click Settings
*Click the Advanced tab
*In the Virtual memory section, click Change. It will show your current paging
size.
*Select a drive where HDD space is available.
*Provide the initial size and max size, e.g. initial size 0 MB and max size
4000 MB. (As much as you will require.)
A: There are numerous ways to change the heap size, like:
*
*File -> Settings -> Build, Execution, Deployment -> Compiler: here you will find the heap size
*File -> Settings -> Build, Execution, Deployment -> Compiler -> Android: here you will also find the heap size. You can refer to this for an Android project if you are facing the same issue.
What worked for me was:
*
*Set the appropriate JAVA_HOME path in case your Java got updated.
*Create a new system variable: Computer -> Properties -> Advanced Settings -> create a new system variable
name: _JAVA_OPTIONS value: -Xmx750m
FYI:
You can find the default VM options in IntelliJ under
Help -> Edit Custom VM Options; in this file you see the min and max size of the heap.
A: First, using a page-file when you have 4 GB of RAM is useless. Windows can't access more than 4GB (actually, less because of memory holes) so the page file is not used.
Second, the address space is split in two: half for the kernel, half for user mode. If you need more RAM for your applications, use the /3GB option in boot.ini (and make sure java.exe is marked as "large address aware"; google for more info).
Third, I think you can't allocate the full 2 GB of address space because Java wastes some memory internally (for threads, the JIT compiler, VM initialization, etc.). Use the /3GB switch for more.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "105"
} |
Q: How to block running two instances of the same program? I need to make sure that user can run only one instance of my program at a time.
Which means that I have to check programmatically whether the same program is already running, and quit in that case.
The first thing that came to my mind was to create a file somewhere, when the program starts. Then, each other instance of the program would check for this file and exit if it found it.
The trouble is that, for this to work, the program must always exit gracefully and be able to delete the file it created.
In case of, say, power outage, the lock file remains in place and the program can't be started again.
To solve this, I decided to store the first program's process ID into the lock file and when another instance starts, it checks if the PID from the file is attached to some running process.
If the file doesn't exist, is empty, or the PID doesn't correspond to any existing process, the program continues to run and writes its own PID to the file.
This seems to work quite fine - even after an unexpected shutdown, the chance that the (now obsolete) process ID will be associated with some other program seems to be quite low.
But it still doesn't feel right (there is a chance of getting locked by some unrelated process) and working with process IDs seems to go beyond the standard C++ and probably isn't very portable either.
So, is there another (more clean and secure) way of doing this? Ideally one that would work with the ISO 98 C++ standard and on Windows and *nix alike.
If it cannot be done platform-independently, Linux/Unix is a priority for me.
A: Your method of writing the process pid to a file is a common one that is used in many different established applications. In fact, if you look in your /var/run directory right now I bet you'll find several *.pid files already.
As you say, it's not 100% robust, because there is a chance of the PIDs getting confused. I have heard of programs using flock() to lock an application-specific file that will automatically be unlocked by the OS when the process exits, but this method is more platform-specific and less transparent.
A: I actually use exactly the process you describe, and it works fine except for the edge case that happens when you suddenly run out of disk space and can no longer create files.
The "correct" way to do this is probably to use shared memory: http://www.cs.cf.ac.uk/Dave/C/node27.html
A: It is very un-Unix to prohibit multiple instances of a program from running.
If the program is, say, a network daemon, it doesn't need to actively prohibit multiple instances--only the first instance gets to listen to the socket, so subsequent instances bomb out automatically. If it is, say, an RDBMS, it doesn't need to actively prohibit multiple instances--only the first instance gets to open and lock the files. etc.
A: There are several methods you can use to accomplish only allowing one instance of your application:
Method 1: Global synchronization object or memory
It's usually done by creating a named global mutex or event. If it is already created, then you know the program is already running.
For example, on Windows you could do:
#define APPLICATION_INSTANCE_MUTEX_NAME "{BA49C45E-B29A-4359-A07C-51B65B5571AD}"
//Make sure at most one instance of the tool is running
HANDLE hMutexOneInstance(::CreateMutex( NULL, TRUE, APPLICATION_INSTANCE_MUTEX_NAME));
bool bAlreadyRunning((::GetLastError() == ERROR_ALREADY_EXISTS));
if (hMutexOneInstance == NULL || bAlreadyRunning)
{
if(hMutexOneInstance)
{
::ReleaseMutex(hMutexOneInstance);
::CloseHandle(hMutexOneInstance);
}
throw std::exception("The application is already running");
}
Method 2: Locking a file; if the second program can't open the file, it knows another instance is running
You could also exclusively open a file by locking it on application start. If the file is already exclusively opened and your application cannot obtain a file handle, then the program is already running. On Windows you'd simply not specify the sharing flag FILE_SHARE_WRITE on the file you're opening with the CreateFile API. On Linux you'd use flock.
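A minimal sketch of that Windows variant (the lock-file path is a hypothetical example): opening the file with a share mode of 0 makes any second CreateFile call on it fail.
// Hedged sketch: exclusive CreateFile as a single-instance check (Win32, C++)
#include <windows.h>
#include <stdio.h>

int main()
{
    HANDLE h = ::CreateFileA("C:\\myapp.lock",
                             GENERIC_WRITE,
                             0,              // no sharing: exclusive access
                             NULL,
                             OPEN_ALWAYS,    // create the file if it doesn't exist
                             FILE_ATTRIBUTE_NORMAL,
                             NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "The application is already running.\n");
        return 1;
    }
    // ... run the application; the handle is released on process exit ...
    ::CloseHandle(h);
    return 0;
}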
Method 3: Search for process name:
You could enumerate the active processes and search for one with your process name.
A: If you want something that's bog standard, then using a file as a 'lock' is pretty much the way to go. It does have the drawback that you mentioned (if your app doesn't clean up, restarting can be an issue).
This method is used by quite a few applications, but the only one I can recall off the top of my head is VMware. And yes, there are times when you have to go in and delete the '*.lck' when things get wedged.
Using a global mutex or other system object as mentioned by Brian Bondy is a better way to go, but these are platform specific, (unless you use some other library to abstract the platform specifics away).
A: I don't have a good solution, but two thoughts:
*
*You could add a ping capability to query the other process and make sure it's not an unrelated process. Firefox does something similar on Linux and doesn't start a new instance when one is already running.
*If you use a signal handler, you can ensure the PID file is deleted in all cases except a kill -9 (see the sketch below)
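A rough sketch of that cleanup idea (the PID-file path and the set of handled signals are assumptions for illustration): register an atexit() hook for normal exits and a handler for the catchable termination signals.
/* cleanup.c - hedged sketch of PID-file cleanup on exit and on signals */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static const char *PIDFILE = "/tmp/myapp.pid"; /* hypothetical path */

static void remove_pidfile(void) { unlink(PIDFILE); }

static void on_signal(int sig)
{
    unlink(PIDFILE);   /* unlink() is async-signal-safe */
    _exit(128 + sig);
}

int main(void)
{
    FILE *f = fopen(PIDFILE, "w");
    if (f) { fprintf(f, "%d\n", (int)getpid()); fclose(f); }

    atexit(remove_pidfile);      /* normal exit */
    signal(SIGINT, on_signal);   /* Ctrl+C */
    signal(SIGTERM, on_signal);  /* plain kill; SIGKILL cannot be caught */

    /* ... application runs ... */
    return 0;
}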
A: I scan the process list looking for the name of my apps executable with matching command line parameters then exit if there is a match.
My app can run more than once, but I don't want it running the same config file at the same time.
Obviously, this is Windows specific, but the same concept is pretty easy on any *NIX system, even without specific libraries, by running the shell command 'ps -ef' or a variation and looking for your app.
'*************************************************************************
' Sub: CheckForProcess()
' Author: Ron Savage
' Date: 10/31/2007
'
' This routine checks for a running process of this app with the same
' command line parameters.
'*************************************************************************
Private Function CheckForProcess(ByVal processText As String) As Boolean
    Dim isRunning As Boolean = False
    Dim search As New ManagementObjectSearcher("SELECT * FROM Win32_process")
    Dim info As ManagementObject
    Dim procName As String = ""
    Dim procId As String = ""
    Dim procCommandLine As String = ""

    For Each info In search.Get()
        If (IsNothing(info.Properties("Name").Value)) Then procName = "NULL" Else procName = Split(info.Properties("Name").Value.ToString, ".")(0)
        If (IsNothing(info.Properties("ProcessId").Value)) Then procId = "NULL" Else procId = info.Properties("ProcessId").Value.ToString
        If (IsNothing(info.Properties("CommandLine").Value)) Then procCommandLine = "NULL" Else procCommandLine = info.Properties("CommandLine").Value.ToString

        If (Not procId.Equals(Me.processId) And procName.Equals(processName) And procCommandLine.Contains(processText)) Then
            isRunning = True
        End If
    Next

    Return (isRunning)
End Function
A: I found a cross platform way to do this using boost/interprocess/sync/named_mutex. Please refer to this answer https://stackoverflow.com/a/42882353/4806882
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: PHP access to iTunes tags in an RSS feed I need to get access to the iTunes tags in an RSS feed using PHP. I've used simplepie before for podcast feeds, but I'm not sure how to get the iTunes tags using it. Is there a way to use simplepie to do it or is there a better way?
Okay, I tried SimpleXML.
All this (the code below) seems to work
$feed = simplexml_load_file('http://sbhosting.com/feed/');
$channel = $feed->channel;
$channel_itunes = $channel->children('http://www.itunes.com/dtds/podcast-1.0.dtd');
$summary = $channel_itunes->summary;
$subtitle = $channel_itunes->subtitle;
$category = $channel_itunes->category;
$owner = $channel_itunes->owner->name;
Now I need to get the iTunes categories. They seem to be represented in several ways.
In this case I get the following XML:
<itunes:category text="Technology"/>
<itunes:category text="Technology">
<itunes:category text="Software How-To"/>
</itunes:category>
I would expect to be able to get the category with something like this:
$category_text = $channel_itunes->category['text'];
But that does not seem to work.
I've seen other ways to represent the category that I really don't know how to get.
For example:
Technology
Business
Education
Is this a media thing or an iTunes thing or both?
Thanks For Your Help.
G
A: SimplePie has a get_item_tags() function that should let you access them.
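For instance, a hedged sketch of what that could look like (the tag choice is an assumption; get_item_tags() takes a namespace URI and a tag name, and returns the raw parsed data as nested arrays):
// Hedged sketch; assumes a SimplePie object $pie already initialized on a podcast feed
$items = $pie->get_items();
$duration = $items[0]->get_item_tags(SIMPLEPIE_NAMESPACE_ITUNES, 'duration');
if ($duration) {
    echo $duration[0]['data']; // raw text of the first <itunes:duration> tag
}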
A: This code works for me:
//$pie is a SimplePie object
$iTunesCategories=$pie->get_channel_tags(SIMPLEPIE_NAMESPACE_ITUNES,'category');
if ($iTunesCategories) {
foreach ($iTunesCategories as $iTunesCategory) {
$category=$iTunesCategory['attribs']['']['text'];
$subcat=$iTunesCategory['child']["http://www.itunes.com/dtds/podcast-1.0.dtd"]['category'][0]['attribs']['']['text'];
if ($subcat) {
$category.=":$subcat";
}
//do something with $category
}
}
A: To get the attribute with SimpleXML, instead of:
$category_text = $channel_itunes->category['text'];
Use:
$category_text = $channel_itunes->category->attributes()->text;
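To also reach the nested subcategories from the question's XML, something along these lines should work (a sketch reusing the question's $channel_itunes variable; note the namespace URI has to be passed again for each level of children):
// Hedged sketch: walk top-level and nested <itunes:category> elements
$ns = 'http://www.itunes.com/dtds/podcast-1.0.dtd';
foreach ($channel_itunes->category as $cat) {
    echo (string) $cat->attributes()->text, "\n";             // e.g. Technology
    foreach ($cat->children($ns)->category as $sub) {
        echo '  - ', (string) $sub->attributes()->text, "\n"; // e.g. Software How-To
    }
}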
A: <?php echo $feed_item->children('itunes', true)->image->attributes()->href;?>
A: If you have PHP 5, using SimpleXML can help in parsing the info you need.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Apache Webserver security and optimization tips I'm about to deal with managing and running my first Internet connected Apache webserver and I was wondering if there are any sys admins and developers out there that would like to share some of their knowledge regarding security and optimization tips for running Apache webserver.
Maybe you can share your top five (or ten) list of things you immediately do after installing Apache webserver (on a Linux box).
Any help very much appreciated.
A: Basic
*
*Be sure to have the latest stable version installed. Running an old or unstable version of Apache could expose your system to security flaws or untested solutions
*Be sure only the intended requests are actually processed. You should consider who has to access the web resources exposed by Apache and how.
*Avoid running Apache as root. This is a must.
*Handle your logs. Logs tend to grow bigger and bigger; consider setting up logrotate or cleaning your logs periodically.
*Monitor Apache's health with a monitoring system. I like to couple munin and monit, both easy to set up and to maintain. Nagios and others are worth a look.
*If Apache is serving web apps (i.e. PHP, Perl, Rails) be sure the requests are handled by the right module in the right order.
*Write a nice 404 and 500 message. Sooner or later your visitors will catch an error.
*Stop and restart Apache, so you can be sure both the shutdown and start procedures work flawlessly.
*Use mod_security
Security
*
*Protect Apache against DOS.
*Load only the modules really needed.
*Monitor your log to figure out if something strange is happening.
Performance
*
*If you are compiling Apache from source code, be sure to choose the appropriate MPM (Multi-Processing Module).
*Load only the modules really needed.
*Check the MaxClients setting so that your server does not spawn so many children it starts swapping.
*Use the mod_deflate module, it provides the DEFLATE output filter that allows output from your server to be compressed before being sent to the client over the network.
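A hedged configuration sketch of those last two points (the values are illustrative placeholders, not recommendations; tune MaxClients against your own memory measurements):
# httpd.conf fragment - compress common text formats
LoadModule deflate_module modules/mod_deflate.so
AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript

# Cap worker processes so the server never swaps (prefork MPM example)
<IfModule prefork.c>
    MaxClients 150
</IfModule>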
A: *
*Ensure the Apache process isn't running as root.
*Be sure to be on the latest stable release
*If the box is directly connected to the Internet, ensure you have thought about all other services, like ssh.
*Carefully inspect your local firewall rules, tighten it down. (See iptables)
*Don't turn on options you don't understand or don't plan to use
*Consider subscribing to an Apache security mailing list so you'll learn right away of any critical patches
A: *
*Chroot the webserver
*Disable any module you aren't going to need
*One module you do need is mod_security
*Set up a file integrity checker for your webroot
*Secure everything else on the same server and switch off anything not used
*Run tests against your server with tools like nmap or Metasploit
A: I'm going to interpret "after installing Apache on a box" as "Preparing a new server installation for production use", because of course this would all be done on a development server and committed to SCM or built into an automated install.
Everything you do to optimise must be based on real measurements. Set up a test environment with the actual application you intend to run, as realistically as possible. Some points to consider are:
*
*Don't set MaxClients too high. You can use up a lot of RAM, particularly with prefork servers with a large application embedded in them (e.g. mod_perl, PHP etc). Using too much memory is counter-productive. It's better for clients to wait for a successful service than be served an error.
*Consider carefully whether you have Keepalives on. These can both speed up and slow down depending on your environment. If you choose to have them on, you should think about your keepalive timeout based on the actual use case.
*Do performance testing with HTTPS enabled if you're using HTTPS in production
*Set "Last-modified" and "Expires" headers appropriately on objects which change infrequently (to maximise client side caching). Test client side caching in a variety of browsers.
*Make sure your application uses HTTPS correctly, not in a way which causes browsers to generate security warnings (this is another good reason you need to use HTTPS during testing)
A: If you're running a standard LAMP (Linux, Apache, MySQL, PHP/PEARL/PYTHON) environment: Put MySQL on another machine than Apache. Will be a little slower with only a few concurrent processes (due to network latency), but will be MUCH faster with many concurrent processes.
A: Make sure you have configured it to detect DOS (Denial Of Service) attacks.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Programming on the Asus EEE Pc in Visual Studio Has anybody tried programming on the EEE Pc in Visual Studio?
I'm considering buying one so I can show some apps on the fly, but also make small changes to them if necessary, without the inconvenience of a large laptop.
Some key points I'm after:
*
*How fast it is
*Would it suit the needs of a developer making small changes to code?
It sounds like the specs would get completely owned, but I've heard/seen strangely good things about the EEE Pc, like how it launches Word 2007 super quick on a nLite'd XP install. :)
A: I own an eeepc 900 and have successfully installed Visual Studio 2008, the MSDN library and SQL Server 2005 developer edition.
The biggest issue was fitting it all in the 4GB solid state C Drive. In short, you can't. Therefore using the 16GB secondary internal flash drive is essential.
The utility nlite was all I needed to do this. In summary, nlite lets you create a more compact version of Windows XP with just the components you need. Most importantly for the eeePC, it allowed me to easily tell Windows to use D:\ instead of C:\ as the destination for "Program Files" and "Documents and Settings".
Then you re-install Windows from the nlite Windows image, with the required paths automatically set as required. (I strongly recommend this approach over trying to change the paths of an existing/running Windows install, due to the numerous issues it may cause for application compatibility etc.)
Unfortunately (on the eeePC 900 at least) the D:\ drive is slower in general use than the primary solid state flash drive. For Visual Studio this means the startup time can be slower than ideal (i.e., 30 seconds). But I have 2 GB of RAM and have completely disabled the Windows swap file, so once the data has been loaded into RAM, Visual Studio runs nicely.
Overall I use Visual Studio on my eeePC for smaller projects and it is ideal for creating proof of concept type apps while on the move. While it is never going to be ideal as a main development machine, I can completely recommend installing Visual Studio etc on it.
To help resolve possible confusion:
The eeePC 9 series (900, 901) have an 8.9 inch screen, resolution 1024 * 600 and a total of 20GB internal storage, RAM can be upgraded to 2GB.
The older eeePC 7 series have 7 inch screens with 800 * 480 resolution and a total of 4GB of internal storage (RAM up to 2GB?). As a development machine, the 7 series are not really up to the job; however, the 9 series certainly are.
[Update]
I now own an eeePC 900HA: 1.6 GHz Atom, 2 GB RAM, 160 GB hard drive. Great little machine for proof of concepts and smaller programs. The biggest performance improvement is the standard 160 GB HDD, much better than a pretend solid state drive and much cheaper than an equivalent real SSD.
A: More or less like Ash, I have an EEE PC 901, installed with VS2008 without SP1, ReSharper and the MSDN library. I didn't install SQL Server as I use MySQL most of the time. I install all my "important" tools, meaning VS2008, on C:, and the rest of the stuff on D:, as I prefer to have maximum performance for my VS2008. Like the others mentioned, screen size is quite a limiting factor, so I use ProFont at size 8, shrank the default Windows UI, not forgetting to turn off the theme too.
Performance wise, the CPU is doing OK, but the SSD read/write speed is a factor. I benchmarked and get around 30 MB/s read, slightly more than 10 MB/s write. When I try to load multiple apps, or when VS2008 is busy with something, it will take a much longer time to even load Notepad, so I've learned to be patient and load one thing at a time (on my desktop, I can never wait to load everything in one shot). I have 2 GB of RAM and had been trying to allocate more RAM for disk cache, but still haven't achieved anything.
I use it to do onsite troubleshooting and minor touch ups, or whenever I go outstation, plus watching my favourite CSI when I'm traveling :P. Anyway, the main reason I got this is its battery runtime: 7 hours. I doubt you can find another decent notebook that can match it. It produces so little heat that it plays nice on my lap, and the standby is also quite seamless. I use standby extensively and even leave it on standby for days. The battery only drops about 10% per day. I can be seated and working on my program and the next minute close my notebook and move to the next location, without worrying that it won't go into standby (even if it doesn't standby, it can still last until the next time it's opened up, and not burn the pouch along the way).
I did look into the Acer Aspire One before I got the EEE PC. The Aspire One did have a wider keyboard, much easier to type on, but the touchpad and battery put me off. I had been considering various 12" notebooks too before deciding on the EEE PC, as I used to have a 12" for 4 years. But a 12-incher doesn't have enough juice for me to work for more than 2 hours, and those that can run for 4 hours are just too pricey.
There was one time when I came into my client's office earlier than usual, in the morning at 9, started working on my notebook, left it on standby when I went for lunch, then worked until 5 in the evening. When everyone left, I still had 20% left on my battery. Knowing this, I can even leave the power adaptor in the hotel and just go around with a pouch. Way to go, ASUS.
EDIT: Sorry for the misinformation guys, I didn't realize that I only had VS2008 without SP1 on my Eee PC. I didn't realize the "difficulty" until Menelmacar asked me about it.
A: I would recommend something other than the Asus EEE, they're too small of a "netbook" and the screen resolution is terrible.
The HP Mini Note has a nice 8.9" display, practically full size keyboard and best of all has a display that can do 1280 x 768, though you might need to bump your font sizes a bit. :)
You also have the option of the Acer Aspire One which appears to be a much better netbook with a low price point.
If you Google any other those netbooks you will find many reviews and if you hit up YouTube you can find lots of hands on video reviews.
A: I think the 700 series would just be a dog. The 900 series would be a far better choice with a bigger screen and faster RAM (but the same processor), but it's still not well suited to Visual Studio 2008. I find VS cramped on my 12" tablet.
Take a look at the Dell Inspiron Mini.
A: I managed to install Visual Web Developer on the XP that came with my eee pc 901 and I've still got 1.3 GB left on the C drive.
*
*First I got the required 1.4 GB free on the C drive that VWD needs to do the install. I did this by following the instructions here... http://forum.eeeuser.com/viewtopic.php?id=40356 (the 'creating junctions' step for the Windows Installer/Microsoft.NET directories saves a lot of space)
*I downloaded the "Offline" Visual Studio ISO from (available at the bottom of the download page) here... http://www.microsoft.com/express/download/
*I then installed VWD from this ISO; remember to choose an install location other than the C drive!
Once the install completed, it turned out only about 200-300 MB is actually used on the C drive.
A: I think it largely depends on the size of your project. A small project might not have too much trouble, but a large project would probably bring the thing to its knees. I've seen my work project on VS.NET 2008 eat up to 350 MB of RAM all by itself, not counting loading the OS and actually running the project. Also, you might be using up a lot of hard disk space by installing Visual Studio on it. There isn't a lot of space on the EEE, unless you plan on using some kind of external USB hard disk.
Personally, I would recommend a more real laptop. You could get something cheap and small, and you'd probably be a lot happier in the end.
A: http://www.hardforum.com/showthread.php?t=1303682
It seems that other people have tried it; all have complained about the screen resolution, but surprisingly not the CPU. Needless to say, I don't want to have all the panels open or use it primarily as a development machine, I just wanted the option to do so if possible.
I'm looking at a 700 series. If it works it's a bonus; if it doesn't, I'll just have to look into using SharpDevelop maybe (I'm a student without much money, so it really needs to be budget).
A: I am just trying to install SP1 and it seems that I will not be successful. So you think that pointing Program Files to the D drive will force the installer to use drive D: for the service pack installation? Currently, I have 1 GB free on drive C but the installation needs 1.9 GB, although Visual Studio is installed to the D drive.
You can see details about the installation here: http://blogs.msdn.com/heaths/archive/2008/07/24/why-windows-installer-may-require-so-much-disk-space.aspx .
A: Wow, I've just installed .NET 3.5 and the disk requirements dropped to 1090 MB. Hopefully, I will be able to install SP1 without the reinstalling-and-changing-programfiles-path gymnastics.
A: Well, it works!
So - if you are short of disk space (you need 1.9 GB) while applying VS2008 SP1, try installing .NET 3.5 first. I would also recommend installing it from the ISO package (i.e., you don't need to download the installer files).
I was really surprised by the performance - I compiled a web site with five DLL projects and also started the SQL and development web servers, and it was really good.
A: Just a thought or alternative suggestion that might be applicable...
I regularly use Visual Studio without any issues on my eeePC. The trick is that I simply access another machine running Visual Studio remotely in order to do this. This lets me have the convenience and portability of the netbook, along with the full-scale computing power of a real development environment.
Obviously this won't work if you don't have connectivity, but for me it's an ideal setup.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How can I merge properties of two JavaScript objects dynamically? I need to be able to merge two (very simple) JavaScript objects at runtime. For example I'd like to:
var obj1 = { food: 'pizza', car: 'ford' }
var obj2 = { animal: 'dog' }
obj1.merge(obj2);
//obj1 now has three properties: food, car, and animal
Is there a built in way to do this? I do not need recursion, and I do not need to merge functions, just methods on flat objects.
A: In MooTools, there's Object.merge():
Object.merge(obj1, obj2);
A: var firstObject = {
key1 : 'value1',
key2 : 'value2'
};
var secondObject={
...firstObject,
key3 : 'value3',
key4 : 'value4',
key5 : 'value5'
}
console.log(firstObject);
console.log(secondObject);
A: Similar to jQuery extend(), you have the same function in AngularJS:
// Merge the 'options' object into the 'settings' object
var settings = {validate: false, limit: 5, name: "foo"};
var options = {validate: true, name: "bar"};
angular.extend(settings, options);
A: In Ext JS 4 it can be done as follows:
var mergedObject = Ext.Object.merge(object1, object2)
// Or shorter:
var mergedObject2 = Ext.merge(object1, object2)
See merge( object ) : Object.
A: var obj1 = { food: 'pizza', car: 'ford' }
var obj2 = { animal: 'dog' }
// result
result: {food: "pizza", car: "ford", animal: "dog"}
Using jQuery.extend() - Link
// Merge obj1 & obj2 to result
var result1 = $.extend( {}, obj1, obj2 );
Using _.merge() - Link
// Merge obj1 & obj2 to result
var result2 = _.merge( {}, obj1, obj2 );
Using _.extend() - Link
// Merge obj1 & obj2 to result
var result3 = _.extend( {}, obj1, obj2 );
Using Object.assign() ECMAScript 2015 (ES6) - Link
// Merge obj1 & obj2 to result
var result4 = Object.assign( {}, obj1, obj2 );
Output of all
obj1: { food: 'pizza', car: 'ford' }
obj2: { animal: 'dog' }
result1: {food: "pizza", car: "ford", animal: "dog"}
result2: {food: "pizza", car: "ford", animal: "dog"}
result3: {food: "pizza", car: "ford", animal: "dog"}
result4: {food: "pizza", car: "ford", animal: "dog"}
A: Based on Markus's and vsync's answers, this is an expanded version. The function takes any number of arguments. It can be used to set properties on DOM nodes and makes deep copies of values. However, the first argument is given by reference.
To detect a DOM node, the isDOMNode() function is used (see Stack Overflow question JavaScript isDOM — How do you check if a JavaScript Object is a DOM Object?)
It was tested in Opera 11, Firefox 6, Internet Explorer 8 and Google Chrome 16.
Code
function mergeRecursive() {
// _mergeRecursive does the actual job with two arguments.
var _mergeRecursive = function (dst, src) {
if (isDOMNode(src) || typeof src !== 'object' || src === null) {
return dst;
}
for (var p in src) {
if (!src.hasOwnProperty(p))
continue;
if (src[p] === undefined)
continue;
if ( typeof src[p] !== 'object' || src[p] === null) {
dst[p] = src[p];
} else if (typeof dst[p]!=='object' || dst[p] === null) {
dst[p] = _mergeRecursive(src[p].constructor===Array ? [] : {}, src[p]);
} else {
_mergeRecursive(dst[p], src[p]);
}
}
return dst;
}
// Loop through arguments and merge them into the first argument.
var out = arguments[0];
if (typeof out !== 'object' || out === null)
return out;
for (var i = 1, il = arguments.length; i < il; i++) {
_mergeRecursive(out, arguments[i]);
}
return out;
}
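Note that the snippet above calls isDOMNode() without defining it. A minimal duck-typed version along the lines of the linked question might look like this (an assumption added for completeness, not the answerer's exact code):
function isDOMNode(v) {
    // DOM nodes expose a numeric nodeType and a string nodeName
    return v !== null && typeof v === 'object' &&
           typeof v.nodeType === 'number' && typeof v.nodeName === 'string';
}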
Some examples
Set innerHTML and style of a HTML Element
mergeRecursive(
document.getElementById('mydiv'),
{style: {border: '5px solid green', color: 'red'}},
{innerHTML: 'Hello world!'});
Merge arrays and objects. Note that undefined can be used to preserve values in the left-hand array/object.
o = mergeRecursive({a:'a'}, [1,2,3], [undefined, null, [30,31]], {a:undefined, b:'b'});
// o = {0:1, 1:null, 2:[30,31], a:'a', b:'b'}
Any argument not being a JavaScript object (including null) will be ignored. Except for the first argument, DOM nodes are also discarded. Beware that strings created like new String() are in fact objects.
o = mergeRecursive({a:'a'}, 1, true, null, undefined, [1,2,3], 'bc', new String('de'));
// o = {0:'d', 1:'e', 2:3, a:'a'}
If you want to merge two objects into a new one (without affecting either of the two), supply {} as the first argument
var a={}, b={b:'abc'}, c={c:'cde'}, o;
o = mergeRecursive(a, b, c);
// o===a is true, o===b is false, o===c is false
Edit (by ReaperSoon):
To also merge arrays
function mergeRecursive(obj1, obj2) {
if (Array.isArray(obj2)) { return obj1.concat(obj2); }
for (var p in obj2) {
try {
// Property in destination object set; update its value.
if ( obj2[p].constructor==Object ) {
obj1[p] = mergeRecursive(obj1[p], obj2[p]);
} else if (Array.isArray(obj2[p])) {
obj1[p] = obj1[p].concat(obj2[p]);
} else {
obj1[p] = obj2[p];
}
} catch(e) {
// Property in destination object not set; create it and set its value.
obj1[p] = obj2[p];
}
}
return obj1;
}
A: It seems like this should be all you need:
var obj1 = { food: 'pizza', car: 'ford' }
var obj2 = { animal: 'dog' }
var obj3 = { ...obj1, ...obj2 }
After that obj3 should now have the following value:
{food: "pizza", car: "ford", animal: "dog"}
Try it out here:
var obj1 = { food: 'pizza', car: 'ford' }
var obj2 = { animal: 'dog' }
var obj3 = { ...obj1, ...obj2 }
console.log(obj3);
A: I need to merge objects today, and this question (and answers) helped me a lot. I tried some of the answers, but none of them fit my needs, so I combined some of the answers, added something myself and came up with a new merge function. Here it is:
var merge = function() {
var obj = {},
i = 0,
il = arguments.length,
key;
for (; i < il; i++) {
for (key in arguments[i]) {
if (arguments[i].hasOwnProperty(key)) {
obj[key] = arguments[i][key];
}
}
}
return obj;
};
Some example usages:
var t1 = {
key1: 1,
key2: "test",
key3: [5, 2, 76, 21]
};
var t2 = {
key1: {
ik1: "hello",
ik2: "world",
ik3: 3
}
};
var t3 = {
key2: 3,
key3: {
t1: 1,
t2: 2,
t3: {
a1: 1,
a2: 3,
a4: [21, 3, 42, "asd"]
}
}
};
console.log(merge(t1, t2));
console.log(merge(t1, t3));
console.log(merge(t2, t3));
console.log(merge(t1, t2, t3));
console.log(merge({}, t1, { key1: 1 }));
A: You can use the object spread syntax to achieve this. It's a part of ES2018 and beyond.
const obj1 = { food: 'pizza', car: 'ford' };
const obj2 = { animal: 'dog' };
const obj3 = { ...obj1, ...obj2 };
console.log(obj3);
A: With Underscore.js, to merge an array of objects do:
var arrayOfObjects = [ {a:1}, {b:2, c:3}, {d:4} ];
_(arrayOfObjects).reduce(function(memo, o) { return _(memo).extend(o); });
It results in:
Object {a: 1, b: 2, c: 3, d: 4}
A: You should use lodash's defaultsDeep
_.defaultsDeep({ 'user': { 'name': 'barney' } }, { 'user': { 'name': 'fred', 'age': 36 } });
// → { 'user': { 'name': 'barney', 'age': 36 } }
A:
let obj1 = {a:1, b:2};
let obj2 = {c:3, d:4};
let merged = {...obj1, ...obj2};
console.log(merged);
A: The given solutions should be modified to check source.hasOwnProperty(property) in the for..in loops before assigning - otherwise, you end up copying the properties of the whole prototype chain, which is rarely desired...
A: Merge properties of N objects in one line of code
An Object.assign method is part of the ECMAScript 2015 (ES6) standard and does exactly what you need. (IE not supported)
var clone = Object.assign({}, obj);
The Object.assign() method is used to copy the values of all enumerable own properties from one or more source objects to a target object.
Read more...
The polyfill to support older browsers:
if (!Object.assign) {
Object.defineProperty(Object, 'assign', {
enumerable: false,
configurable: true,
writable: true,
value: function(target) {
'use strict';
if (target === undefined || target === null) {
throw new TypeError('Cannot convert first argument to object');
}
var to = Object(target);
for (var i = 1; i < arguments.length; i++) {
var nextSource = arguments[i];
if (nextSource === undefined || nextSource === null) {
continue;
}
nextSource = Object(nextSource);
var keysArray = Object.keys(nextSource);
for (var nextIndex = 0, len = keysArray.length; nextIndex < len; nextIndex++) {
var nextKey = keysArray[nextIndex];
var desc = Object.getOwnPropertyDescriptor(nextSource, nextKey);
if (desc !== undefined && desc.enumerable) {
to[nextKey] = nextSource[nextKey];
}
}
}
return to;
}
});
}
A: It's worth mentioning that the version from the 140byt.es collection is solving the task within minimum space and is worth a try for this purpose:
Code:
function m(a,b,c){for(c in b)b.hasOwnProperty(c)&&((typeof a[c])[0]=='o'?m(a[c],b[c]):a[c]=b[c])}
Usage for your purpose:
m(obj1,obj2);
Here's the original Gist.
A: ES2018/TypeScript: Many answers are OK but I've come up with a more elegant solution to this problem when you need to merge two objects without overwriting overlapping object keys.
My function also accepts unlimited number of objects to merge as function arguments:
(I'm using TypeScript notation here, feel free to delete the :object[] type in the function argument if you're using plain JavaScript).
const merge = (...objects: object[]) => {
return objects.reduce((prev, next) => {
Object.keys(prev).forEach(key => {
next[key] = { ...next[key], ...prev[key] }
})
return next
})
}
A: You can use Object.assign method. For example:
var result = Object.assign(obj1, obj2);
Also, note that it creates a shallow copy of the object.
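A quick illustration of what "shallow" means here (a minimal sketch): nested objects are shared by reference rather than duplicated, so a change through the copy is visible through the original.
var a = { nested: { x: 1 } };
var b = Object.assign({}, a);
b.nested.x = 2;
console.log(a.nested.x); // 2 - the nested object is shared, not copied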
A:
Use the spread operator (part of the ES2018 version for object literals)
var obj1 = { food: 'pizza', car: 'ford' }
var obj2 = { animal: 'dog' }
let result = {...obj1,...obj2};
console.log(result)
output { food: 'pizza', car: 'ford', animal: 'dog' }
A: ECMAScript 2018 Standard Method
You would use object spread:
let merged = {...obj1, ...obj2};
merged is now the union of obj1 and obj2. Properties in obj2 will overwrite those in obj1.
/** There's no limit to the number of objects you can merge.
* Later properties overwrite earlier properties with the same name. */
const allRules = {...obj1, ...obj2, ...obj3};
Here is also the MDN documentation for this syntax. If you're using babel you'll need the @babel/plugin-proposal-object-rest-spread plugin for it to work (This plugin is included in @babel/preset-env, in ES2018).
ECMAScript 2015 (ES6) Standard Method
/* For the case in question, you would do: */
Object.assign(obj1, obj2);
/** There's no limit to the number of objects you can merge.
* All objects get merged into the first object.
* Only the object in the first argument is mutated and returned.
* Later properties overwrite earlier properties with the same name. */
const allRules = Object.assign({}, obj1, obj2, obj3, etc);
(see MDN JavaScript Reference)
Method for ES5 and Earlier
for (var attrname in obj2) { obj1[attrname] = obj2[attrname]; }
Note that this will simply add all attributes of obj2 to obj1 which might not be what you want if you still want to use the unmodified obj1.
If you're using a framework that craps all over your prototypes then you have to get fancier with checks like hasOwnProperty, but that code will work for 99% of cases.
Example function:
/**
* Overwrites obj1's values with obj2's and adds obj2's if non existent in obj1
* @param obj1
* @param obj2
* @returns obj3 a new object based on obj1 and obj2
*/
function merge_options(obj1,obj2){
var obj3 = {};
for (var attrname in obj1) { obj3[attrname] = obj1[attrname]; }
for (var attrname in obj2) { obj3[attrname] = obj2[attrname]; }
return obj3;
}
A: The Harmony ECMAScript 2015 (ES6) specifies Object.assign which will do this.
Object.assign(obj1, obj2);
Current browser support is getting better, but if you're developing for browsers that don't have support, you can use a polyfill.
A: The following two are probably a good starting point. lodash also has a customizer function for those special needs!
_.extend (http://underscorejs.org/#extend)
_.merge (https://lodash.com/docs#merge)
A: Here's my stab which
*
*Supports deep merge
*Does not mutate arguments
*Takes any number of arguments
*Does not extend the object prototype
*Does not depend on another library (jQuery, MooTools, Underscore.js, etc.)
*Includes check for hasOwnProperty
*Is short :)
/*
Recursively merge properties and return new object
obj1 <- obj2 [ <- ... ]
*/
function merge () {
var dst = {}
,src
,p
,args = [].splice.call(arguments, 0)
;
while (args.length > 0) {
src = args.splice(0, 1)[0];
if (toString.call(src) == '[object Object]') {
for (p in src) {
if (src.hasOwnProperty(p)) {
if (toString.call(src[p]) == '[object Object]') {
dst[p] = merge(dst[p] || {}, src[p]);
} else {
dst[p] = src[p];
}
}
}
}
}
return dst;
}
Example:
a = {
"p1": "p1a",
"p2": [
"a",
"b",
"c"
],
"p3": true,
"p5": null,
"p6": {
"p61": "p61a",
"p62": "p62a",
"p63": [
"aa",
"bb",
"cc"
],
"p64": {
"p641": "p641a"
}
}
};
b = {
"p1": "p1b",
"p2": [
"d",
"e",
"f"
],
"p3": false,
"p4": true,
"p6": {
"p61": "p61b",
"p64": {
"p642": "p642b"
}
}
};
c = {
"p1": "p1c",
"p3": null,
"p6": {
"p62": "p62c",
"p64": {
"p643": "p641c"
}
}
};
d = merge(a, b, c);
/*
d = {
"p1": "p1c",
"p2": [
"d",
"e",
"f"
],
"p3": null,
"p5": null,
"p6": {
"p61": "p61b",
"p62": "p62c",
"p63": [
"aa",
"bb",
"cc"
],
"p64": {
"p641": "p641a",
"p642": "p642b",
"p643": "p641c"
}
},
"p4": true
};
*/
A: Just by the way, what you're all doing is overwriting properties, not merging...
This is how JavaScript objects are really merged: only keys in the to object which are not themselves objects will be overwritten by from. Everything else will be really merged. Of course you can change this behaviour to not overwrite anything that exists, e.g. only if to[n] is undefined, etc...:
var realMerge = function (to, from) {
    for (var n in from) { // 'var' added so n doesn't leak as a global
        if (typeof to[n] != 'object') {
            to[n] = from[n];
        } else if (typeof from[n] == 'object') {
            to[n] = realMerge(to[n], from[n]);
        }
    }
    return to;
};
Usage:
var merged = realMerge(obj1, obj2);
A: Object.assign(TargetObject, Obj1, Obj2, ...);
A: I googled for code to merge object properties and ended up here. However since there wasn't any code for recursive merge I wrote it myself. (Maybe jQuery extend is recursive BTW?) Anyhow, hopefully someone else will find it useful as well.
(Now the code does not use Object.prototype :)
Code
/*
* Recursively merge properties of two objects
*/
function MergeRecursive(obj1, obj2) {
for (var p in obj2) {
try {
// Property in destination object set; update its value.
if ( obj2[p].constructor==Object ) {
obj1[p] = MergeRecursive(obj1[p], obj2[p]);
} else {
obj1[p] = obj2[p];
}
} catch(e) {
// Property in destination object not set; create it and set its value.
obj1[p] = obj2[p];
}
}
return obj1;
}
An example
o1 = { a : 1,
b : 2,
c : {
ca : 1,
cb : 2,
cc : {
cca : 100,
ccb : 200 } } };
o2 = { a : 10,
c : {
ca : 10,
cb : 20,
cc : {
cca : 101,
ccb : 202 } } };
o3 = MergeRecursive(o1, o2);
Produces object o3 like
o3 = { a : 10,
b : 2,
c : {
ca : 10,
cb : 20,
cc : {
cca : 101,
ccb : 202 } } };
A: For not-too-complicated objects you could use JSON:
var obj1 = { food: 'pizza', car: 'ford' }
var obj2 = { animal: 'dog', car: 'chevy'}
var objMerge;
objMerge = JSON.stringify(obj1) + JSON.stringify(obj2);
// {"food": "pizza","car":"ford"}{"animal":"dog","car":"chevy"}
objMerge = objMerge.replace(/\}\{/, ","); // replace "}{" with a comma to form valid JSON
objMerge = JSON.parse(objMerge); // { food: 'pizza', animal: 'dog', car: 'chevy'}
// For keys present in both objects, the last object's value is retained
Mind you that in this example "}{" must not occur within a string!
A: Object.assign()
ECMAScript 2015 (ES6)
This is a new technology, part of the ECMAScript 2015 (ES6) standard.
This technology's specification has been finalized, but check the compatibility table for usage and implementation status in various browsers.
The Object.assign() method is used to copy the values of all enumerable own properties from one or more source objects to a target object. It will return the target object.
var o1 = { a: 1 };
var o2 = { b: 2 };
var o3 = { c: 3 };
var obj = Object.assign(o1, o2, o3);
console.log(obj); // { a: 1, b: 2, c: 3 }
console.log(o1); // { a: 1, b: 2, c: 3 }, target object itself is changed.
A: Use:
//Takes any number of objects and returns one merged object
var objectMerge = function(){
var out = {};
if(!arguments.length)
return out;
for(var i=0; i<arguments.length; i++) {
for(var key in arguments[i]){
out[key] = arguments[i][key];
}
}
return out;
}
It was tested with:
console.log(objectMerge({a:1, b:2}, {a:2, c:4}));
It results in:
{ a: 2, b: 2, c: 4 }
A: gossi's extension of David Coallier's method:
Check these two lines:
from = arguments[i];
Object.getOwnPropertyNames(from).forEach(function (name) {
One needs to check "from" against a null object... If, for example, merging an object that comes from an Ajax response, previously created on a server, an object property can have a value of "null", and in that case the above code generates an error saying:
"from" is not a valid object
So, for example, wrapping the "...Object.getOwnPropertyNames(from).forEach..." call with an "if (from != null) { ... }" will prevent that error occurring.
A:
function extend(o, o1, o2){
if( !(o instanceof Object) ) o = {};
copy(o, o1);
if( o2 )
copy(o, o2)
function isObject(obj) {
var type = Object.prototype.toString.call(obj);
return obj === Object(obj) && type != '[object Array]' && type != '[object Function]';
};
function copy(a,b){
// copy o2 to o
for( var key in b )
if( b.hasOwnProperty(key) ){
if( isObject(b[key]) ){
if( !isObject(a[key]) )
a[key] = Object.assign({}, b[key]);
else copy(a[key], b[key])
}
else
a[key] = b[key];
}
}
return o;
};
var o1 = {a:{foo:1}, b:1},
o2 = {a:{bar:2}, b:[1], c:()=>{}},
newMerged = extend({}, o1, o2);
console.log( newMerged )
console.log( o1 )
console.log( o2 )
A: My way:
function mergeObjects(defaults, settings) {
Object.keys(defaults).forEach(function(key_default) {
if (typeof settings[key_default] == "undefined") {
settings[key_default] = defaults[key_default];
} else if (isObject(defaults[key_default]) && isObject(settings[key_default])) {
mergeObjects(defaults[key_default], settings[key_default]);
}
});
function isObject(object) {
return Object.prototype.toString.call(object) === '[object Object]';
}
return settings;
}
:)
A: I use the following which is in pure JavaScript. It starts from the right-most argument and combines them all the way up to the first argument. There is no return value, only the first argument is modified and the left-most parameter (except the first one) has the highest weight on properties.
var merge = function() {
var il = arguments.length;
for (var i = il - 1; i > 0; --i) {
for (var key in arguments[i]) {
if (arguments[i].hasOwnProperty(key)) {
arguments[0][key] = arguments[i][key];
}
}
}
};
A: Merge two objects using Object.assign and the spread operator.
Wrong way (modifies the original object, because the target is o1)
var o1 = { X: 10 };
var o2 = { Y: 20 };
var o3 = { Z: 30 };
var merge = Object.assign(o1, o2, o3);
console.log(merge) // {X:10, Y:20, Z:30}
console.log(o1) // {X:10, Y:20, Z:30}
Right ways
*
*Object.assign({}, o1, o2, o3) ==> targeting new object
*{...o1, ...o2, ...o3} ==> spreading objects
var o1 = { X: 10 };
var o2 = { Y: 20 };
var o3 = { Z: 30 };
console.log('Does not modify original objects because target {}');
var merge = Object.assign({}, o1, o2, o3);
console.log(merge); // { X: 10, Y: 20, Z: 30 }
console.log(o1)
console.log('Does not modify original objects')
var spreadMerge = {...o1, ...o2, ...o3};
console.log(spreadMerge);
console.log(o1);
A: shallow
var obj = { name : "Jacob" , address : ["America"] }
var obj2 = { name : "Shaun" , address : ["Hong Kong"] }
var merged = Object.assign({} , obj,obj2 ); //shallow merge
obj2.address[0] = "new city"
merged.address[0] is changed to "new city", i.e. the merged object is also changed. This is the problem with a shallow merge.
deep
var obj = { name : "Jacob" , address : ["America"] }
var obj2 = { name : "Shaun" , address : ["Hong Kong"] }
var result = Object.assign({} , JSON.parse(JSON.stringify(obj)),JSON.parse(JSON.stringify(obj2)) )
obj2.address[0] = "new city"
result.address[0] is not changed
A: Note that underscore.js's extend-method does this in a one-liner:
_.extend({name : 'moe'}, {age : 50});
=> {name : 'moe', age : 50}
A: There's a library called deepmerge on GitHub that seems to be getting some traction. It's standalone, available through both the npm and Bower package managers.
I would be inclined to use or improve on this instead of copy-pasting code from answers.
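A hedged usage sketch, based on the API shown in the deepmerge README (the sample objects are made up):
// npm install deepmerge
var merge = require('deepmerge');

var merged = merge({ a: 1, b: { c: 2 } }, { b: { d: 3 } });
console.log(merged); // { a: 1, b: { c: 2, d: 3 } }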
A: The best way for you to do this is to add a proper property that is non-enumerable using Object.defineProperty.
This way you will still be able to iterate over your objects properties without having the newly created "extend" that you would get if you were to create the property with Object.prototype.extend.
Hopefully this helps:
Object.defineProperty(Object.prototype, "extend", {
enumerable: false,
value: function(from) {
var props = Object.getOwnPropertyNames(from);
var dest = this;
props.forEach(function(name) {
if (name in dest) {
var destination = Object.getOwnPropertyDescriptor(from, name);
Object.defineProperty(dest, name, destination);
}
});
return this;
}
});
Once you have that working, you can do:
var obj = {
name: 'stack',
finish: 'overflow'
}
var replacement = {
name: 'stock'
};
obj.extend(replacement);
I just wrote a blog post about it here: http://onemoredigit.com/post/1527191998/extending-objects-in-node-js
A: Prototype has this:
Object.extend = function(destination,source) {
for (var property in source)
destination[property] = source[property];
return destination;
}
Object.extend(obj1, obj2) will do what you want.
A: You can simply use jQuery extend
var obj1 = { val1: false, limit: 5, name: "foo" };
var obj2 = { val2: true, name: "bar" };
jQuery.extend(obj1, obj2);
Now obj1 contains all the values of obj1 and obj2
A: Wow.. this is the first StackOverflow post I've seen with multiple pages. Apologies for adding another "answer"
ES5 & Earlier
This method is for ES5 & Earlier - there are plenty of other answers addressing ES6.
I did not see any "deep" object merging utilizing the arguments property. Here is my answer - compact & recursive, allowing unlimited object arguments to be passed:
function extend() {
for (var o = {}, i = 0; i < arguments.length; i++) {
// Uncomment to skip arguments that are not objects (to prevent errors)
// if (arguments[i].constructor !== Object) continue;
for (var k in arguments[i]) {
if (arguments[i].hasOwnProperty(k)) {
o[k] = arguments[i][k].constructor === Object
? extend(o[k] || {}, arguments[i][k])
: arguments[i][k];
}
}
}
return o;
}
Example
/**
* Extend objects
*/
function extend() {
for (var o = {}, i = 0; i < arguments.length; i++) {
for (var k in arguments[i]) {
if (arguments[i].hasOwnProperty(k)) {
o[k] = arguments[i][k].constructor === Object
? extend(o[k] || {}, arguments[i][k])
: arguments[i][k];
}
}
}
return o;
}
/**
* Example
*/
document.write(JSON.stringify(extend({
api: 1,
params: {
query: 'hello'
}
}, {
params: {
query: 'there'
}
})));
// outputs {"api": 1, "params": {"query": "there"}}
This answer is now but a drop in the ocean ...
A: Just if anyone is using Google Closure Library:
goog.require('goog.object');
var a = {'a': 1, 'b': 2};
var b = {'b': 3, 'c': 4};
goog.object.extend(a, b);
// Now object a == {'a': 1, 'b': 3, 'c': 4};
Similar helper function exists for array:
var a = [1, 2];
var b = [3, 4];
goog.array.extend(a, b); // Extends array 'a'
goog.array.concat(a, b); // Returns concatenation of array 'a' and 'b'
A: **Merging objects is simple using Object.assign or the spread ... operator **
var obj1 = { food: 'pizza', car: 'ford' }
var obj2 = { animal: 'dog', car: 'BMW' }
var obj3 = {a: "A"}
var mergedObj = Object.assign(obj1,obj2,obj3)
// or using the Spread operator (...)
var mergedObj = {...obj1,...obj2,...obj3}
console.log(mergedObj);
The objects are merged from left to right, meaning that properties which also exist in a later object will be overridden by that later object's value.
In this example obj2.car overrides obj1.car
A: jQuery also has a utility for this: http://api.jquery.com/jQuery.extend/.
Taken from the jQuery documentation:
// Merge options object into settings object
var settings = { validate: false, limit: 5, name: "foo" };
var options = { validate: true, name: "bar" };
jQuery.extend(settings, options);
// Now the content of settings object is the following:
// { validate: true, limit: 5, name: "bar" }
The above code will mutate the existing object named settings.
If you want to create a new object without modifying either argument, use this:
var defaults = { validate: false, limit: 5, name: "foo" };
var options = { validate: true, name: "bar" };
/* Merge defaults and options, without modifying defaults */
var settings = $.extend({}, defaults, options);
// The content of settings variable is now the following:
// {validate: true, limit: 5, name: "bar"}
// The 'defaults' and 'options' variables remained the same.
A: I extended David Coallier's method:
*
*Added the possibility to merge multiple objects
*Supports deep objects
*override parameter (that's detected if the last parameter is a boolean)
If override is false, no property gets overridden but new properties will be added.
Usage:
obj.merge(merges... [, override]);
Here is my code:
Object.defineProperty(Object.prototype, "merge", {
enumerable: false,
value: function () {
var override = true,
dest = this,
len = arguments.length,
props, merge, i, from;
if (typeof(arguments[arguments.length - 1]) === "boolean") {
override = arguments[arguments.length - 1];
len = arguments.length - 1;
}
for (i = 0; i < len; i++) {
from = arguments[i];
if (from != null) {
Object.getOwnPropertyNames(from).forEach(function (name) {
var descriptor;
// nesting
if ((typeof(dest[name]) === "object" || typeof(dest[name]) === "undefined")
&& typeof(from[name]) === "object") {
// ensure proper types (Array rsp Object)
if (typeof(dest[name]) === "undefined") {
dest[name] = Array.isArray(from[name]) ? [] : {};
}
if (override) {
if (!Array.isArray(dest[name]) && Array.isArray(from[name])) {
dest[name] = [];
}
else if (Array.isArray(dest[name]) && !Array.isArray(from[name])) {
dest[name] = {};
}
}
dest[name].merge(from[name], override);
}
// flat properties
else if ((name in dest && override) || !(name in dest)) {
descriptor = Object.getOwnPropertyDescriptor(from, name);
if (descriptor.configurable) {
Object.defineProperty(dest, name, descriptor);
}
}
});
}
}
return this;
}
});
Examples and TestCases:
function clone (obj) {
return JSON.parse(JSON.stringify(obj));
}
var obj = {
name : "trick",
value : "value"
};
var mergeObj = {
name : "truck",
value2 : "value2"
};
var mergeObj2 = {
name : "track",
value : "mergeObj2",
value2 : "value2-mergeObj2",
value3 : "value3"
};
assertTrue("Standard", clone(obj).merge(mergeObj).equals({
name : "truck",
value : "value",
value2 : "value2"
}));
assertTrue("Standard no Override", clone(obj).merge(mergeObj, false).equals({
name : "trick",
value : "value",
value2 : "value2"
}));
assertTrue("Multiple", clone(obj).merge(mergeObj, mergeObj2).equals({
name : "track",
value : "mergeObj2",
value2 : "value2-mergeObj2",
value3 : "value3"
}));
assertTrue("Multiple no Override", clone(obj).merge(mergeObj, mergeObj2, false).equals({
name : "trick",
value : "value",
value2 : "value2",
value3 : "value3"
}));
var deep = {
first : {
name : "trick",
val : "value"
},
second : {
foo : "bar"
}
};
var deepMerge = {
first : {
name : "track",
anotherVal : "wohoo"
},
second : {
foo : "baz",
bar : "bam"
},
v : "on first layer"
};
assertTrue("Deep merges", clone(deep).merge(deepMerge).equals({
first : {
name : "track",
val : "value",
anotherVal : "wohoo"
},
second : {
foo : "baz",
bar : "bam"
},
v : "on first layer"
}));
assertTrue("Deep merges no override", clone(deep).merge(deepMerge, false).equals({
first : {
name : "trick",
val : "value",
anotherVal : "wohoo"
},
second : {
foo : "bar",
bar : "bam"
},
v : "on first layer"
}));
var obj1 = {a: 1, b: "hello"};
obj1.merge({c: 3});
assertTrue(obj1.equals({a: 1, b: "hello", c: 3}));
obj1.merge({a: 2, b: "mom", d: "new property"}, false);
assertTrue(obj1.equals({a: 1, b: "hello", c: 3, d: "new property"}));
var obj2 = {};
obj2.merge({a: 1}, {b: 2}, {a: 3});
assertTrue(obj2.equals({a: 3, b: 2}));
var a = [];
var b = [1, [2, 3], 4];
a.merge(b);
assertEquals(1, a[0]);
assertEquals([2, 3], a[1]);
assertEquals(4, a[2]);
var o1 = {};
var o2 = {a: 1, b: {c: 2}};
var o3 = {d: 3};
o1.merge(o2, o3);
assertTrue(o1.equals({a: 1, b: {c: 2}, d: 3}));
o1.b.c = 99;
assertTrue(o2.equals({a: 1, b: {c: 2}}));
// checking types with arrays and objects
var bo;
a = [];
bo = [1, {0:2, 1:3}, 4];
b = [1, [2, 3], 4];
a.merge(b);
assertTrue("Array stays Array?", Array.isArray(a[1]));
a = [];
a.merge(bo);
assertTrue("Object stays Object?", !Array.isArray(a[1]));
a = [];
a.merge(b);
a.merge(bo);
assertTrue("Object overrides Array", !Array.isArray(a[1]));
a = [];
a.merge(b);
a.merge(bo, false);
assertTrue("Object does not override Array", Array.isArray(a[1]));
a = [];
a.merge(bo);
a.merge(b);
assertTrue("Array overrides Object", Array.isArray(a[1]));
a = [];
a.merge(bo);
a.merge(b, false);
assertTrue("Array does not override Object", !Array.isArray(a[1]));
My equals method can be found here: Object comparison in JavaScript
A: I'm kind of getting started with JavaScript, so correct me if I'm wrong.
But wouldn't it be better if you could merge any number of objects? Here's how I do it using the native Arguments object.
The key is that you can actually pass any number of arguments to a JavaScript function without defining them in the function declaration. You just can't access them without using the arguments object.
function mergeObjects() {
var tmpObj = {};
for(var o in arguments) {
for(var m in arguments[o]) {
tmpObj[m] = arguments[o][m];
}
}
return tmpObj;
}
A: In YUI Y.merge should get the job done:
Y.merge(obj1, obj2, obj3....)
A: I've used Object.create() to keep the default settings (utilising __proto__ or Object.getPrototypeOf() ).
function myPlugin( settings ){
var defaults = {
"keyName": [ "string 1", "string 2" ]
}
var options = Object.create( defaults );
for (var key in settings) { options[key] = settings[key]; }
}
myPlugin( { "keyName": ["string 3", "string 4" ] } );
This way I can always 'concat()' or 'push()' later.
var newArray = options['keyName'].concat( options.__proto__['keyName'] );
Note: You'll need to do a hasOwnProperty check before concatenation to avoid duplication.
A: For those using Node.js, there's an NPM module: node.extend
Install:
npm install node.extend
Usage:
var extend = require('node.extend');
var destObject = extend(true, {}, sourceObject);
// Where sourceObject is the object whose properties will be copied into another.
A: You can merge objects with the following method
var obj1 = { food: 'pizza', car: 'ford' };
var obj2 = { animal: 'dog' };
var result = mergeObjects([obj1, obj2]);
console.log(result);
document.write("result: <pre>" + JSON.stringify(result, 0, 3) + "</pre>");
function mergeObjects(objectArray) {
if (objectArray.length) {
var b = "", i = -1;
while (objectArray[++i]) {
var str = JSON.stringify(objectArray[i]);
b += str.slice(1, str.length - 1);
if (objectArray[i + 1]) b += ",";
}
return JSON.parse("{" + b + "}");
}
return {};
}
A: The Merge Of JSON Compatible JavaScript Objects
I encourage the use of nondestructive methods that don't modify the original source. 'Object.assign' is a destructive method, and it also happens to be not so production friendly because it stops working on earlier browsers and you have no clean way of patching in an alternative.
Merging JS objects in general will always be out of reach, or incomplete, whatever the solution. But merging JSON-compatible objects is just one step away from a simple and portable piece of code: a nondestructive method that merges a series of JS objects into a returned master containing all the unique property names and their corresponding values, synthesized into a single object for the intended purpose.
Having in mind that MSIE8 was the first browser to add native support for the JSON object is a great relief, and reusing already existing technology is always a welcome opportunity.
Restricting your code to JSON-compliant standard objects is more of an advantage than a restriction - since these objects can also be transmitted over the Internet. And of course, for those who would like deeper backward compatibility, there's always a JSON plug-in, whose methods can easily be assigned to a JSON variable in the outer code without having to modify or rewrite the method in use.
function Merge( ){
var a = [].slice.call( arguments ), i = 0;
while( a[i] )a[i] = JSON.stringify( a[i++] ).slice( 1,-1 );
return JSON.parse( "{"+ a.join() +"}" );
}
(Of course one can always give it a more meaningful name, which I haven't decided yet; should probably name it JSONmerge)
The use case:
var master = Merge( obj1, obj2, obj3, ...objn );
Now, contrary to Object.assign, this leaves all objects untouched and in their original state (in case you've done something wrong and need to reorder the merging objects, or want to use them separately for some other operation before merging them again).
The number of Merge arguments is also limited only by the arguments-length limit [which is huge].
The natively supported JSON parse/stringify is already machine optimized, meaning it should be faster than any scripted form of JS loop.
The iteration over the given arguments is done using while - proven to be the fastest loop in JS.
It doesn't hurt to briefly mention that duplicate properties (keys) will be overwritten by the later object containing the same key label, which means you are in control of which property takes over the previous one simply by ordering or reordering the argument list. And you get the benefit of a clean, updated master object with no dupes as the final output.
;
var obj1 = {a:1}, obj2 = {b:2}, obj3 = {c:3}
;
function Merge( ){
var a = [].slice.call( arguments ), i = 0;
while( a[i] )a[i] = JSON.stringify( a[i++] ).slice( 1,-1 );
return JSON.parse( "{"+ a.join() +"}" );
}
;
var master = Merge( obj1, obj2, obj3 )
;
console.log( JSON.stringify( master ) )
;
A: ES5 compatible native one-liner:
var merged = [obj1, obj2].reduce(function(a, o) { for (var k in o) a[k] = o[k]; return a; }, {})
A: With the following helper, you can merge two objects into one new object:
function extend(obj, src) {
for (var key in src) {
if (src.hasOwnProperty(key)) obj[key] = src[key];
}
return obj;
}
// example
var a = { foo: true }, b = { bar: false };
var c = extend(a, b);
console.log(c);
// { foo: true, bar: false }
This is typically useful when merging an options dict with the default settings in a function or a plugin.
If support for IE 8 is not required, you may use Object.keys for the same functionality instead:
function extend(obj, src) {
Object.keys(src).forEach(function(key) { obj[key] = src[key]; });
return obj;
}
This involves slightly less code and is a bit faster.
A: There are different ways to achieve this:
Object.assign(targetObj, sourceObj);
targetObj = {...targetObj, ...sourceObj};
A: Use this
var t=merge({test:123},{mysecondTest:{blub:{test2:'string'},args:{'test':2}}})
console.log(t);
function merge(...args) {
return Object.assign({}, ...args);
}
A: The correct implementation in Prototype should look like this:
var obj1 = {food: 'pizza', car: 'ford'}
var obj2 = {animal: 'dog'}
obj1 = Object.extend(obj1, obj2);
A: This merges obj into a "default" def. obj has precedence for anything that exists in both, since obj is copied into def. Note also that this is recursive.
function mergeObjs(def, obj) {
if (typeof obj == 'undefined') {
return def;
} else if (typeof def == 'undefined') {
return obj;
}
for (var i in obj) {
if (obj[i] != null && obj[i].constructor == Object) {
def[i] = mergeObjs(def[i], obj[i]);
} else {
def[i] = obj[i];
}
}
return def;
}
a = {x : {y : [123]}}
b = {x : {z : 123}}
console.log(mergeObjs(a, b));
// {x: {y : [123], z : 123}}
A: A={a:1,b:function(){alert(9)}}
B={a:2,c:3}
A.merge = function(){for(var i in B){A[i]=B[i]}}
A.merge()
The result (A is modified in place) is: {a:2, c:3, b:function(){...}}
A: You could assign every object a default merge (perhaps 'inherit' is a better name) method:
It should work with either objects or instantiated functions.
The below code handles overriding the merged values if so desired:
Object.prototype.merge = function(obj, override) {
// Don't override by default; keys not yet present on this object are always copied
for (var key in obj) {
var n = obj[key];
var t = this[key];
this[key] = (override || t === undefined) ? n : t;
};
};
Test data is below:
var Mammal = function () {
this.eyes = 2;
this.thinking_brain = false;
this.say = function () {
console.log('screaming like a mammal')};
}
var Human = function () {
this.thinking_brain = true;
this.say = function() {console.log('shouting like a human')};
}
john = new Human();
// Extend mammal, but do not override from mammal
john.merge(new Mammal());
john.say();
// Extend mammal and override from mammal
john.merge(new Mammal(), true);
john.say();
A: This solution creates a new object and is able to handle multiple objects.
Furthermore, it is recursive and you can choose whether you want to overwrite values and objects.
function extendObjects() {
var newObject = {};
var overwriteValues = false;
var overwriteObjects = false;
for ( var indexArgument = 0; indexArgument < arguments.length; indexArgument++ ) {
if ( typeof arguments[indexArgument] !== 'object' ) {
if ( arguments[indexArgument] == 'overwriteValues_True' ) {
overwriteValues = true;
} else if ( arguments[indexArgument] == 'overwriteValues_False' ) {
overwriteValues = false;
} else if ( arguments[indexArgument] == 'overwriteObjects_True' ) {
overwriteObjects = true;
} else if ( arguments[indexArgument] == 'overwriteObjects_False' ) {
overwriteObjects = false;
}
} else {
extendObject( arguments[indexArgument], newObject, overwriteValues, overwriteObjects );
}
}
function extendObject( object, extendedObject, overwriteValues, overwriteObjects ) {
for ( var indexObject in object ) {
if ( typeof object[indexObject] === 'object' ) {
if ( typeof extendedObject[indexObject] === "undefined" || overwriteObjects ) {
extendedObject[indexObject] = object[indexObject];
}
extendObject( object[indexObject], extendedObject[indexObject], overwriteValues, overwriteObjects );
} else {
if ( typeof extendedObject[indexObject] === "undefined" || overwriteValues ) {
extendedObject[indexObject] = object[indexObject];
}
}
}
return extendedObject;
}
return newObject;
}
var object1 = { a : 1, b : 2, testArr : [888, { innArr : 1 }, 777 ], data : { e : 12, c : { lol : 1 }, rofl : { O : 3 } } };
var object2 = { a : 6, b : 9, data : { a : 17, b : 18, e : 13, rofl : { O : 99, copter : { mao : 1 } } }, hexa : { tetra : 66 } };
var object3 = { f : 13, g : 666, a : 333, data : { c : { xD : 45 } }, testArr : [888, { innArr : 3 }, 555 ] };
var newExtendedObject = extendObjects( 'overwriteValues_False', 'overwriteObjects_False', object1, object2, object3 );
Contents of newExtendedObject:
{"a":1,"b":2,"testArr":[888,{"innArr":1},777],"data":{"e":12,"c":{"lol":1,"xD":45},"rofl":{"O":3,"copter":{"mao":1}},"a":17,"b":18},"hexa":{"tetra":66},"f":13,"g":666}
Fiddle: http://jsfiddle.net/o0gb2umb/
A: Another method:
function concat_collection(obj1, obj2) {
var i;
var arr = new Array();
var len1 = obj1.length;
for (i=0; i<len1; i++) {
arr.push(obj1[i]);
}
var len2 = obj2.length;
for (i=0; i<len2; i++) {
arr.push(obj2[i]);
}
return arr;
}
var ELEMENTS = concat_collection(A,B);
for(var i = 0; i < ELEMENTS.length; i++) {
alert(ELEMENTS[i].value);
}
A: If you are using the Dojo Toolkit, then the best way to merge two objects is using a mixin.
Below is a sample for the Dojo Toolkit mixin:
// Dojo 1.7+ (AMD)
require(["dojo/_base/lang"], function(lang){
var a = { b:"c", d:"e" };
lang.mixin(a, { d:"f", g:"h" });
console.log(a); // b:c, d:f, g:h
});
// Dojo < 1.7
var a = { b:"c", d:"e" };
dojo.mixin(a, { d:"f", g:"h" });
console.log(a); // b:c, d:f, g:h
For more details, please see the documentation for mixin.
A: A possible way to achieve this is the following.
if (!Object.prototype.merge){
Object.prototype.merge = function(obj){
var self = this;
Object.keys(obj).forEach(function(key){
self[key] = obj[key]
});
}
};
I don't know if it's better than the other answers. In this method you add the merge function to Object's prototype. This way you can call obj1.merge(obj2);
Note: you should validate the argument to see if it's an object and throw a proper Error if it isn't. Otherwise Object.keys will throw a TypeError.
A: Here is what I used in my codebase to merge.
function merge(to, from) {
if (typeof to === 'object' && typeof from === 'object') {
for (var pro in from) {
if (from.hasOwnProperty(pro)) {
to[pro] = from[pro];
}
}
}
else{
throw "Merge function can apply only on object";
}
}
A: We can create an empty object and combine them with for-in loops:
var obj1 = {
id: '1',
name: 'name'
}
var obj2 = {
c: 'c',
d: 'd'
}
var obj3 = {}
for (var attrname in obj1) { obj3[attrname] = obj1[attrname]; }
for (var attrname in obj2) { obj3[attrname] = obj2[attrname]; }
console.log( obj1, obj2, obj3)
A: Three ways you can do that:
Approach 1:
// using spread ...
let obj1 = {
...obj2
};
Approach 2:
// using Object.assign() method
let obj1 = Object.assign({}, obj2);
Approach 3:
// using JSON (a deep copy; note that functions and undefined values are dropped)
let obj1 = JSON.parse(JSON.stringify(obj2));
A: Try this using the jQuery library
let obj1 = { food: 'pizza', car: 'ford' }
let obj2 = { animal: 'dog' }
console.log(jQuery.extend(obj1, obj2))
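Note that jQuery.extend modifies obj1 in place. If you would rather leave the inputs untouched, you can merge into a fresh target, and passing true as the first argument performs a deep merge (a quick sketch, not part of the original answer):
let merged = jQuery.extend(true, {}, obj1, obj2) // deep merge into a new object
console.log(obj1) // unchanged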
A: If you need a deep merge that will also "merge" arrays by concatenating them in the result, then this ES6 function might be what you need:
function deepMerge(a, b) {
// If neither is an object, return one of them:
if (Object(a) !== a && Object(b) !== b) return b || a;
// Replace remaining primitive by empty object/array
if (Object(a) !== a) a = Array.isArray(b) ? [] : {};
if (Object(b) !== b) b = Array.isArray(a) ? [] : {};
// Treat arrays differently:
if (Array.isArray(a) && Array.isArray(b)) {
// Merging arrays is interpreted as concatenation of their deep clones:
return [...a.map(v => deepMerge(v)), ...b.map(v => deepMerge(v))];
} else {
// Get the keys that exist in either object
var keys = new Set([...Object.keys(a),...Object.keys(b)]);
// Recurse and assign to new object
return Object.assign({}, ...Array.from(keys,
key => ({ [key]: deepMerge(a[key], b[key]) }) ));
}
}
// Sample data for demo:
var a = {
groups: [{
group: [{
name: 'John',
age: 12
},{
name: 'Mary',
age: 20
}],
groupName: 'Pair'
}],
config: {
color: 'blue',
range: 'far'
}
};
var b = {
groups: [{
group: [{
name: 'Bill',
age: 15
}],
groupName: 'Loner'
}],
config: {
range: 'close',
strength: 'average'
}
};
var merged = deepMerge(a, b);
console.log(merged);
Note that if only one argument is passed to this function, it acts as a deep clone function.
A: You can do the following using object spread syntax
Correction: it was a stage 3 proposal at the time of writing (it was standardized in ES2018, not EcmaScript2016), still it has always worked for me
const objA = {
attrA: 'hello',
attrB: true
}
const objB = {
attrC: 2
}
const mergedObj = {...objA, ...objB}
A: function extend()
{
var o = {};
// iterate over the arguments by index, copying each source's keys into o
for (var i = 0; i < arguments.length; i++)
{
var s = arguments[i];
for (var key in s)
{
o[key] = s[key];
}
}
return o;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3179"
} |
Q: Can you use javascript to modify onmousedown of div at runtime I want to write some JavaScript that will change the onmousedown of a div at runtime. So on load a mouse down on the div will do one thing and if a JavaScript function is called a mouse down on the div will do something else. Is this possible?
A: You can just assign the onMouseDown property.
document.getElementById('myDiv').onmousedown = function() {
alert('New mouse down handler.');
};
Keep in mind that JavaScript properties are case-sensitive. "onmousedown" is all lower-case.
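As a side note (not part of the original answer), a modern alternative is addEventListener, which lets you attach and swap handlers without clobbering other listeners:
var div = document.getElementById('myDiv');
function oldHandler() { alert('Old mouse down handler.'); }
function newHandler() { alert('New mouse down handler.'); }
div.addEventListener('mousedown', oldHandler);
// later, at runtime, swap the behavior:
div.removeEventListener('mousedown', oldHandler);
div.addEventListener('mousedown', newHandler);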
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: UNIX shell written in a reasonable language? Has anyone ever heard of a UNIX shell written in a reasonable language, like Python?
A: From all appearances, Python IS a shell. It runs with #! and it can run interactively. Between the os and shutil packages you have all of the features of standard Unix shells.
Since you can do anything in Python with simple, powerful scripts, you don't really need to spend any time messing with the other shells.
A: Well, there's emacs, which is arguably a shell written in lisp :)
Seriously though, are you looking for a reimplementation of an existing shell design in a different language such as Python? Or are you looking for a new implementation of a shell language that looks similar to your language of choice?
A: There is xonsh now:
http://xon.sh/
http://xon.sh/tutorial.html#running-commands
PyCon video - https://www.youtube.com/watch?v=uaje5I22kgE
A: Tclsh is pretty nice (assuming you like Tcl, of course).
A: *
*Eshell is a Bash-like shell in Emacs Lisp.
*IPython can be used as a system shell, though the syntax is a bit weird (supporting all of Python plus basic sh constructs).
*fish has a core written in C, but much of its functionality is implemented in itself. Unlike many rare shells, it can be used as your login shell.
*Hotwire deserves another mention. Its basic design appears to be "PowerShell in Python," but it also does some clever things with UI. The last release was in 2008.
*Zoidberg is written in Perl and uses Perl syntax. A nice-looking project, shame it seems to have stalled.
*Scsh would be a pain to use as a login shell (an example command from the docs: (run/strings (find "." -name *.c -print))), but it looks like a good "Perl in Scheme."
A: iPython (Python) and Rush (Ruby) are shells that are designed for more advanced languages. There's also Hotwire, which is sort of a weird integrated shell/terminal emulator.
A: Try rash. It's a shell language written in Racket. It has a nice interactive-friendly syntax. You can embed Rash inside any normal Racket file as well as embedding normal Racket inside Rash. It's extensible and you can define new pipeline operators. It's still alpha quality at the moment, but it's pretty cool. Full disclosure: I wrote it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Silverlight: Empty Storyboard vs BackgroundWorker Which one performs better to run a game main loop?
A: There's a new option in RC0, it's the CompositionTarget.Rendering event. It seems to have promise for being the best option.
As for other game loop methods, it's generally accepted that the empty Storyboard is best. So I'd recommend the new Rendering event first, and if that seems to have issues, go with the empty Storyboard.
I've got a bit here on the new Rendering event:
http://silverlightrocks.com/cs/blogs/silverlight_games_101/archive/2008/09/26/a-much-cleaner-game-loop-in-silverlight-2-rc0.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I convert a character code back to a character? How do I get the Fixnum returned by the following:
"abc"[2]
Back into a character?
A: Be careful because Ruby 1.9 and later will return a single-character string for "abc"[2], which will not respond to the chr method. You could do this instead:
"abc"[2,1]
Be sure to read up on the powerful and multifaceted String#[] method.
A: This will do it (if n is an integer):
n.chr
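A quick side-by-side sketch (version behavior as noted):
"abc"[2]    # => "c" (a String) in Ruby 1.9+; => 99 (a Fixnum) in 1.8
"abc"[2, 1] # => "c" in both
99.chr      # => "c"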
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: merb_auth_password_slice does not maintain the session Having integrated merb_auth_password_slice as per the README, I can successfully login as redirect_after_login is being triggered, although session.authenticated? returns false.
Just trying the basic auth strategy for now (password form), can't seem to get it working, any ideas?
My init file:
require 'dm-validations'
dependencies "merb-more", "merb_helpers", "merb-slices", "merb_auth_password_slice"
Merb::BootLoader.before_app_loads do
DataMapper.setup(:default, "sqlite3://config/dev.db")
end
Merb::BootLoader.after_app_loads do
# have already done this
# raise "You must specify a valid openid in Merb.root/config/open_id to use this example app" unless File.exists?(Merb.root / "config" / "open_id")
# # DataMapper.auto_migrate!
# User.create(:login => "admin",
# :password => "password", :password_confirmation => "password",
# :email => "admin@example.com",
# :identity_url => File.read(Merb.root / "config" / "open_id"))
end
Merb::Config.use do |c|
c[:session_secret_key] = 'my key'
c[:session_store] = 'cookie'
end
Setup.rb
class Authentication
def store_user(user)
return nil unless user
user.id
end
def fetch_user(session_info)
User.get(session_info)
end
end # Authentication
A: # before(nil, :only => [:update, :destroy]) { session.abandon! }
This is the culprit in the slice's session controller
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Logging entry and exit of methods along with parameters automagically? Is there a way for me to add logging so that entering and exiting methods gets logged along with parameters automatically somehow for tracing purposes? How would I do so?
I am using Log4Net.
A: The best way to achieve this sort of thing is by using interception. There are a couple of ways to do this, though they all tend to be somewhat invasive. One would be to derive all your objects from ContextBoundObject. Here is an example of using this sort of approach. The other approach would be to use one of the existing AOP libraries to achieve this. Something like DynamicProxy from the Castle Project is at the core of many of these. Here are a few links:
Spring.Net
PostSharp
Cecil
There are probably several others, and I know Castle Windsor, and Ninject both provide AOP capabilities on top of the IoC functionality.
Once AOP is in place you would simply write an interceptor class that would write the information about the method calls out to log4net.
I actually wouldn't be surprised if one of the AOP frameworks would give you that sort of functionality out of the box.
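For illustration, here is a minimal sketch of what such an interceptor might look like with Castle DynamicProxy (the logger name is an assumption, use whatever naming scheme your project follows):
using Castle.DynamicProxy;
using log4net;

public class LoggingInterceptor : IInterceptor
{
    // Logger name assumed for this sketch
    private static readonly ILog Log = LogManager.GetLogger(typeof(LoggingInterceptor));

    public void Intercept(IInvocation invocation)
    {
        // Log entry with the method name and its arguments
        Log.DebugFormat("Entering {0}({1})", invocation.Method.Name,
            string.Join(", ", invocation.Arguments));
        invocation.Proceed(); // invoke the intercepted method
        Log.DebugFormat("Exiting {0}", invocation.Method.Name);
    }
}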
A: I'm not sure what your actual needs are, but here's a low-rent option. It's not exactly "automatic", but you could use StackTrace to peel off the information you're looking for in a manner that wouldn't demand passing arguments - similar to ckramer's suggestion regarding interception:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;
namespace TracingSample
{
class Program
{
static void Main(string[] args)
{
DoSomething();
}
static void DoSomething()
{
LogEnter();
Console.WriteLine("Executing DoSomething");
LogExit();
}
static void LogEnter()
{
StackTrace trace = new StackTrace();
if (trace.FrameCount > 1)
{
string ns = trace.GetFrame(1).GetMethod().DeclaringType.Namespace;
string typeName = trace.GetFrame(1).GetMethod().DeclaringType.Name;
Console.WriteLine("Entering {0}.{1}.{2}", ns, typeName, trace.GetFrame(1).GetMethod().Name);
}
}
static void LogExit()
{
StackTrace trace = new StackTrace();
if (trace.FrameCount > 1)
{
string ns = trace.GetFrame(1).GetMethod().DeclaringType.Namespace;
string typeName = trace.GetFrame(1).GetMethod().DeclaringType.Name;
Console.WriteLine("Exiting {0}.{1}.{2}", ns, typeName, trace.GetFrame(1).GetMethod().Name);
}
}
}
}
You could combine something like the above example with inheritance, using a non-virtual public member in the base type to signify the action method, then calling a virtual member to actually do the work:
public abstract class BaseType
{
public void SomeFunction()
{
LogEnter();
DoSomeFunction();
LogExit();
}
public abstract void DoSomeFunction();
}
public class SubType : BaseType
{
public override void DoSomeFunction()
{
// Implementation of SomeFunction logic here...
}
}
Again - there's not much "automatic" about this, but it would work on a limited basis if you didn't require instrumentation on every single method invocation.
Hope this helps.
A: Have a look at:
How do I intercept a method call in C#?
PostSharp - il weaving - thoughts
Also search SO for 'AOP' or 'Aspect Oriented Programming' and PostSharp...you get some interesting results.
A: You could use a post-compiler like Postsharp. The sample from the website talks about setting up a tracer for entering/exiting a method, which is very similar to what you want.
A: You can use the open source framework CInject on CodePlex, which has a LogInjector to mark the entry and exit of a method call.
Or you can follow the steps mentioned in this article on Intercepting Method Calls using IL and create your own interceptor using the Reflection.Emit classes in C#.
A: Functions
To expand on Jared's answer, avoiding code repetition and including options for arguments:
private static void LogEnterExit(bool isEnter = true, params object[] args)
{
StackTrace trace = new StackTrace(true); // need `true` for getting file and line info
if (trace.FrameCount > 2)
{
string ns = trace.GetFrame(2).GetMethod().DeclaringType.Namespace;
string typeName = trace.GetFrame(2).GetMethod().DeclaringType.Name;
string args_string = args.Length == 0 ? "" : "\narguments: [" + args.Aggregate((current, next) => string.Format("{0},{1};", current, next))+"]";
Console.WriteLine("{0} {1}.{2}.{3}{4}", isEnter ? "Entering" : "Exiting", ns, typeName, trace.GetFrame(2).GetMethod().Name, args_string );
}
}
static void LogEnter(params object[] args)
{
LogEnterExit(true, args);
}
static void LogExit(params object[] args)
{
LogEnterExit(false, args);
}
Usage
static void DoSomething(string arg1)
{
LogEnter(arg1);
Console.WriteLine("Executing DoSomething");
LogExit();
}
Output
In the console, this would be the output if DoSomething were run with "blah" as arg1
Entering Program.DoSomething
arguments: [blah]
Executing DoSomething
Exiting Program.DoSomething
A: This does not apply to all C# applications but I am doing the following in an Asp.Net Core application to add logging to controller method calls by implementing the following interfaces:
IActionFilter, IAsyncActionFilter.
public void OnActionExecuting(ActionExecutingContext context)
public void OnActionExecuted(ActionExecutedContext context)
public async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
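For example, a minimal sketch of such a filter (the class name and logging calls are my own illustration, not from the original answer):
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.Logging;

public class MethodLoggingFilter : IActionFilter
{
    private readonly ILogger<MethodLoggingFilter> _logger;

    public MethodLoggingFilter(ILogger<MethodLoggingFilter> logger)
    {
        _logger = logger;
    }

    public void OnActionExecuting(ActionExecutingContext context)
    {
        // Logs the action name and its model-bound arguments on entry
        _logger.LogInformation("Entering {Action} with arguments {Arguments}",
            context.ActionDescriptor.DisplayName, context.ActionArguments);
    }

    public void OnActionExecuted(ActionExecutedContext context)
    {
        _logger.LogInformation("Exiting {Action}", context.ActionDescriptor.DisplayName);
    }
}
The filter can then be registered globally, e.g. services.AddControllers(options => options.Filters.Add<MethodLoggingFilter>());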
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: What's the maximum number of keys for an array in php I'm writing a php script where I call
$lines = file('base_list.txt');
to break a file up into an array. The file has over 100,000 lines in it, which should be 100,000 elements in the array, but when I run
print_r($lines);
exit;
the array only contains 7280 elements.
So I'm curious, WTF? Is there a limit on the amount of keys an array can have? I'm running this locally on a dual-core 2.0Ghz with 2GB of RAM (Vista & IIS though); so I'm a little confused how a 4MB file could throw results like this.
Edit:
I have probably should have mentioned that I had previously set memory_limit to 512MB in php.ini as well.
A: Darryl Hein,
Yeah, there isn't anything in the error logs. I even increased error reporting and still nothing relevant to print_r().
In response to Jay:
I ran
echo count($lines);
and I get a result of 105,546 but still print_r() only displays 7280.
Taking Rob Walker's advice I looped over all the elements in the array and it actually contained all the results. This leads me to believe the issue is with print_r() itself instead of a limit to array size.
Another weird thing is that I tried it on one of my RHEL servers and the result was as it should be. Now I don't want to blame this on Windows/IIS but there you go.
With the above in mind I think this question should be re-titled as it's no longer relevant to arrays but print_r.
A: Edit: As Rob said, if your application is running out of memory, chances are it won't even get to the print_r line. That's kinda my hunch as well, but if it is a memory issue, the following might help.
Take a look in the php.ini file for this line, and perhaps increase it to 16 or more.
memory_limit = 8M
If you don't have access to php.ini (for example if this was happening on a shared server) you can fix it with a .htaccess file like this
php_value memory_limit 16M
Apparently some hosts don't allow you to do this though.
A: Is it possible there is an inherent limit on the output from print_r. I'd suggest looking for the first and last line in the file to see if they are in the array. If you were hitting a memory limit inserting into the array you would never have gotten to the print_r line.
A: Two suggestions:
*
*Count the actual number of items in the array and see whether or not the array is the correct number of entries (therefore eliminating or identifying print_r() as the culprit)
*Verify the input... any chance that the line endings in the file are causing a problem? For example, is there a mix of different types of line endings? See the manual page for file() and the note about the auto_detect_line_endings setting as well, though it's unlikely this is related to mac line endings.
A: I believe it is based on the amount of available memory as set in the php.ini file.
A: Every time I've run out of memory in PHP, I've received an error message stating that fact. So I'd also say that if you were running out of memory, the script wouldn't get to the print_r().
Try enabling auto_detect_line_endings in php.ini or by using
ini_set('auto_detect_line_endings', 1). There may be some line endings that Windows doesn't understand, and this ini option could help. More info about this ini option can be found here.
A: You should use count to count the number of items in an array, not print_r. What if output of this large array was aborted because of timeouts or something else? Or some bug/feature in print_r?
A: PHP's print_r function does have limitations. However, even though you don't "see" the entire array printed, it is all there. I've struggled with this same issue when printing large data objects.
It makes debugging difficult, if you must see the entire array you could create a loop to print every line.
foreach ($FileLines as $Line) echo $Line;
That should let you see all the lines without limitation.
A: I'm gonna agree with Cory. I'm thinking your PHP is probably configured with the default memory limit of 8MB, and 4MB x 2 is already more than that. The reason for the x2 is because you have to load the file, then to create the array you need to have the file in memory again. I'm just guessing, but that would make sense.
Are you sure PHP isn't logging an error?
A: If you're outputting to something like Internet Explorer, you might want to make sure it can display all the information you're trying to put there. I know there's a limit to an html page, but I'm not sure what it is.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Combine Fluent and XML mapping for NHibernate I just fell in love with NHibernate and the fluent interface. The latter enables very nice mappings with refactoring support (no more need for xml files).
But nobody is perfect, so I am missing the many-to-any mapping in fluent. Does anybody know if it is already there? If so, a simple line of code would be nice.
But to stick to the header of the question, is there any way to combine fluent and normal NHibernate mapping.
Currently I use the following lines for my test setup WITH fluent, and the second code block for my test WITHOUT fluent (with XML mappings). How can I tell fluent to use fluent IF AVAILABLE and XML otherwise...
var cfg = new Configuration();
cfg.AddProperties(MsSqlConfiguration.MsSql2005.ConnectionString.Is(_testConnectionstring).ToProperties());
cfg.AddMappingsFromAssembly(typeof(CatMap).Assembly);
new SchemaExport(cfg).Create(true, true);
var persistenceModel = new PersistenceModel();
persistenceModel.addMappingsFromAssembly(typeof(CatMap).Assembly);
IDictionary<string, string> properties = MsSqlConfiguration.MsSql2005.UseOuterJoin().ShowSql().ConnectionString.Is(_testConnectionstring).ToProperties();
properties.Add("command_timeout", "340");
session = new SessionSource(properties, persistenceModel).CreateSession();
Without Fluent...
config = new Configuration();
IDictionary props = new Hashtable();
props["connection.provider"] = "NHibernate.Connection.DriverConnectionProvider";
props["dialect"] = "NHibernate.Dialect.MsSql2005Dialect";
props["connection.driver_class"] = "NHibernate.Driver.SqlClientDriver";
props["connection.connection_string"] = "Server=localhost;initial catalog=Debug;Integrated Security=SSPI";
props["show_sql"] = "true";
foreach (DictionaryEntry de in props)
{
config.SetProperty(de.Key.ToString(), de.Value.ToString());
}
config.AddAssembly(typeof(CatMap).Assembly);
SchemaExport se = new SchemaExport(config);
se.Create(true, true);
factory = config.BuildSessionFactory();
session = factory.OpenSession();
That's it...
Chris
PS: I really like this site, the GUI is perfect, and the quality of all articles is incredible. I think it will be huge :-) Have to register...
A: Mapping from Foo to Baa:
HasManyToMany< Baa > ( x => Baas )
.AsBag ( ) //can also be .AsSet()
.WithTableName ( "foobar" )
.WithParentKeyColumn ( "fooId" )
.WithChildKeyColumn ( "barId" ) ;
Check out the examples in ClassMapXmlCreationTester - they also show what the default column names are.
A: ManyToAny's currently aren't implemented (as of time of writing).
Regarding your setup for fluent and non-fluent mappings, you're almost there with your first example.
var cfg = MsSqlConfiguration.MsSql2005
.ConnectionString.Is(_testConnectionstring)
.ConfigureProperties(new Configuration());
cfg.AddMappingsFromAssembly(typeof(CatMap).Assembly); // loads hbm.xml files
var model = new PersistenceModel();
model.addMappingsFromAssembly(typeof(CatMap).Assembly); // loads fluent mappings
mode.Configure(cfg);
new SchemaExport(cfg).Create(true, true);
The main difference is that the SchemaExport is last. I assume your first example was actually loading the fluent mappings, but it'd already created the schema by that point.
A: You can do exactly what you want to do entirely within Fluent NHibernate.
The following code will use Fluent NHibernate syntax to fluently configure a session factory that looks for HBM (xml) mapping files, fluent mappings, and conventions from multiple possible assemblies.
var _mappingAssemblies = new Assembly[] { typeof(CatMap).Assembly };
var _autoPersistenceModel = CreateAutoPersistenceModel();
Fluently.Configure()
.Database(MsSqlConfiguration.MsSql2005.ConnectionString(_testConnectionstring))
.Mappings(m =>
{
foreach (var assembly in _mappingAssemblies)
{
m.HbmMappings.AddFromAssembly(assembly);
m.FluentMappings.AddFromAssembly(assembly)
.Conventions.AddAssembly(assembly);
}
m.AutoMappings.Add(_autoPersistenceModel );
})
.ExposeConfiguration(c => c.SetProperty("command_timeout", "340"))
.BuildSessionFactory();
There are many other options available to you as well: Fluent NHibernate Database Configuration
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: What's the fastest way to divide an integer by 3? int x = n / 3; // <-- make this faster
// for instance
int a = n * 3; // <-- normal integer multiplication
int b = (n << 1) + n; // <-- potentially faster multiplication
A: Depending on your platform and depending on your C compiler, a native solution like just using
y = x / 3
Can be fast or it can be awfully slow (even if division is done entirely in hardware, if it is done using a DIV instruction, this instruction is about 3 to 4 times slower than a multiplication on modern CPUs). Very good C compilers with optimization flags turned on may optimize this operation, but if you want to be sure, you are better off optimizing it yourself.
For optimization it is important to have integer numbers of a known size. In C, int has no known size (it can vary by platform and compiler!), so you are better off using C99 fixed-size integers. The code below assumes that you want to divide an unsigned 32-bit integer by three and that your C compiler knows about 64 bit integer numbers (NOTE: Even on a 32 bit CPU architecture most C compilers can handle 64 bit integers just fine):
static inline uint32_t divby3 (
uint32_t divideMe
) {
return (uint32_t)(((uint64_t)0xAAAAAAABULL * divideMe) >> 33);
}
As crazy as this might sound, but the method above indeed does divide by 3. All it needs for doing so is a single 64 bit multiplication and a shift (like I said, multiplications might be 3 to 4 times faster than divisions on your CPU). In a 64 bit application this code will be a lot faster than in a 32 bit application (in a 32 bit application multiplying two 64 bit numbers take 3 multiplications and 3 additions on 32 bit values) - however, it might be still faster than a division on a 32 bit machine.
On the other hand, if your compiler is a very good one and knows the trick of how to optimize integer division by a constant (latest GCC does, I just checked), it will generate the code above anyway (GCC will create exactly this code for "/3" if you enable at least optimization level 1). For other compilers... you cannot rely on or expect them to use tricks like that, even though this method is very well documented and mentioned everywhere on the Internet.
Problem is that it only works for constant numbers, not for variable ones. You always need to know the magic number (here 0xAAAAAAAB) and the correct operations after the multiplication (shifts and/or additions in most cases) and both is different depending on the number you want to divide by and both take too much CPU time to calculate them on the fly (that would be slower than hardware division). However, it's easy for a compiler to calculate these during compile time (where one second more or less compile time plays hardly a role).
A: This is the fastest approach, as the compiler will optimize it if it can, depending on the target processor.
int a;
int b;
a = some value;
b = a / 3;
A: For 64 bit numbers:
uint64_t divBy3(uint64_t x)
{
return x*12297829382473034411ULL;
}
However this isn't the truncating integer division you might expect.
It works correctly if the number is already divisible by 3, but it returns a huge number if it isn't.
For example, if you run it on 11, it returns 6148914691236517209. This looks like garbage, but it's in fact the correct answer: multiply it by 3 and you get back 11!
If you are looking for the truncating division, then just use the / operator. I highly doubt you can get much faster than that.
Theory:
64 bit unsigned arithmetic is a modulo 2^64 arithmetic.
This means for each integer which is coprime with the 2^64 modulus (essentially all odd numbers) there exists a multiplicative inverse which you can use to multiply with instead of division. This magic number can be obtained by solving the 3*x + 2^64*y = 1 equation using the Extended Euclidean Algorithm.
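As an aside (a sketch of my own, not from the original answer), the magic constant can also be computed with a few Newton iterations, because x' = x * (2 - n*x) doubles the number of correct low bits of the inverse modulo a power of two:
#include <stdint.h>
#include <stdio.h>

uint64_t inverse_mod_2_64(uint64_t n)   /* n must be odd */
{
    uint64_t x = n;            /* correct to 3 bits: n*n == 1 (mod 8) for odd n */
    for (int i = 0; i < 5; i++)
        x *= 2 - n * x;        /* 3 -> 6 -> 12 -> 24 -> 48 -> 96 correct bits */
    return x;
}

int main(void)
{
    /* prints 12297829382473034411, the multiplier used above */
    printf("%llu\n", (unsigned long long)inverse_mod_2_64(3));
    return 0;
}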
A: What if you really don't want to multiply or divide? Here is an approximation I just invented. It works because (x/3) = (x/4) + (x/12). But since (x/12) = (x/4) / 3, we just have to repeat the process until it's good enough.
#include <stdio.h>
int main(void)
{
int n = 1000;
int a,b;
a = n >> 2;
b = (a >> 2);
a += b;
b = (b >> 2);
a += b;
b = (b >> 2);
a += b;
b = (b >> 2);
a += b;
printf("a=%d\n", a);
return 0;
}
The result is 330. It could be made more accurate using b = ((b+2)>>2); to account for rounding.
If you are allowed to multiply, just pick a suitable approximation for (1/3), with a power-of-2 divisor. For example, n * (1/3) ~= n * 43 / 128 = (n * 43) >> 7.
This technique is most useful in Indiana.
A: There is a faster way to do it if you know the range of the values. For example, if you are dividing a signed integer by 3 and you know the range of the value to be divided is 0 to 768, then you can multiply it by a factor of (2^n / 3) and shift the result right by n.
eg.
Range 0 -> 768
you could use a shift of 10 bits, which corresponds to multiplying by 1024; you want to divide by 3, so your multiplier should be 1024 / 3 = 341,
so you can now use (x * 341) >> 10
(Make sure the shift is a signed shift if using signed integers); also make sure the shift is an actual shift and not a bit roll.
This will effectively divide the value by 3, and will run at about 1.6 times the speed of a natural divide by 3 on a standard x86 / x64 CPU.
Of course, the only reason you can make this optimization when the compiler can't is because the compiler does not know the maximum range of X and therefore cannot make this determination, but you as the programmer can.
Sometimes it may even be more beneficial to move the value into a larger type and then do the same thing, i.e. if you have an int of full range you could make it a 64-bit value and then do the multiply and shift instead of dividing by 3.
I had to do this recently to speed up image processing; I needed to find the average of 3 color channels, each color channel with a byte range (0 - 255): red, green and blue.
At first I just simply used:
avg = (r + g + b) / 3;
(So r + g + b has a maximum of 768 and a minimum of 0, because each channel is a byte 0 - 255)
After millions of iterations the entire operation took 36 milliseconds.
I changed the line to:
avg = (r + g + b) * 341 >> 10;
And that took it down to 22 milliseconds; it's amazing what can be done with a little ingenuity.
This speed-up occurred in C# even though I had optimisations turned on and was running the program natively without debugging info and not through the IDE.
A: I don't know if it's faster but if you want to use a bitwise operator to perform binary division you can use the shift and subtract method described at this page:
*
*Set quotient to 0
*Align leftmost digits in dividend and divisor
*Repeat:
*
*If that portion of the dividend above the divisor is greater than or equal to the divisor:
*
*Then subtract divisor from that portion of the dividend and
*Concatenate 1 to the right hand end of the quotient
*Else concatenate 0 to the right hand end of the quotient
*Shift the divisor one place right
*Until dividend is less than the divisor:
*quotient is correct, dividend is remainder
*STOP
A: See How To Divide By 3 for an extended discussion of more efficiently dividing by 3, focused on doing FPGA arithmetic operations.
Also relevant:
*
*Optimizing integer divisions with Multiply Shift in C#
A: The guy who said "leave it to the compiler" was right, but I don't have the "reputation" to mod him up or comment. I asked gcc to compile int test(int a) { return a / 3; } for an ix86 and then disassembled the output. Just for academic interest, what it's doing is roughly multiplying by 0x55555556 and then taking the top 32 bits of the 64 bit result of that. You can demonstrate this to yourself with eg:
$ ruby -e 'puts(60000 * 0x55555556 >> 32)'
20000
$ ruby -e 'puts(72 * 0x55555556 >> 32)'
24
$
The wikipedia page on Montgomery division is hard to read but fortunately the compiler guys have done it so you don't have to.
A: If you really want to, see this article on integer division, but it only has academic merit... it would be interesting to see an application that actually needed, and benefited from, that kind of trick.
A: For really large integer division (e.g. numbers bigger than 64bit) you can represent your number as an int[] and perform division quite fast by taking two digits at a time and divide them by 3. The remainder will be part of the next two digits and so forth.
eg. 11004 / 3 you say
11/3 = 3, remainder = 2 (from 11-3*3)
20/3 = 6, remainder = 2 (from 20-6*3)
20/3 = 6, remainder = 2 (from 20-6*3)
24/3 = 8, remainder = 0
hence the result 3668
internal static List<int> Div3(int[] a)
{
int remainder = 0;
var res = new List<int>();
for (int i = 0; i < a.Length; i++)
{
var val = remainder + a[i];
var div = val/3;
remainder = 10*(val%3);
if (div > 9)
{
res.Add(div/10);
res.Add(div%10);
}
else
res.Add(div);
}
if (res[0] == 0) res.RemoveAt(0);
return res;
}
A: Easy computation using only shifts, adds and a small correction loop, based on n/3 = n/4 + n/16 + n/64 + ...:
uint8_t divideby3(uint8_t x)
{
    uint16_t n = x;
    uint16_t q = (n >> 2) + (n >> 4); /* q ~= n * 0.3125 */
    q += q >> 4;                      /* refine: q ~= n / 3, slightly low */
    uint16_t r = n - ((q << 1) + q);  /* r = n - 3*q, a small remainder */
    while (r >= 3) { r -= 3; q++; }   /* final correction */
    return (uint8_t)q;
}
A: A lookup table approach would also be faster in some architectures.
uint8_t DivBy3LU(uint8_t u8Operand)
{
    static const uint8_t ai8Div3[256] = {0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, ....};
    return ai8Div3[u8Operand];
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44"
} |
Q: How do I capture PHP output into a variable? I'm generating a ton of XML that is to be passed to an API as a post variable when a user clicks on a form button. I also want to be able to show the user the XML beforehand.
The code is sorta like the following in structure:
<?php
$lots of = "php";
?>
<xml>
<morexml>
<?php
while(){
?>
<somegeneratedxml>
<?php } ?>
<lastofthexml>
<?php ?>
<html>
<pre>
The XML for the user to preview
</pre>
<form>
<input id="xml" value="theXMLagain" />
</form>
</html>
My XML is being generated with a few while loops and stuff. It then needs to be shown in the two places (the preview and the form value).
My question is: how do I capture the generated XML in a variable or whatever, so I only have to generate it once and then just print it out, as opposed to generating it inside the preview and then again inside the form value?
A: Put this at your start:
ob_start();
And to get the buffer back:
$value = ob_get_contents();
ob_end_clean();
See http://us2.php.net/manual/en/ref.outcontrol.php and the individual functions for more information.
A: When used frequently, a little helper can be handy:
class Helper
{
/**
* Capture output of a function with arguments and return it as a string.
*/
public static function captureOutput(callable $callback, ...$args): string
{
ob_start();
$callback(...$args);
$output = ob_get_contents();
ob_end_clean();
return $output;
}
}
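Usage might look like this (the callback and its argument here are just an illustrative example):
$xml = Helper::captureOutput(function ($name) {
    echo '<name>' . $name . '</name>';
}, 'Test');
echo $xml; // <name>Test</name>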
A: You could try this:
<?php
$string = <<<XMLDoc
<?xml version='1.0'?>
<doc>
<title>XML Document</title>
<lotsofxml/>
<fruits>
XMLDoc;
$fruits = array('apple', 'banana', 'orange');
foreach($fruits as $fruit) {
$string .= "\n <fruit>".$fruit."</fruit>";
}
$string .= "\n </fruits>
</doc>";
?>
<html>
<!-- Show XML as HTML with entities; saves having to view source -->
<pre><?=str_replace("<", "&lt;", str_replace(">", "&gt;", $string))?></pre>
<textarea rows="8" cols="50"><?=$string?></textarea>
</html>
A: <?php ob_start(); ?>
<xml/>
<?php $xml = ob_get_clean(); ?>
<input value="<?php echo $xml ?>" />
A: It sounds like you want PHP Output Buffering
ob_start();
// make your XML file
$out1 = ob_get_contents();
//$out1 now contains your XML
Note that output buffering stops the output from being sent, until you "flush" it. See the Documentation for more info.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "67"
} |
Q: How can I increase the key repeat rate beyond the OS's limit? I have a bad habit of using the cursor keys of my keyboard to navigate source code. It's something I've done for 15 years and this of course means that my navigating speed is limited by the speed of the keyboard. On both Vista and OS X (I dual boot a MacBook), I have my key repeat rate turned all the way up. But in Visual Studio, and other apps, the rate is still much slower than I would prefer.
How can I make the key repeat rate faster in Visual Studio and other text editors?
A: In Windows you can set this with a system call (SystemParametersInfo(SPI_SETFILTERKEYS,...)).
I wrote a utility for myself: keyrate <delay> <repeat>.
Github repository.
Full source in case that link goes away:
#include <windows.h>
#include <stdlib.h>
#include <stdio.h>
BOOL parseDword(const char* in, DWORD* out)
{
char* end;
long result = strtol(in, &end, 10);
BOOL success = (errno == 0 && end != in);
if (success)
{
*out = result;
}
return success;
}
int main(int argc, char* argv[])
{
FILTERKEYS keys = { sizeof(FILTERKEYS) };
if (argc == 1)
{
puts ("No parameters given: disabling.");
}
else if (argc != 3)
{
puts ("Usage: keyrate <delay ms> <repeat ms>\nCall with no parameters to disable.");
return 0;
}
else if (parseDword(argv[1], &keys.iDelayMSec)
&& parseDword(argv[2], &keys.iRepeatMSec))
{
printf("Setting keyrate: delay: %d, rate: %d\n", (int) keys.iDelayMSec, (int) keys.iRepeatMSec);
keys.dwFlags = FKF_FILTERKEYSON|FKF_AVAILABLE;
}
if (!SystemParametersInfo (SPI_SETFILTERKEYS, 0, (LPVOID) &keys, 0))
{
fprintf (stderr, "System call failed.\nUnable to set keyrate.");
}
return 0;
}
A: For Windows, open regedit.exe and navigate to HKEY_CURRENT_USER\Control Panel\Keyboard. Change KeyboardSpeed to your liking.
A: Visual Assist has an option to double your effective key movements in Visual Studio which I use all the time.
A: I'm using KeyboardKing on my PC. It's freeware and it can increase the repeat rate up to 200 which is quite enough. I recommend to set the process priority to High for even smoother moves and less "repeat locks" which happen sometime and are very annoying. With high priority, it works perfectly.
No one understands why we navigate by arrows. It's funny.
A:
As mentioned by hyperlogic, on Mac OS X, internally, there are two parameters dealing with the keyboard speed: KeyRepeat and InitialKeyRepeat. In the System Preferences they are mapped to the Key Repeat Rate and the Delay Until Repeat sliders. The slider ranges and the associated internal parameter values (in parenthesis) are shown below. They seem to be multipliers of the 15 ms keyboard sampling interval.
Key Repeat Rate (KeyRepeat) Delay Until Repeat (InitialKeyRepeat)
|--------------------------------| |----------------------|-----------------|
slow (120) fast (2) off (30000) long (120) short (25)
0.5 char/s 33 char/s
Fortunately, these parameters can be set beyond the predefined limits directly in the ~/Library/Preferences/.GlobalPreferences.plist file. I found the following values most convenient for myself:
KeyRepeat = 1 --> 1/(1*15 ms) = 66.7 char/s
InitialKeyRepeat = 15 --> 15*15 ms = 225 ms
Note that in the latest Mac OS X revisions the sliders are named slightly differently.
A: On Mac OS X, open the Global Preferences plist
open ~/Library/Preferences/.GlobalPreferences.plist
Then change the KeyRepeat field. Smaller numbers will speed up your cursor rate. The settings dialog will only set it to a minimum of 2, so if you go to 0 or 1, you'll get a faster cursor.
I had to reboot for this to take effect.
A: Many times I want to center a function in my window. Scrolling is the only way.
Also, Ctrl-left/right can still be slow in code where there are a lot of non-word characters.
I use KeyboardKing also. It has a couple of issues for me though. One, it sometimes uses the default speed instead of the actual value I set. The other is sometimes it ignores the initial delay. I still find it very useful though. They said 4 years ago they would release the source in 6 months... :(
Ok, on the suggestion of someone that modified HCU\...\Keyboard Response, this works well for me.
[HKEY_CURRENT_USER\Control Panel\Accessibility\Keyboard Response]
"AutoRepeatDelay"="250"
"AutoRepeatRate"="13"
"BounceTime"="0"
"DelayBeforeAcceptance"="0"
"Flags"="59"
Windows standard AutoRepeat delay.
13 ms (77 char/sec) repeat rate.
flags turns on FilterKeys?
These values are read at login. Remember to log out and back in for this to take effect.
A: I don't know how to accelerate beyond the limit, but I know how to skip further in a single press. My knowledge is only in Windows, as I have no Mac to do this in. Ctrl + Arrow skips a word, and depending on the editor it may just skip to the next section of whitespace. You can also use Ctrl + Shift + Arrow to select a word in any direction.
A: I do like to work on the keyboard alone. Why? Because when you use the mouse you have to grab it. A time loss.
On the other hand, sometimes it seems that every application has its own keyboard repeat rates built in, not to mention BIOS properties or OS settings. So I gathered shortcuts which can be pretty fast (i.e. you are faster typing Ctrl + Right three times than keeping your finger on the Right arrow key :).
Here are some keyboard shortcuts I find most valuable (it works on Windows; I am not sure about OS X):
ctrl-right: Go to the end of the previous/the next word (stated before)
ctrl-left: Go to the beginning of the previous/the word before (stated before)
ctrl-up: Go to the beginning of this paragraph
(or to the next paragraph over this)
ctrl-down: Go to the end of this paragraph
(or to the next paragraph after this)
ctrl-pos1: Go to the beginning of the file
ctrl-end: Go to the end of the file
All these may be combined with the shift-key, so that the text is selected while doing so. Now let's go for more weird stuff:
alt-esc: Get the actual application into the background
ctrl-esc: This is like pressing the "start-button" in Windows: You can
navigate with arrow keys or shortcuts to start programs from here
ctrl-l: While using Firefox this accesses the URL-entry-field to simply
type URLs (does not work on Stack Overflow :)
ctrl-tab,
ctrl-pageup
ctrl-pagedwn Navigate through tabs (even in your development environment)
So these are the most used shortcuts I need while programming.
A: For OS X, the kernel extension KeyRemap4MacBook will allow you to fine tune all sorts of keyboard parameters, among which the key repeat rate (I set mine to 15 ms, and it works nice).
A: On Mac, it's option-arrow to skip a word and ⌥+Shift+Arrow to select. ⌘+Arrow skips to the end or beginning of a line or the end or beginning of a document. There are also the page up, page down, home and end keys ;) Holding shift selects with those too.
A: Seems that you can't do this easily on Windows 7.
When you press and hold a key, the delay before repeating is controlled by the Windows registry key: HKCU\Control Panel\Keyboard\KeyboardDelay.
By setting this param to 0 you get maximum repeat rate. The drama is that you can't go below 0 if the repeat speed is still slow for you. 0-delay means that repeat delay is 250ms. But, 250ms delay is still SLOW as hell. See this : http://technet.microsoft.com/en-us/library/cc978658.aspx
You can still go to Accessibility, but you should know that those options are there to help disabled people use their keyboard, not to help fast-typing geeks. They WON'T help. Use Linux, they tell you.
I believe the solution for Windows lies in hardware control. Look for special drivers for your keyboard or try to tweak existing ones.
A: Although the question is several years old, I still come across the same issue from time to time in several different developer sites. So I thought I may contribute an alternative solution, which I use for my everyday-developer-work (since the Windows registry settings never worked for me).
The following is my small Autorepeat-Script (~ 125 lines), which can be run via AutoHotkey_L (the downside is, it only runs under Windows, at least Vista, 7, 8.1):
; ====================================================================
; DeveloperTools - Autorepeat Key Script
;
; This script provides a mechanism to do key-autorepeat way faster
; than the Windows OS would allow. There are some registry settings
; in Windows to tweak the key-repeat-rate, but according to widely
; spread user feedback, the registry-solution does not work on all
; systems.
;
; See the "Hotkeys" section below. Currently (Version 1.0), there
; are only the arrow keys mapped for faster autorepeat (including
; the control and shift modifiers). Feel free to add your own
; hotkeys.
;
; You need AutoHotkey (http://www.autohotkey.com) to run this script.
; Tested compatibility: AutoHotkey_L, Version v1.1.08.01
;
; (AutoHotkey Copyright © 2004 - 2013 Chris Mallet and
; others - not me!)
;
; @author Timo Rumland <timo.rumland ${at} gmail.com>, 2014-01-05
; @version 1.0
; @updated 2014-01-05
; @license The MIT License (MIT)
; (http://opensource.org/licenses/mit-license.php)
; ====================================================================
; ================
; Script Settings
; ================
#NoEnv
#SingleInstance force
SendMode Input
SetWorkingDir %A_ScriptDir%
; Instantiate the DeveloperTools defined below the hotkey definitions
developerTools := new DeveloperTools()
; ========
; Hotkeys
; ========
; -------------------
; AutoRepeat Hotkeys
; -------------------
~$UP::
~$DOWN::
~$LEFT::
~$RIGHT::
DeveloperTools.startAutorepeatKeyTimer( "" )
return
~$+UP::
~$+DOWN::
~$+LEFT::
~$+RIGHT::
DeveloperTools.startAutorepeatKeyTimer( "+" )
return
~$^UP::
~$^DOWN::
~$^LEFT::
~$^RIGHT::
DeveloperTools.startAutorepeatKeyTimer( "^" )
return
; -------------------------------------------------------
; Jump label used by the hotkeys above. This is how
; AutoHotkey provides "threads" or thread-like behavior.
; -------------------------------------------------------
DeveloperTools_AutoRepeatKey:
SetTimer , , Off
DeveloperTools.startAutorepeatKey()
return
; ========
; Classes
; ========
class DeveloperTools
{
; Configurable by user
autoRepeatDelayMs := 180
autoRepeatRateMs := 40
; Used internally by the script
repeatKey := ""
repeatSendString := ""
keyModifierBaseLength := 2
; -------------------------------------------------------------------------------
; Starts the autorepeat of the current captured hotkey (A_ThisHotKey). The given
; 'keyModifierString' is used for parsing the real key (without hotkey modifiers
; like "~" or "$").
; -------------------------------------------------------------------------------
startAutorepeatKeyTimer( keyModifierString )
{
keyModifierLength := this.keyModifierBaseLength + StrLen( keyModifierString )
this.repeatKey := SubStr( A_ThisHotkey, keyModifierLength + 1 )
this.repeatSendString := keyModifierString . "{" . this.repeatKey . "}"
SetTimer DeveloperTools_AutoRepeatKey, % this.autoRepeatDelayMs
}
; ---------------------------------------------------------------------
; Starts the loop which repeats the key, resulting in a much faster
; autorepeat rate than Windows provides. Internally used by the script
; ---------------------------------------------------------------------
startAutorepeatKey()
{
while ( GetKeyState( this.repeatKey, "P" ) )
{
Send % this.repeatSendString
Sleep this.autoRepeatRateMs
}
}
}
*
*Save the code above in a text file (UTF-8), for example named "AutorepeatScript.ahk"
*Install AutoHotkey_L
*Double click on "AutorepeatScript.ahk" to enjoy much faster arrow keys (or put the file into your autostart folder)
(You can adjust the repeat delay and rate in the script, see '; Configurable by user').
Hope this helps!
A: Well, it might be obvious, but:
*
*For horizontal navigation, Home (line start), End (line end), Ctrl-Left (word left), Ctrl-Right (word right) work in all editors I know
*For vertical navigation, Page Up, Page Down, Ctrl-Home (text start), Ctrl-End (text end) do too
Also (on a side note), I would like to force my Backspace and Delete keys to non-repeat, so that the only way to delete (or replace) text would be to first mark it, then delete it (or type the replacement text).
A: Don't navigate character-by-character.
In Vim (see ViEmu for Visual Studio):
*
*bw -- prev/next word
*() -- prev/next sentence (full stop-delimited text)
*{} -- prev/next paragraph (blank-line delimited sections of text)
*/? -- move the cursor to the prev/next occurrence of the text found (w/ set incsearch)
Moreover, each of the movements takes a number as prefix that lets you specify how many times to repeat the command, e.g.:
*
*20j -- jump 20 lines down
*3} -- three paragraphs down
*4w -- move 4 words forward
*40G -- move to (absolute) line number 40
There are most likely equivalent ways to navigate through text in your editor. If not, you should consider switching to a better one.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "60"
} |
Q: Accessing internal members via System.Reflection? I'm trying to unit test a class that has many internal functions. These obviously need testing too, but my Tests project is separate, mainly because it covers many small, related projects. What I have so far is:
FieldInfo[] _fields =
typeof(ButtonedForm.TitleButton).GetFields(
BindingFlags.NonPublic | BindingFlags.Instance |
BindingFlags.DeclaredOnly);
Console.WriteLine("{0} fields:", _fields.Length);
foreach (FieldInfo fi in _fields)
{
Console.WriteLine(fi.Name);
}
This spits out all the private members nicely, but still doesn't display internals. I know this is possible, because when I was messing around with the autogenerated tests that Visual Studio can produce, it asked about something to do with displaying internals to the Test project. Well, now I'm using NUnit and really liking it, but how can I achieve the same thing with it?
A: Adding the InternalsVisibleTo assembly-level attribute to your main project, with the assembly name of the test project, should make internal members visible.
For example add the following to your assembly outside any class:
[assembly: InternalsVisibleTo("AssemblyB")]
Or for a more specific targeting:
[assembly:InternalsVisibleTo("AssemblyB, PublicKey=32ab4ba45e0a69a1")]
Note, if your application assembly has a strong name, your test assembly will also need to be strongly named.
A: I think you need to ask whether you should write unit tests for private methods? If you write unit tests for your public methods, with a 'reasonable' code coverage, aren't you already testing any private methods that need to be called as a result?
Tying tests to private methods will make the tests more brittle. You should be able to change the implementation of any private methods without breaking any tests.
Refs:
http://weblogs.asp.net/tgraham/archive/2003/12/31/46984.aspx
http://richardsbraindump.blogspot.com/2008/08/should-i-unit-test-private-methods.html
http://junit.sourceforge.net/doc/faq/faq.htm#tests_11
A: Your code is only showing fields - so I'd hope it wouldn't show any internal members, as fields should always be private IMO. (With the potential exception of constants.)
Does ButtonedForm.TitleButton actually have any non-private fields? If you're trying to find internal methods then obviously you need to be calling GetMethods (or GetMembers) to get at them.
As others have suggested, InternalsVisibleTo is very handy for testing (and almost solely for testing!). As for whether you should be testing internal methods - I certainly find it useful to be able to do so. I don't regard unit testing as being exclusively black-box testing. Often when you know that public functionality is implemented using a few internal methods connected in a simple way, it's easier to do thorough testing of each of the internal methods and some "pseudo-integration" tests on the public method.
A: It would be more appropriate to use the InternalsVisibleTo attribute to grant access to the internal members of the assembly to your unit test assembly.
Here is a link with some helpful additional info and a walk through:
*
*The Wonders Of InternalsVisibleTo
To actually answer your question... Internal and protected are not recognized in the .NET Reflection API. Here is a quotation from MSDN:
The C# keywords protected and internal have no meaning in IL and are not used in the Reflection APIs. The corresponding terms in IL are Family and Assembly. To identify an internal method using Reflection, use the IsAssembly property. To identify a protected internal method, use the IsFamilyOrAssembly.
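So, sticking with the reflection approach from the question, internal methods can be picked out with IsAssembly (a small sketch using the question's type):
using System.Linq;
using System.Reflection;

MethodInfo[] internals = typeof(ButtonedForm.TitleButton)
    .GetMethods(BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.DeclaredOnly)
    .Where(m => m.IsAssembly) // IsAssembly == true for members declared 'internal'
    .ToArray();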
A: A justification for using InternalsVisibleTo came up under my circumstances. We purchased the source code to a chart control, and we found we needed to make some modifications to that source and compile our own version. Now, to ensure we did not break anything, there are some unit tests I need to write that need access to some internal fields.
This is a perfect case where InternalsVisibleTo makes sense.
I was wondering, though, what do you do if you do not have access to the source? How could you get to an internal field? .Net Reflector can see that code, but I guess it is just looking at the IL.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: What is the best way to validate a terminal command has run successfully in Rails? I'm writing a quick Rails app and was wondering how I can validate the success of an exec'd command. The two commands I'm running are an SVN update and a cp from one directory to another.
A: If you use the Kernel.system() method it will return a boolean indicating the success of the command.
result = system("cp -r dir1 dir2")
if result
  # do the next thing
else
  # handle the error
end
There is a good comparison of different ruby system commands here.
A: How are you executing the external commands? The Ruby system() function returns true or false depending on whether the command was successful. Additionally, $? contains an error status.
A: *
*Just to be pedantic, you can't validate an exec'd command because exec replaces the current program with the exec'd command, so the command would never return to Ruby for validation.
*For the cp, at least, you would probably be better off using the FileUtils module (part of the Ruby Standard Library) rather than dropping to the shell (see the sketch after this list).
*As noted above, the $? predefined variable will give you the return code of the last command to be executed by system() or the backtick operator.
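A minimal sketch of the FileUtils approach from the second point (directory names are illustrative):
require 'fileutils'

begin
  FileUtils.cp_r('dir1', 'dir2')  # recursive copy, like cp -r; raises on failure
rescue SystemCallError => e
  warn "copy failed: #{e.message}"
end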
A: For SVN update, check the version number before and after the update.
svn_start_version = IO.popen("svn info").readlines[4][/\d+/].to_i  # extract the revision number so the comparison is numeric
`svn update`
svn_end_version = IO.popen("svn info").readlines[4][/\d+/].to_i
if svn_end_version > svn_start_version
  "success"
end
For the cp, you could do a filesize check on the original file being equal to the copied file.
# compare just the size column; du's output also includes the file name
source_file_size = IO.popen("du file1").readlines[0].split[0]
`cp file1 file2`
dest_file_size = IO.popen("du file2").readlines[0].split[0]
if dest_file_size == source_file_size
  "success"
end
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: matlab FFT. Stuck understanding relationship between frequency and result We're trying to analyse flow around circular cylinder and we have a set of Cp values that we got from wind tunnel experiment. Initially, we started off with a sample frequency of 20 Hz and tried to find the frequency of vortex shedding using FFT in matlab. We got a frequency of around 7 Hz. Next, we did the same experiment, but the only thing we changed was the sampling frequency- from 20 Hz to 200 Hz. We got the frequency of the vortex shedding to be around 70 Hz (this is where the peak is located in the graph). The graph doesn't change regardless of the Cp data that we enter. The only time the peak differs is when we change the sample frequency. It seems like the increase in the frequency of vortex shedding is proportional to the sample frequency and this doesn't seem to make sense at all. Any help regarding establishing a relation between sample frequency and vortex shedding frequency would be greatly appreaciated.
A: this is probably not a programming problem, it sounds like an experiment-measurement problem
i think the sampling frequency has to be at least twice the rate of the oscillation frequency, otherwise you get artifacts; this might explain the difference. Note that the ratio of the FFT frequency to the sampling frequency is 0.35 in both cases. Can you repeat the experiment with higher sampling rates? I'm thinking that if this is a narrow cylinder in a strong wind, it may be vibrating/oscillating faster than the sampling rate can detect..
i hope this helps - there's a 97.6% probability that i don't know what i'm talking about ;-)
A: If it's not an aliasing problem, it sounds like you could be plotting the frequency response on a normalised frequency scale, which will change with sample frequency. Here's an example of a reasonably good way to plot a frequency response of a signal in Matlab:
Fs = 100;
Tmax = 10;
time = 0:1/Fs:Tmax;
omega = 2*pi*10; % 10 Hz
signal = 10*sin(omega*time) + rand(1,Tmax*Fs+1);
Nfft = 2^8;
[Pxx,freq] = pwelch(signal,Nfft,[],[],Fs);
plot(freq,Pxx)
Note that the sample frequency must be explicitly passed to the pwelch command in order to output the “real” frequency data. Otherwise, when you change the sample frequency the bin where the resonance occurs will seem to shift, which is similar to the problem you describe.
A: Methinks you need to do some serious reading on digital signal processing before you can even begin to understand all the nuances of the DFT (FFT). If I was you, I'd get grounded in it first with this great book:
Discrete-Time Signal Processing
If you want more of a mathematical treatment that will really expand your abilities,
Fourier Analysis by Körner
A: The problem you are seeing is related to "data aliasing" due to limitations of the FFT being able to detect frequencies higher than the Nyquist Frequency (half-the sampling frequency).
With data aliasing, a peak in real frequency will be centered around (real frequency modulo Nyquist frequency). In your 20 Hz sampling (assuming 70 Hz is the true frequency), that results in zero frequency, which means you're not seeing the real information. One thing that can help you with this is to use FFT "windowing".
Another problem that you may be experiencing is related to noisy data generation via single-FFT measurement. It's better to take lots of data, use windowing with overlap, and make sure you have at least 5 FFTs which you average to find your result. As Steven Lowe mentioned, you should also sample at faster rates if possible. I would recommend sampling at the fastest rate your instruments can sample.
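As a quick illustration of aliasing (a minimal sketch; the 67 Hz tone is an assumed stand-in for the true shedding frequency):
f_true = 67;                      % assumed true frequency of the signal
for Fs = [20 200]                 % the two sampling rates from the question
    t = 0:1/Fs:10;
    x = sin(2*pi*f_true*t);
    N = 2^nextpow2(length(x));
    X = abs(fft(x,N));
    f = Fs/2*linspace(0,1,N/2+1);
    [~,idx] = max(X(1:N/2+1));
    fprintf('Fs = %3d Hz -> peak at %.1f Hz\n', Fs, f(idx));
end
% With Fs = 20 the 67 Hz tone shows up aliased near 7 Hz; with Fs = 200 it appears at 67 Hz.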
Lastly, I would recommend that you read some excerpts from Numerical Recipes in C (<-- link):
*
*Section 12.0 -- Introduction to FFT
*Section 12.1 (Discusses data aliasing)
*Section 13.4 (Discusses FFT windowing)
You don't need to read the C source code -- just the explanations. Numerical Recipes for C has excellent condensed information on the subject.
If you have any more questions, leave them in the comments. I'll try to do my best in answering them.
Good luck!
A: Take a look at this related question. While it was originally asked about asked about VB the responses are generically about FFTs
A: I tried using the frequency response code as above but it seems that I don't have the appropriate toolbox in Matlab. Is there any way to do the same thing without using the pwelch command? So far, this is what I have:
% FFT Algorithm
Fs = 200; % Sampling frequency
T = 1/Fs; % Sample time
L = 65536; % Length of signal
t = (0:L-1)*T; % Time vector
y = data1; % Your CP values go in this vector
NFFT = 2^nextpow2(L); % Next power of 2 from length of y
Y = fft(y,NFFT)/L;
f = Fs/2*linspace(0,1,NFFT/2);
% Plot single-sided amplitude spectrum.
loglog(f,2*abs(Y(1:NFFT/2)))
title(' y(t)')
xlabel('Frequency (Hz)')
ylabel('|Y(f)|')
I think there might be something wrong with the code I am using. I'm not sure what though.
A: A colleague of mine has written some nice GPL-licenced functions for spectral analysis:
http://www.mecheng.adelaide.edu.au/~pvl/octave/
(Update: this code is now part of one of the Octave modules:
http://octave.svn.sourceforge.net/viewvc/octave/trunk/octave-forge/main/signal/inst/.
But it might be tricky to extract just the pieces you need from there.)
They're written for both Matlab and Octave and serve mostly as a drop-in replacement for the analogous functions in the Signal Processing Toolbox. (So the code above should still work fine.)
It may help with your data analysis; better than rolling your own with fft and the like.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: c# store user settings in database Is there an easy method to store a person's user settings in a sql 2000 database. Ideally all settings in one field so I don't keep having to edit the table every time I add a setting. I am thinking along the lines of serialize a settings class if anyone has an example.
The reason I don't want to use the built in .NET user settings stored in persistent storage is work uses super mandatory profiles so upon a users log off the settings are cleared which is a pain. I posted asking for any solutions to this previously but didn't get much of a response.
A: The VS designer keeps property settings in the ApplicationSettingsBase class. By default, these properties are serialized/deserialized into a per user XML file. You can override this behavior by using a custom SettingsProvider which is where you can add your database functionality. Just add the SettingsProvider attribute to the VS generated Settings class:
[SettingsProvider(typeof(CustomSettingsProvider))]
internal sealed partial class Settings {
...
}
A good example of this is the RegistrySettingsProvider.
I answered another similar question the same way here.
A: you can easily serialize classes in C#: http://www.google.com/search?q=c%23+serializer. You can either store the XML in a varchar field, or if you want to use the binary serializer, you can store it in an "Image" datatype, which is really just binary.
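A minimal sketch of that XML route (the settings class here is hypothetical):
using System;
using System.IO;
using System.Xml.Serialization;

public class UserSettings            // hypothetical settings class
{
    public string Theme = "Default";
    public int FontSize = 10;
}

class Program
{
    static void Main()
    {
        XmlSerializer serializer = new XmlSerializer(typeof(UserSettings));

        // Serialize to a string you can store in a varchar field
        StringWriter writer = new StringWriter();
        serializer.Serialize(writer, new UserSettings());
        string xml = writer.ToString();

        // Deserialize when the settings are read back from the database
        UserSettings settings = (UserSettings)serializer.Deserialize(new StringReader(xml));
        Console.WriteLine("{0}, {1}", settings.Theme, settings.FontSize);
    }
}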
A: You could serialize into a database, or you could create a User settings table containing Name-Value pairs and keyed by UserId. The advantage of doing it this way is it's easier to query and update through RDMS tools.
A: First you need your table.
create table user_settings
(
    user_id nvarchar(256) not null,
    keyword nvarchar(64) not null,
    value nvarchar(4000) not null, -- SQL Server 2000 has no nvarchar(max); use ntext if values can exceed 4000 characters
    constraint PK_user_settings primary key (user_id, keyword)
)
Then you can build your API:
public string GetUserSetting(string keyword, string defaultValue);
public void SetUserSetting(string keyword, string value);
If you're already doing CRUD development (which I assume from the existence and availability of a database), then this should be trivially easy to implement.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Third party data delivery of lots of data Does anyone know how sites that have a real-time feed of a lot of data work? I am referring to something like a stock site, where they can tell you in real time (well, 20 minute delay mostly, but still real-time - 20 minutes as I understand it).
They have thousands of data pieces delivered to them every second, I would imagine: MSFT 25.00 +.23 VOL 12000 ???? for each stock that had a change during some interval.
So, is there just a constant feed of small pushes going on? Or do you think a site will pull from the place that has the real data and say "give me all changes since 12:23:45 CST to now" type query?
I ask this because at work we might have a situation where we need to have at our application's fingertips real time information like this, and it won't make sense to hit our third party provider over and over and over again every second...
A: Generally there is a server/client protocol defined between the 2 parties. In the company I work for the connection is maintained at all times.
Here is info on real time data feeds to go with your stock example
NYSE, NASDAQ
It is common for data providers to also have FTP sites with (delayed) batched data. One that comes to mind is the NWS EMWIN
A: Sites like Twitter feed data to certain approved sites in real-time via XMPP (Wiki link).
A: In the broadest terms, a push model is going to be the best way of achieving "real time" transfer, particularly if you're talking about a large amount of data.
However you do always have a problem when using a purely push model of how to recover from missed data.
Depending on the nature of your data that may not be a problem (thinking of video delivery as an analogue, where the amount of data is huge but there is sufficient redundancy for it to recover from missing data). And if you have any control over the data you may be able to build some redundancy in. For example, on every change event you can provide absolute values rather than changes, or previous value and new value.
A: I've done this by making an attempt to retrieve the stock quote from the source, and falling back to a timestamped on-disk cache of the quote when the main source fails or times out.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Strategies for pushing updates to an ASP.NET webfarm? How do most people handle updating ASP.NET applications running in a webfarm? I am having the problem that, because the app is in use and the request affinity is not sticky, when we push an update users run into errors: as requests are processed, a request might be handled by the wrong version of the application. How do you do this? Take the entire application offline and let the push complete, or do you update live and let the chips fall where they may? Ideally we'd like to minimize downtime if at all possible.
any thoughts/suggestions/pointers would be appreciated
A: Make half the servers inaccessible, update those, flip all the inaccessible servers to accessible and vice versa, update the other half, and put the other half back up.
A: This is what we do:
Drain the active sessions off a particular server in the farm; no new traffic will be routed to that server during this time.
Apply the patch to the drained server.
Drain the sessions off the remaining servers.
Allow traffic back to the original server.
As the other servers are drained, apply the patch and let them come back to life.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What is Reflection? I am VERY new to ASP.NET. I come from a VB6 / ASP (classic) / SQL Server 2000 background. I am reading a lot about Visual Studio 2008 (have installed it and am poking around). I have read about "reflection" and would like someone to explain, as best as you can to an older developer of the technologies I've written above, what exactly Reflection is and why I would use it... I am having trouble getting my head around that. Thanks!
A: Reflection is how you can explore the internals of different Types, without normally having access (ie. private, protected, etc members).
It's also used to dynamically load DLL's and get access to types and methods defined in them without statically compiling them into your project.
In a nutshell: Reflection is your toolkit for peeking under the hood of a piece of code.
As to why you would use it, it's generally only used in complex situations, or code analysis. The other common use is for loading precompiled plugins into your project.
A: Reflection lets you programmatically load an assembly, get a list of all the types in an assembly, get a list of all the properties and methods in these types, etc.
As an example:
myobject.GetType().GetProperty("MyProperty").SetValue(myobject, "wicked!", null);
A: It allows the internals of an object to be reflected to the outside world (code that is using said objects).
A practical use in statically typed languages like C# (and Java) is to allow invocation of methods/members at runtime via a string (eg the name of the method - perhaps you don't know the name of the method you will use at compile time).
In the context of dynamic languages I haven't heard the term as much (as generally you don't worry about the above), other then perhaps to iterate through a list of methods/members etc...
A: Reflection is .NET's means to manipulate or extract information from an assembly, class or method at run time. For example, you can create a class at runtime, including its methods. As stated by monoxide, reflection is used to dynamically load assemblies as plugins, or, in advanced cases, to build compilers that target .NET, like IronPython.
Updated: You may refer to the topic on metaprogramming and its related topics for more details.
A: When you build any assembly in .NET (ASP.NET, Windows Forms, Command line, class library etc), a number of meta-data "definition tables" are also created within the assembly storing information about methods, fields and types corresponding to the types, fields and methods you wrote in your code.
The classes in System.Reflection namespace in .NET allow you to enumerate and interate over these tables, providing an "object model" for you to query and access items in these tables.
One common use of Reflection is providing extensibility (plug-ins) to your application. For example, Reflection allows you to load an assembly dynamically from a file path, query its types for a specific useful type (such as an Interface your application can call) and then actually invoke a method on this external assembly.
Custom attributes also go hand in hand with reflection. For example, the NUnit unit testing framework allows you to indicate a testing class and test methods by adding [TestFixture] and [Test] attributes to your own code.
However then the NUnit test runner must use Reflection to load your assembly, search for all occurrences of methods that have the test attribute and then actually call your test.
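A minimal sketch of that pattern (the attribute and test class here are simplified stand-ins, not NUnit's real types):
using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
class TestAttribute : Attribute { }

class MyTests
{
    [Test] public void Adds() { Console.WriteLine("Adds ran"); }
}

class Runner
{
    static void Main()
    {
        // Find every method carrying [Test] in this assembly and invoke it
        foreach (Type type in Assembly.GetExecutingAssembly().GetTypes())
            foreach (MethodInfo method in type.GetMethods())
                if (method.GetCustomAttributes(typeof(TestAttribute), false).Length > 0)
                    method.Invoke(Activator.CreateInstance(type), null);
    }
}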
This is simplifying it a lot, however it gives you a good practical example of where Reflection is essential.
Reflection certainly is powerful, but beware that it allows you to completely disregard the fundamental concept of access modifiers (encapsulation) in object-oriented programming.
For example you can easily use it to retrieve a list of Private methods in a class and actually call them. For this reason you need to think carefully about how and where you use it to avoid bypassing encapsulation and very tightly coupling (bad) code.
A: Reflection is the process of inspecting the metadata of an application. In other words, when reading attributes you've already looked at some of the functionality that reflection offers. Reflection enables an application to collect information about itself and act on this information. Reflection is slower than normally executing static code. It can, however, give you a flexibility that static code can't provide.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Can I save a record in a Windows Gadget in a file or other storage (database, etc.)? I'm planning to create a simple gadget to create a TO DO list application. This is just for practice and to explore Windows Gadget. How can I store the values in the database? As much as possible, I don't want to set up a local http handler file to be a means to store value to file or database.
Note: I tag this with html and javascript since I'm aware it uses such.
A: Once installed, gadgets run with all the permissions of the logged in user. So you should be able to access the local file system and instantiate COM objects such as ADO to connect to a database.
The chap here wrote a gadget settings persistence manager to allow gadgets to save their settings between being uninstalled and re-installed in the sidebar. He uses the Scripting.FileSystemObject to write out settings to a file:
http://channel9.msdn.com/playground/Sandbox/231595/
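A minimal sketch of that idea (the file name and helper functions are illustrative, not taken from the linked gadget):
// Persist gadget settings as plain text via the FileSystemObject
var fso = new ActiveXObject("Scripting.FileSystemObject");
var path = System.Gadget.path + "\\settings.txt";

function saveSettings(text) {
    var file = fso.CreateTextFile(path, true); // true = overwrite if present
    file.Write(text);
    file.Close();
}

function loadSettings() {
    if (!fso.FileExists(path)) return "";
    var file = fso.OpenTextFile(path, 1);      // 1 = ForReading
    var text = file.ReadAll();
    file.Close();
    return text;
}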
This is also worthwhile reading to understand gadget security:
http://msdn.microsoft.com/en-us/library/aa965881(VS.85).aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: When to use Fixed Point these days For intense number-crunching i'm considering using fixed point instead of floating point. Of course it'll matter how many bytes the fixed point type is in size, on what CPU it'll be running on, if i can use (for Intel) the MMX or SSE or whatever new things come up...
I'm wondering if these days when floating point runs faster than ever, is it ever worth considering fixed point? Are there general rules of thumb where we can say it'll matter by more than a few percent? What is the overview from 35,000 feet of numerical performance? (BTW, i'm assuming a general CPU as found in most computers, not DSP or specialized embedded systems.)
A: It's nearly ALWAYS faster to use fixed point (experience of x86, Pentium, 68k and ARM). It can, though, also depend on the application type. For graphics programming (one of my main uses of fixed point) I've been able to optimize the code using prebuilt cosine tables, log tables etc. The basic mathematical operations have also proven faster.
A comment on financial software. It was said in an earlier answer that fixed point is useful for financial calculations. In my own experience (development of a large treasury management system and extensive experience of credit card processing) I would NOT use fixed point. You will have rounding errors using either floating or fixed point. We always use whole amounts to represent monetary amounts, counting in the minimum unit possible (1c for euro or dollar). This ensures no partial amounts are ever lost. When doing complex calculations, values are converted to doubles, application-specific rounding rules are applied, and the results are converted back to whole numbers.
A: Use fixed-point when the hardware doesn't support floating-point or the hardware implementation sucks.
Also beware when making classes for it. Something you think would be quick could actually turn out to be a dog when it comes to profiling due to (un)necessary copies of classes. That is another question for another time however.
A: It's still worth it. Floating point is faster than in the past, but fixed-point is also. And fixed is still the only way to go if you care about precision beyond that guaranteed by IEEE 754.
A: In situations where you are dealing with very large amounts of data, fixed point can be twice as memory efficient, e.g. a four byte long integer as opposed to an eight byte double. A technique often used in large geospatial datasets is to reduce all the data to a common origin, such that the most significant bits can be disposed of, and work with fixed point integers for the rest. Floating point is only important if the point does actually float, i.e. you are dealing with a very wide range of numbers at very high accuracy.
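To make the representation concrete, here is a minimal Q16.16 sketch in C (the 16/16 split is an arbitrary choice):
#include <stdint.h>
#include <stdio.h>

typedef int32_t fix16;                /* Q16.16: 16 integer bits, 16 fraction bits */
#define FIX16_ONE (1 << 16)

static fix16 fix16_from_double(double d) { return (fix16)(d * FIX16_ONE); }
static double fix16_to_double(fix16 x)   { return (double)x / FIX16_ONE; }

static fix16 fix16_mul(fix16 a, fix16 b)
{
    /* widen to 64 bits so the intermediate product cannot overflow */
    return (fix16)(((int64_t)a * b) >> 16);
}

int main(void)
{
    fix16 a = fix16_from_double(3.25);
    fix16 b = fix16_from_double(1.5);
    printf("%f\n", fix16_to_double(fix16_mul(a, b)));  /* prints 4.875000 */
    return 0;
}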
A: Another good reason to use fixed decimal is that rounding is much simpler and predictable. Most of the financial software uses fixed point arbitrary precision decimals with half-even rounding to represent money.
A: Another reason to use fixed-point is that ARM devices, like mobile phones and tablets, lack of FPU (at least many of them).
For developing real-time applications it makes sense to optimize functions using fixed-point arithmetic. There are FFT (Fast Fourier Transform) implementations, very important for graphics, that base their efficiency improvements on fixed-point rather than floating-point arithmetic.
A: Since you are using a general-purpose CPU, I would suggest not using fixed point, unless performance is so critical for your application that you have to count every tick. The hassle of implementing fixed point, and dealing with issues like overflow, is just not worth it when you have a CPU which will do it for you.
IMHO, fixed point is only necessary when you are using a DSP without hardware support for floating point operations.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Which is fastest in PHP- MySQL or MySQLi? I'd like to know if anyone has any first-hand experience with this dichotomy. A few blogs say the mysql extension is faster than mysqli. Is this true?
And I'm only asking about speed. I know mysqli has features that are not present in the older extension.
A: The MySQL extension is very slightly faster than MySQLi in most benchmarks I've seen reported. The difference is so slight, however, that this should probably not be your criterion for deciding between the two.
Other factors dwarf the difference in performance between mysql and mysqli. Using mod_php or FastCGI, a bytecode cache like APC, or using data caching judiciously to reduce database hits, are far more beneficial for overall performance of PHP scripts than the choice of MySQL extension.
Don't be penny wise and pound foolish! :-)
A: See http://php.net/manual/en/mysqlinfo.api.choosing.php
The overall performance of all three extensions is considered to be about the same. Although the performance of the extension contributes only a fraction of the total run time of a PHP web request. Often, the impact is as low as 0.1%.
A: According to all the Google results for benchmarks linked by ceejayoz it looks like MySQL is at least slightly faster than MySQLi in all the benchmark tests. I do recommend reading the results for details but just figured I'd post something that directly answered the question and bumps up ceejayoz's answer.
A: The PHP documentation has a good comparison of mysql, mysqli, and PDO. I know you only asked about speed, but others might find this useful. It talks about the feature differences between the options.
A:
Maybe this can be a reason to make the right choice: The Plot to Kill PHP MySQL Extension
" Yes, you read it right. Recently, Phillip Olson sent to the PHP internals mailing list a proposal to kill the original PHP MySQL extension in future PHP versions. "
A: In relation to PHP programming language, MySQL is the old database driver, and MySQLi is the Improved driver. MySQLi takes advantage of the newer features of MySQL 5.
Features of MySQLi taken from php.net site:
*
*Object-oriented interface
*Support for Prepared Statements
*Support for Multiple Statements
*Support for Transactions
*Enhanced debugging capabilities
*Embedded server support
A: "It depends."
For example, PHP MySQL vs MySQLi Database Access Metrics and the subsequent comments point out arguments both ways.
If you have a mature database and codebase, do some testing and see what works in your system. If not, stop worrying about premature optimization.
A: Unless milliseconds matter, don't worry. Surely if you have no need for the extra functionality provided by mysqli then just stick with the tried and tested mysql.
A: <?php
$start = microtime(true); // pass true to get a float; the default string form can't be subtracted reliably
$c = new mysqli('localhost', 'username', 'userpass', 'username_dbname');
$c -> select_db('username_dbname');
$q = $c -> query("SELECT * FROM example");
while ($r = $q -> fetch_array(MYSQLI_ASSOC))
{
echo $r['col1'] . "<br/>\n";
}
$me = $c -> query("SELECT col1 FROM example WHERE id='11'") -> fetch_array(MYSQLI_ASSOC);
echo $me['col1'];
echo (microtime(true) - $start);
?>
Why when using mysqli oop is there a slight speed increase from using this script with mysql or mysqli procedural style? When testing the above script I get .0009 seconds consistantly more than when using the other 2. When using mysql or mysqli procedural, I loaded the scripts 20x in each different style and the two were always above .001. I load the above script 20x and I get below .001 5x.
A: MySQLi has two basic advantages over MySQL. First, prepared statements are a great way to avoid SQL injection attacks. Secondly, MySQL (or MariaDB) will do its best to optimize prepared statements, and thus you have the potential for speed optimizations there. Speed increases from making the database happy will vastly outweigh the tiny difference between MySQL and MySQLi.
If you are feeding in statements you mangle together yourself like SELECT * FROM users WHERE ID=$user_id the database will treat this as a unique statement with each new value of $user_id. But a prepared statement SELECT * FROM users WHERE ID=? stands a much better chance of having some optimizations/caching performed by the database.
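A minimal sketch of the prepared-statement form (connection details, table, and column names are illustrative):
<?php
$db = new mysqli('localhost', 'user', 'pass', 'dbname');

$user_id = 42;
$stmt = $db->prepare('SELECT name FROM users WHERE ID = ?');
$stmt->bind_param('i', $user_id);  // 'i' marks an integer parameter
$stmt->execute();
$stmt->bind_result($name);

while ($stmt->fetch()) {
    echo $name, "\n";
}
$stmt->close();
?>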
But comparisons are fairly moot as MySQL is now officially deprecated. From the horse's mouth:
Deprecated features in PHP 5.5.x
ext/mysql deprecation
The original MySQL extension is now deprecated, and will generate E_DEPRECATED errors when connecting to a database. Instead, use the MySQLi or PDO_MySQL extensions.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "56"
} |
Q: Implementing Mozilla's toSource() method in Internet Explorer Has anyone implemented Mozilla's Object.toSource() method for Internet Explorer and other non-Gecko browsers? I'm looking for a lightweight way to serialize simple objects into strings.
A: Consider the following: (when using FireFox 3.6)
javascript:
x=function(){alert('caveat compter')};
alert(['JSON:\t',JSON.stringify(x),'\n\ntoSource():\t',x.toSource()].join(''));
which displays:
JSON:
toSource(): (function () {alert("caveat compter");})
or even:
javascript:
x=[];x[3]=x;
alert('toSource():\t'+x.toSource());
alert('JSON can not handle this at all and goes "infinite".');
alert('JSON:\n'+JSON.stringify(x));
which displays:
toSource(): #1=[, , , #1#]
and the "going 'infinite'" message whence follows JSON's stackoverflow recursive digression.
The examples emphasize the subtleties of expression explicitly excluded from JSON representation that are rendered by toSource().
It is not easy to compose a program to replicate the same results, for ALL cases, as the Gecko toSource() primitive, which is exceptionally powerful.
Below are a few of the 'moving targets' that a program duplicating toSource() functionality MUST handle successfully:
javascript:
function render(title,src){ (function(objRA){
alert([ title, src,
'\ntoSource():',objRA.toSource(),
'\nJSON:',JSON.stringify(objRA) ].join('\n'));
})(eval(src));
}
render('Simple Raw Object source code:',
'[new Array, new Object, new Number, new String, ' +
'new Boolean, new Date, new RegExp, new Function]' );
render( 'Literal Instances source code:',
'[ [], 1, true, {}, "", /./, new Date(), function(){} ]' );
render( 'some predefined entities:',
'[JSON, Math, null, Infinity, NaN, ' +
'void(0), Function, Array, Object, undefined]' );
which displays:
Simple Raw Object source code:
[new Array, new Object, new Number, new String,
new Boolean, new Date, new RegExp, new Function]
toSource():
[[], {}, (new Number(0)), (new String("")),
(new Boolean(false)), (new Date(1302637995772)), /(?:)/,
(function anonymous() {})]
JSON:
[[],{},0,"",false,"2011-04-12T19:53:15.772Z",{},null]
and then displays:
Literal Instances source code:
[ [], 1, true, {}, "", /./, new Date(), function(){} ]
toSource():
[[], 1, true, {}, "", /./, (new Date(1302638514097)), (function () {})]
JSON:
[[],1,true,{},"",{},"2011-04-12T20:01:54.097Z",null]
and lastly:
some predefined entities:
[JSON, Math, null, Infinity, NaN, void(0),
Function, Array, Object, undefined]
toSource():
[JSON, Math, null, Infinity, NaN, (void 0),
function Function() {[native code]}, function Array() {[native code]},
function Object() {[native code]}, (void 0)]
JSON:
[{},{},null,null,null,null,null,null,null,null]
The previous analysis is significant if the translations are 'to be used' or less stringent if the need is for simple benign human consumption to view an object's internals. A primary JSON feature, as a representation, is the transfer of some structured information 'to be used' between environments.
The quality of a toSource() function is a factor in the denotational semantics of a programme influencing, but not limited to:
round trip computations, least fixed point properties, and inverse functions.
*
*Does repetition of code conversion quiesce to a static state?
*Does obj.toSource() == eval(eval(eval(obj.toSource()).toSource()).toSource()).toSource()?
*Does it make sense to consider whether obj == eval(obj.toSource())?
*Does undoing a conversion result in, not just a similar object, but an IDENTICAL one? This is a loaded question with profound implications when cloning an operational object.
and many, many more ...
Note that the above questions take on added significance when obj contains an executed code object, such as (new Function ... )()!
A: If matching the exact serialization format of Firefox is not your aim, you could use one of the JavaScript JSON serialization/deserialization libraries listed at http://json.org. Using a standard scheme like JSON may be better than mimicking the proprietary Gecko format.
A: If you need to serialise objects with circular references you can use the cycle.js extension to the JSON object by Douglas Crockford available at https://github.com/douglascrockford/JSON-js. This works very like toSource(), although it won't serialise functions (but could probably be adapted to using a function's toString method).
A: You could do something like this:
Object.prototype.getSource = function() {
var output = [], temp;
for (var i in this) {
if (this.hasOwnProperty(i)) {
temp = i + ":";
switch (typeof this[i]) {
case "object" :
temp += (this[i] === null) ? "null" : this[i].getSource(); // guard: typeof null is "object"
break;
case "string" :
temp += "\"" + this[i] + "\""; // add in some code to escape quotes
break;
default :
temp += this[i];
}
output.push(temp);
}
}
return "{" + output.join() + "}";
}
A: In order to take this a little further: when you send something - to work on - a receiver must get it and be able to work on it. So this next bit of code will do the trick - adapted from the previous answer by Eliran Malka.
// SENDER IS WRAPPING OBJECT TO BE SENT AS STRING
// object to serialize
var s1 = function (str) {
return {
n: 8,
o: null,
b: true,
s: 'text',
a: ['a', 'b', 'c'],
f: function () {
alert(str)
}
}
};
// test
s1("this function call works!").f();
// serialized object; for newbies: object is now a string and can be sent ;)
var code = s1.toString();
// RECEIVER KNOWS A WRAPPED OBJECT IS COMING IN
// you have to assign your wrapped object to somevar
eval('var s2 = ' + code);
// and then you can test somevar again
s2("this also works!").f();
Be aware of the use of eval. If you own all code being transferred: feel free to use it (although it can also have disadvantages). If you don't know where the source is coming from: it's a no-no.
A: See also JavaScript data formatting/pretty printer. I think the routine exports in valid JS format, so it can be eval'd to get it back.
[EDIT] Actually, no! It is OK for a quick dump but not for real serialization.
I improved it, result below:
function SerializeObject(obj, indentValue)
{
var hexDigits = "0123456789ABCDEF";
function ToHex(d)
{
return hexDigits.charAt(d >> 4) + hexDigits.charAt(d & 0x0F);
}
function Escape(string)
{
return string.replace(/[\x00-\x1F'\\]/g,
function (x)
{
if (x == "'" || x == "\\") return "\\" + x;
return "\\x" + ToHex(String.charCodeAt(x, 0));
})
}
var indent;
if (indentValue == null)
{
indentValue = "";
indent = ""; // or " "
}
else
{
indent = "\n";
}
return GetObject(obj, indent).replace(/,$/, "");
function GetObject(obj, indent)
{
if (typeof obj == 'string')
{
return "'" + Escape(obj) + "',";
}
if (obj instanceof Array)
{
result = indent + "[";
for (var i = 0; i < obj.length; i++)
{
result += indent + indentValue +
GetObject(obj[i], indent + indentValue);
}
result += indent + "],";
return result;
}
var result = "";
if (typeof obj == 'object')
{
result += indent + "{";
for (var property in obj)
{
result += indent + indentValue + "'" +
Escape(property) + "' : " +
GetObject(obj[property], indent + indentValue);
}
result += indent + "},";
}
else
{
result += obj + ",";
}
return result.replace(/,(\n?\s*)([\]}])/g, "$1$2");
}
}
indentValue can be null, "", " ", "\t" or whatever. If null, no indentation, output a rather compact result (could use less spaces...).
I could use an array to stack the results then join them, but unless you have giant objects, string concatenation should be good enough...
Also doesn't handle cyclic references...
A: You don't have to use toSource(); wrap the code to be serialized in a function that returns the JSON struct, and use function#toString() instead:
var serialized = function () {
return {
n: 8,
o: null,
b: true,
s: 'text',
a: ['a', 'b', 'c'],
f: function () {
alert('!')
}
}
};
serialized.toString();
See a live demo on jsFiddle.
A: Nobody has mentioned it yet, so I'll point out there is a polyfill available for Mozilla's Object.toSource at https://github.com/oliver-moran/toSource.js
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Can I do a find/replace in t-sql? I basically have an xml column, and I need to find and replace one tag value in each record.
A: For anything real, I'd go with xpaths, but sometimes you just need a quick and dirty solution:
You can use CAST to turn that xml column into a regular varchar, and then do your normal replace.
UPDATE xmlTable SET xmlCol = REPLACE( CAST( xmlCol as varchar(max) ), '[search]', '[replace]')
That same technique also makes searching XML a snap when you need to just run a quick query to find something, and don't want to deal with xpaths.
SELECT * FROM xmlTable WHERE CAST( xmlCol as varchar(max) ) LIKE '%found it!%'
Edit: Just want to update this a bit, if you get a message along the lines of Conversion of one or more characters from XML to target collation impossible, then you only need to use nvarchar which supports unicode.
CAST( xmlCol as nvarchar(max) )
A: To find a content in an XML column, look into the exist() method, as described in MSDN here.
SELECT * FROM Table
WHERE XMLColumn.exist('/Root/MyElement') = 1
...to replace, use the modify() method, as described here.
SET XMLColumn.modify('
replace value of (/Root/MyElement/text())[1]
with "new value"
')
..all assuming SqlServer 2005 or 2008. This is based on XPath, which you'll need to know.
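Putting the two methods together, a sketch that rewrites one tag value in every matching row (table, column, and element names are illustrative):
UPDATE MyTable
SET XmlCol.modify('
    replace value of (/Root/MyElement/text())[1]
    with "new value"
')
WHERE XmlCol.exist('/Root/MyElement') = 1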
A: update my_table
set xml_column = replace(cast(xml_column as nvarchar(max)), 'old value', 'new value')
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: Portability of #warning preprocessor directive I know that the #warning directive is not standard C/C++, but several compilers support it, including gcc/g++. But for those that don't support it, will they silently ignore it or will it result in a compile failure? In other words, can I safely use it in my project without breaking the build for compilers that don't support it?
A: It should be noted that MSVC uses the syntax:
#pragma message ( "your warning text here" )
The usual #warning syntax generates a fatal error
C1021: invalid preprocessor command 'warning'
so it is not portable to those compilers.
A: It is likely that if a compiler doesn't support #warning, then it will issue an error. Unlike #pragma, there is no recommendation that the preprocessor ignore directives it doesn't understand.
Having said that, I've used compilers on various different (reasonably common) platforms and they have all supported #warning.
A: You are likely to get at least an unrecognized directive warning from compilers that don't recognize #warning, even if the code block is not included in your compilation. That might or might not be treated as an error - the compiler could legitimately treat it as an error, but many would be more lax.
Are you aware of (can you name) a compiler other than GCC/G++ that provides #warning?
[Edited: Sun Solaris 10 (Sparc) and the Studio 11 C/C++ compilers both accept #warning.]
A: When switching from mingw to visual studio, I added such lines to my global config header. (include it in stdafx.h)
#ifdef __GNUC__
//from https://gcc.gnu.org/onlinedocs/gcc/Diagnostic-Pragmas.html
//Instead of put such pragma in code:
//#pragma GCC diagnostic ignored "-Wformat"
//use:
//PRAGMA_GCC(diagnostic ignored "-Wformat")
#define DO_PRAGMA(x) _Pragma (#x)
#define PRAGMA_GCC(x) DO_PRAGMA(GCC #x)
#define PRAGMA_MESSAGE(x) DO_PRAGMA(message #x)
#define PRAGMA_WARNING(x) DO_PRAGMA(warning #x)
#endif //__GNUC__
#ifdef _MSC_VER
/*
#define PRAGMA_OPTIMIZE_OFF __pragma(optimize("", off))
// These two lines are equivalent
#pragma optimize("", off)
PRAGMA_OPTIMIZE_OFF
*/
#define PRAGMA_GCC(x)
// https://support2.microsoft.com/kb/155196?wa=wsignin1.0
#define __STR2__(x) #x
#define __STR1__(x) __STR2__(x)
#define __PRAGMA_LOC__ __FILE__ "("__STR1__(__LINE__)") "
#define PRAGMA_WARNING(x) __pragma(message(__PRAGMA_LOC__ ": warning: " #x))
#define PRAGMA_MESSAGE(x) __pragma(message(__PRAGMA_LOC__ ": message : " #x))
#endif
//#pragma message "message quoted"
//#pragma message message unquoted
//#warning warning unquoted
//#warning "warning quoted"
PRAGMA_MESSAGE(PRAGMA_MESSAGE unquoted)
PRAGMA_MESSAGE("PRAGMA_MESSAGE quoted")
#warning "#pragma warning quoted"
PRAGMA_WARNING(PRAGMA_WARNING unquoted)
PRAGMA_WARNING("PRAGMA_WARNING quoted")
Now I use PRAGMA_WARNING(this needs to be fixed)
Sadly there is no #pragma warning in gcc, so it warns about an unknown pragma.
I doubt that gcc will add #pragma warning before Microsoft adds #warning.
A: I had this problem once with a compiler for an Atmel processor. And it did generate preprocessor errors due to the unknown #warning token.
Unfortunately the solution seemed to be to convert the whole source tree to use the #pragma equivalent and accept that the build behavior was going to differ if using gcc.
A: Actually most compilers that I know about ignore unknown #pragma directives, and output a warning message - so in the worst case, you'll still get a warning.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "63"
} |
Q: Automatically opening a file using Windows shell script I have a Windows shell script that opens an application. I'd like to modify it to make it open a file automatically upon opening the application.
I know it uses VBscript but I'm not familiar with the language; all the tutorials I found just talked about using VBS for web pages, not for Windows scripts. I know the syntax is different because I get error messages.
The best "solution" I found was to simply add the file path at the end of the run statement using the "&" symbol, but Windows pops up an error saying the file couldn't be found. Am I missing something?
A: You need to quote the filename so that any spaces in the path don't cause problems.
Instead of just using & filename to append the filename, use: & Chr(34) & filename & Chr(34)
This behaviour will also rely on the application accepting a file to open on the command line, which, whilst common, isn't mandatory. An alternative approach would be to execute the file directly via ShellExecute on the Shell.Application object. This is equivalent to double-clicking the file in Explorer and should launch the application registered to handle that file type.
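A minimal sketch of both approaches (the paths are illustrative):
Dim sh, app, doc
doc = "C:\My Docs\report.txt"

' Launch a specific application, quoting the file name so spaces survive
Set sh = CreateObject("WScript.Shell")
sh.Run Chr(34) & "C:\Program Files\SomeApp\app.exe" & Chr(34) & " " & Chr(34) & doc & Chr(34)

' Or let the registered handler open the file, like double-clicking in Explorer
Set app = CreateObject("Shell.Application")
app.ShellExecute doc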
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Passing Command Line Arguments to internet explorer via VB I've got an app that my client wants to open a kiosk window to ie on startup that goes to their corporate internet. Vb isn't my thing but they wanted it integrated into their current program and I figured it would be easy so I've got
Shell ("explorer.exe http://www.corporateintranet.com")
and command line thing that needs to be passed is -k
Can't figure out where in the hell to drop this to make it work. Thanks in advance! :)
A: If you would like to use -k, you will probably want to call iexplore.exe instead of explorer.exe.
A: This worked for me, not the most elegant but it'll do:
Shell ("C:\Program Files\Internet Explorer\iexplore.exe -k http://www.corporateintranet.com")
A: You have it right now but I think you are missing the closing quote after iexplore.exe
You may also want to take out the [space]-k, set the zoom level to what will work for you in kiosk mode and then put the [space]-k back in. I am guessing there is a parameter or argument as they call it to pass the opening zoom level to iexplore but don't know how to do that yet.
A: It's a bit late. But for whoever comes to this topic in the future, here is my suggestion: use the ShellExecute Function from the Shell32.dll
Example:
Call ShellExecute(Application.hwnd, "open", "http://www.corporateintranet.com", vbNullString, vbNullString, SW_SHOWNORMAL)
Here is the declaration to put in a module:
Public Declare Function ShellExecute Lib "shell32.dll" Alias "ShellExecuteA" (ByVal hwnd As Long, ByVal lpOperation As String, ByVal lpFile As String, ByVal lpParameters As String, ByVal lpDirectory As String, ByVal nShowCmd As Long) As Long
Public Const SW_SHOW = 5
Public Const SW_SHOWDEFAULT = 10
Public Const SW_SHOWNORMAL = 1
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: User32 API calls in .NET I'm currently planning out a project involving creating a shell replacement for Windows (based on Blackbox, bblean specifically). However I wish to leverage the power of .NET to do so. Many of the API calls I'll need are housed within the User32 library. I can of course use P/Invoke and create a static class to handle this for me.
However, a lot of this functionality is already available in the .NET framework, specifically in the System.Management namespace for dealing with processes and active windows, etc. Some of it seems to be missing, for example the SetForegroundWindow functions.
Are you aware of anything built into the .NET framework that already provides this functionality, or will I need to take the P/Invoke route?
A: I was recently experimenting with creating my own windowing system, really just create new borders around all the windows, but it required a lot of functions from the Win32 API that I add to P/Invoke in.
But there is a ready-to-go library in .NET that wraps most of the Win32 API already, found here
A: Unfortunately, for a lot of things you will have to go the P/Invoke route. Fortunately, there's pinvoke.net with definitions for a lot of Win32 APIs.
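For the SetForegroundWindow example mentioned in the question, a minimal P/Invoke sketch:
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class NativeMethods
{
    [DllImport("user32.dll")]
    [return: MarshalAs(UnmanagedType.Bool)]
    public static extern bool SetForegroundWindow(IntPtr hWnd);
}

class Demo
{
    static void Main()
    {
        // Bring the first running Notepad window to the front, if any
        foreach (Process p in Process.GetProcessesByName("notepad"))
        {
            if (p.MainWindowHandle != IntPtr.Zero)
            {
                NativeMethods.SetForegroundWindow(p.MainWindowHandle);
                break;
            }
        }
    }
}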
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What do you use to create flowcharts? I'm curious what tools people have found useful for building flowcharts. Obviously MS Visio and OmniGraffle come to mind but they both feel so bloated and also tend to emphasize the document formatting/printing side and less on helping to organize the actual logic. Is there anything else out there that fellow developers would recommend?
I'm hoping to find something fairly simple that would let me throw together flowcharts on the fly when I'm working through complex logic. I don't care about formatting or fonts or the like, just something that would help me keep my logic organized as I work through it. Even something that would arrange the chart itself and simply allow me to specify where to branch and what to check, etc.
Any OS would be fine, though I personally lean towards OS X apps as this has recently been my primary work environment.
A: take a look at graphviz.
Example:
digraph finite_state_machine {
rankdir=LR;
size="8,5"
node [shape = doublecircle]; LR_0 LR_3 LR_4 LR_8;
node [shape = circle];
LR_0 -> LR_2 [ label = "SS(B)" ];
LR_0 -> LR_1 [ label = "SS(S)" ];
LR_1 -> LR_3 [ label = "S($end)" ];
LR_2 -> LR_6 [ label = "SS(b)" ];
LR_2 -> LR_5 [ label = "SS(a)" ];
LR_2 -> LR_4 [ label = "S(A)" ];
LR_5 -> LR_7 [ label = "S(b)" ];
LR_5 -> LR_5 [ label = "S(a)" ];
LR_6 -> LR_6 [ label = "S(b)" ];
LR_6 -> LR_5 [ label = "S(a)" ];
LR_7 -> LR_8 [ label = "S(b)" ];
LR_7 -> LR_5 [ label = "S(a)" ];
LR_8 -> LR_6 [ label = "S(b)" ];
LR_8 -> LR_5 [ label = "S(a)" ];
}
produces a finite-state-machine diagram (image omitted; source: graphviz.org).
it is particularly well suited to be generated from programs.
A: How about paper and pencil? Or a whiteboard?
Sometimes the ease and tactile feedback of physical objects is the most appropriate.
A: Though you list it as bloated I nonetheless use OmniGraffle.
For quick flowcharting a series of boxes with lines to magnets in the boxes is sufficient, but the rest of the formatting options are good to have later. I find that any flowchart I take the time to draw generally ends up in a document somewhere. Even when I'm just trying to understand some difficult code, I end up with a one page document which attempts to explain that code for the next poor schmuck who has to dig in.
A: I use Dia on Linux. It's quite lightweight and is simple to use, but it doesn't automatically position elements, and I've found the interface to be a bit inhibitive at times.
There's an OS X port at dia.darwinports.com, although I've not used it.
A: Open Office's Draw is pretty good too. I've used it to create everything from simple flow charts to complex genealogical trees.
A: As for OmniGraffle, is its free version good enough? That is, after the 14-day trial, is it still a good choice?
I am attempting to use yEd.
A: I don't do a lot of complex flowcharts but I when I am flowcharting or putting together a data flow I tend to use Powerpoint. It is simple enough and I know it well enough.
A: I use Microsoft Visio 2003, a tad bit of overkill with all its extra junk I don't need, but I like its simple UI.
A: I generally don't do flow charts, but do do state diagrams.
I do a high-level flow on a whiteboard, then lower levels on A3 paper. Once it is all working correctly, I create it in Visio, which becomes part of the release documentation.
A: I do not use it for programming related tasks but I don't see why it wouldn't work. Mindjet Mindmanager is a great tool for creative thought mapping. (It can be pretty expensive however)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: RegEx: Grabbing values between quotation marks I have a value like this:
"Foo Bar" "Another Value" something else
What regex will return the values enclosed in the quotation marks (e.g. Foo Bar and Another Value)?
A: The pattern (["'])(?:(?=(\\?))\2.)*?\1 above does the job but I am concerned about its performance (it's not bad but could be better). Mine below is ~20% faster.
The pattern "(.*?)" is just incomplete. My advice for everyone reading this is just DON'T USE IT!!!
For instance it cannot capture many strings (if needed I can provide an exhaustive test-case) like the one below:
$string = 'How are you? I\'m fine, thank you';
The rest of them are just as "good" as the one above.
If you really care both about performance and precision then start with the one below:
/(['"])((\\\1|.)*?)\1/gm
In my tests it covered every string I met but if you find something that doesn't work I would gladly update it for you.
Check my pattern in an online regex tester.
A: This version
*
*accounts for escaped quotes
*controls backtracking
/(["'])((?:(?!\1)[^\\]|(?:\\\\)*\\[^\\])*)\1/
A: MORE ANSWERS! Here is the solution i used
\"([^\"]*?icon[^\"]*?)\"
TLDR;
replace the word icon with what your looking for in said quotes and voila!
The way this works is it looks for the keyword and doesn't care what else in between the quotes.
EG:
id="fb-icon"
id="icon-close"
id="large-icon-close"
the regex looks for a quote mark "
then it looks for any possible group of letters thats not "
until it finds icon
and any possible group of letters that is not "
it then looks for a closing "
A: I liked Axeman's more expansive version, but had some trouble with it (it didn't match for example
foo "string \\ string" bar
or
foo "string1" bar "string2"
correctly, so I tried to fix it:
# opening quote
(["'])
(
# repeat (non-greedy, so we don't span multiple strings)
(?:
# anything, except not the opening quote, and not
# a backslash, which are handled separately.
(?!\1)[^\\]
|
# consume any double backslash (unnecessary?)
(?:\\\\)*
|
# Allow backslash to escape characters
\\.
)*?
)
# same character as opening quote
\1
A: string = "\" foo bar\" \"loloo\""
print re.findall(r'"(.*?)"',string)
just try this out, works like a charm!!!
\ indicates an escape character
A: In general, the following regular expression fragment is what you are looking for:
"(.*?)"
This uses the non-greedy *? operator to capture everything up to but not including the next double quote. Then, you use a language-specific mechanism to extract the matched text.
In Python, you could do:
>>> import re
>>> string = '"Foo Bar" "Another Value"'
>>> print re.findall(r'"(.*?)"', string)
['Foo Bar', 'Another Value']
A: I've been using the following with great success:
(["'])(?:(?=(\\?))\2.)*?\1
It supports nested quotes as well.
For those who want a deeper explanation of how this works, here's an explanation from user ephemient:
([""']) match a quote; ((?=(\\?))\2.) if backslash exists, gobble it, and whether or not that happens, match a character; *? match many times (non-greedily, as to not eat the closing quote); \1 match the same quote that was use for opening.
A: My solution to this is below
(["']).*\1(?![^\s])
Demo link : https://regex101.com/r/jlhQhV/1
Explanation:
(["'])-> Matches to either ' or " and store it in the backreference \1 once the match found
.* -> Greedy approach to continue matching everything zero or more times until it encounters ' or " at end of the string. After encountering such state, regex engine backtrack to previous matching character and here regex is over and will move to next regex.
\1 -> Matches to the character or string that have been matched earlier with the first capture group.
(?![^\s]) -> Negative lookahead to ensure there should not any non space character after the previous match
A: Lets see two efficient ways that deal with escaped quotes. These patterns are not designed to be concise nor aesthetic, but to be efficient.
These ways use the first character discrimination to quickly find quotes in the string without the cost of an alternation. (The idea is to discard quickly characters that are not quotes without to test the two branches of the alternation.)
Content between quotes is described with an unrolled loop (instead of a repeated alternation) to be more efficient too: [^"\\]*(?:\\.[^"\\]*)*
Obviously to deal with strings that haven't balanced quotes, you can use possessive quantifiers instead: [^"\\]*+(?:\\.[^"\\]*)*+ or a workaround to emulate them, to prevent too much backtracking. You can choose too that a quoted part can be an opening quote until the next (non-escaped) quote or the end of the string. In this case there is no need to use possessive quantifiers, you only need to make the last quote optional.
Notice: sometimes quotes are not escaped with a backslash but by repeating the quote. In this case the content subpattern looks like this: [^"]*(?:""[^"]*)*
The patterns avoid the use of a capture group and a backreference (I mean something like (["']).....\1) and use a simple alternation but with ["'] at the beginning, in factor.
Perl like:
["'](?:(?<=")[^"\\]*(?s:\\.[^"\\]*)*"|(?<=')[^'\\]*(?s:\\.[^'\\]*)*')
(note that (?s:...) is a syntactic sugar to switch on the dotall/singleline mode inside the non-capturing group. If this syntax is not supported you can easily switch this mode on for all the pattern or replace the dot with [\s\S])
(The way this pattern is written is totally "hand-driven" and doesn't take account of eventual engine internal optimizations)
ECMA script:
(?=["'])(?:"[^"\\]*(?:\\[\s\S][^"\\]*)*"|'[^'\\]*(?:\\[\s\S][^'\\]*)*')
POSIX extended:
"[^"\\]*(\\(.|\n)[^"\\]*)*"|'[^'\\]*(\\(.|\n)[^'\\]*)*'
or simply:
"([^"\\]|\\.|\\\n)*"|'([^'\\]|\\.|\\\n)*'
A: Peculiarly, none of these answers produce a regex where the returned match is the text inside the quotes, which is what was asked for. MA-Madden tries but only gets the inside match as a captured group rather than the whole match. One way to actually do it would be:
(?<=(["']\b))(?:(?=(\\?))\2.)*?(?=\1)
Examples for this can be seen in this demo https://regex101.com/r/Hbj8aP/1
The key here is the positive lookbehind at the start (the ?<=) and the positive lookahead at the end (the ?=). The lookbehind looks behind the current character to check for a quote; if one is found, matching starts from there, and the lookahead then checks the character ahead for a quote and stops on that character. The lookbehind group (the ["']) is wrapped in brackets to create a group for whichever quote was found at the start; this is then used in the final lookahead (?=\1) to make sure it only stops when it finds the corresponding quote.
The only other complication is that because the lookahead doesn't actually consume the end quote, it will be found again by the starting lookbehind which causes text between ending and starting quotes on the same line to be matched. Putting a word boundary on the opening quote (["']\b) helps with this, though ideally I'd like to move past the lookahead but I don't think that is possible. The bit allowing escaped characters in the middle I've taken directly from Adam's answer.
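For what it's worth, this pattern also appears to run unchanged under Python's re, whose engine supports the same lookbehind/lookahead constructs (a quick illustrative check; the sample text is mine):
import re

pat = re.compile(r'''(?<=(["']\b))(?:(?=(\\?))\2.)*?(?=\1)''')
text = r'''print "Hello \"World\"" and 'bye' '''
print([m.group(0) for m in pat.finditer(text)])
# ['Hello \\"World\\"', 'bye']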
A: Unlike Adam's answer, I have a simple but working one:
(["'])(?:\\\1|.)*?\1
And just add parenthesis if you want to get content in quotes like this:
(["'])((?:\\\1|.)*?)\1
Then $1 matches quote char and $2 matches content string.
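A tiny illustrative check in Python (the sample text is mine):
import re

m = re.search(r'''(["'])((?:\\\1|.)*?)\1''', r'He said "hi \"there\"" ok')
print(m.group(1))  # "
print(m.group(2))  # hi \"there\"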
A: All the answers above are good... except that they DO NOT support all Unicode characters in ECMAScript (JavaScript)!
If you are a Node user, you might want this modified version of the accepted answer, which supports all Unicode characters:
/(?<=((?<=[\s,.:;"']|^)["']))(?:(?=(\\?))\2.)*?(?=\1)/gmu
Try here.
A: The RegEx of the accepted answer returns the values including their surrounding quotation marks: "Foo Bar" and "Another Value" as matches.
Here are RegEx which return only the values between quotation marks (as the questioner was asking for):
Double quotes only (use value of capture group #1):
"(.*?[^\\])"
Single quotes only (use value of capture group #1):
'(.*?[^\\])'
Both (use value of capture group #2):
(["'])(.*?[^\\])\1
All support escaped and nested quotes.
A: echo 'junk "Foo Bar" not empty one "" this "but this" and this neither' | sed 's/[^\"]*\"\([^\"]*\)\"[^\"]*/>\1</g'
This will result in: >Foo Bar<><>but this<
Here I show the result strings between ><'s for clarity. Using the non-greedy version, this sed command first throws out the junk before and after the quotes, and then replaces that with the part between the quotes, surrounded by ><'s.
A: From Greg H. I was able to create this regex to suit my needs.
I needed to match a specific value that was qualified by being inside quotes. It must be a full match; no partial match should trigger a hit,
e.g. "test" could not match for "test2".
reg = r"""(['"])(%s)\1"""
if re.search(reg%(needle), haystack, re.IGNORECASE):
print "winning..."
Hunter
A: If you're trying to find strings that only have a certain suffix, such as dot syntax, you can try this:
\"([^\"]*?[^\"]*?)\".localized
Where .localized is the suffix.
Example:
print("this is something I need to return".localized + "so is this".localized + "but this is not")
It will capture "this is something I need to return".localized and "so is this".localized but not "but this is not".
A: A supplementary answer for the subset of Microsoft VBA coders: one uses the library Microsoft VBScript Regular Expressions 5.5, and this gives the following code
Sub TestRegularExpression()
Dim oRE As VBScript_RegExp_55.RegExp '* Tools->References: Microsoft VBScript Regular Expressions 5.5
Set oRE = New VBScript_RegExp_55.RegExp
oRE.Pattern = """([^""]*)"""
oRE.Global = True
Dim sTest As String
sTest = """Foo Bar"" ""Another Value"" something else"
Debug.Assert oRE.test(sTest)
Dim oMatchCol As VBScript_RegExp_55.MatchCollection
Set oMatchCol = oRE.Execute(sTest)
Debug.Assert oMatchCol.Count = 2
Dim oMatch As Match
For Each oMatch In oMatchCol
Debug.Print oMatch.SubMatches(0)
Next oMatch
End Sub
A: I liked Eugen Mihailescu's solution to match the content between quotes while allowing escaped quotes. However, I discovered some problems with escaping and came up with the following regex to fix them:
(['"])(?:(?!\1|\\).|\\.)*\1
It does the trick and is still pretty simple and easy to maintain.
Demo (with some more test-cases; feel free to use it and expand on it).
PS: If you just want the content between quotes in the full match ($0), and are not afraid of the performance penalty use:
(?<=(['"])\b)(?:(?!\1|\\).|\\.)*(?=\1)
Unfortunately, without the quotes as anchors, I had to add a boundary \b which does not play well with spaces and non-word boundary characters after the starting quote.
Alternatively, modify the initial version by simply adding a group and extract the string from $2:
(['"])((?:(?!\1|\\).|\\.)*)\1
PPS: If your focus is solely on efficiency, go with Casimir et Hippolyte's solution; it's a good one.
A: A very late answer, but I'd like to answer anyway:
(\"[\w\s]+\")
http://regex101.com/r/cB0kB8/1
A: I would go for:
"([^"]*)"
The [^"] is regex for any character except '"'
The reason I use this over the non greedy many operator is that I have to keep looking that up just to make sure I get it correct.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171480",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "341"
} |
Q: Explicit Type Conversion in Scala Let's say I have the following code:
abstract class Animal
case class Dog(name:String) extends Animal
var foo:Animal = Dog("rover")
var bar:Dog = foo //ERROR!
How do I fix the last line of this code? Basically, I just want to do what in a C-like language would be done as:
var bar:Dog = (Dog) foo
A: I figured this out myself. There are two solutions:
1) Do the explicit cast:
var bar:Dog = foo.asInstanceOf[Dog]
2) Use pattern matching to cast it for you; this also catches errors:
var bar:Dog = foo match {
case x:Dog => x
case _ => {
// Error handling code here
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "78"
} |
Q: Getting 256 colors out of ruby-ncurses I've got 256 colors working great in my terminal (test scripts here), but it stops working when I use ncurses (via Ruby-ncurses). Printing the escape sequences given on that page works fine, but when I initialize ncurses 'puts' stops working and I can't output the colors with any of the various ncurses color changing/string output functions I've found. What gives?
A: I am not sure if this is the whole story, but make sure your terminal capabilities do indeed provide the 256-color description.
What is the TERM environment variable value? Try setting it to xterm-256color and rerun it.
ncurses should then get the proper color escape sequences.
You can also test the terminal capabilities and terminal color output with the program we use at SXEmacs development:
http://www.triatlantico.org/tmp/tty-colors.c
Compile with gcc -o tty-colors tty-colors.c -lncurses
EDIT:
Note that just because the scripts found on the net output the 256 colors, that does not mean you are all set.
Curses programs rely on terminfo and termcap and the TERM environment variable to find out how to interact with the terminal.
So in order for a curses app to be able to use the 256 colors one should set the TERM variable to an existing terminal name which supports 256 colors.
The C program above will show you what ncurses thinks about your terminal, not just output the xterm sequences like most scripts do [even the one from X.org]
A: njsf: You were partially right here, and after tinkering a lot more I eventually got it to work. Thanks for your help. The story: XTerm (and rxvt, and Eterm) support 256 colors via escape sequences (what I was seeing) but 'tput colors' will say '8' and ncurses won't be able to get at them, because ncurses is playing nice and attempting to access via terminfo.
For the benefit of anyone with similar pain:
I found I need to install the ncurses-term (Ubuntu) package to get /lib/terminfo/x/xterm-256color and other 256-color terminfo files. Then I set my TERM to xterm-256color and added the line '*customization: -color' to my ~/.Xdefaults, ran 'xrdb -merge ~/.Xdefaults' to load it, and from then on I have proper 256 color support in new xterms.
A: setting
ENV['TERM'] += '-256color' if ENV['TERM'] == 'xterm' # activate 256 colors
works on Ubuntu 10.04+
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171490",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Make and build utilities on CentOS/RHEL? I've been unsuccessfully searching for a way to install the make utility on my CentOS 5.2. I've looked through some RPM repositories and online, to no avail. Installing gcc and gcc-c++ didn't help! The build-essential package is not made for CentOS/RHEL. I have the RPMforge repo enabled in yum.
A: yum groupinstall "Development Tools"
or
yum install gcc gcc-c++ kernel-devel
A: You'll need this if groupinstall doesn't work:
yum install -y gcc-c++ make
A: yum install make
also works.
A: I just double checked and CentOS 5.2 already includes make!
I found it also in one of the online mirrors, if it is easier for you:
http://centos.cogentcloud.com/5.2/os/i386/CentOS/make-3.81-3.el5.i386.rpm
if you installed the 64 bit version:
http://centos.cogentcloud.com/5.2/os/x86_64/CentOS/make-3.81-3.el5.x86_64.rpm
A: This command works for me
yum groupinstall "Development Tools" -y
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
} |
Q: How would I implement a bit map? I wish to implement a 2d bit map class in Python. The class would have the following requirements:
*
*Allow the creation of arbitrarily sized 2D bitmaps, i.e. to create an 8 x 8 bitmap (8 bytes), something like:
bitmap = Bitmap(8,8)
*Provide an API to access the bits in this 2D map as boolean or even integer values, i.e.:
if bitmap[1, 2] or bitmap.get(0, 1)
*Able to retrieve the data as packed binary data. Essentially, each row of the bitmap would be concatenated and returned as binary data. It may be padded to the nearest byte or something similar.
bitmap.data()
*Be able to create new maps from the binary data retrieved:
new_bitmap = Bitmap(8, 8, bitmap.data())
I know Python is able to perform binary operations, but I'd like some suggestions on how best to use them to implement this class.
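To make the requirements concrete, here is a minimal, purely illustrative pure-Python sketch of the API above (bytearray-backed, one bit per cell, rows padded to whole bytes; all names are placeholders):
class Bitmap:
    def __init__(self, width, height, data=None):
        self.width, self.height = width, height
        self.row_bytes = (width + 7) // 8  # pad each row to a whole byte
        self._buf = bytearray(data) if data is not None else bytearray(self.row_bytes * height)

    def get(self, x, y):
        return bool(self._buf[y * self.row_bytes + x // 8] & (0x80 >> (x % 8)))

    def __getitem__(self, xy):          # supports bitmap[1, 2]
        return self.get(*xy)

    def set(self, x, y, value=True):
        idx, mask = y * self.row_bytes + x // 8, 0x80 >> (x % 8)
        self._buf[idx] = (self._buf[idx] | mask) if value else (self._buf[idx] & ~mask)

    def data(self):
        return bytes(self._buf)

bitmap = Bitmap(8, 8)
bitmap.set(1, 2)
new_bitmap = Bitmap(8, 8, bitmap.data())
assert new_bitmap[1, 2] and not new_bitmap.get(0, 0)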
A: Bit-packing NumPy (SciPy) arrays does what you are looking for.
The example shows a 2x2x3 bit (Boolean) array packed along its last axis into four 8-bit bytes. unpackbits unpacks uint8 arrays into a Boolean output array that you can use in computations.
>>> import numpy as np
>>> a = np.array([[[1,0,1],
...                [0,1,0]],
...               [[1,1,0],
...                [0,0,1]]])
>>> b = np.packbits(a, axis=-1)
>>> b
array([[[160],[64]],[[192],[32]]], dtype=uint8)
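And the reverse direction, as a quick sketch (the count argument for trimming the pad bits only exists in newer NumPy releases; without it you get all 8 bits per byte back):
>>> np.unpackbits(b, axis=-1, count=3)
array([[[1, 0, 1],
        [0, 1, 0]],

       [[1, 1, 0],
        [0, 0, 1]]], dtype=uint8)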
If you need 1-bit pixel images, PIL is the place to look.
A: No need to create this yourself.
Use the very good Python Imaging Library (PIL)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Accessing HttpApplicationState during Session_End In my ASP.NET application using InProc sessions, Session_End calls a static method in another object to do session-specific clean up. This clean up uses a shared database connection that I am storing in application state.
The problem is that I cannot see how to access the application state without passing it (or rather the database connection) as a parameter to the clean up method. Since I am not in a request I have no current HttpContext, and I cannot find any other static method to access the state.
Am I missing something?
UPDATE: It appears that my question needs further clarification, so let me try the following code sample. What I want to be able to do is:
// in Global.asax
void Session_End(object sender, EventArgs e)
{
NeedsCleanup nc = Session["NeedsCleanup"] as NeedsCleanup;
nc.CleanUp();
}
But the problem is that the CleanUp method in turn needs information that is stored in application state. I am already doing the following, but it is exactly what I was hoping to avoid; this is what I meant by "...without passing it... as a parameter to the clean up method" above.
// in Global.asax
void Session_End(object sender, EventArgs e)
{
NeedsCleanup nc = Session["NeedsCleanup"] as NeedsCleanup;
nc.CleanUp(this.Application);
}
I just do not like the idea that Global.asax has to know where the NeedsCleanup object gets its information. That sort of thing makes more sense self-contained within the class.
A: You should be able to access the ApplicationState object using the Application property from inside Session_End.
void Session_End(object sender, EventArgs e)
{
HttpApplicationState state = this.Application;
}
(had to reply in a different answer because I don't have the reputation needed to comment directly)
A: You should be able to access the SessionState object using the Session property from inside Session_End.
void Session_End(object sender, EventArgs e)
{
HttpSessionState session = this.Session;
}
This property and a lot more come from the base class of Global.asax
A: Where are you creating the "NeedsCleanup" instances? If it's in Session_Start, it makes sense that your global class would know how/when to both create and destroy these instances.
I understand you'd like to decouple the cleanup of NeedsCleanup from its caller. Perhaps a cleaner way would be to pass in the "HttpApplication" instance, found both at "HttpContext.Current.ApplicationInstance" and on your Global class via the "this" reference. Alternatively, you could provide any of the aforementioned instances at construction time as well; that would make cleanup less coupled.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171516",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Authenticating in PHP using LDAP through Active Directory I'm looking for a way to authenticate users through LDAP with PHP (with Active Directory being the provider). Ideally, it should be able to run on IIS 7 (adLDAP does it on Apache). Anyone had done anything similar, with success?
*
*Edit: I'd prefer a library/class with code that's ready to go... It'd be silly to reinvent the wheel when someone has already done so.
A: I like the Zend_Ldap Class, you can use only this class in your project, without the Zend Framework.
A: PHP has libraries: http://ca.php.net/ldap
PEAR also has a number of packages: http://pear.php.net/search.php?q=ldap&in=packages&x=0&y=0
I haven't used either, but I was going to at one point and they seemed like they should work.
A: For those looking for a complete example check out http://www.exchangecore.com/blog/how-use-ldap-active-directory-authentication-php/.
I have tested this connecting to both Windows Server 2003 and Windows Server 2008 R2 domain controllers, from a Windows Server 2003 web server (IIS 6) and from a Windows Server 2012 Enterprise server running IIS 8.
A: You would think that simply authenticating a user in Active Directory would be a pretty simple process using LDAP in PHP without the need for a library. But there are a lot of things that can complicate it pretty fast:
*
*You must validate input. An empty username/password would pass otherwise.
*You should ensure the username/password is properly encoded when binding.
*You should be encrypting the connection using TLS.
*Using separate LDAP servers for redundancy in case one is down.
*Getting an informative error message if authentication fails.
It's actually easier in most cases to use a LDAP library supporting the above. I ultimately ended up rolling my own library which handles all the above points: LdapTools (Well, not just for authentication, it can do much more). It can be used like the following:
use LdapTools\Configuration;
use LdapTools\DomainConfiguration;
use LdapTools\LdapManager;
$domain = (new DomainConfiguration('example.com'))
->setUsername('username') # A separate AD service account used by your app
->setPassword('password')
->setServers(['dc1', 'dc2', 'dc3'])
->setUseTls(true);
$config = new Configuration($domain);
$ldap = new LdapManager($config);
if (!$ldap->authenticate($username, $password, $message)) {
echo "Error: $message";
} else {
// Do something...
}
The authenticate call above will:
*
*Validate that neither the username or password is empty.
*Ensure the username/password is properly encoded (UTF-8 by default)
*Try an alternate LDAP server in case one is down.
*Encrypt the authentication request using TLS.
*Provide additional information if it failed (ie. locked/disabled account, etc)
There are other libraries to do this too (Such as Adldap2). However, I felt compelled enough to provide some additional information as the most up-voted answer is actually a security risk to rely on with no input validation done and not using TLS.
A: Importing a whole library seems inefficient when all you need is essentially two lines of code...
$ldap = ldap_connect("ldap.example.com");
if ($bind = ldap_bind($ldap, $_POST['username'], $_POST['password'])) {
// log them in!
} else {
// error message
}
A: I do this simply by passing the user credentials to ldap_bind().
http://php.net/manual/en/function.ldap-bind.php
If the account can bind to LDAP, it's valid; if it can't, it's not. If all you're doing is authentication (not account management), I don't see the need for a library.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "111"
} |
Q: acts_as_tree does not destroy the model's children I have this Task model:
class Task < ActiveRecord::Base
acts_as_tree :order => 'sort_order'
end
And I have this test
class TaskTest < Test::Unit::TestCase
def setup
@root = create_root
end
def test_destroying_a_task_should_destroy_all_of_its_descendants
d1 = create_task(:parent_id => @root.id, :sort_order => 2)
d2 = create_task(:parent_id => d1.id, :sort_order => 3)
d3 = create_task(:parent_id => d2.id, :sort_order => 4)
d4 = create_task(:parent_id => d1.id, :sort_order => 5)
assert_equal 5, Task.count
d1.destroy
assert_equal @root, Task.find(:first)
assert_equal 1, Task.count
end
end
The test is successful: when I destroy d1, it destroys all the descendants of d1. Thus, after the destroy only the root is left.
However, this test is now failing after I have added a before_save callback to the Task. This is the code I added to Task:
before_save :update_descendants_if_necessary
def update_descendants_if_necessary
handle_parent_id_change if self.parent_id_changed?
return true
end
def handle_parent_id_change
self.children.each do |sub_task|
#the code within the loop is deliberately commented out
end
end
When I added this code, assert_equal 1, Task.count fails, with Task.count == 4. I think self.children under handled_parent_id_change is the culprit, because when I comment out the self.children.each do |sub_task| block, the test passes again.
Any ideas?
A: I found the bug. The line
d1 = create_task(:parent_id => @root.id, :sort_order => 2)
creates d1. This calls the before_save callback, which in turn calls self.children. As Orion pointed out, this caches the children of d1.
However, at this point, d1 doesn't have any children yet. So d1's cache of children is empty.
Thus, when I try to destroy d1, the program tries to destroy d1's children. It encounters the cache, finds that it is empty, and as a result doesn't destroy d2, d3, and d4.
I solved this by changing the task creations like this:
@root.children << (d1 = new_task(:sort_order => 2))
@root.save!
This worked so I'm ok with it :) I think it is also possible to fix this by either reloading d1 (d1.reload) or self.children (self.children(true)) although I didn't try any of these solutions.
A: children is a simple has_many association
This means, when you call .children, it will load them from the database (if not already present). It will then cache them.
I was going to say that your second 'test' will actually be looking at the cached values not the real database, but that shouldn't happen as you are just using Task.count rather than d1.children.count. Hrm
Have you looked at the logs? They will show you the SQL which is being executed. You may see a mysql error in there which will tell you what's going on
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Updating a XML file using PHP What is the easiest way to update a single attribute in an XML tag using PHP without rewriting and saving the file? Any way to do this just using regular DOM stuff?
A: If you have PHP5 on your server you could try:
$string = "<?xml version='1.0'?>
<doc>
<title>XML Document</title>
<date timezone=\"GMT+1\">2008-01-01 13:42:53</date>
<message>Daylight savings starting soon!</message>
</doc>";
$xml = simplexml_load_string($string);
// Show current timezone
echo $xml->date['timezone'].'<br>';
// Set a new timezone
$xml->date['timezone'] = 'GMT+10';
echo $xml->date['timezone'];
Note: Watch the whitespace -- the XML needs to be well-formed for SimpleXML to parse correctly.
Alternatives include simplexml_load_file() and simplexml_import_dom().
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I obtain a list of the application domains my application has created? I have a service app that creates AppDomain's during the course of its use for long running tasks. I've been tracking these by storing them in a Hashtable with a unique ID.
After a task is completed the service app then unloads the AppDomain allocated to that task and removes it from the AppDomain Hashtable.
Purely from a sanity checking point of view, is there a way I can query the CLR to see what app domains are still loaded by the creating app domain (i.e. so I can compare the tracking Hashtable against what the CLR actually sees)?
A: AFAIK, you need to keep your own list - like you are already.
A: If you use the unmanaged APIs you may set up a DomainManager that gets called on each AppDomain creation, and you'll find that many pieces are creating their own AppDomains, e.g. WCF. A detailed explanation is in Customizing the Microsoft .NET Framework Common Language Runtime
Another route is using the debug APIs.
A: I think you might also like to check this article - "Working with Application Domains in WPF".
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Proper IE6 HTML element dimensions I'm trying to set the width and height of an element with javascript to cover the entire browser viewport, and I'm successful using document.body.clientHeight but in IE6 it seems that I always get horizontal and vertical scrollbars because the element must be slightly too big.
Now, I really don't want to use browser-specific logic and subtract a pixel or two from each dimension just for IE6. Also, I am not using CSS (width: 100% etc.) for this because I need the pixel amounts.
Does anyone know a better way to fill the viewport with an element in IE6+ (obviously all good browsers, too)?
Edit: Thanks Owen for the suggestion, I'm sure jQuery will work. I should have specified that I need a toolkit-agnostic solution.
A: This may help the cause ...
From http://andylangton.co.uk/articles/javascript/get-viewport-size-javascript/ :
<script type="text/javascript">
<!--
var viewportwidth;
var viewportheight;
// the more standards compliant browsers (mozilla/netscape/opera/IE7) use window.innerWidth and window.innerHeight
if (typeof window.innerWidth != 'undefined')
{
viewportwidth = window.innerWidth,
viewportheight = window.innerHeight
}
// IE6 in standards compliant mode (i.e. with a valid doctype as the first line in the document)
else if (typeof document.documentElement != 'undefined'
&& typeof document.documentElement.clientWidth !=
'undefined' && document.documentElement.clientWidth != 0)
{
viewportwidth = document.documentElement.clientWidth,
viewportheight = document.documentElement.clientHeight
}
// older versions of IE
else
{
viewportwidth = document.getElementsByTagName('body')[0].clientWidth,
viewportheight = document.getElementsByTagName('body')[0].clientHeight
}
document.write('<p>Your viewport width is '+viewportwidth+'x'+viewportheight+'</p>');
//-->
</script>
A: Have you considered using jQuery? It abstracts most of the browser-specific functionality away into a common interface.
var width = $(document).width();
var height = $(document).height();
$('#mySpecialElement').width(width).height(height);
A: Ah ha! I forgot about document.documentElement.clientLeft and document.documentElement.clientTop.
They were 2 in IE and 0 in the good browsers.
So using var WIDTH = ((document.documentElement.clientWidth - document.documentElement.clientLeft) || document.body.clientWidth);
etc. did the trick, similar idea for HEIGHT!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Best way to document AJAX + PHP code? I have always been for documenting code, but when it comes to AJAX + PHP, it's not always easy: the code is really spread out! Logic, data, presentation - you name it - are split and mixed between server-side and client-side code. Sometimes there's also database-side code (stored procedures, views, etc) doing part of the work.
This challenges me to come up with an efficient way to document such code. I usually provide a list of .js files inside .php file as well as list of .php files inside .js file. I also do in-line comments and function descriptions, where I list what function is used by what file and what output is expected. I do similar tasks for database procedures. Maybe there's a better method?
Any ideas or experiences?
Note: This question applies to any client+server-side applications, not just Javascript+PHP.
A: I think your method is pretty good. The only thing is that everything inside the js file is readable by others, and therefore documenting which PHP files are used could lead to a security hole, on the off chance they can get to a file that returns something it shouldn't. Also, although not a big deal, on higher-traffic sites, downloading say 500 bytes of comments can add up.
Both of these are not big, but just thoughts I've had before.
A: I think it's best to take a hierarchical approach.
For api-level documentation like on the function and class level, write inline documentation in the code and generate html documentation out of them using the many documentation tools out there (JSDoc, phpDocumentor, OraDoclet, etc). Bonus points if your doc tools can integrate with your source control tools so you can jump to specific lines of code from your api docs.
Once you have your doc tools in place, start generating the documentation as part of your build process (you have a build process, right?) for each new build and push the documentation to a standard web location.
Once these api docs are online, you can create a wiki for high level documentation such as browser->web->db interactions, user stories, schema diagrams, etc. It's best to write in brief prose or bullet points for high level documentation, linking to api docs and source control when necessary.
A: Serve your javascript (and css) through PHP - you can keep your source files together for easy cross reference and with careful use of headers you can easily handle caching. Doing this also lets you have a nicely formatted comment-heavy source version which you can then condense or obfuscate before sending to the browser.
function OutputJs($Content) {
    ob_start();
    echo $Content;
    $expires = DAY_IN_S; // assumes a DAY_IN_S constant (one day in seconds) is defined elsewhere
    header("Content-Type: application/x-javascript");
    header('Content-Length: ' . ob_get_length());
    header('Cache-Control: max-age='.$expires.', must-revalidate');
    header('Pragma: public');
    header('Expires: '. gmdate('D, d M Y H:i:s', time()+$expires).' GMT');
    ob_end_flush();
}
A: For projects with a lot of javascript, I use a build system (makefiles) with a javascript minimizer. As the jsmin author notes, stripping comments "encourages a more expressive programming style because it eliminates the download cost of clean, literate self-documentation."
The bonus is that jsmin also strips comments from CSS - so you can start commenting freely there as well. (I find that using css classes is crucial for writing clear javascript.)
It's an interesting idea to use PHP to dynamically strip out the code and organize javascript files. Keep in mind that an important optimization for web apps is to reduce HTTP requests so it is often wise to join smaller javascript files together. (I've found that simply concatenating minimized js files into a single file, works great.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Find out which remote branch a local branch is tracking
See also:
How can I see which Git branches are tracking which remote / upstream branch?
How can I find out which remote branch a local branch is tracking?
Do I need to parse git config output, or is there a command that would do this for me?
A: Update: Well, it's been several years since I posted this! For my specific purpose of comparing HEAD to upstream, I now use @{u}, which is a shortcut that refers to the HEAD of the upstream tracking branch. (See https://git-scm.com/docs/gitrevisions#gitrevisions-emltbranchnamegtupstreamemegemmasterupstreamememuem ).
Original answer: I've run across this problem as well. I often use multiple remotes in a single repository, and it's easy to forget which one your current branch is tracking against. And sometimes it's handy to know that, such as when you want to look at your local commits via git log remotename/branchname..HEAD.
All this stuff is stored in git config variables, but you don't have to parse the git config output. If you invoke git config followed by the name of a variable, it will just print the value of that variable, no parsing required. With that in mind, here are some commands to get info about your current branch's tracking setup:
LOCAL_BRANCH=`git name-rev --name-only HEAD`
TRACKING_BRANCH=`git config branch.$LOCAL_BRANCH.merge`
TRACKING_REMOTE=`git config branch.$LOCAL_BRANCH.remote`
REMOTE_URL=`git config remote.$TRACKING_REMOTE.url`
In my case, since I'm only interested in finding out the name of my current remote, I do this:
git config branch.`git name-rev --name-only HEAD`.remote
A: Another method (thanks osse), if you just want to know whether or not it exists:
if git rev-parse @{u} > /dev/null 2>&1
then
printf "has an upstream\n"
else
printf "has no upstream\n"
fi
A: git branch -r -vv
will list all the remote-tracking branches.
A: You can try this:
git remote show origin | grep "branch_name"
branch_name needs to be replaced with your branch name
A: The local branches and their remotes.
git branch -vv
All branches and tracking remotes.
git branch -a -vv
See where the local branches are explicitly configured for push and pull.
git remote show {remote_name}
A: Lists both local and remote branches:
$ git branch -ra
Output:
feature/feature1
feature/feature2
hotfix/hotfix1
* master
remotes/origin/HEAD -> origin/master
remotes/origin/develop
remotes/origin/master
A: Display only current branch info without using grep:
git branch -vv --contains
This is short for:
git branch -vv --contains HEAD
and if your current HEAD's commit id is in other branches, those branches will display also.
A: If you want to find the upstream for any branch (as opposed to just the one you are on), here is a slight modification to @cdunn2001's answer:
git rev-parse --abbrev-ref --symbolic-full-name YOUR_LOCAL_BRANCH_NAME@{upstream}
That will give you the remote branch name for the local branch named YOUR_LOCAL_BRANCH_NAME.
A: Two choices:
% git rev-parse --abbrev-ref --symbolic-full-name @{u}
origin/mainline
or
% git for-each-ref --format='%(upstream:short)' "$(git symbolic-ref -q HEAD)"
origin/mainline
A: Having tried all of the solutions here, I realized none of them were good in all situations:
*
*works on local branches
*works on detached branches
*works under CI
This command gets all names:
git branch -a --contains HEAD --list --format='%(refname:short)'
For my application, I had to filter out the HEAD and master refs, prefer remote refs, and strip off the 'origin/' prefix. Then, if none was found, use the first non-HEAD ref that didn't have a / or a ( in it.
A: git branch -vv | grep 'BRANCH_NAME'
git branch -vv: this will show all local branches along with their upstream branches.
grep 'BRANCH_NAME': this will filter the current branch from the branch list.
A: This will show you the branch you are on:
$ git branch -vv
This will show only the current branch you are on:
$ git for-each-ref --format='%(upstream:short)' $(git symbolic-ref -q HEAD)
for example:
myremote/mybranch
You can find out the URL of the remote that is used by the current branch you are on with:
$ git remote get-url $(git for-each-ref --format='%(upstream:short)' $(git symbolic-ref -q HEAD)|cut -d/ -f1)
for example:
https://github.com/someone/somerepo.git
A: You can use git checkout, i.e. "check out the current branch". This is a no-op with the side effect of showing the tracking information, if it exists, for the current branch.
$ git checkout
Your branch is up-to-date with 'origin/master'.
A: I think git branch -av only tells you what branches you have and which commit they're at, leaving you to infer which remote branches the local branches are tracking.
git remote show origin explicitly tells you which branches are tracking which remote branches. Here's example output from a repository with a single commit and a remote branch called abranch:
$ git branch -av
* abranch d875bf4 initial commit
master d875bf4 initial commit
remotes/origin/HEAD -> origin/master
remotes/origin/abranch d875bf4 initial commit
remotes/origin/master d875bf4 initial commit
versus
$ git remote show origin
* remote origin
Fetch URL: /home/ageorge/tmp/d/../exrepo/
Push URL: /home/ageorge/tmp/d/../exrepo/
HEAD branch (remote HEAD is ambiguous, may be one of the following):
abranch
master
Remote branches:
abranch tracked
master tracked
Local branches configured for 'git pull':
abranch merges with remote abranch
master merges with remote master
Local refs configured for 'git push':
abranch pushes to abranch (up to date)
master pushes to master (up to date)
A: I don't know if this counts as parsing the output of git config, but this will determine the URL of the remote that master is tracking:
$ git config remote.$(git config branch.master.remote).url
A: Improving on this answer, I came up with these .gitconfig aliases:
branch-name = "symbolic-ref --short HEAD"
branch-remote-fetch = !"branch=$(git branch-name) && git config branch.\"$branch\".remote || echo origin #"
branch-remote-push = !"branch=$(git branch-name) && git config branch.\"$branch\".pushRemote || git config remote.pushDefault || git branch-remote-fetch #"
branch-url-fetch = !"remote=$(git branch-remote-fetch) && git remote get-url \"$remote\" #" # cognizant of insteadOf
branch-url-push = !"remote=$(git branch-remote-push ) && git remote get-url --push \"$remote\" #" # cognizant of pushInsteadOf
A: Yet another way
git status -b --porcelain
This will give you
## BRANCH(...REMOTE)
modified and untracked files
A: git-status porcelain (machine-readable) v2 output looks like this:
$ git status -b --porcelain=v2
# branch.oid d0de00da833720abb1cefe7356493d773140b460
# branch.head the-branch-name
# branch.upstream gitlab/the-branch-name
# branch.ab +2 -2
And to get the branch upstream only:
$ git status -b --porcelain=v2 | grep -m 1 "^# branch.upstream " | cut -d " " -f 3-
gitlab/the-branch-name
If the branch has no upstream, the above command will produce an empty output (or fail with set -o pipefail).
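If you would rather consume that format from a script than from a shell pipeline, here is an equivalent illustrative sketch in Python (error handling and the working directory are left to you):
import subprocess

out = subprocess.run(
    ["git", "status", "-b", "--porcelain=v2"],
    capture_output=True, text=True, check=True,
).stdout

# The "# branch.upstream <name>" line is only present when an upstream is configured.
upstream = next(
    (line.split(" ", 2)[2] for line in out.splitlines()
     if line.startswith("# branch.upstream ")),
    None,
)
print(upstream)  # e.g. gitlab/the-branch-name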
A: Here is a command that gives you all tracking branches (configured for 'pull'), see:
$ git branch -vv
main aaf02f0 [main/master: ahead 25] Some other commit
* master add0a03 [jdsumsion/master] Some commit
You have to wade through the SHA and any long-wrapping commit messages, but it's quick to type and I get the tracking branches aligned vertically in the 3rd column.
If you need info on both 'pull' and 'push' configuration per branch, see the other answer on git remote show origin.
Update
Starting in git version 1.8.5 you can show the upstream branch with git status and git status -sb
A: Another simple way is to use
cat .git/config in a git repo
This will list details for local branches
A: I use this alias
git config --global alias.track '!sh -c "
if [ \$# -eq 2 ]
then
echo \"Setting tracking for branch \" \$1 \" -> \" \$2;
git branch --set-upstream \$1 \$2;
else
git for-each-ref --format=\"local: %(refname:short) <--sync--> remote: %(upstream:short)\" refs/heads && echo --URLs && git remote -v;
fi
" -'
then
git track
note that the script can also be used to setup tracking.
More great aliases at https://github.com/orefalo/bash-profiles
A: Following command will remote origin current fork is referring to
git remote -v
For adding a remote path,
git remote add origin path_name
A: If you are using Gradle,
def gitHash = new ByteArrayOutputStream()
project.exec {
commandLine 'git', 'rev-parse', '--short', 'HEAD'
standardOutput = gitHash
}
def gitBranch = new ByteArrayOutputStream()
project.exec {
def gitCmd = "git symbolic-ref --short -q HEAD || git branch -rq --contains "+getGitHash()+" | sed -e '2,\$d' -e 's/\\(.*\\)\\/\\(.*\\)\$/\\2/' || echo 'master'"
commandLine "bash", "-c", "${gitCmd}"
standardOutput = gitBranch
}
A: git branch -vv | grep 'hardcode-branch-name'
# "git rev-parse --abbrev-ref head" will get your current branch name
# $(git rev-parse --abbrev-ref head) save it as string
# find the tracking branch by grep filtering the current branch
git branch -vv | grep $(git rev-parse --abbrev-ref head)
A: I use EasyGit (a.k.a. "eg") as a super lightweight wrapper on top of (or along side of) Git. EasyGit has an "info" subcommand that gives you all kinds of super useful information, including the current branches remote tracking branch. Here's an example (where the current branch name is "foo"):
pknotz@s883422: (foo) ~/workspace/bd
$ eg info
Total commits: 175
Local repository: .git
Named remote repositories: (name -> location)
origin -> git://sahp7577/home/pknotz/bd.git
Current branch: foo
Cryptographic checksum (sha1sum): bd248d1de7d759eb48e8b5ff3bfb3bb0eca4c5bf
Default pull/push repository: origin
Default pull/push options:
branch.foo.remote = origin
branch.foo.merge = refs/heads/aal_devel_1
Number of contributors: 3
Number of files: 28
Number of directories: 20
Biggest file size, in bytes: 32473 (pygooglechart-0.2.0/COPYING)
Commits: 62
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1163"
} |
Q: Javascript memory profiler for Firefox Is there a tool/plugin/function for Firefox that'll dump out a memory usage of Javascript objects that you create in a page/script? I know about Firebug's profiler but I'd like something more than just times. Something akin to what Yourkit has for Java profiling of memory usage.
The reason is that a co-worker is using IDs as "keys" in an array, creating thousands of empty slots when he does this. He's of the opinion that this is harmless, whereas my opinion differs. I'd like to offer some proof of whether I'm right or not.
A: I think JavaScript Memory Validator from Software Verification Limited can help you; it has an allocations view, objects view, generations view, etc. It's not free, but you can use the evaluation version to check your coworker's code. They also have Performance and Coverage Validators...
A: See the source. Sparse arrays don't take up lots of memory, but if your colleague doesn't need any Array functionality, he should be using plain Objects anyway.
A: Try also about:memory which shows how much memory each window occupies and how much of it is dedicated to JS objects. It gives high level summary without per object usage, but it is a good starting point for investigating memory requirements of a site.
A: I haven't tried the Software Verify tools, but Mozilla has tools that track overall memory consumed by Firefox for the purpose of stemming leaks:
http://www.mozilla.org/performance/tools.html
and:
https://wiki.mozilla.org/Performance:Leak_Tools
There's also this guy saying to avoid large arrays in the context of closures, towards the bottom of the article:
http://ajax.sys-con.com/node/352585
A: You can use Mozilla’s Developer Tools. In order to use Firefox's advanced developer tools you need to create a debug build instead of a release build. For more on the build process, see the page. Also, more information about using Mozilla’s Developer Tools can be found in this paper.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "51"
} |
Q: Using TTreeview as a menu I'm using Delphi's TTreeView as an 'options' menu. How would I go about selecting the next node at runtime, like with previous and next buttons? I tried the GetPrev and GetNext methods but no luck.
A: Here you have the 'Next' behavior. 'Previous' I leave as an exercise for you :-)
procedure TForm8.btn1Click(Sender: TObject);
var
crt: TTreeNode;
begin
with tv1 do //this is our tree
begin
if Selected=nil then
crt:=Items[0] //the first one
else
crt:=Selected.GetNext; //for previous you'll have 'GetPrev'
if crt<>nil then //can be 'nil' if we reached to the end
Selected:=crt;
end;
end;
HTH
A: Maybe there is some space in the tree item to store a pointer to your correct page.
But - if you have some time - try to explore Virtual Treeview - it's Delphi's best treeview component.
A: here is another way to do this:
type TfrmMain = class(TForm)
...
public
DLLHandle : THandle;
function GetNodePath(node: TTreeNode; delimiter: string = '\') : String;
...
function TfrmMain.GetNodePath(node: TTreeNode; delimiter: string = '\') : String;
begin
Result:='';
while Assigned(node) do
begin
Result:=delimiter+node.Text+Result;
node:=node.Parent;
end;
if Result <> '' then
Delete(Result, 1, 1);
end;
...
here is how to use it: on your treeview's click or double-click event, do this:
...
var
path : String;
begin
path:=GetNodePath(yourTreeView.Selected);
ShowMessage(path);
...
if you have an 'Item 1' with a subitem called 'Item 2' and click on Item 2, then the message should be 'Item 1\Item 2'. By doing this you can have better control...
hope this gives you another idea to enhance your code
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Is Ubuntu JeOS good for production purpose? Actually, I want to use JeOS for our web server. Is it a good choice?
A: Thanks for piquing my interest. From the Ubuntu website:
Ubuntu Server Edition JeOS (pronounced
"Juice") is an efficient variant of
our server operating system,
configured specifically for virtual
appliances. Currently available as a
CD-Rom ISO for download, JeOS is a
specialised installation of Ubuntu
Server Edition with a tuned kernel
that only contains the base elements
needed to run within a virtualized
environment.
It looks promising to me - I run several full Ubuntu 8.04 VMs so I'll certainly check it out. Why not just try it?
A: Be aware that the kernel it installs is stripped down to only have the stuff required for virtual machines, so you might have problems accessing the network from a real machine. (Note that the install-CD kernel isn't the same as the installed kernel, either.)
If you can bypass that (IIRC I booted from the CD, and downloaded the normal server kernel and it all worked fine), then you end up with an absolutely minimal Linux system, but backed by the full Ubuntu repositories, so it's an excellent base for a server.
Also note minimal really means minimal - no cron by default for example.
A:
Is it a good choice?
If you plan to run JeOS in a virtual machine, then yes, this is a good choice.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Best practices for migrating web application from Netscape to IIS? We are in the process of developing a .NET based IIS hosted web application as part of a re-platforming project. The original web app is on a Netscape server, in the process of migration we need to point the dns to the IIS server so that the requests are responded by IIS. at the same time we would still need the Netscape server so as to redirect the users from the IIS web app for the regions of the web site which the new application doesn't process (yet).
The old application is frame based, so we plan on using IFrames in the content area (of a master page in web client software factory) and use a URL rewrite engine to render pages from the old system in the iframe.
We also need to point the DNS entry which currently points to the Netscape server to IIS.
Are there and best practices for the above activities?
A: Maybe this link can help: Migrating a Web Server to IIS: Basic Steps
It discussed the steps you can take to get ready for Internet Information Services (IIS). In this article, the author takes a "nuts and bolts" approach to migrating an individual Web server.
You'll find detailed information about migrating configuration settings and content to a server running IIS 5.0 from another type of Web server, such as Apache HTTP Server or Netscape Enterprise Server.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is there a command to refresh environment variables from the command prompt in Windows? If I modify or add an environment variable I have to restart the command prompt. Is there a command I could execute that would do this without restarting CMD?
A: It is possible to do this by overwriting the Environment Table within a specified process itself.
As a proof of concept I wrote this sample app, which just edited a single (known) environment variable in a cmd.exe process:
#include <windows.h>
#include <winternl.h> // PROCESS_BASIC_INFORMATION, ProcessBasicInformation
#include <stdio.h>

typedef DWORD (__stdcall *NtQueryInformationProcessPtr)(HANDLE, DWORD, PVOID, ULONG, PULONG);
int __cdecl main(int argc, char* argv[])
{
HMODULE hNtDll = GetModuleHandleA("ntdll.dll");
NtQueryInformationProcessPtr NtQueryInformationProcess = (NtQueryInformationProcessPtr)GetProcAddress(hNtDll, "NtQueryInformationProcess");
int processId = atoi(argv[1]);
printf("Target PID: %u\n", processId);
// open the process with read+write access
HANDLE hProcess = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION | PROCESS_VM_READ | PROCESS_VM_WRITE | PROCESS_VM_OPERATION, 0, processId);
if(hProcess == NULL)
{
printf("Error opening process (%u)\n", GetLastError());
return 0;
}
// find the location of the PEB
PROCESS_BASIC_INFORMATION pbi = {0};
NTSTATUS status = NtQueryInformationProcess(hProcess, ProcessBasicInformation, &pbi, sizeof(pbi), NULL);
if(status != 0)
{
printf("Error ProcessBasicInformation (0x%8X)\n", status);
}
printf("PEB: %p\n", pbi.PebBaseAddress);
// find the process parameters
char *processParamsOffset = (char*)pbi.PebBaseAddress + 0x20; // hard coded offset for x64 apps
char *processParameters = NULL;
if(ReadProcessMemory(hProcess, processParamsOffset, &processParameters, sizeof(processParameters), NULL))
{
printf("UserProcessParameters: %p\n", processParameters);
}
else
{
printf("Error ReadProcessMemory (%u)\n", GetLastError());
}
// find the address to the environment table
char *environmentOffset = processParameters + 0x80; // hard coded offset for x64 apps
char *environment = NULL;
ReadProcessMemory(hProcess, environmentOffset, &environment, sizeof(environment), NULL);
printf("environment: %p\n", environment);
// copy the environment table into our own memory for scanning
wchar_t *localEnvBlock = new wchar_t[64*1024];
ReadProcessMemory(hProcess, environment, localEnvBlock, sizeof(wchar_t)*64*1024, NULL);
// find the variable to edit
wchar_t *found = NULL;
wchar_t *varOffset = localEnvBlock;
while(varOffset < localEnvBlock + 64*1024)
{
if(varOffset[0] == '\0')
{
// we reached the end
break;
}
if(wcsncmp(varOffset, L"ENVTEST=", 8) == 0)
{
found = varOffset;
break;
}
varOffset += wcslen(varOffset)+1;
}
// check to see if we found one
if(found)
{
size_t offset = (found - localEnvBlock) * sizeof(wchar_t);
printf("Offset: %Iu\n", offset);
// write a new version (if the size of the value changes then we have to rewrite the entire block)
if(!WriteProcessMemory(hProcess, environment + offset, L"ENVTEST=def", 12*sizeof(wchar_t), NULL))
{
printf("Error WriteProcessMemory (%u)\n", GetLastError());
}
}
// cleanup
delete[] localEnvBlock;
CloseHandle(hProcess);
return 0;
}
Sample output:
>set ENVTEST=abc
>cppTest.exe 13796
Target PID: 13796
PEB: 000007FFFFFD3000
UserProcessParameters: 00000000004B2F30
environment: 000000000052E700
Offset: 1528
>set ENVTEST
ENVTEST=def
Notes
This approach would also be limited by security restrictions. If the target is run at a higher elevation or under a higher account (such as SYSTEM), then we wouldn't have permission to edit its memory.
If you wanted to do this to a 32-bit app, the hard coded offsets above would change to 0x10 and 0x48 respectively. These offsets can be found by dumping out the _PEB and _RTL_USER_PROCESS_PARAMETERS structs in a debugger (e.g. in WinDbg dt _PEB and dt _RTL_USER_PROCESS_PARAMETERS)
To change the proof-of-concept into what the OP needs, it would just enumerate the current system and user environment variables (such as documented by @tsadok's answer) and write the entire environment table into the target process' memory.
Edit: The size of the environment block is also stored in the _RTL_USER_PROCESS_PARAMETERS struct, but the memory is allocated on the process' heap. So from an external process we wouldn't have the ability to resize it and make it larger. I played around with using VirtualAllocEx to allocate additional memory in the target process for the environment storage, and was able to set and read an entirely new table. Unfortunately any attempt to modify the environment from normal means will crash and burn as the address no longer points to the heap (it will crash in RtlSizeHeap).
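As an aside, the enumeration step mentioned above is easy to prototype. Here is a small illustrative Python sketch that reads the system and user variables from the registry keys also listed further down this page (merging user over system mirrors what Windows does; everything else here is my own choice):
import winreg

def read_env(root, subkey):
    # Enumerate every value stored under one registry environment key.
    env = {}
    with winreg.OpenKey(root, subkey) as key:
        i = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, i)
            except OSError:  # no more values
                return env
            env[name] = value
            i += 1

system_env = read_env(winreg.HKEY_LOCAL_MACHINE,
                      r"SYSTEM\CurrentControlSet\Control\Session Manager\Environment")
user_env = read_env(winreg.HKEY_CURRENT_USER, r"Environment")
merged = {**system_env, **user_env}  # user values take precedence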
A: The easiest way to add a variable to the path for the current session, without rebooting, is to open the command prompt and type:
PATH=(VARIABLE);%path%
and press Enter.
To check whether your variable loaded, type
PATH
and press Enter. However, the variable will only be part of the path for the current session.
A: By design there isn't a built in mechanism for Windows to propagate an environment variable add/change/remove to an already running cmd.exe, either from another cmd.exe or from "My Computer -> Properties ->Advanced Settings -> Environment Variables".
If you modify or add a new environment variable outside of the scope of an existing open command prompt you either need to restart the command prompt, or, manually add using SET in the existing command prompt.
The latest accepted answer shows a partial work-around by manually refreshing all the environment variables in a script. The script handles the use case of changing environment variables globally in "My Computer...Environment Variables", but if an environment variable is changed in one cmd.exe the script will not propagate it to another running cmd.exe.
A: I made a better alternative to the Chocolatey refreshenv for cmd and cygwin which solves a lot of problems like:
*
*The Chocolatey refreshenv is so bad if the variable has some
cmd meta-characters, see this test:
add this to the path in HKCU\Environment: test & echo baaaaaaaaaad,
and run the chocolatey refreshenv you will see that it prints
baaaaaaaaaad which is very bad, and the new path is not added to
your path variable.
This script solves this, and you can test it with any meta-character, even something as bad as:
; & % ' ( ) ~ + @ # $ { } [ ] , ` ! ^ | > < \ / " : ? * = . - _ & echo baaaad
*refreshenv adds only system and user
environment variables, but CMD adds volatile variables too
(HKCU\Volatile Environment). This script will merge all three and remove any duplicates.
*refreshenv resets your PATH. This script appends the new path to the old path of the parent script which called this script. This is better than overwriting the old path, which would otherwise delete any path newly added by the parent script.
*This script solves the problem described in a comment here by @Gene
Mayevsky: refreshenv modifies env variables TEMP and TMP replacing
them with values stored in HKCU\Environment. In my case I run the
script to update env variables modified by Jenkins job on a slave
that's running under SYSTEM account, so TEMP and TMP get substituted
by %USERPROFILE%\AppData\Local\Temp instead of C:\Windows\Temp. This
breaks build because linker cannot open system profile's Temp folder.
I made one script for cmd and another for cygwin/bash,
you can find them here: https://github.com/badrelmers/RefrEnv
For cmd
This script uses vbscript so it works in all windows versions xp+
to use it save it as refrenv.bat and call it with call refrenv.bat
<!-- : Begin batch script
@echo off
REM PUSHD "%~dp0"
REM author: Badr Elmers 2021
REM description: refrenv = refresh environment. this is a better alternative to the chocolatey refreshenv for cmd
REM https://github.com/badrelmers/RefrEnv
REM https://stackoverflow.com/questions/171588/is-there-a-command-to-refresh-environment-variables-from-the-command-prompt-in-w
REM ___USAGE_____________________________________________________________
REM usage:
REM call refrenv.bat full refresh. refresh all non critical variables*, and refresh the PATH
REM debug:
REM to debug what this script do create this variable in your parent script like that
REM set debugme=yes
REM then the folder containing the files used to set the variables will be open. Then see
REM _NewEnv.cmd this is the file which run inside your script to setup the new variables, you
REM can also revise the intermediate files _NewEnv.cmd_temp_.cmd and _NewEnv.cmd_temp2_.cmd
REM (those two contains all the variables before removing the duplicates and the unwanted variables)
REM you can also put this script in windows\systems32 or another place in your %PATH% then call it from an interactive console by writing refrenv
REM *critical variables: are variables which belong to cmd/windows and should not be refreshed normally like:
REM - windows vars:
REM ALLUSERSPROFILE APPDATA CommonProgramFiles CommonProgramFiles(x86) CommonProgramW6432 COMPUTERNAME ComSpec HOMEDRIVE HOMEPATH LOCALAPPDATA LOGONSERVER NUMBER_OF_PROCESSORS OS PATHEXT PROCESSOR_ARCHITECTURE PROCESSOR_ARCHITEW6432 PROCESSOR_IDENTIFIER PROCESSOR_LEVEL PROCESSOR_REVISION ProgramData ProgramFiles ProgramFiles(x86) ProgramW6432 PUBLIC SystemDrive SystemRoot TEMP TMP USERDOMAIN USERDOMAIN_ROAMINGPROFILE USERNAME USERPROFILE windir SESSIONNAME
REM ___INFO_____________________________________________________________
REM :: this script reload environment variables inside cmd every time you want environment changes to propagate, so you do not need to restart cmd after setting a new variable with setx or when installing new apps which add new variables ...etc
REM This is a better alternative to the chocolatey refreshenv for cmd, which solves a lot of problems like:
REM The Chocolatey refreshenv is so bad if the variable have some cmd meta-characters, see this test:
REM add this to the path in HKCU\Environment: test & echo baaaaaaaaaad, and run the chocolatey refreshenv you will see that it prints baaaaaaaaaad which is very bad, and the new path is not added to your path variable.
REM This script solve this and you can test it with any meta-character, even something so bad like:
REM ; & % ' ( ) ~ + @ # $ { } [ ] , ` ! ^ | > < \ / " : ? * = . - _ & echo baaaad
REM refreshenv adds only system and user environment variables, but CMD adds volatile variables too (HKCU\Volatile Environment). This script will merge all the three and remove any duplicates.
REM refreshenv reset your PATH. This script append the new path to the old path of the parent script which called this script. It is better than overwriting the old path, otherwise it will delete any newly added path by the parent script.
REM This script solve this problem described in a comment by @Gene Mayevsky: refreshenv modifies env variables TEMP and TMP replacing them with values stored in HKCU\Environment. In my case I run the script to update env variables modified by Jenkins job on a slave that's running under SYSTEM account, so TEMP and TMP get substituted by %USERPROFILE%\AppData\Local\Temp instead of C:\Windows\Temp. This breaks build because linker cannot open system profile's Temp folder.
REM ________
REM this script solves things like the following too:
REM The confusing thing might be that there are a few places to start cmd from. In my case I ran cmd from Windows Explorer and the environment variables did not change, while when starting cmd from "Run" (Windows key + R) the environment variables were changed.
REM In my case I just had to kill the Windows Explorer process from the Task Manager and then restart it again from the Task Manager.
REM Once I did this I had access to the new environment variable from a cmd that was spawned from Windows Explorer.
REM my conclusion:
REM if I add a new variable with setx, I can access it in cmd only if I run cmd as admin; without admin rights I have to restart Explorer to see that new variable. But running this script inside my script (which sets the variable with setx) solves this problem, and I do not have to restart Explorer.
REM ________
REM Windows recreates the path using at least three places:
REM the User namespace: HKCU\Environment
REM the System namespace: HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment
REM the Session namespace: HKCU\Volatile Environment
REM but the original chocolatey script did not add the volatile path. This script will merge all three and remove any duplicates; this is what Windows does by default too
REM there is also this key, which cmd seems to read when first running, but it contains only TEMP and TMP, so I will not use it:
REM HKEY_USERS\.DEFAULT\Environment
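REM for reference, you can inspect those three locations manually with reg query, e.g.:
REM reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment" /v PATH
REM reg query "HKCU\Environment" /v PATH
REM reg query "HKCU\Volatile Environment"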
REM ___TESTING_____________________________________________________________
REM to test this script with extreme cases, do the following
REM :: Set a bad variable
REM add a var in the registry under HKCU\Environment as follows, and check that the echo is not executed. If you use the chocolatey refreshenv you will see that the echo is executed, which is very bad!
REM so save this in the registry:
REM all 32 characters: & % ' ( ) ~ + @ # $ { } [ ] ; , ` ! ^ | > < \ / " : ? * = . - _ & echo baaaad
REM and this:
REM (^.*)(Form Product=")([^"]*") FormType="[^"]*" FormID="([0-9][0-9]*)".*$
REM then use set to print those variables and check that they are saved unchanged; refreshenv fails dramatically with those variables
REM invalid characters (illegal characters in file names) in Windows using NTFS
REM \ / : * ? " < > | and ^ in FAT
REM __________________________________________________________________________________________
REM __________________________________________________________________________________________
REM __________________________________________________________________________________________
REM this is a hybrid script which calls VBScript from cmd directly
REM :: One restriction is that the batch code cannot contain - - > (without the spaces between - - >, of course)
REM :: Another restriction is that the VBS code cannot contain </script>.
REM :: The only risk is the undocumented use of "%~f0?.wsf" as the script to load. Somehow the parser properly finds and loads the running .BAT script "%~f0", and the ?.wsf suffix mysteriously instructs CSCRIPT to interpret the script as WSF. Hopefully Microsoft will never disable that "feature".
REM :: https://stackoverflow.com/questions/9074476/is-it-possible-to-embed-and-execute-vbscript-within-a-batch-file-without-using-a
if "%debugme%"=="yes" (
echo RefrEnv - Refresh the Environment for CMD - ^(Debug enabled^)
) else (
echo RefrEnv - Refresh the Environment for CMD
)
set "TEMPDir=%TEMP%\refrenv"
IF NOT EXIST "%TEMPDir%" mkdir "%TEMPDir%"
set "outputfile=%TEMPDir%\_NewEnv.cmd"
REM detect if DelayedExpansion is enabled
REM It relies on the fact that the last caret will be removed only in delayed mode.
REM https://www.dostips.com/forum/viewtopic.php?t=6496
set "DelayedExpansionState=IsDisabled"
IF "^!" == "^!^" (
REM echo DelayedExpansion is enabled
set "DelayedExpansionState=IsEnabled"
)
REM :: generate %outputfile% which contain all the new variables
REM cscript //nologo "%~f0?.wsf" %1
cscript //nologo "%~f0?.wsf" "%outputfile%" %DelayedExpansionState%
REM :: set the new variables generated with the vbscript above
REM for this to always work it is necessary to use DisableDelayedExpansion, or to escape ! and ^ when using EnableDelayedExpansion; but this script already solves this, so there is no need to worry about that now, thanks to God
REM test it with some bad var like:
REM all 32 characters: ; & % ' ( ) ~ + @ # $ { } [ ] , ` ! ^ | > < \ / " : ? * = . - _ & echo baaaad
REM For /f delims^=^ eol^= %%a in (%outputfile%) do %%a
REM for /f "delims== tokens=1,2" %%G in (%outputfile%) do set "%%G=%%H"
For /f delims^=^ eol^= %%a in (%outputfile%) do set %%a
REM to safely print a variable with bad characters do:
REM SETLOCAL EnableDelayedExpansion
REM echo "!z9!"
REM or
REM set z9
REM generally paths and environment variables should not contain bad metacharacters, but that is not a rule!
if "%debugme%"=="yes" (
explorer "%TEMPDir%"
) else (
rmdir /Q /S "%TEMPDir%"
)
REM cleanup
set "TEMPDir="
set "outputfile="
set "DelayedExpansionState="
set "debugme="
REM pause
exit /b
REM #############################################################################
REM :: to run jscript you have to put <script language="JScript"> directly after ----- Begin wsf script --->
----- Begin wsf script --->
<job><script language="VBScript">
REM #############################################################################
REM ### put you code here #######################################################
REM #############################################################################
REM based on itsadok script from here
REM https://stackoverflow.com/questions/171588/is-there-a-command-to-refresh-environment-variables-from-the-command-prompt-in-w
REM and it is faster, as stated by this comment:
REM While I prefer the Chocolatey code-wise for being pure batch code, overall I decided to use this one, since it's faster. (~0.3 seconds instead of ~1 second -- which is nice, since I use it frequently in my Explorer "start cmd here" entry)
REM and it is safer based on my tests; the Chocolatey refreshenv behaves very badly if a variable contains some cmd metacharacters
Const ForReading = 1
Const ForWriting = 2
Const ForAppending = 8
Set WshShell = WScript.CreateObject("WScript.Shell")
filename=WScript.Arguments.Item(0)
DelayedExpansionState=WScript.Arguments.Item(1)
TMPfilename=filename & "_temp_.cmd"
Set fso = CreateObject("Scripting.fileSystemObject")
Set tmpF = fso.CreateTextFile(TMPfilename, TRUE)
set oEnvS=WshShell.Environment("System")
for each sitem in oEnvS
tmpF.WriteLine(sitem)
next
SystemPath = oEnvS("PATH")
set oEnvU=WshShell.Environment("User")
for each sitem in oEnvU
tmpF.WriteLine(sitem)
next
UserPath = oEnvU("PATH")
set oEnvV=WshShell.Environment("Volatile")
for each sitem in oEnvV
tmpF.WriteLine(sitem)
next
VolatilePath = oEnvV("PATH")
set oEnvP=WshShell.Environment("Process")
REM I will not save the process env but only its path, because it has strange variables like =::=::\ and =F:=.... which seem to be added by vbscript
REM for each sitem in oEnvP
REM tmpF.WriteLine(sitem)
REM next
REM here we add the actual session path, so we do not reset the original path, because the parent script may have added some folders to the path. If we need to reset the path, comment out the following line
ProcessPath = oEnvP("PATH")
REM merge System, User, Volatile, and process PATHs
NewPath = SystemPath & ";" & UserPath & ";" & VolatilePath & ";" & ProcessPath
REM ________________________________________________________________
REM :: remove duplicates from path
REM :: expand variables so they become what Windows produces when it reads the registry and creates the path, then remove duplicates without sorting
REM why do I clean the path from duplicates? Because:
REM the maximum string length in cmd is 8191 characters. But that does not mean you can save 8191 characters in a variable, because the assignment itself belongs to the string: you can save 8189 characters, since the remaining 2 characters are needed for "a="
REM based on my tests:
REM when I open cmd as a user, Windows does not remove any duplicates from the path, and merges the system+user+volatile paths
REM when I open cmd as admin, Windows builds the system+user path (here Windows does not remove duplicates, which is stupid!), then adds the volatile path after removing any duplicates from it
REM ' https://www.rosettacode.org/wiki/Remove_duplicate_elements#VBScript
Function remove_duplicates(list)
arr = Split(list,";")
Set dict = CreateObject("Scripting.Dictionary")
REM ' force dictionary compare to be case-insensitive; comment out the next line to make it case-sensitive
dict.CompareMode = 1
For i = 0 To UBound(arr)
If dict.Exists(arr(i)) = False Then
dict.Add arr(i),""
End If
Next
For Each key In dict.Keys
tmp = tmp & key & ";"
Next
remove_duplicates = Left(tmp,Len(tmp)-1)
End Function
REM expand variables
NewPath = WshShell.ExpandEnvironmentStrings(NewPath)
REM remove duplicates
NewPath=remove_duplicates(NewPath)
REM remove_duplicates() will add a ; to the end, so let's remove it if the last character is ;
If Right(NewPath, 1) = ";" Then
NewPath = Left(NewPath, Len(NewPath) - 1)
End If
tmpF.WriteLine("PATH=" & NewPath)
tmpF.Close
REM ________________________________________________________________
REM :: exclude setting variables which may be dangerous to change
REM when I run a script from the Task Scheduler using the SYSTEM user, the following variables are the differences between the scheduler env and a normal cmd script, so I will not override those variables
REM APPDATA=D:\Users\LLED2\AppData\Roaming
REM APPDATA=D:\Windows\system32\config\systemprofile\AppData\Roaming
REM LOCALAPPDATA=D:\Users\LLED2\AppData\Local
REM LOCALAPPDATA=D:\Windows\system32\config\systemprofile\AppData\Local
REM TEMP=D:\Users\LLED2\AppData\Local\Temp
REM TEMP=D:\Windows\TEMP
REM TMP=D:\Users\LLED2\AppData\Local\Temp
REM TMP=D:\Windows\TEMP
REM USERDOMAIN=LLED2-PC
REM USERDOMAIN=WORKGROUP
REM USERNAME=LLED2
REM USERNAME=LLED2-PC$
REM USERPROFILE=D:\Users\LLED2
REM USERPROFILE=D:\Windows\system32\config\systemprofile
REM I know this thanks to this comment:
REM The solution is good but it modifies env variables TEMP and TMP replacing them with values stored in HKCU\Environment. In my case I run the script to update env variables modified by Jenkins job on a slave that's running under SYSTEM account, so TEMP and TMP get substituted by %USERPROFILE%\AppData\Local\Temp instead of C:\Windows\Temp. This breaks build because linker cannot open system profile's Temp folder. – Gene Mayevsky Sep 26 '19 at 20:51
REM Delete Lines of a Text File Beginning with a Specified String
REM those are the variables which should not be changed by this script
arrBlackList = Array("ALLUSERSPROFILE=", "APPDATA=", "CommonProgramFiles=", "CommonProgramFiles(x86)=", "CommonProgramW6432=", "COMPUTERNAME=", "ComSpec=", "HOMEDRIVE=", "HOMEPATH=", "LOCALAPPDATA=", "LOGONSERVER=", "NUMBER_OF_PROCESSORS=", "OS=", "PATHEXT=", "PROCESSOR_ARCHITECTURE=", "PROCESSOR_ARCHITEW6432=", "PROCESSOR_IDENTIFIER=", "PROCESSOR_LEVEL=", "PROCESSOR_REVISION=", "ProgramData=", "ProgramFiles=", "ProgramFiles(x86)=", "ProgramW6432=", "PUBLIC=", "SystemDrive=", "SystemRoot=", "TEMP=", "TMP=", "USERDOMAIN=", "USERDOMAIN_ROAMINGPROFILE=", "USERNAME=", "USERPROFILE=", "windir=", "SESSIONNAME=")
Set objFS = CreateObject("Scripting.FileSystemObject")
Set objTS = objFS.OpenTextFile(TMPfilename, ForReading)
strContents = objTS.ReadAll
objTS.Close
TMPfilename2= filename & "_temp2_.cmd"
arrLines = Split(strContents, vbNewLine)
Set objTS = objFS.OpenTextFile(TMPfilename2, ForWriting, True)
REM this is the equivalent of findstr /V /I /L or grep -i -v; I don't know a better way to do it, but it works fine
For Each strLine In arrLines
bypassThisLine=False
For Each BlackWord In arrBlackList
If Left(UCase(LTrim(strLine)),Len(BlackWord)) = UCase(BlackWord) Then
bypassThisLine=True
End If
Next
If bypassThisLine=False Then
objTS.WriteLine strLine
End If
Next
REM ____________________________________________________________
REM :: expand variables because the registry saves some variables unexpanded, as %....%
REM :: and escape ! and ^ for cmd EnableDelayedExpansion mode
set f=fso.OpenTextFile(TMPfilename2,ForReading)
REM Write file: ForAppending = 8 ForReading = 1 ForWriting = 2 , True=create file if not exist
set fW=fso.OpenTextFile(filename,ForWriting,True)
Do Until f.AtEndOfStream
LineContent = f.ReadLine
REM expand variables
LineContent = WshShell.ExpandEnvironmentStrings(LineContent)
REM _____this part is so important_____
REM if cmd delayed expansion is enabled in the parent script which calls this script, then bad things happen to variables saved in the registry if they contain "!": if a var has "!" then "!" and "^" are removed; if a var does not have "!" then "^" is not removed. To understand what happens read this:
REM how cmd delayed expansion parse things
REM https://stackoverflow.com/questions/4094699/how-does-the-windows-command-interpreter-cmd-exe-parse-scripts/7970912#7970912
REM For each parsed token, first check if it contains any !. If not, then the token is not parsed - important for ^ characters. If the token does contain !, then scan each character from left to right:
REM - If it is a caret (^) the next character has no special meaning, the caret itself is removed
REM - If it is an exclamation mark, search for the next exclamation mark (carets are not observed anymore), expand to the value of the variable.
REM - Consecutive opening ! are collapsed into a single !
REM - Any remaining unpaired ! is removed
REM ...
REM Look at next string of characters, breaking before !, :, or <LF>, and call them VAR
REM conclusion:
REM when delayed expansion is enabled and the var has "!", I have to escape "^" and "!", BUT IF THE VAR DOES NOT HAVE "!" THEN DO NOT ESCAPE "^". This made me crazy to discover
REM when delayed expansion is disabled I do not have to escape anything
If DelayedExpansionState="IsEnabled" Then
If InStr(LineContent, "!") > 0 Then
LineContent=Replace(LineContent,"^","^^")
LineContent=Replace(LineContent,"!","^!")
End If
End If
REM __________
fW.WriteLine(LineContent)
Loop
f.Close
fW.Close
REM #############################################################################
REM ### end of vbscript code ####################################################
REM #############################################################################
REM this must be at the end for the hybrid trick, do not remove it
</script></job>
For cygwin/bash:
I cannot post it here (I reached the post limit), so download it from here.
call it from bash with: source refrenv.sh
For Powershell:
download it from here
Call it from Powershell with: . .\refrenv.ps1
A: Environment variables are kept in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Environment.
Many of the useful env vars, such as Path, are stored as REG_SZ. There are several ways to access the registry including REGEDIT:
REGEDIT /E <filename> "HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Session Manager\Environment"
The output starts with magic numbers (REGEDIT exports Unicode text), so to search it the file needs to be piped through type: type <filename> | findstr -c:\"Path\"
So, if you just want to refresh the path variable in your current command session with what's in system properties the following batch script works fine:
RefreshPath.cmd:
@echo off
REM This solution requests elevation in order to read from the registry.
if exist %temp%\env.reg del %temp%\env.reg /q /f
REGEDIT /E %temp%\env.reg "HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Session Manager\Environment"
if not exist %temp%\env.reg (
echo "Unable to write registry to temp location"
exit 1
)
SETLOCAL EnableDelayedExpansion
for /f "tokens=1,2* delims==" %%i in ('type %temp%\env.reg ^| findstr -c:\"Path\"=') do (
set upath=%%~j
echo !upath:\\=\! >%temp%\newpath
)
ENDLOCAL
for /f "tokens=*" %%i in (%temp%\newpath) do set path=%%i
A: Try opening a new command prompt as an administrator. This worked for me on Windows 10. (I know this is an old question, but I had to share this because having to write a VBS script just for this is absurd.)
A: I came across this answer before eventually finding an easier solution.
Simply restart explorer.exe in Task Manager.
I didn't test it, but you may also need to reopen your command prompt.
Credit to Timo Huovinen here: Node not recognized although successfully installed (if this helped you, please go give this man's comment credit).
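If you prefer a console over Task Manager, a rough sketch of the same restart from cmd (the same commands appear in other answers here):
taskkill /f /im explorer.exe
start explorer.exe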
A: The confusing thing might be that there are a few places to start the cmd from.
In my case I ran cmd from Windows Explorer and the environment variables did not change, while when starting cmd from "Run" (Windows key + R) the environment variables were changed.
In my case I just had to kill the Windows Explorer process from the Task Manager and then restart it again from the Task Manager.
Once I did this I had access to the new environment variable from a cmd that was spawned from Windows Explorer.
A: I use the following code in my batch scripts:
if not defined MY_ENV_VAR (
setx MY_ENV_VAR "VALUE" > nul
set MY_ENV_VAR=VALUE
)
echo %MY_ENV_VAR%
By using SET after SETX it is possible to use the "local" variable directly without restarting the command window. On the next run, the environment variable will be used.
A: If it concerns just one (or a few) specific vars you want to change, I think the easiest way is a workaround: just set it in your environment AND in your current console session.
*
*Set will put the var in your current session
*SetX will put the var in the environment, but NOT in your current session
I have this simple batch script to switch my Maven build from Java 7 to Java 8 (both are env vars). The batch folder is in my PATH var so I can always call 'j8', and within my console and in the environment my JAVA_HOME var gets changed:
j8.bat:
@echo off
set JAVA_HOME=%JAVA_HOME_8%
setx JAVA_HOME "%JAVA_HOME_8%"
So far I find this works best and easiest.
You probably want this to be one command, but it simply isn't there in Windows...
A: The solution I've been using for a few years now:
@echo off
rem Refresh PATH from registry.
setlocal
set USR_PATH=
set SYS_PATH=
for /F "tokens=3* skip=2" %%P in ('%SystemRoot%\system32\reg.exe query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment" /v PATH') do @set "SYS_PATH=%%P %%Q"
for /F "tokens=3* skip=2" %%P in ('%SystemRoot%\system32\reg.exe query "HKCU\Environment" /v PATH') do @set "USR_PATH=%%P %%Q"
if "%SYS_PATH:~-1%"==" " set "SYS_PATH=%SYS_PATH:~0,-1%"
if "%USR_PATH:~-1%"==" " set "USR_PATH=%USR_PATH:~0,-1%"
endlocal & call set "PATH=%SYS_PATH%;%USR_PATH%"
goto :EOF
Edit: Woops, here's the updated version.
A: I liked the approach followed by chocolatey, as posted in anonymous coward's answer, since it is a pure batch approach. However, it leaves a temporary file and some temporary variables lying around. I made a cleaner version for myself.
Make a file refreshEnv.bat somewhere on your PATH. Refresh your console environment by executing refreshEnv.
@ECHO OFF
REM Source found on https://github.com/DieterDePaepe/windows-scripts
REM Please share any improvements made!
REM Code inspired by http://stackoverflow.com/questions/171588/is-there-a-command-to-refresh-environment-variables-from-the-command-prompt-in-w
IF [%1]==[/?] GOTO :help
IF [%1]==[/help] GOTO :help
IF [%1]==[--help] GOTO :help
IF [%1]==[] GOTO :main
ECHO Unknown command: %1
EXIT /b 1
:help
ECHO Refresh the environment variables in the console.
ECHO.
ECHO refreshEnv Refresh all environment variables.
ECHO refreshEnv /? Display this help.
GOTO :EOF
:main
REM Because the environment variables may refer to other variables, we need a 2-step approach.
REM One option is to use delayed variable evaluation, but this forces use of SETLOCAL and
REM may pose problems for files with an '!' in the name.
REM The option used here is to create a temporary batch file that will define all the variables.
REM Check to make sure we don't overwrite an actual file.
IF EXIST %TEMP%\__refreshEnvironment.bat (
ECHO Environment refresh failed!
ECHO.
ECHO This script uses a temporary file "%TEMP%\__refreshEnvironment.bat", which already exists. The script was aborted in order to prevent accidental data loss. Delete this file to enable this script.
EXIT /b 1
)
REM Read the system environment variables from the registry.
FOR /F "usebackq tokens=1,2,* skip=2" %%I IN (`REG QUERY "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment"`) DO (
REM /I -> ignore casing, since PATH may also be called Path
IF /I NOT [%%I]==[PATH] (
ECHO SET %%I=%%K>>%TEMP%\__refreshEnvironment.bat
)
)
REM Read the user environment variables from the registry.
FOR /F "usebackq tokens=1,2,* skip=2" %%I IN (`REG QUERY HKCU\Environment`) DO (
REM /I -> ignore casing, since PATH may also be called Path
IF /I NOT [%%I]==[PATH] (
ECHO SET %%I=%%K>>%TEMP%\__refreshEnvironment.bat
)
)
REM PATH is a special variable: it is automatically merged based on the values in the
REM system and user variables.
REM Read the PATH variable from the system and user environment variables.
FOR /F "usebackq tokens=1,2,* skip=2" %%I IN (`REG QUERY "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment" /v PATH`) DO (
ECHO SET PATH=%%K>>%TEMP%\__refreshEnvironment.bat
)
FOR /F "usebackq tokens=1,2,* skip=2" %%I IN (`REG QUERY HKCU\Environment /v PATH`) DO (
ECHO SET PATH=%%PATH%%;%%K>>%TEMP%\__refreshEnvironment.bat
)
REM Load the variable definitions from our temporary file.
CALL %TEMP%\__refreshEnvironment.bat
REM Clean up after ourselves.
DEL /Q %TEMP%\__refreshEnvironment.bat
ECHO Environment successfully refreshed.
A: This works on Windows 7: SET PATH=%PATH%;C:\CmdShortcuts
Tested by typing echo %PATH%, and it worked fine. It is also set if you open a new cmd, so no need for those pesky reboots any more :)
A: On Windows 7/8/10, you can install Chocolatey, which has a script for this built-in.
After installing Chocolatey, just type RefreshEnv.cmd.
A: Use "setx" and restart cmd prompt
There is a command line tool named "setx" for this job.
It's for reading and writing env variables.
The variables persist after the command window has been closed.
It "Creates or modifies environment variables in the user or system environment, without requiring programming or scripting. The setx command also retrieves the values of registry keys and writes them to text files."
Note: variables created or modified by this tool will be available in future command windows but not in the current CMD.exe command window. So, you have to restart.
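For example (MY_ENV_VAR and its value are just placeholders):
REM user variable (written to HKCU\Environment):
setx MY_ENV_VAR "some value"
REM machine-wide variable (requires an elevated prompt):
setx /M MY_ENV_VAR "some value"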
If setx is missing:
*
*http://download.microsoft.com/download/win2000platform/setx/1.00.0.1/nt5/en-us/setx_setup.exe
Or modify the registry
MSDN says:
To programmatically add or modify system environment variables, add them to the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Environment registry key, then broadcast a WM_SETTINGCHANGE message with lParam set to the string "Environment".
This allows applications, such as the shell, to pick up your updates.
A: Thank you for posting this question, which is quite interesting even in 2019 (indeed, it is not easy to renew the cmd shell since it is a single instance, as mentioned above), because renewing environment variables in Windows allows you to accomplish many automation tasks without having to manually restart the command line.
For example, we use this to allow software to be deployed and configured on a large number of machines that we reinstall regularly. And I must admit that having to restart the command line during the deployment of our software would be very impractical and would require us to find workarounds that are not necessarily pleasant.
Let's get to our problem.
We proceed as follows.
1 - We have a batch script that in turn calls a powershell script like this
[file: task.cmd].
cmd > powershell.exe -executionpolicy unrestricted -File C:\path_here\refresh.ps1
2 - After this, the refresh.ps1 script renews the environment variables using registry keys (GetValueNames(), etc.).
Then, in the same PowerShell script, we can simply use the newly available environment variables.
For example, in a typical case, if we have just installed Node.js with cmd using silent commands, then after the function has been called we can directly call npm in the same session to install particular packages, as follows.
[file: refresh.ps1]
function Update-Environment {
$locations = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment',
'HKCU:\Environment'
$locations | ForEach-Object {
$k = Get-Item $_
$k.GetValueNames() | ForEach-Object {
$name = $_
$value = $k.GetValue($_)
if ($userLocation -and $name -ieq 'PATH') {
$env:Path += ";$value"
} else {
Set-Item -Path Env:\$name -Value $value
}
}
$userLocation = $true
}
}
Update-Environment
#Here we can use newly added environment variables like for example npm install..
npm install -g create-react-app serve
Once the PowerShell script is over, the cmd script goes on with other tasks.
Now, one thing to keep in mind is that after the task is completed, cmd still has no access to the new environment variables, even if the PowerShell script has updated them in its own session. That's why we do all the needed tasks in the PowerShell script, which can of course call the same commands as cmd.
A: Calling this function has worked for me:
VOID Win32ForceSettingsChange()
{
DWORD dwReturnValue;
::SendMessageTimeout(HWND_BROADCAST, WM_SETTINGCHANGE, 0, (LPARAM) "Environment", SMTO_ABORTIFHUNG, 5000, &dwReturnValue);
}
A: Here is what Chocolatey uses.
https://github.com/chocolatey/choco/blob/master/src/chocolatey.resources/redirects/RefreshEnv.cmd
@echo off
::
:: RefreshEnv.cmd
::
:: Batch file to read environment variables from registry and
:: set session variables to these values.
::
:: With this batch file, there should be no need to reload command
:: environment every time you want environment changes to propagate
::echo "RefreshEnv.cmd only works from cmd.exe, please install the Chocolatey Profile to take advantage of refreshenv from PowerShell"
echo | set /p dummy="Refreshing environment variables from registry for cmd.exe. Please wait..."
goto main
:: Set one environment variable from registry key
:SetFromReg
"%WinDir%\System32\Reg" QUERY "%~1" /v "%~2" > "%TEMP%\_envset.tmp" 2>NUL
for /f "usebackq skip=2 tokens=2,*" %%A IN ("%TEMP%\_envset.tmp") do (
echo/set "%~3=%%B"
)
goto :EOF
:: Get a list of environment variables from registry
:GetRegEnv
"%WinDir%\System32\Reg" QUERY "%~1" > "%TEMP%\_envget.tmp"
for /f "usebackq skip=2" %%A IN ("%TEMP%\_envget.tmp") do (
if /I not "%%~A"=="Path" (
call :SetFromReg "%~1" "%%~A" "%%~A"
)
)
goto :EOF
:main
echo/@echo off >"%TEMP%\_env.cmd"
:: Slowly generating final file
call :GetRegEnv "HKLM\System\CurrentControlSet\Control\Session Manager\Environment" >> "%TEMP%\_env.cmd"
call :GetRegEnv "HKCU\Environment">>"%TEMP%\_env.cmd" >> "%TEMP%\_env.cmd"
:: Special handling for PATH - mix both User and System
call :SetFromReg "HKLM\System\CurrentControlSet\Control\Session Manager\Environment" Path Path_HKLM >> "%TEMP%\_env.cmd"
call :SetFromReg "HKCU\Environment" Path Path_HKCU >> "%TEMP%\_env.cmd"
:: Caution: do not insert space-chars before >> redirection sign
echo/set "Path=%%Path_HKLM%%;%%Path_HKCU%%" >> "%TEMP%\_env.cmd"
:: Cleanup
del /f /q "%TEMP%\_envset.tmp" 2>nul
del /f /q "%TEMP%\_envget.tmp" 2>nul
:: capture user / architecture
SET "OriginalUserName=%USERNAME%"
SET "OriginalArchitecture=%PROCESSOR_ARCHITECTURE%"
:: Set these variables
call "%TEMP%\_env.cmd"
:: Cleanup
del /f /q "%TEMP%\_env.cmd" 2>nul
:: reset user / architecture
SET "USERNAME=%OriginalUserName%"
SET "PROCESSOR_ARCHITECTURE=%OriginalArchitecture%"
echo | set /p dummy="Finished."
echo .
A: You can capture the system environment variables with a vbs script, but you need a bat script to actually change the current environment variables, so this is a combined solution.
Create a file named resetvars.vbs containing this code, and save it on the path:
Set oShell = WScript.CreateObject("WScript.Shell")
filename = oShell.ExpandEnvironmentStrings("%TEMP%\resetvars.bat")
Set objFileSystem = CreateObject("Scripting.fileSystemObject")
Set oFile = objFileSystem.CreateTextFile(filename, TRUE)
set oEnv=oShell.Environment("System")
for each sitem in oEnv
oFile.WriteLine("SET " & sitem)
next
path = oEnv("PATH")
set oEnv=oShell.Environment("User")
for each sitem in oEnv
oFile.WriteLine("SET " & sitem)
next
path = path & ";" & oEnv("PATH")
oFile.WriteLine("SET PATH=" & path)
oFile.Close
create another file name resetvars.bat containing this code, same location:
@echo off
%~dp0resetvars.vbs
call "%TEMP%\resetvars.bat"
When you want to refresh the environment variables, just run resetvars.bat
Apologetics:
The two main problems I had coming up with this solution were
a. I couldn't find a straightforward way to export environment variables from a vbs script back to the command prompt, and
b. the PATH environment variable is a concatenation of the user and the system PATH variables.
I'm not sure what the general rule is for conflicting variables between user and system, so I elected to make user override system, except in the PATH variable which is handled specifically.
I use the weird vbs+bat+temporary bat mechanism to work around the problem of exporting variables from vbs.
Note: this script does not delete variables.
This can probably be improved.
ADDED
If you need to export the environment from one cmd window to another, use this script (let's call it exportvars.vbs):
Set oShell = WScript.CreateObject("WScript.Shell")
filename = oShell.ExpandEnvironmentStrings("%TEMP%\resetvars.bat")
Set objFileSystem = CreateObject("Scripting.fileSystemObject")
Set oFile = objFileSystem.CreateTextFile(filename, TRUE)
set oEnv=oShell.Environment("Process")
for each sitem in oEnv
oFile.WriteLine("SET " & sitem)
next
oFile.Close
Run exportvars.vbs in the window you want to export from, then switch to the window you want to export to, and type:
"%TEMP%\resetvars.bat"
A: The best method I came up with was to just do a Registry query. Here is my example.
In my example I did an install using a batch file that added new environment variables. I needed to do things with these as soon as the install was complete, but was unable to spawn a new process with those new variables. I tested spawning another Explorer window and calling back to cmd.exe, and this worked, but on Vista and Windows 7 Explorer only runs as a single instance, normally as the logged-in user. That would fail with automation, since I need my admin creds to do things regardless of running from local system or as an administrator on the box. The limitation of this is that it does not handle things like PATH; it only worked on simple environment variables. It allowed me to use a batch file to change to a directory (with spaces), copy in files, run .exes, etc. This was written today from many resources on stackoverflow.com.
Original batch calls the new batch:
testenvget.cmd SDROOT (or whatever the variable is)
@ECHO OFF
setlocal ENABLEEXTENSIONS
set keyname=HKLM\System\CurrentControlSet\Control\Session Manager\Environment
set value=%1
SET ERRKEY=0
REG QUERY "%KEYNAME%" /v "%VALUE%" 2>NUL| FIND /I "%VALUE%"
IF %ERRORLEVEL% EQU 0 (
ECHO The Registry Key Exists
) ELSE (
SET ERRKEY=1
Echo The Registry Key Does not Exist
)
Echo %ERRKEY%
IF %ERRKEY% EQU 1 GOTO :ERROR
FOR /F "tokens=1-7" %%A IN ('REG QUERY "%KEYNAME%" /v "%VALUE%" 2^>NUL^| FIND /I "%VALUE%"') DO (
ECHO %%A
ECHO %%B
ECHO %%C
ECHO %%D
ECHO %%E
ECHO %%F
ECHO %%G
SET ValueName=%%A
SET ValueType=%%B
SET C1=%%C
SET C2=%%D
SET C3=%%E
SET C4=%%F
SET C5=%%G
)
SET VALUE1=%C1% %C2% %C3% %C4% %C5%
echo The Value of %VALUE% is %C1% %C2% %C3% %C4% %C5%
cd /d "%VALUE1%"
pause
REM **RUN Extra Commands here**
GOTO :EOF
:ERROR
Echo The environment variable does not exist.
pause
GOTO :EOF
Also, there is another method that I came up with from various different ideas. Please see below. This basically gets the newest path variable from the registry; however, it will cause a number of issues because the registry query returns variables unexpanded, which means that anywhere there is a variable this will not work. To combat this issue I basically double up the path. Very nasty. The more preferred method would be to do:
Set Path=%Path%;C:\Program Files\Software....\
Regardless, here is the new batch file; please use caution.
@ECHO OFF
SETLOCAL ENABLEEXTENSIONS
set org=%PATH%
for /f "tokens=2*" %%A in ('REG QUERY "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment" /v Path ^|FIND /I "Path"') DO (
SET path=%%B
)
SET PATH=%org%;%PATH%
set path
A: Restarting Explorer did this for me, but only for new cmd terminals.
The terminal in which I set the path could already see the new Path variable (in Windows 7).
taskkill /f /im explorer.exe && explorer.exe
A: There is no straightforward way, as Kev said. In most cases, it is simpler to spawn another CMD box. More annoyingly, running programs are not aware of changes either (although IIRC there might be a broadcast message to watch for, to be notified of such a change).
It used to be worse: in older versions of Windows, you had to log off and then log back in to take the changes into account...
A: I use this Powershell script to add to the PATH variable.
With a little adjustment it can work in your case too I believe.
#REQUIRES -Version 3.0
if (-not ("win32.nativemethods" -as [type])) {
# import sendmessagetimeout from win32
add-type -Namespace Win32 -Name NativeMethods -MemberDefinition @"
[DllImport("user32.dll", SetLastError = true, CharSet = CharSet.Auto)]
public static extern IntPtr SendMessageTimeout(
IntPtr hWnd, uint Msg, UIntPtr wParam, string lParam,
uint fuFlags, uint uTimeout, out UIntPtr lpdwResult);
"@
}
$HWND_BROADCAST = [intptr]0xffff;
$WM_SETTINGCHANGE = 0x1a;
$result = [uintptr]::zero
function global:ADD-PATH
{
[Cmdletbinding()]
param (
[parameter(Mandatory=$True, ValueFromPipeline=$True, Position=0)]
[string] $Folder
)
# See if a folder variable has been supplied.
if (!$Folder -or $Folder -eq "" -or $Folder -eq $null) {
throw 'No Folder Supplied. $ENV:PATH Unchanged'
}
# Get the current search path from the environment keys in the registry.
$oldPath=$(Get-ItemProperty -Path 'Registry::HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Environment' -Name PATH).Path
# See if the new Folder is already in the path.
if ($oldPath | Select-String -SimpleMatch $Folder){
return 'Folder already within $ENV:PATH'
}
# Set the New Path and add the ; in front
$newPath=$oldPath+';'+$Folder
Set-ItemProperty -Path 'Registry::HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Environment' -Name PATH -Value $newPath -ErrorAction Stop
# notify all windows of environment block change (must happen before the return, or it would never run)
[win32.nativemethods]::SendMessageTimeout($HWND_BROADCAST, $WM_SETTINGCHANGE, [uintptr]::Zero, "Environment", 2, 5000, [ref]$result)
# Show our results back to the world
return 'This is the new PATH content: '+$newPath
}
function global:REMOVE-PATH {
[Cmdletbinding()]
param (
[parameter(Mandatory=$True, ValueFromPipeline=$True, Position=0)]
[String] $Folder
)
# See if a folder variable has been supplied.
if (!$Folder -or $Folder -eq "" -or $Folder -eq $NULL) {
throw 'No Folder Supplied. $ENV:PATH Unchanged'
}
# add a leading ";" if missing
if ($Folder[0] -ne ";") {
$Folder = ";" + $Folder;
}
# Get the Current Search Path from the environment keys in the registry
$newPath=$(Get-ItemProperty -Path 'Registry::HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Environment' -Name PATH).Path
# Find the value to remove, replace it with $NULL. If it's not found, nothing will change and you get a message.
if ($newPath -match [regex]::Escape($Folder)) {
$newPath=$newPath -replace [regex]::Escape($Folder),$NULL
} else {
return "The folder you mentioned does not exist in the PATH environment"
}
# Update the Environment Path
Set-ItemProperty -Path 'Registry::HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Environment' -Name PATH -Value $newPath -ErrorAction Stop
# notify all windows of environment block change (must happen before the return, or it would never run)
[win32.nativemethods]::SendMessageTimeout($HWND_BROADCAST, $WM_SETTINGCHANGE, [uintptr]::Zero, "Environment", 2, 5000, [ref]$result)
# Show what we just did
return 'This is the new PATH content: '+$newPath
}
# Use ADD-PATH or REMOVE-PATH accordingly.
#Anything to Add?
#Anything to Remove?
REMOVE-PATH "%_installpath_bin%"
A: Edit: this only works if the environment changes you're doing are as a result of running a batch file.
If a batch file begins with SETLOCAL then it will always unravel back to your original environment on exit even if you forget to call ENDLOCAL before the batch exits, or if it aborts unexpectedly.
Almost every batch file I write begins with SETLOCAL since in most cases I don't want the side-effects of environment changes to remain. In cases where I do want certain environment variable changes to propagate outside the batch file then my last ENDLOCAL looks like this:
ENDLOCAL & (
SET RESULT1=%RESULT1%
SET RESULT2=%RESULT2%
)
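Put together, a minimal sketch of the whole pattern (SCRATCH is a hypothetical throwaway variable; RESULT1/RESULT2 are the values to keep):
@ECHO OFF
SETLOCAL
REM work that sets temporary and result variables
SET SCRATCH=only-needed-inside
SET RESULT1=first-value
SET RESULT2=second-value
REM %RESULT1% and %RESULT2% are expanded while the whole ENDLOCAL block is
REM parsed, so the values survive; SCRATCH is discarded with the local scope
ENDLOCAL & (
SET RESULT1=%RESULT1%
SET RESULT2=%RESULT2%
)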
A: To solve this I have changed the environment variable using BOTH setx and set, and then restarted all instances of explorer.exe. This way any process subsequently started will have the new environment variable.
My batch script to do this:
setx /M ENVVAR "NEWVALUE"
set ENVVAR="NEWVALUE"
taskkill /f /IM explorer.exe
start explorer.exe >nul
exit
The problem with this approach is that all Explorer windows that are currently open will be closed, which is probably a bad idea. But see the post by Kev to learn why this is necessary.
A: I just wanted to state that for those who use Anaconda: when you use the Chocolatey refreshenv command, all the environment variables associated with conda will be lost.
To counter this, the best way is to restart CMD. :(
A: This doesn't directly answer your question but if you just want interprocess communication, and you can use PowerShell, you can just use the Clipboard.
In one process
Set-Clipboard("MyText")
In a separate process
$clipValue=Get-Clipboard
Then you can use clipValue like any other string. This actually gives you the ability to send the entire list of environment variables to another process as a CSV text string.
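A minimal sketch of that idea (untested here; Set-Clipboard and Get-Clipboard require PowerShell 5 or later):
# Sending process: serialize all environment variables to CSV and copy to the clipboard
Get-ChildItem Env: | Select-Object Name, Value | ConvertTo-Csv -NoTypeInformation | Set-Clipboard
# Receiving process: read the CSV back into objects with Name/Value properties
$envVars = Get-Clipboard | ConvertFrom-Csv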
A: More than likely, you want your updated environment variables to be accessible by an application that you have open and/or are running. As such, your best and easiest course of action is to simply close and re-open your application so that it picks up your updated environment variables.
The details behind the scenes are very nuanced, but the above should work for most use cases.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "643"
} |
Q: Is it possible to deploy an already built Drupal site? I'm a Drupal newbie. Is it possible to set up everything and deploy Drupal on the server? I mean things like putting in the content, setting up the modules, etc., and then putting it all up on the production server?
A: Of course.
*
*copy all the files
*edit the database credentials (sites/default/settings.php)
*export the database content via mysqldump or phpMyAdmin (supposing you use MySQL)
*import the database content at the target server
I've done it several times.
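For steps 3 and 4, a minimal command-line sketch (the user and database names are placeholders):
mysqldump -u dbuser -p source_db > drupal_dump.sql
mysql -u dbuser -p target_db < drupal_dump.sql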
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: C++ Strategy Design Pattern, making an interface array After having implemented the strategy pattern, I wanted to make an array of the interface-type, to which I can then add any concrete type.
For those who don't know the strategy pattern:
http://en.wikipedia.org/wiki/Strategy_pattern
In this particular example I would want to make a StrategyInterface array, which I can then fill with concrete types A, B and C. However, because this is an abstract class, I can't get it done. Is there a way to do this, or is it completely impossible without removing the abstract method?
A: Make the array store pointers to the interface type:
typedef std::vector<Interface *> Array;
Array myArray;
myArray.push_back(new A());
Additionally, you can use ptr_vector which manages memory for you:
typedef boost::ptr_vector<Interface> Array;
// the rest is the same
A: Store pointers, not objects... Use boost::shared_ptr if you want to store objects.
A: errr, for example...
std::vector< boost::shared_ptr< AbstractStrategy > >
A: How about using boost any?
Here's an example of what it would look like
#include <list>
#include <boost/any.hpp>
using boost::any_cast;
typedef std::list<boost::any> many;
void append_int(many & values, int value)
{
boost::any to_append = value;
values.push_back(to_append);
}
void append_string(many & values, const std::string & value)
{
values.push_back(value);
}
void append_char_ptr(many & values, const char * value)
{
values.push_back(value);
}
void append_any(many & values, const boost::any & value)
{
values.push_back(value);
}
void append_nothing(many & values)
{
values.push_back(boost::any());
}
Seems nice and elegant, plus you get well tested code and can treat your values as objects instead of pointers
Note: These function names are informative, but you could use overloading to have a single interface.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Coming up to speed on the programming environment I'm not a full-time software guy. In fact, in the last ten years, 90 % of my work was either on the hardware or doing low-level (embedded) code.
But the other 10% involves writing shell scripts for development tools, making kernel changes to add special features, and writing GUI applications for end-users.
The problem is that I find myself facing significant holes in my knowledge, often because it's been years since I've done "X", and I've either forgotten, or the environment has changed.
Every so often, there are threads on TheDailyWTF.com along the lines of "WTF: the guy spent all day writing tons of code, when he could have called foobar() in library baz". I've been there myself, because I don't remember much beyond the #include <stdio.h> stuff (for example), and my quick search somehow missed the right library.
What methods have you found effective to crash-learn and/or crash-refresh yourself in programming environments that you rarely touch?
A: *
*Ask developers you know that work in the environment that you are interested in.
*Search the web a lot.
*Ask specific questions in relevant IRC channels (Freenode is great).
*Ask specific questions on StackOverflow and other sites.
There really isn't any substitue for being "in the daily flow" of the programming environment in question. Having a good feel for the current state of the art is something you only get from experience, as I'm sure you can verify in you own areas of expertise.
A: I try to keep up with general news about languages I'm interested in but am not necessarily using at the moment. Being able to follow the general changes helps a lot when you have to pick a language up again.
Beyond that, I personally find it easiest to grab an up-to-date reference book and code a few basic things to get used to the environment again; e.g., as a web programmer I'd make a simple CRUD app or a quick web service/client.
A: For frameworks/APIs (such as a JavaScript framework or a widget library):
*
*Quickly scan through the entire API documentation; get a glimpse of all that's out there instead of picking the first method that seems to fit your needs.
*If available, glance at the source code of the framework to see how the API was intended to be used. Seeing what's behind the curtain helps. And also some of the methods will have been used internally, showcasing their true intents.
*Don't necessarily always trust existing code (Googled, from co-workers, from books) since not everyone does the due diligence to find out the most proper way to use an API. Sometimes even samples in API documentation can be out-of-date.
A: In newer full-featured environments like Java, .NET, and Python, there are library solutions to almost every common problem. Don't think "how can I program this in plain C", but "which library solves this problem for me?" It's an attitude shift. As far as resources, the library documentation for the three environments I mentioned are all good.
A: The best solution I think is to get a book on the topic / environment you need to catch up on.
Ask questions from developers who you know who have the experience in that area.
You can also check out news groups (Google Groups makes this easy) and forums. You can ask questions, but even reading 10 minutes of the latest popular questions for a particular topic / environment will keep you a little bit "in the know".
The same thing can go for blogs too if you can find a focussed blog. These are pretty rare though and I personally don't look to blogs to keep me "in the know" on a particular environment. (I personally find blogs most popular and interesting in the "here's something neat" or "here's how I failed and you can avoid it" or "general practice" areas.)
A: In addition to the answers above, I think what you are asking for will take a significant amount of your time, and you must be willing to spend that time to achieve your goals. My method would be pretty much the same as Owen's answer; get a reference book or tutorial and work through the examples, hacking in changes as you go to experiment with how any given thing works. I'd say, as a bare minimum, allocate an hour to do this every other day, in a time that you know you won't be interrupted. Any less, and you'll probably continue to struggle.
A: The best way to crash-learn is simple: just do it. Use Google to search for an X tutorial, open your favorite browser, and start typing away. Once you have reached a certain level of comfort with X, do look at other people's work; there is lots of open source out there, and there must be somebody who has used X before. Look at how they solved certain problems and learn from it; this is an easy way to verify that you are 'on the right track' or that you're doing things or thinking in patterns that other people would define as 'common sense'.
Crash-refreshing something is much easier, since you have already climbed part of the learning curve. The way I do this is to keep some of the examples and projects I made while learning; then you can easily refresh using your own examples.
As for the library issue you mention: only improving your search skills will improve that one (although looking at how others solved it will help as well).
A: Don't try and pick up every environment.
Focus on the one that's useful and/or interesting, and then pick a few quality blogs to regularly read or podcasts to listen to. You'll pick up the current state of the environment fairly quickly.
Concrete example: I've been out of the Java world for a long time, but I've been put on a Java project in the last few months. Since then I've listened to the Java Posse podcast and read a few blogs, and although I'm far from a Java guru I've got back up to speed without too much trouble.
A: Just a thought. While we are working on our code we know that we need to work very hard to optimize the critical path, but on the non-critical path we usually don't spend too much effort optimizing.
From your description you are working 90% on embedded and 10% on the rest; let's assume that in 50% of the rest you are spending more time than needed. So according to my calculation you are optimizing about 5% of your workflow...
Of course the usual Google/SO/forums search is useful before you do something new, but investing more than just that is a waste of time in my opinion, unless you want to spend some time just for fun or general education... :) but that is another story.
By the way, I'm in the same position, and the last time I needed some GUI I used MFC (because I used it some 10 years ago :) ). I perfectly understand that I would probably get better results with C# and friends, but the learning curve just doesn't justify it, especially knowing that I need to mix the C code with the GUI.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What functional differences exist between WPF and WinForms WebBrowser control? WPF WebBrowser control looks great but knowledge accumlated over time about WinForms WebBrowser is substantial and it's hard to ignore work like csExWB. It would be nice to know what functional shortcomings or advantages exists in .NET 3.5's WPF WebBrowser control over WinForms WebBrowser control. In particular, is it possible to build csExWB-like functionality on top of WPF WebBrowser?
A: From one full day of frustration with WPF's component, here's what I discovered. Apparently, the WinForms WebBrowser exposes many more methods and properties. For instance, there's no IsWebBrowserContextMenuEnabled, ActiveXInstance, etc. in the WPF WebBrowser.
Also, the Document property of each contains a different type of object. WinForms exposes a document of type System.Windows.Forms.HtmlDocument with a few interesting methods and properties like PointToClient and GetElementFromPoint. The WPF WebBrowser's Document is of type Object and can be cast to mshtml.HTMLDocument, which only provides the same methods and properties available from a standard HTML + JavaScript document. Not very exciting. I don't know if it can be cast to something else (something useful, that is) since there's no real documentation about it.
The only disadvantage I could notice about the WinForms WebBrowser is that the buttons and scrollbars inside the component don't have the same appearance as the WPF native controls.
A: I must admit I don't know the differences, but if you hit problems you could perhaps use WindowsFormsHost to host the WinForms version in WPF, like so? Ultimately, both are wrappers around shdocvw, so principles like "pure WPF" don't really apply.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: LINQ Benchmarks in multitiered environment I am involved in development of a tiered application that uses LINQ2SQL separated from the web server with a NET.TCP Binding on WCF.
My questions are:
*
*What sort of measures should I take
to achieve the best performance?
*Since the entity objects returned by
the LINQ need to be converted to a
IEnumerable list to be serialized
everytime, is there anyway to remove
this dependency?
A: 1) Concentrate on a properly normalized database design. I would say that when you are forced to make design tradeoffs in your code vs. database design, if performance is your goal, make tradeoffs in your object design instead of your database design. Understand that you aren't going to be able to do a proper supertype/subtype database design which will work with Linq to SQL (I'm told you need to use the EF instead).
2) Depends what you mean here. If you're asking how you would serialize anonymous classes across the wire, the easy answer is: "you can't, so don't try". If you want to put lists of objects across the wire, just use the ToArray() extension method on your IEnumerable collections to ship arrays of your business objects over the wire.
A: Linq to SQL is very slow unless you compile queries. Otherwise your application will be CPU bound, as most of the time will be spent converting expression trees into SQL.
We are talking about 10x performance gain if you use compiled queries. Try it :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Binding Gtk# NodeView to an IList? I've got a data object with a component in it that is a System.Collections.Generic.IList, and I'd like to reflect changes to that list into a Gtk# NodeView, so that when an item is added to the list, the NodeView will get a new item added to it.
How would I listen for changes to an IList? I have considered wrapping the IList with a class that implements IList, delegates the requisite methods, and broadcasts an event when changing its contents, but that seems like a lot of work for something that has probably already been solved by someone else.
A: Do System.ComponentModel.BindingList or System.Collections.ObjectModel.ObservableCollection exist in Mono?
A: Gtk.DataBindings is what you're looking for.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Greasemonkey: love it or hate it? As users, we love the power of Greasemonkey. As developers, it can complicate things.
Some people advocate defensively disabling user scripts; others are willing to die to defend them.
Is there a middle ground? How can we reduce the threat of an evolutionary arms race between users and unscrupulous advertisers?
A: If someone uses a browser plugin or modification of any kind, you shouldn't attempt to block them or disable it- that's just likely to result in more problems.
On the other hand, you shouldn't support them either; if it works, let it work, if it breaks, let it break.
Lots of people use dodgy browser add-ons, sometimes unintentionally and occasionally without any choice (Corporate IT imposes some dodgy IE modification on them). Try to play nicely with everyone.
A: I agree with MarkR (+1, by the way).
The application is on the client side...
So whatever web designers try to do to stop some feature, they will either anger their users, have them hack up another solution, or drive them away to friendlier sites.
(I hate it when some broken-down site needs me to open my Firefox debugger just to be able to complete a form, because some braindead developer forgot to declare his i loop variable, thus making it a global... And I did it again less than two days ago.)
Online apps should never rely on client-side controls and protections (i.e. testing the date value on the client is a nice bonus, but testing it anyway, and always, on the server is the thing to do).
So the worst thing that could happen would be for the app to break for the GM user because of some faulty script. But from the server's viewpoint, it should remain pristine.
... Thus, the client should be held responsible...
That means that whatever hack the user adds to his browser, the user is then on his own. At best, he/she could discover some hidden bug and report it. At worst, the site won't work properly.
...Now, does it mean client/server cooperation, too?
Most people using GM or whatever to enhance a site are showing that the site does not exactly suit their tastes. The good thing, as Rich wrote, is that they are still on your site, and not elsewhere.
Either the "enhancement" is for his/her own very personal taste, and, hey, what's the problem if he/she wants to see your black-on-white web page as yellow-on-blue? Or perhaps the "enhancement" adds a lot of value to your site. In this case, I guess that what you want is to either offer the same features for everyone (thanking the author of the GM script for the idea could be a good move), or perhaps support it as an optional feature ("click here for the advanced experience") or... a Greasemonkey script?
A: I don't see why developers should hate it?
Advertisers? Nowadays, I doubt people write GM scripts to get rid of ads, they use AdBlock [Plus] instead...
In general, I write GM scripts to improve something lacking in the sites where I go frequently, so if you try and take measures to disable GM, I would be very upset and might boycott the site.
Besides, I guess that the number of people knowledgeable enough to install GM (even those using pre-made scripts) is quite small relative to the total number of visitors.
A: Any control you think you have over which user agents your site visitors use is illusory. However, the vast majority will be using vanilla IE/Firefox/Safari. But if you have a site where the audience has adopted a Greasemonkey script en masse, then treat that as a strong vote from your users that the site needs to change!
A: If your users are using a Greasemonkey script and coming back, take it as good fortune: there's something they need that you're not giving them, and they haven't left yet.
If they didn't have Greasemonkey, they'd be gone. What can you do to keep them?
A: We've got to accept the reality of our platform: once your website is in the (computer) memory of the viewer, they're able to do whatever they want with it, without your permission. Popular sites that try to dictate their own viewing terms to their audience often suffer immediate and angry backlash - instead of trying to do it your way, let the users do what they want, embrace it, and you'll end up providing a better service, which your users will appreciate and reward.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Change min/max/close buttons theme I'm currently overriding WM_NCPAINT, WM_NCCALCSIZE and WM_NCACTIVATE to paint my own colored/themed title bar for an application I'm working on. This is working great; however, the min, max and close buttons still use the default XP theme.
I looked into what controls them and what the mouse messages do. However, they also control resizing and other functions that I don't want to lose.
Is there an easy way to just change the theme of these buttons?
*
*Windows XP
*MFC Forms
*Visual Studio 2005
A: I think your best bet here is to disable the buttons and redraw them with something akin to the code I posted in this answer. It's in C# with WinForms, but the vast majority of it is an overloaded WndProc() anyway, which you should be able to copy/paste almost directly into MFC.
Implementing click handlers to do what you want them to do is trivial.
Note: The asker of that question said the code didn't work in Vista. I don't have a Vista box, but it works for me in XP.
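For a flavour of the owner-draw side, here is a minimal MFC sketch (this is not the linked code; the rectangle and colour are made up, and hit-testing/click handling is omitted):
// Requires ON_WM_NCPAINT() in the message map.
void CMainFrame::OnNcPaint()
{
    CFrameWnd::OnNcPaint();       // let the default non-client painting run first
    CWindowDC dc(this);           // DC covering the whole window, caption included
    CRect rcWindow;
    GetWindowRect(&rcWindow);
    // Paint a placeholder over the caption-button area (coordinates are
    // window-relative for a CWindowDC).
    CRect rcButtons(rcWindow.Width() - 60, 4, rcWindow.Width() - 6, 24);
    dc.FillSolidRect(rcButtons, RGB(48, 48, 48));
}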
A: This also helped: http://www.catch22.net/tuts/titlebar
A: You can also check out how it's done in MFC Next (VS2008 SP1). The theming support there does custom draw of the whole title bar, you can get a few ideas from that. I presume they tested it on Vista, too ;)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Accessing crash logs on iPhones used for ad hoc distribution When using one's own iPhone for development it's easy enough to access any crash logs via XCode->Organizer->Crash Logs.
How does one access crash logs on another person's phone if they don't have it set up for development in XCode, as would likely be the case if you were distributing your app to them via ad hoc distribution for beta testing?
A: http://www.ispeeddial.com/how-to-find-the-crash-log-for-an-iphone-application/
This will also be helpful;
http://furbo.org/2008/08/08/symbolicatifination/
A: Related to what @millenomi said - from what I can tell, crash logs are downloaded when you connect your iPhone to the computer, not when you sync the phone via iTunes. If your users have iTunes configured to not sync on connection, knowing this can save them the time of syncing. Along those same lines, if your application crashes while it's connected to a computer, simply syncing via iTunes is not enough to download crash logs - I've found that I need to disconnect and reconnect the phone to the computer.
I've only tested this on iPhones and iPod touches that are configured as development devices. Don't know if this makes any difference.
A: Two ways:
*
*iTunes syncs all crash reports during a regular sync. They can be found in Library/Logs/CrashReporter/MobileDevice on a Mac and probably somewhere in %APPDATA% on Windows.
* You can use the iPhone Configuration Utility for Mac OS X on any Mac to access the phone's console and crash logs. Note: the iPhone Web Configuration Utility, which is available for Windows and Mac (note the "web" in the name) does not allow this kind of access and only provides a portion of the Configuration Utility's features. Er, no you can't. Xcode provides this facility in the Organizer (from the Window menu), but not iPCU.
A: From Apple's Technical Note TN2151:
For applications that have been distributed using Ad Hoc or Enterprise methods, getting crash reports requires user cooperation. Specifically, the user will need to retrieve the crash report from the directory where it was copied by iTunes. Depending on the platform, the directory is:
Mac OS X: ~/Library/Logs/CrashReporter/MobileDevice/<DEVICE_NAME>
Windows XP: C:\Documents and Settings\<USERNAME>\Application Data\Apple Computer\Logs\CrashReporter\MobileDevice\<DEVICE_NAME>
Windows Vista or 7: C:\Users\<USERNAME>\AppData\Roaming\Apple Computer\Logs\CrashReporter\MobileDevice\<DEVICE_NAME>
<USERNAME> is the user's login name for the computer. <DEVICE_NAME> is the name of the iPod touch or iPhone, for example, "John's iPhone".
You are only interested in .crash files. The crash report's file name begins with the application name and contains date/time information. In addition, <DEVICE_NAME> will appear at the end of the file name, before the extension.
A: On an iPhone 5, you do not need to connect the iPhone to iTunes to see the logs. Not sure about other iPhone versions but you can get to the logs by opening up Settings and then navigating to:
Settings -> General -> About -> Diagnostics & Usage -> Diagnostics & Usage Data
For the app you are looking for all crash logs should be in this area as:
-.ips
Not 100% sure if this works only for ad hoc apps or not. I think it works for any apps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Making locking easier in MTAs In multi-threaded code, when an instance may be read or written by multiple threads, it needs to be locked to perform these operations safely.
To avoid the repetition of creating an object to lock on and writing a bunch of lock statements through code, I've created a generic class to handle the locking.
Am I missing anything, conceptually? This should work, right?
public class Locked<T> where T : new()
{
private readonly object locker = new object();
private T value;
public Locked()
: this(default(T))
{ }
public Locked(T value)
{
this.value = value;
}
public T Get()
{
lock (this.locker)
{
return this.value;
}
}
public void Set(T value)
{
lock (this.locker)
{
this.value = value;
}
}
}
And an example of it being used in a class:
private Locked<bool> stopWorkerThread = new Locked<bool>();
public void WorkerThreadEntryPoint()
{
while (true)
{
if (this.stopWorkerThread.Get())
{
break;
}
// ... do work ...
}
}
Also, how would I test something like this, in an automated way (e.g. create a unit test)?
Lastly, what can I do to implement a ++ and -- operator, to avoid this:
this.runningThreads.Set(this.runningThreads.Get() + 1);
A: That only locks for the duration of the get/set; of course, in many common cases this will be atomic anyway, simply due to data size.
However, in reality most locks need to span more than this, in the same way that collections locking over just the Add etc don't help much - a caller typically needs a single lock to span the "is it there? if so update, else add" sequence.
For something as simple as a bool, "volatile" might solve the problem a lot more simply - especially if it is just for a loop exit.
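For example, a minimal sketch of the volatile variant of the stop flag (replacing the Locked&lt;bool&gt; field; not part of the original class):
private volatile bool stopWorkerThread;

public void WorkerThreadEntryPoint()
{
    while (!this.stopWorkerThread)   // volatile read: always sees the latest write
    {
        // ... do work ...
    }
}

public void Stop()
{
    this.stopWorkerThread = true;    // volatile write, callable from any thread
}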
You might also want to consider [MethodImpl(MethodImplOptions.Synchronized)] - although personally I prefer a private lock object (like you have used) to prevent issues with external people locking on the object (the above uses "this" as the lock).
For unit testing this, you'd need something to prove it broken first - which would be hard, since the operations are so small (and already atomic for most data types). One of the other things it avoids (that volatile also fixes) is caching in a register, but again that is an optimisation and it is hard to force a failure to prove it is broken.
If you are interested in a lock-wrapper, you might consider existing code like this.
A: Your code above has quite a few potential and real multi-threading issues, and I wouldn't use something like it in a real-world situation. For example:
this.runningThreads.Set(this.runningThreads.Get() + 1);
There is a pretty obvious race condition here. When the Get() call returns, the object is no longer locked. To do a real post or pre increment, the counter would need to be locked from before the Get to after the Set.
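For the counter case specifically, a sketch of the lock-free alternative: System.Threading.Interlocked does the read-modify-write atomically, which also answers the ++/-- part of the question (the field and method names here are illustrative):
private int runningThreads;          // plain field, no Locked<T> wrapper

private void OnThreadStarted()
{
    // Atomic increment: no Get-then-Set window for another thread to slip into.
    System.Threading.Interlocked.Increment(ref this.runningThreads);
}

private void OnThreadFinished()
{
    System.Threading.Interlocked.Decrement(ref this.runningThreads);
}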
Also you don't always need to do a full lock if all you are doing is synchronous reads.
A better lock interface would (I think) require you to explicitly lock the instance where you need to do it. My experience is mainly with C++ so I can't recommend a full implementation, but my preferred syntax might look something like this:
using (var guard = new Locked<T>(instance))
{
    // write value
    instance++;
}
// read value
Console.WriteLine(instance);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Should there be a difference between an empty BSTR and a NULL BSTR? When maintaining a COM interface should an empty BSTR be treated the same way as NULL?
In other words should these two function calls produce the same result?
// Empty BSTR
CComBSTR empty(L""); // Or SysAllocString(L"")
someObj->Foo(empty);
// NULL BSTR
someObj->Foo(NULL);
A: The easiest way to handle this dilemma is to use CComBSTR and check for .Length() to be zero. That works for both empty and NULL values.
However, keep in mind that an empty BSTR must still be released, or there will be a memory leak. I saw some of those recently in others' code. They are quite hard to find if you are not looking carefully.
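A minimal sketch of that check (the Foo signature comes from the question; the body is made up, and it needs <atlbase.h>):
HRESULT Foo(BSTR text)
{
    CComBSTR value(text);            // copy-constructing from NULL is safe,
                                     // and RAII releases the copy for us
    if (value.Length() == 0)
    {
        // NULL and L"" both land here and are handled identically
    }
    return S_OK;
}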
A: Yes - a NULL BSTR is the same as an empty one. I remember we had all sorts of bugs that were uncovered when we switched from VS6 to 2003 - the CComBSTR class had a change to the default constructor that allocated it using NULL rather than an empty string. Such bugs show up when you, for example, treat a BSTR as a regular C-style string and pass it to some function like strlen, or try to initialise a std::string with it.
Eric Lippert discusses BSTR's in great detail in Eric's Complete Guide To BSTR Semantics:
Let me list the differences first and then discuss each point in excruciating detail.
*
*A BSTR must have identical semantics for NULL and for "". A PWSZ frequently has different semantics for those.
*A BSTR must be allocated and freed with the SysAlloc* family of functions. A PWSZ can be an automatic-storage buffer from the stack or allocated with malloc, new, LocalAlloc or any other memory allocator.
*A BSTR is of fixed length. A PWSZ may be of any length, limited only by the amount of valid memory in its buffer.
*A BSTR always points to the first valid character in the buffer. A PWSZ may be a pointer to the middle or end of a string buffer.
*When allocating an n-byte BSTR you have room for n/2 wide characters. When you allocate n bytes for a PWSZ you can store n / 2 - 1 characters -- you have to leave room for the null.
*A BSTR may contain any Unicode data including the zero character. A PWSZ never contains the zero character except as an end-of-string marker. Both a BSTR and a PWSZ always have a zero character after their last valid character, but in a BSTR a valid character may be a zero character.
*A BSTR may actually contain an odd number of bytes -- it may be used for moving binary data around. A PWSZ is almost always an even number of bytes and used only for storing Unicode strings.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Formatting a list of text into columns I'm trying to output a list of string values into a 2 column format. The standard way of making a list of strings into "normal text" is by using the string.join method. However, it only takes 2 arguments so I can only make a single column using "\n". I thought trying to make a loop that would simply add a tab between columns would do it but the logic didn't work correctly.
I found an ActiveState page that has a fairly complicated way of doing it but it's from 4 years ago. Is there an easy way to do it nowadays?
Edit Here is the list that I want to use.
skills_defs = ["ACM:Aircraft Mechanic", "BC:Body Combat", "BIO:Biology",
"CBE:Combat Engineer", "CHM:Chemistry", "CMP:Computers",
"CRM:Combat Rifeman", "CVE:Civil Engineer", "DIS:Disguise",
"ELC:Electronics","EQ:Equestrian", "FO:Forward Observer",
"FOR:Forage", "FRG:Forgery", "FRM:Farming", "FSH:Fishing",
"GEO:Geology", "GS:Gunsmith", "HW:Heavy Weapons", "IF:Indirect Fire",
"INS:Instruction", "INT:Interrogation", "JP:Jet Pilot", "LB:Longbow",
"LAP:Light Aircraft Pilot", "LCG:Large Caliber Gun", "LNG:Language",
"LP:Lockpick", "MC:Melee Combat", "MCY:Motorcycle", "MEC:Mechanic",
"MED:Medical", "MET:Meterology", "MNE:Mining Engineer",
"MTL:Metallurgy", "MTN:Mountaineering", "NWH:Nuclear Warhead",
"PAR:Parachute", "PST:Pistol", "RCN:Recon", "RWP:Rotary Wing Pilot",
"SBH:Small Boat Handling","SCD:Scuba Diving", "SCR:Scrounging",
"SWM:Swimming", "TW:Thrown Weapon", "TVD:Tracked Vehicle Driver",
"WVD:Wheeled Vehicle Driver"]
I just want to output this list into a simple, 2 column format to reduce space. Ideally there should be a standard amount of space between the columns but I can work with it.
ACM:Aircraft Mechanic BC:Body Combat
BIO:Biology CBE:Combat Engineer
CHM:Chemistry CMP:Computers
CRM:Combat Rifeman CVE:Civil Engineer
DIS:Disguise ELC:Electronics
EQ:Equestrian FO:Forward Observer
FOR:Forage FRG:Forgery
FRM:Farming FSH:Fishing
GEO:Geology GS:Gunsmith
HW:Heavy Weapons IF:Indirect Fire
INS:Instruction INT:Interrogation
JP:Jet Pilot LB:Longbow
LAP:Light Aircraft Pilot LCG:Large Caliber Gun
LNG:Language LP:Lockpick
MC:Melee Combat MCY:Motorcycle
MEC:Mechanic MED:Medical
MET:Meterology MNE:Mining Engineer
MTL:Metallurgy MTN:Mountaineering
NWH:Nuclear Warhead PAR:Parachute
PST:Pistol RCN:Recon
RWP:Rotary Wing Pilot SBH:Small Boat Handling
SCD:Scuba Diving SCR:Scrounging
SWM:Swimming TW:Thrown Weapon
TVD:Tracked Vehicle Driver WVD:Wheeled Vehicle Driver
A: This works
it = iter(skills_defs)
for i in it:
print('{:<60}{}'.format(i, next(it, "")))
See:
String format examples
A: It's long-winded, so I'll break it into two parts.
def columns( skills_defs, cols=2 ):
pairs = [ "\t".join(skills_defs[i:i+cols]) for i in range(0,len(skills_defs),cols) ]
return "\n".join( pairs )
It can, obviously, be done as a single loooong statement.
This works for an odd number of skills, also.
A: Here is an extension of the solution provided by gimel, which allows printing equally spaced columns.
def fmtcols(mylist, cols):
maxwidth = max(map(lambda x: len(x), mylist))
justifyList = map(lambda x: x.ljust(maxwidth), mylist)
lines = (' '.join(justifyList[i:i+cols])
for i in xrange(0,len(justifyList),cols))
print "\n".join(lines)
which prints something like this
ACM:Aircraft Mechanic BC:Body Combat
BIO:Biology CBE:Combat Engineer
CHM:Chemistry CMP:Computers
CRM:Combat Rifeman CVE:Civil Engineer
DIS:Disguise ELC:Electronics
... ...
A: Two columns, separated by tabs, joined into lines. Look in itertools for iterator equivalents, to achieve a space-efficient solution.
import string
def fmtpairs(mylist):
pairs = zip(mylist[::2],mylist[1::2])
return '\n'.join('\t'.join(i) for i in pairs)
print fmtpairs(list(string.ascii_uppercase))
A B
C D
E F
G H
I J
...
Oops... got caught by S.Lott (thank you).
A more general solution that handles any number of columns and odd lists. Slightly modified from S.Lott, using generators to save space.
def fmtcols(mylist, cols):
lines = ("\t".join(mylist[i:i+cols]) for i in xrange(0,len(mylist),cols))
return '\n'.join(lines)
A: data = [ ("1","2"),("3","4") ]
print "\n".join(map("\t".join,data))
Not as flexible as the ActiveState solution, but shorter :-)
A: The format_columns function should do what you want:
from __future__ import generators
try: import itertools
except ImportError: mymap, myzip= map, zip
else: mymap, myzip= itertools.imap, itertools.izip
def format_columns(string_list, columns, separator=" "):
"Produce equal-width columns from string_list"
sublists= []
# empty_str based on item 0 of string_list
try:
empty_str= type(string_list[0])()
except IndexError: # string_list is empty
return
# create a sublist for every column
for column in xrange(columns):
sublists.append(string_list[column::columns])
# find maximum length of a column
max_sublist_len= max(mymap(len, sublists))
# make all columns same length
for sublist in sublists:
if len(sublist) < max_sublist_len:
sublist.append(empty_str)
# calculate a format string for the output lines
format_str= separator.join(
"%%-%ds" % max(mymap(len, sublist))
for sublist in sublists)
for line_items in myzip(*sublists):
yield format_str % line_items
if __name__ == "__main__":
skills_defs = ["ACM:Aircraft Mechanic", "BC:Body Combat", "BIO:Biology",
"CBE:Combat Engineer", "CHM:Chemistry", "CMP:Computers",
"CRM:Combat Rifeman", "CVE:Civil Engineer", "DIS:Disguise",
"ELC:Electronics","EQ:Equestrian", "FO:Forward Observer",
"FOR:Forage", "FRG:Forgery", "FRM:Farming", "FSH:Fishing",
"GEO:Geology", "GS:Gunsmith", "HW:Heavy Weapons", "IF:Indirect Fire",
"INS:Instruction", "INT:Interrogation", "JP:Jet Pilot", "LB:Longbow",
"LAP:Light Aircraft Pilot", "LCG:Large Caliber Gun", "LNG:Language",
"LP:Lockpick", "MC:Melee Combat", "MCY:Motorcycle", "MEC:Mechanic",
"MED:Medical", "MET:Meterology", "MNE:Mining Engineer",
"MTL:Metallurgy", "MTN:Mountaineering", "NWH:Nuclear Warhead",
"PAR:Parachute", "PST:Pistol", "RCN:Recon", "RWP:Rotary Wing Pilot",
"SBH:Small Boat Handling","SCD:Scuba Diving", "SCR:Scrounging",
"SWM:Swimming", "TW:Thrown Weapon", "TVD:Tracked Vehicle Driver",
"WVD:Wheeled Vehicle Driver"]
for line in format_columns(skills_defs, 2):
print line
This assumes that you have a Python with generators available.
A: I think many of these solutions are conflating two separate things into one.
You want to:
*
*be able to force a string to be a certain width
*print a table
Here's a really simple take on how to do this:
import sys
skills_defs = ["ACM:Aircraft Mechanic", "BC:Body Combat", "BIO:Biology",
"CBE:Combat Engineer", "CHM:Chemistry", "CMP:Computers",
"CRM:Combat Rifeman", "CVE:Civil Engineer", "DIS:Disguise",
"ELC:Electronics","EQ:Equestrian", "FO:Forward Observer",
"FOR:Forage", "FRG:Forgery", "FRM:Farming", "FSH:Fishing",
"GEO:Geology", "GS:Gunsmith", "HW:Heavy Weapons", "IF:Indirect Fire",
"INS:Instruction", "INT:Interrogation", "JP:Jet Pilot", "LB:Longbow",
"LAP:Light Aircraft Pilot", "LCG:Large Caliber Gun", "LNG:Language",
"LP:Lockpick", "MC:Melee Combat", "MCY:Motorcycle", "MEC:Mechanic",
"MED:Medical", "MET:Meterology", "MNE:Mining Engineer",
"MTL:Metallurgy", "MTN:Mountaineering", "NWH:Nuclear Warhead",
"PAR:Parachute", "PST:Pistol", "RCN:Recon", "RWP:Rotary Wing Pilot",
"SBH:Small Boat Handling","SCD:Scuba Diving", "SCR:Scrounging",
"SWM:Swimming", "TW:Thrown Weapon", "TVD:Tracked Vehicle Driver",
"WVD:Wheeled Vehicle Driver"]
# The only thing "colform" does is return a modified version of "txt" that is
# ensured to be exactly "width" characters long. It truncates or adds spaces
# on the end as needed.
def colform(txt, width):
if len(txt) > width:
txt = txt[:width]
elif len(txt) < width:
txt = txt + (" " * (width - len(txt)))
return txt
# Now that you have colform you can use it to print out columns any way you wish.
# Here's one brain-dead way to print in two columns:
for i in xrange(len(skills_defs)):
sys.stdout.write(colform(skills_defs[i], 30))
if i % 2 == 1:
sys.stdout.write('\n')
A: This might also help you. (enumerate avoids the repeated list.index() lookups, which are slow and give wrong positions when the list contains duplicates.)
for idx, skill in enumerate(skills_defs):
    if idx % 2 == 0:
        print(skill.ljust(30), end=" ")
    else:
        print(skill.ljust(30))
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: How to turn these 3 methods into one using C# generics? I have not used generics much and so cannot figure out if it is possible to turn the following three methods into one using generics to reduce duplication. Actually my code currently has six methods but if you can solve it for the three then the rest should just work anyway with the same solution.
private object EvaluateUInt64(UInt64 x, UInt64 y)
{
switch (Operation)
{
case BinaryOp.Add:
return x + y;
case BinaryOp.Subtract:
return x - y;
case BinaryOp.Multiply:
return x * y;
case BinaryOp.Divide:
return x / y;
case BinaryOp.Remainder:
return x % y;
default:
throw new ApplicationException("error");
}
}
private object EvaluateFloat(float x, float y)
{
switch(Operation)
{
case BinaryOp.Add:
return x + y;
case BinaryOp.Subtract:
return x - y;
case BinaryOp.Multiply:
return x * y;
case BinaryOp.Divide:
return x / y;
case BinaryOp.Remainder:
return x % y;
default:
throw new ApplicationException("error");
}
}
private object EvaluateDouble(double x, double y)
{
switch (Operation)
{
case BinaryOp.Add:
return x + y;
case BinaryOp.Subtract:
return x - y;
case BinaryOp.Multiply:
return x * y;
case BinaryOp.Divide:
return x / y;
case BinaryOp.Remainder:
return x % y;
default:
throw new ApplicationException("error");
}
}
I am building a simple expression parser that then needs to evaluate the simple binary operations such as addition/subtraction etc. I use the above methods to get the actual maths performed using the relevant types. But there has got to be a better answer!
A: Generics don't natively support arithmetic. However, it can be done with .NET 3.5, like so. The Operator class is part of MiscUtil. This then becomes:
public T Evaluate<T>(T x, T y) {
switch (Operation)
{
case BinaryOp.Add:
return Operator.Add(x, y);
case BinaryOp.Subtract:
return Operator.Subtract(x, y);
... etc
Since you are writing an expression parser, it might be a good idea to use Expression directly, but you're welcome to use the above.
A: Marc Gravell has done a lot of work on making generic maths viable. See the MiscUtil library and his general article about the issue.
The code in the current version of MiscUtil requires .NET 3.5 due to its use of expression trees. However, I believe Marc has a version which works with .NET 2.0 as well. If this would be useful to people, I'm sure we could incorporate it somehow (possibly with a facade in MiscUtil itself which would use the appropriate implementation based on framework version at runtime).
For the future, I'd like to see static interfaces which could provide an alternative way of working with generic maths types.
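(For later readers: that wish eventually materialized. On .NET 7 and newer, System.Numerics.INumber<T> exposes the arithmetic operators as static abstract interface members, so the whole family of methods collapses into one. A sketch, reusing the question's Operation/BinaryOp members:)
private object Evaluate<T>(T x, T y) where T : System.Numerics.INumber<T>
{
    switch (Operation)
    {
        case BinaryOp.Add: return x + y;
        case BinaryOp.Subtract: return x - y;
        case BinaryOp.Multiply: return x * y;
        case BinaryOp.Divide: return x / y;
        case BinaryOp.Remainder: return x % y;
        default: throw new ApplicationException("error");
    }
}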
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How to create a Windows Media Player plugin in Visual Studio 2008 I've been trying to create a Windows Media Player plugin in Visual Studio 2008, and am having great difficulty finding the correct template. MSDN provides advice here, but it does not appear to be relevant to VS2008.
Can anyone suggest how to start a WMP plugin in Visual Studio?
EDIT: I've accepted this answer because it worked for me, but I'm afraid it isn't the most elegant of solutions. If anyone has a better idea, please add it!
A: Download the Platform SDK; it has a bunch of examples. None of them are likely to be updated for VS2008, but that doesn't matter too much: it's easy to convert the projects.
A: See this post on the MSDN forums:
http://forums.microsoft.com/msdn/ShowPost.aspx?PostID=2627171&SiteID=1
Basically follow the instructions here http://msdn.microsoft.com/en-us/library/bb262326(VS.85).aspx using the VS2005 wizard.
Then edit wmpwiz.vsz so that the first line reads Wizard=VsWizard.VsWizardEngine.9.0
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171673",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Linking against a specific shared library version in linux My build process consists of Qt's qmake Makefile generator and the typical make utility bundled with linux.
My application consists of a few shared libraries and the main application is linked against them.
How can I apply the typical linux versioning scheme on my libraries? (Use version 2 -> link against foo.so.2 that points to foo.so.2.y.z with an ldconfig generated link).
The answer doesn't have to be specific for my build process.
A: Your library should be named libfoo.so.2.y.z, with symlinks of libfoo.so.2 and libfoo.so both pointing to that. The library should be created using -soname libfoo.so.2 in the linker command line (or -Wl,-soname,libfoo.so.2 on the gcc command line).
Hope that helps!
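Since the question mentions qmake: for a lib template, the VERSION variable should take care of the soname and the symlinks on Linux (a sketch; the target name is illustrative):
# foo.pro
TEMPLATE = lib
TARGET   = foo
VERSION  = 2.0.1   # builds libfoo.so.2.0.1 with soname libfoo.so.2
                   # and creates the libfoo.so.2 / libfoo.so symlinks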
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Applying a common font scheme to multiple objects in wxPython Many times I will use the same font scheme for static text in a wxPython application. Currently I am making a SetFont() call for each static text object but that seems like a lot of unnecessary work. However, the wxPython demo and wxPython In Action book don't discuss this.
Is there a way to easily apply the same SetFont() method to all these text objects without making separate calls each time?
A: You can do this by calling SetFont on the parent window (Frame, Dialog, etc) before adding any widgets. The child widgets will inherit the font.
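A minimal sketch of that (widget names and the font are illustrative; newer wxPython constant spellings are used):
import wx

app = wx.App(False)
frame = wx.Frame(None, title="Font inheritance demo")
frame.SetFont(wx.Font(12, wx.FONTFAMILY_SWISS,
                      wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_BOLD))
# Created after SetFont, so it inherits the frame's font:
wx.StaticText(frame, label="I inherit the frame's font")
frame.Show()
app.MainLoop()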
A: Maybe try subclassing the text object and in your class __init__ method just call SetFont()?
Or, do something like:
def f(C):
x = C()
x.SetFont(font) # where font is defined somewhere else
return x
and then just decorate every text object you create with with it:
text = f(wx.StaticText)
(of course, if the StaticText constructor requires some parameters, the first lines of the f function definition will need changing).
A: If all widgets have already been created, you can apply SetFont recursively, for example with the following function:
def changeFontInChildren(win, font):
'''
Set font in given window and all its descendants.
@type win: L{wx.Window}
@type font: L{wx.Font}
'''
try:
win.SetFont(font)
except:
pass # don't require all objects to support SetFont
for child in win.GetChildren():
changeFontInChildren(child, font)
An example usage that causes all text in frame to become default font with italic style:
newFont = wx.SystemSettings_GetFont(wx.SYS_DEFAULT_GUI_FONT)
newFont.SetStyle(wx.FONTSTYLE_ITALIC)
changeFontInChildren(frame, newFont)
A: The solution given above by @DzinX worked for me when changing the font dynamically in a Panel that already had children and was already being shown.
I ended up modifying it as follows because the original gave me trouble in corner cases (i.e. when using an AuiManager with Floating frames).
def change_font_in_children(win, font):
'''
Set font in given window and all its descendants.
@type win: L{wx.Window}
@type font: L{wx.Font}
'''
for child in win.GetChildren():
change_font_in_children(child, font)
try:
win.SetFont(font)
win.Update()
except:
pass # don't require all objects to support SetFont
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: NHibernate session management in Windows Service applications I'm developing an application that runs as a Windows service. There are other components which include a few WCF services, a client GUI and so on - but it is the Windows service that accesses the database.
So, the application is a long-running server, and I'd like to improve its performance and scalability, I was looking to improve data access among other things. I posted in another thread about second-level caching.
This post is about session management for the long-running thread that accesses the database.
Should I be using a thread-static context?
If so, is there any example of how that would be implemented?
Everyone around the net who is using NHibernate seems to be heavily focused on web-application-style architectures. There seems to be a great lack of documentation / discussion for non-web app designs.
At the moment, my long running thread does this:
*
*Call 3 or 4 DAO methods
*Verify the state of the detached objects returned.
*Update the state if needed.
*Call a couple of DAO methods to persist the updated instances (pass in the id of the object and the instance itself - the DAO will retrieve the object from the DB again, set the updated values and session.SaveOrUpdate() before committing the transaction).
*Sleep for 'n' seconds
*Repeat all over again!
So, the following is a common pattern we use for each of the DAO methods:
*
*Open session using sessionFactory.OpenSession()
*Begin transaction
*Do db work. retrieve / update etc
*Commit trans
*(Rollback in case of exceptions)
*Finally always dispose transaction and session.Close()
This happens for every method call to a DAO class.
I suspect the way we are doing it is some sort of anti-pattern.
However, I'm not able to find enough direction anywhere as to how we could improve it.
Pls note, while this thread is running in the background, doing its stuff, there are requests coming in from the WCF clients each of which could make 2-3 DAO calls themselves - sometimes querying/updating the same objects the long running thread deals with.
Any ideas / suggestions / pointers to improve our design will be greatly appreciated.
If we can get some good discussion going, we could make this a community wiki, and possibly link to here from http://nhibernate.info
Krishna
A:
There seems to be a great lack of documentation / discussion for non-web app designs.
This has also been my experience. However, the model you are following seems correct to me. You should always open a session, commit changes, then close it again.
A: This question is a little old now, but another technique would be to use Contextual Sessions rather than creating a new session in each DAO.
In our case, we're thinking of creating the session once per thread (for our multi-threaded Win32 service), making it available to the DAOs using either a property that returns SessionFactory.GetCurrentSession() (using the ThreadContext current session provider, so it's session-per-thread) or via DI (dependency injection - once again using ThreadContext).
More info on GetCurrentSession and Contextual Sessions here.
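A minimal sketch of the property the DAOs would use (this assumes current_session_context_class is set to thread_static in the NHibernate configuration; names are illustrative):
public ISession CurrentSession
{
    get
    {
        // Bind a fresh session to the current thread on first use.
        if (!NHibernate.Context.CurrentSessionContext.HasBind(this.sessionFactory))
            NHibernate.Context.CurrentSessionContext.Bind(this.sessionFactory.OpenSession());
        return this.sessionFactory.GetCurrentSession();
    }
}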
A: I agree, there aren't many examples for stateful apps.
I'm thinking of doing the following:
Like you I have a windows service hosting a number of WCF services. So the WCF services are the entry points.
Ultimately all my WCF services inherit from AbstractService - which handles a lot of logging and basic DB inserts/updates.
In one of the best NHibernate posts I've seen, a HttpModule does the following:
see http://www.codeproject.com/KB/architecture/NHibernateBestPractices.aspx
private void BeginTransaction(object sender, EventArgs e) {
NHibernateSessionManager.Instance.BeginTransaction();
}
private void CommitAndCloseSession(object sender, EventArgs e) {
try {
NHibernateSessionManager.Instance.CommitTransaction();
}
finally {
NHibernateSessionManager.Instance.CloseSession();
}
}
So perhaps I should do something similar in AbstractService. So effectively I'll end up with a session per service invocation. If you examine the NHib best practices article link above, you'll see that the NHibernateSessionManager should deal with everything else, as long as I open and close the session (AbstractService constructor and destructor).
Just a thought. But I'm experiencing errors because my session seems to be hanging around for too long, and I'm getting the infamous error - NHibernate.AssertionFailure: null id in entry (don't flush the Session after an exception occurs).
A: You can also flush the session without actually closing it and it achieves the same thing. I do.
A: We've recently started using an IoC container to manage session lifecycle, as a replacement for the contextual sessions mentioned above. (More details here).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: LaTeX Beamer package, change frame title in \againframe Is it possible to change the frame title when using \againframe from the Beamer package, in LaTeX? I would like to have a previous frame displayed, at a specific slide inside the frame, but with a different title this time.
Thanks.
A: Try doing the following:
\begin{frame}[label=my_frame]
\frametitle<1>{Title to be displayed the first time}
\frametitle<2>{Title to be displayed the second time}
%Other frame contents
\end{frame}
\againframe<2>{my_frame}
This is known as using "overlay specifications." Refer to section 3.10 of the Beamer User Guide (version 3.01).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Using Java-classes with C# I have a project written in Java (>1.5).
Is it possible to write parts of the project with C#?
For instance the GUI and calling the methods and instantiate the classes written in java?
If yes, how?
A: Not without something like ikvm - or using web services etc to communicate between the two sides. Basically it's likely to be much more work than either rewriting your existing project code in C# or writing the GUI in Java.
A: There is something called Java Language Conversion Assistant for .NET. You can convert your Java classes to C# and start coding.
There is also something called JNBridge (not free).
A: It seems like my solution is very limited and applies only to a specific version of Java.
I'll probably stay with good old C :) Can't imagine how to work without shared libraries :)
This document explains how to create a DLL from Java and use it in C code. I'm not a C# or Java expert, but I'm sure that you can load external DLLs in C# as well. So not a complete solution, but a good starting point, IMHO.
Generally, a DLL is a perfect way to mix languages.
A: A simple way: pack your Java classes into a jar file, then
in C# use the Process class to execute it and map the I/O streams, as sketched below.
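A minimal sketch of that approach (the jar name is hypothetical):
using System.Diagnostics;

static string RunJar()
{
    var psi = new ProcessStartInfo("java", "-jar mylib.jar")
    {
        RedirectStandardOutput = true,   // capture the jar's stdout
        UseShellExecute = false
    };
    using (Process proc = Process.Start(psi))
    {
        string output = proc.StandardOutput.ReadToEnd();
        proc.WaitForExit();
        return output;
    }
}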
A: I am the author of jni4net, an open source intraprocess bridge between the JVM and CLR. It's built on top of JNI and PInvoke. No C/C++ code needed. I hope it will help you.
A: I did some research on this a few years ago (2005 I believe) and I liked JNBridgePro as the best third party product to do this. Check it out here http://www.jnbridge.com/
Good luck!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Delete all tables in Derby DB How do I delete all the tables in a schema on an Apache Derby DB using JDBC?
A: Thanks are due to the blog:
Step 1:
Run the SQL statement, but don't forget to replace the schema name 'APP' with your schema name in the 2 occurrences below:
SELECT
'ALTER TABLE '||S.SCHEMANAME||'.'||T.TABLENAME||' DROP CONSTRAINT '||C.CONSTRAINTNAME||';'
FROM
SYS.SYSCONSTRAINTS C,
SYS.SYSSCHEMAS S,
SYS.SYSTABLES T
WHERE
C.SCHEMAID = S.SCHEMAID
AND
C.TABLEID = T.TABLEID
AND
S.SCHEMANAME = 'APP'
UNION
SELECT 'DROP TABLE ' || schemaname ||'.' || tablename || ';'
FROM SYS.SYSTABLES
INNER JOIN SYS.SYSSCHEMAS ON SYS.SYSTABLES.SCHEMAID = SYS.SYSSCHEMAS.SCHEMAID
where schemaname='APP';
Step 2:
The result of the above execution is a set of SQL statements, copy them to the SQL editor, execute them, then the constraints and the tables are dropped.
A: For actual code that does this, check CleanDatabaseTestSetup.java in the Derby test suite section of the Derby distribution.
A: Write a little method in Java in which you execute
DROP TABLE [tablename]
where tablename is passed as a parameter,
and another method in which you loop over a result set formed by the query
SELECT tablename FROM SYS.SYSTABLES
calling the first method (a sketch follows below).
Derby latest documentation
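A minimal sketch of that loop (names illustrative; it collects the names first, assumes the tables live in the connection's current schema, and does not handle foreign-key ordering):
import java.sql.*;
import java.util.*;

static void dropUserTables(Connection conn) throws SQLException {
    List<String> tables = new ArrayList<String>();
    Statement s = conn.createStatement();
    ResultSet rs = s.executeQuery(
        "SELECT tablename FROM SYS.SYSTABLES WHERE tabletype = 'T'");
    while (rs.next()) {
        tables.add(rs.getString(1));   // collect first, drop afterwards
    }
    rs.close();
    for (String table : tables) {
        s.executeUpdate("DROP TABLE " + table);
    }
    s.close();
}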
A: I think most db providers don't allow DROP TABLE * (or similar).
I think the best way would be to SHOW TABLES and then go through each deleting in a loop via a resultset.
HTH.
A: JDBC allows you to solve your task in a database agnostic way:
*
*Open the connection
*Grab the DatabaseMetaData
*Use it to list all tables in your database JavaDoc
*Iterate over the resultset and fire the DROP TABLE for each table
A: *
*You must generate the schema and table names from the Derby system catalog.
*Order all tables by relation.
*Generate the Java statements to drop all tables.
*Call setAutoCommit(false) so that you can manually commit, or roll back the transaction when you get errors.
*Run your Java process.
Good Luck.
A: A simpler solution is to use JDBC to run "drop database foo" then "create database foo". However, this will cause all objects in the DB to be deleted (i.e. not just tables).
A: If you're working from the command prompt rather than through JDBC, this should get you started.
SELECT 'DROP TABLE ' || schemaname ||'.' || tablename || ';'
FROM SYS.SYSTABLES
INNER JOIN SYS.SYSSCHEMAS ON SYS.SYSTABLES.SCHEMAID = SYS.SYSSCHEMAS.SCHEMAID
;
A: A simple solution is to do right click -> disconnect then delete the folder containing your database and reconnect it.
A: Download Squirrel SQL from http://squirrel-sql.sourceforge.net/
Connect to the database.
Expand the TABLE node.
Select the tables that you want to drop.
Right click and select -> Scripts -> Drop table scripts
Run the generated queries
You can even select delete records to empty the selected tables.
A: For those wanting to delete all schemas programmatically without having to manually copy-paste SQL each time, here's code lifted from org.apache.derbyTesting.junit.CleanDatabaseTestSetup and org.apache.derbyTesting.junit.JDBC. You just call dropAllSchemas(connection);
public static void dropAllSchemas(Connection conn) throws SQLException {
DatabaseMetaData dmd = conn.getMetaData();
SQLException sqle = null;
// Loop a number of arbitrary times to catch cases
// where objects are dependent on objects in
// different schemas.
for (int count = 0; count < 5; count++) {
// Fetch all the user schemas into a list
List<String> schemas = new ArrayList<String>();
ResultSet rs = dmd.getSchemas();
while (rs.next()) {
String schema = rs.getString("TABLE_SCHEM");
if (schema.startsWith("SYS"))
continue;
if (schema.equals("SQLJ"))
continue;
if (schema.equals("NULLID"))
continue;
schemas.add(schema);
}
rs.close();
// DROP all the user schemas.
sqle = null;
for (String schema : schemas) {
try {
dropSchema(dmd, schema);
} catch (SQLException e) {
sqle = e;
}
}
// No errors means all the schemas we wanted to
// drop were dropped, so nothing more to do.
if (sqle == null)
return;
}
throw sqle;
}
/**
* Constant to pass to DatabaseMetaData.getTables() to fetch
* just tables.
*/
public static final String[] GET_TABLES_TABLE = new String[] {"TABLE"};
/**
* Constant to pass to DatabaseMetaData.getTables() to fetch
* just views.
*/
public static final String[] GET_TABLES_VIEW = new String[] {"VIEW"};
/**
* Constant to pass to DatabaseMetaData.getTables() to fetch
* just synonyms.
*/
public static final String[] GET_TABLES_SYNONYM =
new String[] {"SYNONYM"};
/**
* Drop a database schema by dropping all objects in it
* and then executing DROP SCHEMA. If the schema is
* APP it is cleaned but DROP SCHEMA is not executed.
*
* TODO: Handle dependencies by looping in some intelligent
* way until everything can be dropped.
*
*
* @param dmd DatabaseMetaData object for database
* @param schema Name of the schema
* @throws SQLException database error
*/
public static void dropSchema(DatabaseMetaData dmd, String schema) throws SQLException{
Connection conn = dmd.getConnection();
Statement s = dmd.getConnection().createStatement();
// Triggers
PreparedStatement pstr = conn.prepareStatement(
"SELECT TRIGGERNAME FROM SYS.SYSSCHEMAS S, SYS.SYSTRIGGERS T "
+ "WHERE S.SCHEMAID = T.SCHEMAID AND SCHEMANAME = ?");
pstr.setString(1, schema);
ResultSet trrs = pstr.executeQuery();
while (trrs.next()) {
String trigger = trrs.getString(1);
s.execute("DROP TRIGGER " + escape(schema, trigger));
}
trrs.close();
pstr.close();
// Functions - not supported by JDBC meta data until JDBC 4
// Need to use the CHAR() function on A.ALIASTYPE
// so that the compare will work in any schema.
PreparedStatement psf = conn.prepareStatement(
"SELECT ALIAS FROM SYS.SYSALIASES A, SYS.SYSSCHEMAS S" +
" WHERE A.SCHEMAID = S.SCHEMAID " +
" AND CHAR(A.ALIASTYPE) = ? " +
" AND S.SCHEMANAME = ?");
psf.setString(1, "F" );
psf.setString(2, schema);
ResultSet rs = psf.executeQuery();
dropUsingDMD(s, rs, schema, "ALIAS", "FUNCTION");
// Procedures
rs = dmd.getProcedures((String) null,
schema, (String) null);
dropUsingDMD(s, rs, schema, "PROCEDURE_NAME", "PROCEDURE");
// Views
rs = dmd.getTables((String) null, schema, (String) null,
GET_TABLES_VIEW);
dropUsingDMD(s, rs, schema, "TABLE_NAME", "VIEW");
// Tables
rs = dmd.getTables((String) null, schema, (String) null,
GET_TABLES_TABLE);
dropUsingDMD(s, rs, schema, "TABLE_NAME", "TABLE");
// At this point there may be tables left due to
// foreign key constraints leading to a dependency loop.
// Drop any constraints that remain and then drop the tables.
// If there are no tables then this should be a quick no-op.
ResultSet table_rs = dmd.getTables((String) null, schema, (String) null,
GET_TABLES_TABLE);
while (table_rs.next()) {
String tablename = table_rs.getString("TABLE_NAME");
rs = dmd.getExportedKeys((String) null, schema, tablename);
while (rs.next()) {
short keyPosition = rs.getShort("KEY_SEQ");
if (keyPosition != 1)
continue;
String fkName = rs.getString("FK_NAME");
// No name, probably can't happen but couldn't drop it anyway.
if (fkName == null)
continue;
String fkSchema = rs.getString("FKTABLE_SCHEM");
String fkTable = rs.getString("FKTABLE_NAME");
String ddl = "ALTER TABLE " +
escape(fkSchema, fkTable) +
" DROP FOREIGN KEY " +
escape(fkName);
s.executeUpdate(ddl);
}
rs.close();
}
table_rs.close();
conn.commit();
// Tables (again)
rs = dmd.getTables((String) null, schema, (String) null,
GET_TABLES_TABLE);
dropUsingDMD(s, rs, schema, "TABLE_NAME", "TABLE");
// drop UDTs
psf.setString(1, "A" );
psf.setString(2, schema);
rs = psf.executeQuery();
dropUsingDMD(s, rs, schema, "ALIAS", "TYPE");
// drop aggregates
psf.setString(1, "G" );
psf.setString(2, schema);
rs = psf.executeQuery();
dropUsingDMD(s, rs, schema, "ALIAS", "DERBY AGGREGATE");
psf.close();
// Synonyms - need work around for DERBY-1790 where
// passing a table type of SYNONYM fails.
rs = dmd.getTables((String) null, schema, (String) null,
GET_TABLES_SYNONYM);
dropUsingDMD(s, rs, schema, "TABLE_NAME", "SYNONYM");
// sequences
if ( sysSequencesExists( conn ) )
{
psf = conn.prepareStatement
(
"SELECT SEQUENCENAME FROM SYS.SYSSEQUENCES A, SYS.SYSSCHEMAS S" +
" WHERE A.SCHEMAID = S.SCHEMAID " +
" AND S.SCHEMANAME = ?");
psf.setString(1, schema);
rs = psf.executeQuery();
dropUsingDMD(s, rs, schema, "SEQUENCENAME", "SEQUENCE");
psf.close();
}
// Finally drop the schema if it is not APP
if (!schema.equals("APP")) {
s.executeUpdate("DROP SCHEMA " + escape(schema) + " RESTRICT");
}
conn.commit();
s.close();
}
/**
* Return true if the SYSSEQUENCES table exists.
*/
private static boolean sysSequencesExists( Connection conn ) throws SQLException
{
PreparedStatement ps = null;
ResultSet rs = null;
try {
ps = conn.prepareStatement
(
"select count(*) from sys.systables t, sys.sysschemas s\n" +
"where t.schemaid = s.schemaid\n" +
"and ( cast(s.schemaname as varchar(128)))= 'SYS'\n" +
"and ( cast(t.tablename as varchar(128))) = 'SYSSEQUENCES'" );
rs = ps.executeQuery();
rs.next();
return ( rs.getInt( 1 ) > 0 );
}
finally
{
if ( rs != null ) { rs.close(); }
if ( ps != null ) { ps.close(); }
}
}
/**
* Escape a non-qualified name so that it is suitable
* for use in a SQL query executed by JDBC.
*/
public static String escape(String name)
{
StringBuffer buffer = new StringBuffer(name.length() + 2);
buffer.append('"');
for (int i = 0; i < name.length(); i++) {
char c = name.charAt(i);
// escape double quote characters with an extra double quote
if (c == '"') buffer.append('"');
buffer.append(c);
}
buffer.append('"');
return buffer.toString();
}
/**
* Escape a schema-qualified name so that it is suitable
* for use in a SQL query executed by JDBC.
*/
public static String escape(String schema, String name)
{
return escape(schema) + "." + escape(name);
}
/**
* DROP a set of objects based upon a ResultSet from a
* DatabaseMetaData call.
*
* TODO: Handle errors to ensure all objects are dropped,
* probably requires interaction with its caller.
*
* @param s Statement object used to execute the DROP commands.
* @param rs DatabaseMetaData ResultSet
* @param schema Schema the objects are contained in
* @param mdColumn The column name used to extract the object's
* name from rs
* @param dropType The keyword to use after DROP in the SQL statement
* @throws SQLException database errors.
*/
private static void dropUsingDMD(
Statement s, ResultSet rs, String schema,
String mdColumn,
String dropType) throws SQLException
{
String dropLeadIn = "DROP " + dropType + " ";
// First collect the set of DROP SQL statements.
ArrayList<String> ddl = new ArrayList<String>();
while (rs.next())
{
String objectName = rs.getString(mdColumn);
String raw = dropLeadIn + escape(schema, objectName);
if (
"TYPE".equals( dropType ) ||
"SEQUENCE".equals( dropType ) ||
"DERBY AGGREGATE".equals( dropType )
)
{ raw = raw + " restrict "; }
ddl.add( raw );
}
rs.close();
if (ddl.isEmpty())
return;
// Execute them as a complete batch, hoping they will all succeed.
s.clearBatch();
int batchCount = 0;
for (Iterator i = ddl.iterator(); i.hasNext(); )
{
Object sql = i.next();
if (sql != null) {
s.addBatch(sql.toString());
batchCount++;
}
}
int[] results;
boolean hadError;
try {
results = s.executeBatch();
//Assert.assertNotNull(results);
//Assert.assertEquals("Incorrect result length from executeBatch", batchCount, results.length);
hadError = false;
} catch (BatchUpdateException batchException) {
results = batchException.getUpdateCounts();
//Assert.assertNotNull(results);
//Assert.assertTrue("Too many results in BatchUpdateException", results.length <= batchCount);
hadError = true;
}
// Remove any statements from the list that succeeded.
boolean didDrop = false;
for (int i = 0; i < results.length; i++)
{
int result = results[i];
if (result == Statement.EXECUTE_FAILED)
hadError = true;
else if (result == Statement.SUCCESS_NO_INFO || result >= 0) {
didDrop = true;
ddl.set(i, null);
}
//else
//Assert.fail("Negative executeBatch status");
}
s.clearBatch();
if (didDrop) {
// Commit any work we did do.
s.getConnection().commit();
}
// If we had failures drop them as individual statements
// until there are none left or none succeed. We need to
// do this because the batch processing stops at the first
// error. This copes with the simple case where there
// are objects of the same type that depend on each other
// and a different drop order will allow all or most
// to be dropped.
if (hadError) {
do {
hadError = false;
didDrop = false;
for (ListIterator<String> i = ddl.listIterator(); i.hasNext();) {
String sql = i.next();
if (sql != null) {
try {
s.executeUpdate(sql);
i.set(null);
didDrop = true;
} catch (SQLException e) {
hadError = true;
}
}
}
if (didDrop)
s.getConnection().commit();
} while (hadError && didDrop);
}
}
PS: This code came in handy for when I migrated my database from H2 that does support DROP ALL OBJECTS, to Apache Derby which does not (headache). The only reason I migrated away from H2 is that it's a fully in-memory database and was getting too big for my server's RAM, so I decided to try Apache Derby. H2 is far easier and more user-friendly than Derby, I highly recommend it. I'm sad that I can't afford the RAM to keep using H2.
By the way, for those affected by Derby's lack of LIMIT or UPSERT, see this post about substituting FETCH NEXT instead of LIMIT and this one about correctly using MERGE INTO.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Interface "recursion" and reference counting I have a small problem with interfaces. Here it is in Pseudo code :
type
Interface1 = interface
end;
Interface2 = interface
end;
TParentClass = class(TInterfacedObject, Interface1)
private
fChild : Interface2;
public
procedure AddChild(aChild : Interface2);
end;
TChildClass = class(TInterfacedObject, Interface2)
private
fParent : Interface1;
public
constructor Create(aParent : Interface1);
end;
Can anyone see the flaw? I need the child to have a reference to it's parent, but the reference counting doesn't work in this situation. If I create a ParentClass instance, and add a child, then the parent class is never released. I can see why. How do I get round it?
A: Well, the reference counting of course does work in this situation - it just doesn't solve the problem.
That's the biggest problem with reference counting - when you have a circular reference, you have to explicitely 'break' it (set one interface reference to 'nil', for example). That's also why reference counting is not really a replacement for garbage collection - garbage collectors are aware that cycles may exist and can release such cyclic structures when they are not referenced from the 'outside'.
A: A reference-counted reference has two semantics: it acts as a share of ownership as well as a means of navigating the object graph.
Typically, you don't need both of these semantics on all links in a cycle in the graph of references. Perhaps only parents own children, and not the other way around? If that is the case, you can make the child references to the parent weak links, by storing them as pointers, like this:
TChildClass = class(TInterfacedObject, Interface2)
private
fParent : Pointer;
function GetParent: Interface1;
public
constructor Create(aPArent : Interface1);
property Parent: Interface1 read GetParent;
end;
function TChildClass.GetParent: Interface1;
begin
Result := Interface1(fParent);
end;
constructor TChildClass.Create(AParent: Interface1);
begin
fParent := Pointer(AParent);
end;
This is safe if the root of the tree of instances is guaranteed to be kept alive somewhere, i.e. you are not relying on only keeping a reference to a branch of the tree and still being able to navigate the whole of it.
A: You must make a method that explicitly unlinks the right references. There is no way to get the automatic reference counting working properly in this case.
A: With the use of a function pointer in the first example, the cyclic reference problem doesn't exist. .NET uses delegates, and VB6 uses events, all of which have the benefit of not incrementing the reference count of the object being pointed to.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171730",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: monitoring memcached stats with openNMS Did anyone try monitoring memcached statistics with OpenNMS?
If you did, what did you use?
A: Since version 1.7.4, there's a memcached monitor bundled with OpenNMS.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Visitor Pattern + Open/Closed Principle Is it possible to implement the Visitor Pattern respecting the Open/Closed Principle, but still be able to add new visitable classes?
The Open/Closed Principle states that "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification".
struct ConcreteVisitable1;
struct ConcreteVisitable2;
struct AbstractVisitor
{
virtual void visit(ConcreteVisitable1& concrete1) = 0;
virtual void visit(ConcreteVisitable2& concrete2) = 0;
};
struct AbstractVisitable
{
virtual void accept(AbstractVisitor& visitor) = 0;
};
struct ConcreteVisitable1 : AbstractVisitable
{
virtual void accept(AbstractVisitor& visitor)
{
visitor.visit(*this);
}
};
struct ConcreteVisitable2 : AbstractVisitable
{
virtual void accept(AbstractVisitor& visitor)
{
visitor.visit(*this);
}
};
You can implement any number of classes which derive from AbstractVisitor: it is open for extension. You cannot add a new visitable class, as the classes derived from AbstractVisitor will not compile: it is closed for modification.
The AbstractVisitor class tree respects the Open/Closed Principle.
The AbstractVisitable class tree does not respect the Open/Closed Principle, as it cannot be extended.
Is there any other solution than to extend the AbstractVisitor and AbstractVisitable as below?
struct ConcreteVisitable3;
struct AbstractVisitor2 : AbstractVisitor
{
virtual void visit(ConcreteVisitable3& concrete3) = 0;
};
struct AbstractVisitable2 : AbstractVisitable
{
virtual void accept(AbstractVisitor2& visitor) = 0;
};
struct ConcreteVisitable3 : AbstractVisitable2
{
virtual void accept(AbstractVisitor2& visitor)
{
visitor.visit(*this);
}
};
A: In C++, Acyclic Visitor (pdf) gets you what you want.
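For reference, a minimal sketch of the Acyclic Visitor idea in the style of the question's code (names are illustrative): the visitor base is degenerate, each per-type visit method lives in its own interface, and accept uses a cross-cast, so adding a visitable class no longer forces every existing visitor to change.
struct VisitorBase { virtual ~VisitorBase() {} };   // degenerate base

template <typename T>
struct Visitor
{
    virtual void visit(T& visitable) = 0;
};

struct AcyclicVisitable
{
    virtual void accept(VisitorBase& visitor) = 0;
};

struct ConcreteVisitable3 : AcyclicVisitable
{
    virtual void accept(VisitorBase& visitor)
    {
        // Only visitors that also derive from Visitor<ConcreteVisitable3>
        // handle this type; everyone else silently ignores it.
        if (Visitor<ConcreteVisitable3>* v =
                dynamic_cast<Visitor<ConcreteVisitable3>*>(&visitor))
            v->visit(*this);
    }
};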
A: You might want to check out research on "the expression problem", see e.g.
http://lambda-the-ultimate.org/node/2232
I think the problem is mostly academic, but it's something that has been studied a lot, so there's a bit of stuff you can read about different ways to implement it in existing languages or with various language extensions.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: What is the best way to get all the divisors of a number? Here's the very dumb way:
def divisorGenerator(n):
for i in xrange(1,n/2+1):
if n%i == 0: yield i
yield n
The result I'd like to get is similar to this one, but I'd like a smarter algorithm (this one is far too slow and dumb :-)
I can find prime factors and their multiplicity fast enough.
I have a generator that yields factors in this way:
(factor1, multiplicity1)
(factor2, multiplicity2)
(factor3, multiplicity3)
and so on...
i.e. the output of
for i in factorGenerator(100):
print i
is:
(2, 2)
(5, 2)
I don't know how much is this useful for what I want to do (I coded it for other problems), anyway I'd like a smarter way to make
for i in divisorGen(100):
print i
output this:
1
2
4
5
10
20
25
50
100
UPDATE: Many thanks to Greg Hewgill and his "smart way" :)
Calculating all divisors of 100000000 took 0.01s with his way against the 39s that the dumb way took on my machine, very cool :D
UPDATE 2: Stop saying this is a duplicate of this post. Calculating the number of divisors of a given number doesn't require calculating all the divisors. It's a different problem; if you think it's not, then look up "Divisor function" on Wikipedia. Read the question and the answers before posting; if you do not understand the topic, just don't add useless, already-given answers.
A: I like Greg's solution, but I wish it were more Pythonic.
I feel it would be faster and more readable,
so after some time of coding I came out with this.
The first two functions are needed to make the cartesian product of lists,
and can be reused whenever this problem arises.
By the way, I had to program this myself; if anyone knows of a standard solution for this problem, please feel free to contact me.
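(A note for later readers: the standard library does ship this; itertools.product has provided the cartesian product since Python 2.6. For example:)
from itertools import product

# Same idea as cartesianproduct([[1, 2], ['a', 'b']]), but yielding tuples:
list(product([1, 2], ['a', 'b']))
# -> [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]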
"Factorgenerator" now returns a dictionary. And then the dictionary is fed into "divisors", who uses it to generate first a list of lists, where each list is the list of the factors of the form p^n with p prime.
Then we make the cartesian product of those lists, and we finally use Greg' solution to generate the divisor.
We sort them, and return them.
I tested it and it seem to be a bit faster than the previous version. I tested it as part of a bigger program, so I can't really say how much is it faster though.
Pietro Speroni (pietrosperoni dot it)
from math import sqrt

##############################################################
### cartesian product of lists ###############################
##############################################################

def appendEs2Sequences(sequences, es):
    result = []
    if not sequences:
        for e in es:
            result.append([e])
    else:
        for e in es:
            result += [seq + [e] for seq in sequences]
    return result

def cartesianproduct(lists):
    """
    given a list of lists,
    returns all the possible combinations taking one element from each list
    The lists do not have to be of equal length
    """
    return reduce(appendEs2Sequences, lists, [])

##############################################################
### prime factors of a natural ###############################
##############################################################

def primefactors(n):
    '''lists prime factors, from greatest to smallest'''
    i = 2
    while i <= sqrt(n):
        if n % i == 0:
            l = primefactors(n / i)
            l.append(i)
            return l
        i += 1
    return [n]  # n is prime

##############################################################
### factorization of a natural ###############################
##############################################################

def factorGenerator(n):
    p = primefactors(n)
    factors = {}
    for p1 in p:
        try:
            factors[p1] += 1
        except KeyError:
            factors[p1] = 1
    return factors

def divisors(n):
    factors = factorGenerator(n)
    divisors = []
    listexponents = [map(lambda x: k**x, range(0, factors[k] + 1)) for k in factors.keys()]
    listfactors = cartesianproduct(listexponents)
    for f in listfactors:
        divisors.append(reduce(lambda x, y: x * y, f, 1))
    divisors.sort()
    return divisors

print divisors(60668796879)
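As an aside on the "standard solution" request above: the standard library does cover this. itertools.product (available since Python 2.6) does the same job as the two cartesian-product helpers, e.g.:

from itertools import product

lists = [[1, 2, 4], [1, 5, 25]]   # exponent lists for 100 = 2**2 * 5**2
print list(product(*lists))
# [(1, 1), (1, 5), (1, 25), (2, 1), ..., (4, 25)] -- one element taken from each list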
P.S.
This is the first time I am posting to Stack Overflow.
I am looking forward to any feedback.
A: Given your factorGenerator function, here is a divisorGen that should work:
def divisorGen(n):
    factors = list(factorGenerator(n))
    nfactors = len(factors)
    f = [0] * nfactors
    while True:
        yield reduce(lambda x, y: x*y, [factors[x][0]**f[x] for x in range(nfactors)], 1)
        i = 0
        while True:
            f[i] += 1
            if f[i] <= factors[i][1]:
                break
            f[i] = 0
            i += 1
            if i >= nfactors:
                return
The overall efficiency of this algorithm will depend entirely on the efficiency of the factorGenerator.
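For example, with the question's factorGenerator (which yields (prime, multiplicity) pairs), the generator walks the exponent vector f like an odometer:

for d in divisorGen(100):
    print d
# 1 2 4 5 10 20 25 50 100 -- 100 happens to come out ordered, but the
# output is not sorted in general (try 12: 1 2 4 3 6 12)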
A: Here is a smart and fast way to do it for numbers up to and around 10**16 in pure Python 3.6:
from itertools import compress

def primes(n):
    """ Returns a list of primes < n for n > 2 """
    sieve = bytearray([True]) * (n//2)
    for i in range(3, int(n**0.5)+1, 2):
        if sieve[i//2]:
            sieve[i*i//2::i] = bytearray((n-i*i-1)//(2*i)+1)
    return [2, *compress(range(3, n, 2), sieve[1:])]

def factorization(n):
    """ Returns a list of the prime factorization of n """
    pf = []
    for p in primeslist:
        if p*p > n: break
        count = 0
        while not n % p:
            n //= p
            count += 1
        if count > 0: pf.append((p, count))
    if n > 1: pf.append((n, 1))
    return pf

def divisors(n):
    """ Returns an unsorted list of the divisors of n """
    divs = [1]
    for p, e in factorization(n):
        divs += [x*p**k for k in range(1, e+1) for x in divs]
    return divs

n = 600851475143
primeslist = primes(int(n**0.5)+1)
print(divisors(n))
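One detail worth spelling out: the list comprehension in divisors is built completely before += extends divs, so each prime multiplies only the divisors accumulated so far. A short trace for 12, where factorization(12) == [(2, 2), (3, 1)]:

divs = [1]
# p=2, e=2: adds [1*2, 1*4]       -> divs == [1, 2, 4]
# p=3, e=1: adds [1*3, 2*3, 4*3]  -> divs == [1, 2, 4, 3, 6, 12]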
A: To expand on what Shimi has said, you should only run your loop from 1 to the square root of n; then, to find the pair, do n / i, and this will cover the whole problem space.
As was also noted, this is integer factorization territory, a famously 'hard' problem: exhaustive search, the way you are doing it, is about as good as it gets for guaranteed answers. This fact is relied on by encryption algorithms and the like to help secure them. If someone were to solve this problem efficiently, most if not all of our current 'secure' communication would be rendered insecure.
Python code:
import math

def divisorGenerator(n):
    large_divisors = []
    for i in xrange(1, int(math.sqrt(n) + 1)):
        if n % i == 0:
            yield i
            if i*i != n:
                large_divisors.append(n / i)
    for divisor in reversed(large_divisors):
        yield divisor

print list(divisorGenerator(100))
Which should output a list like:
[1, 2, 4, 5, 10, 20, 25, 50, 100]
A: I think you can stop at math.sqrt(n) instead of n/2.
I will give you an example so that you can understand it easily. sqrt(28) is about 5.29, so ceil(5.29) is 6. If I stop at 6, I can still get all the divisors. How?
First see the code and then see image:
import math

def divisors(n):
    divs = [1]
    for i in xrange(2, int(math.sqrt(n)) + 1):
        if n % i == 0:
            divs.extend([i, n/i])
    divs.extend([n])
    return list(set(divs))
Now walk through the loop for 28. Let's say I have already added 1 to my divisors list and I start with i=2:
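# i=2: 28 % 2 == 0 -> add 2 and 28/2 = 14
# i=3: 28 % 3 != 0 -> skip
# i=4: 28 % 4 == 0 -> add 4 and 28/4 = 7
# i=5: 28 % 5 != 0 -> skip (5 is the last i, since int(sqrt(28)) == 5)
# divs.extend([28]) -> the set is {1, 2, 4, 7, 14, 28}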
So at the end of all the iterations, since I have added both the divisor and the quotient to my list, all the divisors of 28 are populated.
Source: How to determine the divisors of a number
A: If your PC has tons of memory, a brute-force one-liner can be fast enough with numpy:

import numpy as np

N = 10000000; tst = np.arange(1, N); tst[np.mod(N, tst) == 0]
Out:
array([ 1, 2, 4, 5, 8, 10, 16,
20, 25, 32, 40, 50, 64, 80,
100, 125, 128, 160, 200, 250, 320,
400, 500, 625, 640, 800, 1000, 1250,
1600, 2000, 2500, 3125, 3200, 4000, 5000,
6250, 8000, 10000, 12500, 15625, 16000, 20000,
25000, 31250, 40000, 50000, 62500, 78125, 80000,
100000, 125000, 156250, 200000, 250000, 312500, 400000,
500000, 625000, 1000000, 1250000, 2000000, 2500000, 5000000])
Takes less than 1s on my slow PC.
A: Although there are already many solutions to this, I really have to post this :)
This one is:
*
*readable
*short
*self-contained, copy & paste ready
*quick (in cases with a lot of prime factors and divisors, > 10 times faster than the accepted solution)
*Python 3, Python 2 and PyPy compliant
Code:
def divisors(n):
    # get factors and their counts
    factors = {}
    nn = n
    i = 2
    while i*i <= nn:
        while nn % i == 0:
            factors[i] = factors.get(i, 0) + 1
            nn //= i
        i += 1
    if nn > 1:
        factors[nn] = factors.get(nn, 0) + 1

    primes = list(factors.keys())

    # generates factors from primes[k:] subset
    def generate(k):
        if k == len(primes):
            yield 1
        else:
            rest = generate(k+1)
            prime = primes[k]
            for factor in rest:
                prime_to_i = 1
                # prime_to_i iterates prime**i values, i being all possible exponents
                for _ in range(factors[prime] + 1):
                    yield factor * prime_to_i
                    prime_to_i *= prime

    # in python3, `yield from generate(0)` would also work
    for factor in generate(0):
        yield factor
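A quick sanity check (the divisors come out unsorted, so sort for display):

print(sorted(divisors(100)))
# [1, 2, 4, 5, 10, 20, 25, 50, 100]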
A: Adapted from CodeReview, here is a variant which also works with num == 1:
from itertools import product
from functools import reduce  # built in on Python 2; needed as an import on Python 3
import operator

def prod(ls):
    return reduce(operator.mul, ls, 1)

def powered(factors, powers):
    return prod(f**p for (f, p) in zip(factors, powers))

def divisors(num):
    # prime_factors is assumed to return (prime, exponent) pairs,
    # e.g. [(2, 2), (5, 2)] for 100
    pf = dict(prime_factors(num))
    primes = pf.keys()
    # for each prime, the possible exponents
    exponents = [range(i+1) for i in pf.values()]
    return (powered(primes, es) for es in product(*exponents))
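A usage sketch, assuming some factorizer that yields (prime, exponent) pairs stands in for prime_factors (the question's factorGenerator, say):

prime_factors = factorGenerator   # hypothetical stand-in; any (prime, exponent) source works
print(sorted(divisors(100)))      # [1, 2, 4, 5, 10, 20, 25, 50, 100]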
A: Old question, but here is my take:
def divs(n, m):
    if m == 1: return [1]
    if n % m == 0: return [m] + divs(n, m - 1)
    return divs(n, m - 1)
You can proxy with:
def divisorGenerator(n):
    for x in reversed(divs(n, n)):
        yield x
NOTE: In languages that support it, this could be made tail recursive.
A: An illustrative Pythonic one-liner:
from itertools import chain
from math import sqrt

def divisors(n):
    return set(chain.from_iterable((i, n//i) for i in range(1, int(sqrt(n))+1) if n % i == 0))
But better yet, just use sympy:
from sympy import divisors
A: Assuming that the factors function returns the factors of n (for instance, factors(60) returns the list [2, 2, 3, 5]), here is a function to compute the divisors of n:
function divisors(n)
    divs := [1]
    for fact in factors(n)
        temp := []
        for div in divs
            if fact * div not in divs
                append fact * div to temp
        divs := divs + temp
    return divs
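A direct Python rendering of that pseudocode, assuming a factors helper with the behaviour described (i.e. factors(60) == [2, 2, 3, 5]):

def divisors(n):
    divs = [1]
    for fact in factors(n):        # factors() is the assumed prime-factor lister
        temp = []
        for div in divs:
            if fact * div not in divs:
                temp.append(fact * div)
        divs = divs + temp
    return divs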
A: Here's my solution. It seems dumb but works well... and since I was trying to find all proper divisors, the loop starts from i = 2.
import math as m

def findfac(n):
    faclist = [1]
    for i in range(2, int(m.sqrt(n) + 2)):
        if n % i == 0:
            if i not in faclist:
                faclist.append(i)
            if n/i not in faclist:
                faclist.append(n/i)
    return faclist
A: If you only care about using list comprehensions and nothing else matters to you!
from itertools import combinations
from functools import reduce

def get_devisors(n):
    f = [f for f, e in list(factorGenerator(n)) for i in range(e)]
    fc = [x for l in range(len(f)+1) for x in combinations(f, l)]
    devisors = [1 if c == () else reduce((lambda x, y: x * y), c) for c in set(fc)]
    return sorted(devisors)
A: My solution via a generator function is:
def divisor(num):
    for x in range(1, num + 1):
        if num % x == 0:
            yield x
    while True:
        yield None
A: Try calculating the square root of the given number, then looping over range(1, square_root + 1).
number = int(input("Enter a Number: "))
square_root = round(number ** (1.0 / 2))
print(square_root)
divisor_list = []
for i in range(1, square_root + 1):
    if number % i == 0:  # if mod returns 0, append both i and number/i to the list
        divisor_list.append(i)
        divisor_list.append(int(number / i))
print(divisor_list)
A:

def divisorGen(n):
    v = n
    last = []
    for i in range(1, v + 1):
        if n % i == 0:
            last.append(i)
    return last
A: I don't understand why there are so many complicated solutions to this problem.
Here is my take on it:
import math

def divisors(n):
    lis = [1, n]
    s = math.ceil(math.sqrt(n))
    for g in range(s, 1, -1):
        if n % g == 0:
            lis.append(g)
            lis.append(int(n / g))
    return set(lis)
A: return [x for x in range(n+1) if n/x==int(n/x)]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "132"
} |
Q: How can I force a complete load along a navigation relationship in Entity Framework? Okay, so I'm doing my first foray into using the ADO.NET Entity Framework.
My test case right now includes a SQL Server 2008 database with 2 tables, Member and Profile, with a 1:1 relationship.
I then used the Entity Data Model wizard to auto-generate the EDM from the database. It generated a model with the correct association. Now I want to do this:
ObjectQuery<Member> members = entities.Member;
IQueryable<Member> membersQuery = from m in members select m;
foreach (Member m in membersQuery)
{
Profile p = m.Profile;
...
}
Which halfway works. I am able to iterate through all of the Members. But the problem I'm having is that m.Profile is always null. The examples for LINQ to Entities on the MSDN library seem to suggest that I will be able to seamlessly follow the navigation relationships like that, but it doesn't seem to work that way. I found that if I first load the profiles in a separate call somehow, such as using entities.Profile.ToList, then m.Profile will point to a valid Profile.
So my question is, is there an elegant way to force the framework to automatically load the data along the navigation relationships, or do I need to do that explicitly with a join or something else?
Thanks
A: Okay I managed to find the answer I needed here http://msdn.microsoft.com/en-us/magazine/cc507640.aspx. The following query will make sure that the Profile entity is loaded:
IQueryable<Member> membersQuery = from m in members.Include("Profile") select m;
A: I used this technique on a one-to-many relationship and it works well. I have a Survey class with many Questions as part of it, coming from a different db table, and using this technique I managed to extract the related questions ...
context.Survey.Include("SurveyQuestion").Where(x => x.Id == id).First()
(context being the generated ObjectContext). Ideally I wanted a strongly typed version, something like:
context.Survey.Include<T>().Where(x => x.Id == id).First()
I just spent 10 minutes trying to put together an extension method to do this; the closest I could come up with is ...
public static ObjectQuery<T> Include<T,U>(this ObjectQuery<T> context)
{
    string path = typeof(U).ToString();
    string[] split = path.Split('.');
    return context.Include(split[split.Length - 1]);
}
Any pointers for the improvements would be most welcome :-)
A: On doing a bit more research I found this ... StackOverflow link, which has a post with a Func-based approach that is a lot better than my extension method attempt :-)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/171771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |