qid int64 1 74.7M | question stringlengths 15 58.3k | date stringlengths 10 10 | metadata list | response_j stringlengths 4 30.2k | response_k stringlengths 11 36.5k |
|---|---|---|---|---|---|
16,737,372 | I have a `gridview` in ASP.NET with `template fields` containing `select`, `edit` and `delete` links in each row, and the footer containing an `insert` link.
There are two `dropdownlists` in each row, say `Category` and `Sub-category`; when I change the selection in the `Category DropDownList`, the `Sub-category DropDownList` should automatically display the corresponding contents.
I've tried to write an `onSelectedIndexChanged` handler, but I don't know how to continue. **Any ideas?** *(bearing in mind that I did all the rowDataBound() code to fill the drop-down lists)*
**In other words, how to populate a dropdownlist other than in row\_databound()**
code:
```
protected void grdBulkScheduler_RowDataBound(object sender, GridViewRowEventArgs e)
{
try
{
if (e.Row.RowType == DataControlRowType.DataRow)
{
DropDownList ddlCategory = (DropDownList)e.Row.FindControl("ddlCategory");
if (ddlCategory != null)
{
ddlCategory.DataSource = cData.GetCategory();
ddlCategory.DataValueField = "c_ID";
ddlCategory.DataTextField = "c_Text";
ddlCategory.DataBind();
}
}
}
catch (Exception)
{
// placeholder: handle or rethrow binding errors here
throw;
}
}
```
Here I am finding the drop down list **category** from **GridViewRowEventArgs**
**In the SelectedIndexChanged handler, how can I find the DropDownList?** *since
`DropDownList ddlCategory = (DropDownList)e.Row.FindControl("ddlCategory")` is not working* | 2013/05/24 | [
"https://Stackoverflow.com/questions/16737372",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1522782/"
] | FindControl is not recursive and you have a number of `TableCell` controls in the `Cells` property on your `GridViewRow` before you get to the level where your control is, so changing it to something like this will work:
```
TableCell cell = (TableCell)e.Row.FindControl("idOfMyCellIfItHasOne");
DropDownList ddlCategory = (DropDownList)cell.FindControl("ddlCategory");
```
Alternatively, if your cell/column has no ID and/or you know the position of the cell in the table isn't going to change, you can use the indexer on the `Cells` property:
```
DropDownList ddlCategory = (DropDownList)e.Row.Cells[cellIndex].FindControl("ddlCategory");
``` | Set **AutoPostBack="true"** on your dropdown,
like this:
```
<asp:DropDownList ID="footer_id" AutoPostBack="true"
OnSelectedIndexChanged="footer_id_SelectedIndexChanged"
runat="server"></asp:DropDownList>
``` |
16,737,372 | I have a `gridview` in ASP.NET with `template fields` containing `select`, `edit` and `delete` links in each row, and the footer containing an `insert` link.
There are two `dropdownlists` in each row, say `Category` and `Sub-category`; when I change the selection in the `Category DropDownList`, the `Sub-category DropDownList` should automatically display the corresponding contents.
I've tried to write an `onSelectedIndexChanged` handler, but I don't know how to continue. **Any ideas?** *(bearing in mind that I did all the rowDataBound() code to fill the drop-down lists)*
**In other words, how to populate a dropdownlist other than in row\_databound()**
code:
```
protected void grdBulkScheduler_RowDataBound(object sender, GridViewRowEventArgs e)
{
try
{
if (e.Row.RowType == DataControlRowType.DataRow)
{
DropDownList ddlCategory = (DropDownList)e.Row.FindControl("ddlCategory");
if (ddlCategory != null)
{
ddlCategory.DataSource = cData.GetCategory();
ddlCategory.DataValueField = "c_ID";
ddlCategory.DataTextField = "c_Text";
ddlCategory.DataBind();
}
}
}
catch (Exception)
{
// placeholder: handle or rethrow binding errors here
throw;
}
}
```
Here I am finding the drop down list **category** from **GridViewRowEventArgs**
**In the SelectedIndexChanged handler, how can I find the DropDownList?** *since
`DropDownList ddlCategory = (DropDownList)e.Row.FindControl("ddlCategory")` is not working* | 2013/05/24 | [
"https://Stackoverflow.com/questions/16737372",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1522782/"
] | It sounds like you want to bind data to the subcategory dropdownlist using the value you have selected in the category dropdownlist. You can do:
```
protected void ddlCategory_SelectedIndexChanged(object sender, EventArgs e)
{
GridViewRow row = (GridViewRow)((DropDownList)sender).Parent.Parent;
DropDownList ddlSubCategory = (DropDownList)row.FindControl("ddlSubCategory");
ddlSubCategory.DataSource = subCategoryData; // placeholder: bind whatever you want, e.g. based on ((DropDownList)sender).SelectedValue
ddlSubCategory.DataBind();
}
```
If I have misunderstood you, please correct me in a comment. | Set **AutoPostBack="true"** on your dropdown,
like this:
```
<asp:DropDownList ID="footer_id" AutoPostBack="true"
OnSelectedIndexChanged="footer_id_SelectedIndexChanged"
runat="server"></asp:DropDownList>
``` |
63,088,206 | I want to replace:
```
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
app:backgroundTint = "..."
...
android:orientation="horizontal">
```
with
```
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
style="@style/My.Bg.Snackbar.DayNight"
...
android:orientation="horizontal">
```
and
```
<style name="My.Bg.Dark" parent="">
<item name="app:backgroundTint">@color/og_background_dark</item>
</style>
```
and
```
<declare-styleable name="My">
<!-- The background color. -->
<attr name="app:backgroundTint" format="color" />
</declare-styleable>
</resources>
```
but I get an error:
```
error: resource app:attr/backgroundTint not found.
``` | 2020/07/25 | [
"https://Stackoverflow.com/questions/63088206",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/311130/"
] | You can use:
```
<style name="My.Bg.Dark" parent="">
<item name="backgroundTint">@color/og_background_dark</item>
</style>
``` | Just use this code:
```
<style name="MyStyleName" parent="">
<item name="backgroundTint">@color/green</item>
</style>
``` |
67,630,701 | Hello, I am new to Django. I saw the line `path('dashboard/(?P<user>.*)/$'),` and I didn't understand it. I want to learn this part, but I don't know how to search for it on Google and YouTube. | 2021/05/21 | [
"https://Stackoverflow.com/questions/67630701",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14793957/"
] | Mmm, `enumerate` should do the trick.
```
names = ['Ada Log\n', 'Ena Blue\n', 'Kin Wall\n', 'Kin Wall\n', 'Foxy Rex\n', 'Esk Brown']
# Note: the second argument to enumerate tells us where to start
for count, name in enumerate(names, 1):
print(f'[{count}] {name}')
```
output
```
[1] Ada Log
[2] Ena Blue
[3] Kin Wall
[4] Kin Wall
[5] Foxy Rex
[6] Esk Brown
``` | Similar to @BuddyBob's solution:
```
names = ['Ada Log\n', 'Ena Blue\n', 'Kin Wall\n', 'Kin Wall\n', 'Foxy Rex\n', 'Esk Brown']
for count, name in enumerate(names, 1):
print(f'[{count}] {name}')
``` |
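As a small aside (a hypothetical refinement, not part of either answer above), the names in that list carry trailing `\n` characters, which makes `print()` double-space the output; they can be stripped while enumerating:

```python
names = ['Ada Log\n', 'Ena Blue\n', 'Kin Wall\n', 'Esk Brown']

# enumerate(names, 1) numbers from 1; strip() removes the embedded newline
lines = [f'[{count}] {name.strip()}' for count, name in enumerate(names, 1)]
print('\n'.join(lines))
```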
67,630,701 | Hello, I am new to Django. I saw the line `path('dashboard/(?P<user>.*)/$'),` and I didn't understand it. I want to learn this part, but I don't know how to search for it on Google and YouTube. | 2021/05/21 | [
"https://Stackoverflow.com/questions/67630701",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14793957/"
] | Using a list comprehension with enumeration:
```py
inp = ['Ada Log\n', 'Ena Blue\n', 'Kin Wall\n', 'Foxy Rex\n', 'Esk Brown']
output = ''.join(['[' + str(ind + 1) + '] ' + x for ind, x in enumerate(inp)])
print(output)
```
This prints:
```
[1] Ada Log
[2] Ena Blue
[3] Kin Wall
[4] Foxy Rex
[5] Esk Brown
``` | Similar to @BuddyBob's solution:
```
names = ['Ada Log\n', 'Ena Blue\n', 'Kin Wall\n', 'Kin Wall\n', 'Foxy Rex\n', 'Esk Brown']
for count, name in enumerate(names, 1):
print(f'[{count}] {name}')
``` |
271,107 | Let $f$ be a real-valued continuous function on the interval $[0,1]$ and satisfy the following estimate
$$
\left|\int\_0^1 f(t) e^{st}dt\right|\le Cs^{\frac12},\quad s>1,
$$
where the constant $C$ is independent of $s$.
Can we assert that $f$ is identically zero on $[0,1]$? | 2017/05/31 | [
"https://mathoverflow.net/questions/271107",
"https://mathoverflow.net",
"https://mathoverflow.net/users/33232/"
] | Michael has essentially answered this in his comment, but let me make this more explicit.
In fact, a stronger statement is true: If $F(z)=\int\_0^1 f(t)e^{tz}\, dt$ satisfies $|F(s)|\lesssim e^{(a+\epsilon)s}$ for $s>1$ and all $\epsilon>0$ (but with possibly $\epsilon$ dependent implied constants), then $f=0$ on $[a,1]$. (This is of course extremely plausible right away, or how could there be cancellations between the various exponentials for large $s>1$?)
By splitting $0\le t\le 1$ into the two parts $[0,a+\epsilon]$ and $[a+\epsilon, 1]$, we see that the claim is equivalent to the following variant of it: If $G(z)=\int\_0^b g(t)e^{tz}\, dt$ is bounded for $z=s\ge 0$, then $g\equiv 0$.
Since $G$ is of exponential type, the [Phragmen-Lindelof principle](https://en.wikipedia.org/wiki/Phragm%C3%A9n%E2%80%93Lindel%C3%B6f_principle#Phragm.C3.A9n.E2.80.93Lindel.C3.B6f_principle_for_a_sector_in_the_complex_plane) applies to all sectors of opening $<\pi$, and in particular, it applies to quarter planes. Since $G$ is bounded on the imaginary axis and on $z=s\ge 0$, it is bounded on the right half plane. It is also, trivially, bounded on the left half plane. Thus $G$ is constant, and the constant is zero since $G$ is also square integrable on the imaginary axis. | The answer is, yes.
Suppose $f\not\equiv0$. By [Stone–Weierstrass theorem](https://en.wikipedia.org/wiki/Stone%E2%80%93Weierstrass_theorem), there is a sequence of non-trivial polynomials $P\_n(t)$ converging uniformly to $f(t)$. That means, the inequality
$$
\left|\int\_0^1 P\_n(t) e^{st}dt\right|\le C's^{\frac12},\quad s>1,
$$
must persist for some polynomial $P\_n(t)$ (perhaps $n$ large). However, it is easily checked that the integral against a polynomial grows exponentially with $s$. A contradiction. Thus, the claim follows. |
44,856,152 | I'm using Leaflet library for maps and I run into a small problem.
I have simple non-geographical map and on it I have a simple line connecting two coordinates. When someone clicks anywhere on the line, a Popup opens up. I am trying to display the coordinates of the clicked spot in the popup itself.
I tried doing something like this, but I'm always getting an undefined error.
```
L.polyline([xy([296.4, -235.1]), xy([1426.3, 100.3])], {color: 'red'}).bindPopup('Coordinates: ' + L.getPopup.getLatLng).addTo(myLayerGroup);
```
I understand that I'm supposed to call the `getLatLng()` method on the popup itself, but how do I do that exactly? How do I reference the popup itself, since I never defined it as a separate variable? I have hundreds of lines on my map, so declaring each line and popup as a separate variable is not really the optimal solution.
Thanks. | 2017/06/30 | [
"https://Stackoverflow.com/questions/44856152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6484612/"
] | Please follow the `xml` convention for defining a `string` in `strings.xml`:
```
<string name="kick_info_pertama">Kick info pertama</string>
```
Please check that your string really exists in `strings.xml`, and use this code in your `Fragment`:
```
getActivity().getString(R.string.your_resource_id);
``` | You should not use uppercase letters in a string resource name:
change KickInfoPertama to kickinfopertama, and
try this:
kickInfo.setText(getString(R.string.kickinfopertama)); |
21,692,645 | I have a `View`, and it is giving me screen coordinates relative to its parent.
I have tried the `getTop()` and `getLeft()` methods of the view, but they are not working for me.
How can I get the coordinates relative to the main screen? | 2014/02/11 | [
"https://Stackoverflow.com/questions/21692645",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3295536/"
] | Just tested it out, and the easiest way I can see (which seems to work in Chrome at least, but may need further testing) is setting a cookie.
On logout do something like `setcookie('loggedout',1)`. You'll also need to do the opposite on login - `unset($_COOKIE['loggedout'])`
Then you just need some simple Javascript...
```
function readCookie(name) {
var nameEQ = escape(name) + "=";
var ca = document.cookie.split(';');
for (var i = 0; i < ca.length; i++) {
var c = ca[i];
while (c.charAt(0) === ' ') c = c.substring(1, c.length);
if (c.indexOf(nameEQ) === 0) return unescape(c.substring(nameEQ.length, c.length));
}
return null;
}
window.setInterval(function() {
if(readCookie('loggedout')==1) {
window.location.assign('loggedout.html')
//Or whatever else you want!
}
},1000)
```
That'll check each second to see if the cookie is set. Magic. | Here is my code to solve the issue: I set one cookie at login and delete it at logout, and vice versa.
Script:
-------
```
function readCookie(name) {
var nameEQ = escape(name) + "=";
var ca = document.cookie.split(';');
for (var i = 0; i < ca.length; i++) {
var c = ca[i];
while (c.charAt(0) === ' ') c = c.substring(1, c.length);
if (c.indexOf(nameEQ) === 0) return unescape(c.substring(nameEQ.length, c.length));
}
return null;
}
function setCookie(cname,cvalue,exdays)
{
var d = new Date();
d.setTime(d.getTime()+(exdays*24*60*60*1000));
var expires = "expires="+d.toGMTString();
document.cookie = cname + "=" + cvalue + "; " + expires;
}
window.setInterval(function() {
if(readCookie('loggedout')==1) {
window.location.reload();
setCookie('loggedout',2,3);
//Or whatever else you want!
}
else if(readCookie('loggedin')==1) {
window.location.reload();
setCookie('loggedin',2,3);
//Or whatever else you want!
}
},2000)
```
Controller:
Login:
------
```
$this->load->helper('cookie');
$cookie = array(
'name' => 'loggedin',
'value' => '1',
'expire' => '86500'
);
set_cookie($cookie);
$domain= preg_replace("/^[\w]{2,6}:\/\/([\w\d\.\-]+).*$/","$1", $this->config->slash_item('base_url'));
$path = explode($domain,base_url());
delete_cookie('loggedout');
delete_cookie('loggedout',$domain, $path[1] );
```
Logout:
-------
```
$cookie = array(
'name' => 'loggedout',
'value' => '1',
'expire' => '86500'
);
set_cookie($cookie);
$domain= preg_replace("/^[\w]{2,6}:\/\/([\w\d\.\-]+).*$/","$1", $this->config->slash_item('base_url'));
$path = explode($domain,base_url());
delete_cookie('loggedin');
delete_cookie('loggedin','localhost', '/<!-- Your path -->/');
delete_cookie('loggedin',$domain, $path[1] );
``` |
2,462,514 | I could not understand this question on my textbook. It's in the chapter of Metric Spaces, and I do think it's refering to a metric induced by a function $f$.
What is the maximum number of points that a subspace $X \subset \mathbb{R}^2$ can have so that $\mathbb{R}^2$ induces the discrete metric in $X$? | 2017/10/08 | [
"https://math.stackexchange.com/questions/2462514",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/488911/"
] | Take three points in $A,B,C\in\mathbb{R}^2$ such that the distance between any two of them is $1$. For instance, you can take $A=(0,0)$, $B=(1,0)$, and $C=\left(\frac12,\frac{\sqrt3}2\right)$. Could there be a fourth point $D\in\mathbb{R}^2$ whose distance to the other three is also $1$? No, because then it would belong to the circle centered at $A$ with radius $1$ and also to the circle centered at $B$ with radius $1$. There are only two points at the intersection of these circles. One of them is $C$ and the distance from the other one to $C$ is $\sqrt3$, which is not $1$.
Therefore, the answer is three. | Suppose we have such an $X$. Note that $\mathbb{R}^2$ has a countable base, so every subspace too. And a discrete second countable space has at most countably many points as all sets $\{x\}$ must be in any base for $X$.
Directly: for every $x \in X$ pick rationals $q\_1,q\_2,r\_1, r\_2$ such that $\{x\} = ((q\_1,q\_2) \times (r\_1, r\_2)) \cap X$, which can be done as $\{x\}$ is open in the subspace. Then this defines an injection of $X$ into the countable set $\mathbb{Q}^4$.
As David Hartley noted, we want the discrete **metric** on $X$, in which case the answer is $3$: an equilateral triangle. We cannot have $4$ for geometric reasons. (We can have a regular tetrahedron in $\mathbb{R}^3$, of course.) |
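As an illustrative aside (not part of either answer), the circle-intersection argument above can be checked numerically in Python:

```python
import math

# Vertices of the unit equilateral triangle from the first answer
A = (0.0, 0.0)
B = (1.0, 0.0)
C = (0.5, math.sqrt(3) / 2)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# All pairwise distances are 1, so the induced metric on {A, B, C} is discrete
assert all(abs(dist(p, q) - 1) < 1e-12 for p, q in [(A, B), (A, C), (B, C)])

# The only other point at distance 1 from both A and B is C's mirror image,
# and it sits at distance sqrt(3) from C, so no fourth point works
mirror = (0.5, -math.sqrt(3) / 2)
print(dist(mirror, C))
```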
8,537,119 | Is it possible to show on **page source** ( **ctrl+U** ) the HTML elements I've added in JavaScript and jQuery codes? | 2011/12/16 | [
"https://Stackoverflow.com/questions/8537119",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1101391/"
] | **No.**
The page source will always show you the HTML retrieved from the server.
Inspect the generated DOM tree instead, e.g. with Firebug (Firefox) or the Developer Tools (Chrome, Safari). | Nope, you can just see it in your Firebug, developer tools, etc... |
8,537,119 | Is it possible to show on **page source** ( **ctrl+U** ) the HTML elements I've added in JavaScript and jQuery codes? | 2011/12/16 | [
"https://Stackoverflow.com/questions/8537119",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1101391/"
] | **No.**
The page source will always show you the HTML retrieved from the server.
Inspect the generated DOM tree instead, e.g. with Firebug (Firefox) or the Developer Tools (Chrome, Safari). | No, but every modern browser has a way/extension to see the *current* source code (actually, the DOM tree), i.e. including everything done by JavaScript. |
8,537,119 | Is it possible to show on **page source** ( **ctrl+U** ) the HTML elements I've added in JavaScript and jQuery codes? | 2011/12/16 | [
"https://Stackoverflow.com/questions/8537119",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1101391/"
] | **No.**
The page source will always show you the HTML retrieved from the server.
Inspect the generated DOM tree instead, e.g. with Firebug (Firefox) or the Developer Tools (Chrome, Safari). | Depending on the browser (I like Chrome / Firefox / Safari for this), you want to look at the developer tools. In Firefox you can use Firebug, in Chrome it's the Developer Tools, and in Safari you have to turn on the Develop menu through preferences. In all three cases, you want to look at the DOM inspector. |
8,537,119 | Is it possible to show on **page source** ( **ctrl+U** ) the HTML elements I've added in JavaScript and jQuery codes? | 2011/12/16 | [
"https://Stackoverflow.com/questions/8537119",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1101391/"
] | Nope, you can just see it in your Firebug, developer tools, etc... | No, but every modern browser has a way/extension to see the *current* source code (actually, the DOM tree), i.e. including everything done by JavaScript. |
8,537,119 | Is it possible to show on **page source** ( **ctrl+U** ) the HTML elements I've added in JavaScript and jQuery codes? | 2011/12/16 | [
"https://Stackoverflow.com/questions/8537119",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1101391/"
] | Nope, you can just see it in your Firebug, developer tools, etc... | Depending on the browser (I like Chrome / Firefox / Safari for this), you want to look at the developer tools. In Firefox you can use Firebug, in Chrome it's the Developer Tools, and in Safari you have to turn on the Develop menu through preferences. In all three cases, you want to look at the DOM inspector. |
13,919,535 | I am using the following VB.NET code to copy an Excel sheet within the same workbook, but the sheet names are written with "(2)" every time. What is wrong here?
```
Dim inp as Integer
inp=Val(Textbox1.Text)
oWB = oXL.Workbooks.Open("D:\testfile.xlsx")
oSheet = oWB.Worksheets("base")
With oWB
For i = 0 To inp - 1
oSheet.Copy(Before:=.Worksheets(i + 1))
With oSheet
.Name = "INP" & i + 1
End With
Next
End With
```
How do I get rid of the "(2)" in the sheet name?
Thanks | 2012/12/17 | [
"https://Stackoverflow.com/questions/13919535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1461150/"
] | Try this (assuming your time index is POSIXct):
```
library(zoo)
st <- as.POSIXct("2012-01-21 18:45")
g <- seq(st, end(z), by = "15 min") # grid
na.approx(z, xout = g)
```
See `?na.approx.zoo` for more info.
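For comparison only (not part of the answer), the same idea of linear interpolation onto a time grid can be sketched in plain Python; the timestamps and humidity values below are taken from the question, and the helper name `interp` is just an illustrative choice. The grid points are assumed to lie strictly inside the observed range:

```python
from datetime import datetime, timedelta
from bisect import bisect_left

# Irregular humidity observations from the question (timestamp, value)
obs = [(datetime(2012, 1, 21, 18, 41, 50), 47.7),
       (datetime(2012, 1, 21, 18, 46, 43), 44.5),
       (datetime(2012, 1, 21, 18, 51, 35), 43.2),
       (datetime(2012, 1, 21, 18, 56, 28), 42.5),
       (datetime(2012, 1, 21, 19, 1, 21), 42.2)]

def interp(ts, series):
    """Linear interpolation at time ts, like na.approx with xout."""
    times = [t for t, _ in series]
    i = bisect_left(times, ts)
    (t0, v0), (t1, v1) = series[i - 1], series[i]
    w = (ts - t0).total_seconds() / (t1 - t0).total_seconds()
    return v0 + w * (v1 - v0)

# 15-minute grid starting at 18:45, like `seq(st, end(z), by = "15 min")`;
# only two grid points (18:45 and 19:00) fall inside this sample
start = datetime(2012, 1, 21, 18, 45)
grid = [start + timedelta(minutes=15 * k) for k in range(2)]
print([round(interp(ts, obs), 5) for ts in grid])
# -> [45.62491, 42.28294], matching the humid column of the na.approx output
```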
**Note:** Since the question did not provide the data in reproducible form we do so here:
```
Lines <- "Id date Time humid humtemp prtemp press t1
1 2012-01-21 18:41:50 47.7 14.12 13.870 1005.70 -0.05277778
1 2012-01-21 18:46:43 44.5 15.37 15.100 1005.20 0.02861111
1 2012-01-21 18:51:35 43.2 15.88 15.576 1005.10 0.10972222
1 2012-01-21 18:56:28 42.5 16.17 15.833 1004.90 0.19111111
1 2012-01-21 19:01:21 42.2 16.31 15.986 1004.80 0.27250000
1 2012-01-21 19:06:14 41.8 16.47 16.118 1004.60 0.35388889
1 2012-01-21 19:11:07 41.6 16.51 16.177 1004.60 0.43527778"
library(zoo)
z <- read.zoo(text = Lines, header = TRUE, index = 2:3, tz = "")
st <- as.POSIXct("2012-01-21 18:45")
g <- seq(st, end(z), by = "15 min") # grid
na.approx(z, xout = g)
```
giving:
```
Id humid humtemp prtemp press t1
2012-01-21 18:45:00 1 45.62491 14.93058 14.66761 1005.376 -1.501706e-09
2012-01-21 19:00:00 1 42.28294 16.27130 15.94370 1004.828 2.500000e-01
``` | I can't find a function in the xts package (or zoo) that approximates the ts at given dates.
So, my idea is to insert NA into the original ts for the given dates.
```
ids <- as.POSIXct( align.time(index(dat.xts),60*5)) # range dates
# I create an xts with NA
y <- xts(x=matrix(data=NA,nrow=dim(dat.xts)[1],
ncol=dim(dat.xts)[2]),
order.by=ids)
rbind(y,dat.xts)
```
```
humid humtemp prtemp press t
2012-01-21 18:41:50 47.7 14.12 13.870 1005.7 -0.05277778
2012-01-21 18:45:00 NA NA NA NA NA
2012-01-21 18:46:43 44.5 15.37 15.100 1005.2 0.02861111
2012-01-21 18:50:00 NA NA NA NA NA
2012-01-21 18:51:35 43.2 15.88 15.576 1005.1 0.10972222
2012-01-21 18:55:00 NA NA NA NA NA
```
Now you can use na.approx or na.spline like this
```
na.approx(rbind(y,dat.xts))[index(y)]
humid humtemp prtemp press t
2012-01-21 18:45:00 45.62 14.93 14.67 1005.38 0.00
2012-01-21 18:50:00 43.62 15.71 15.42 1005.13 0.08
2012-01-21 18:55:00 42.71 16.08 15.76 1004.96 0.17
2012-01-21 19:00:00 42.28 16.27 15.94 1004.83 0.25
2012-01-21 19:05:00 41.90 16.43 16.08 1004.65 0.33
2012-01-21 19:10:00 41.65 16.50 16.16 1004.60 0.42
``` |
13,919,535 | I am using the following VB.NET code to copy an Excel sheet within the same workbook, but the sheet names are written with "(2)" every time. What is wrong here?
```
Dim inp as Integer
inp=Val(Textbox1.Text)
oWB = oXL.Workbooks.Open("D:\testfile.xlsx")
oSheet = oWB.Worksheets("base")
With oWB
For i = 0 To inp - 1
oSheet.Copy(Before:=.Worksheets(i + 1))
With oSheet
.Name = "INP" & i + 1
End With
Next
End With
```
How do I get rid of the "(2)" in the sheet name?
Thanks | 2012/12/17 | [
"https://Stackoverflow.com/questions/13919535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1461150/"
] | You can see the process as follows:
1. Build a *sequence* based on the *data* ranges.
2. Merge the *sequence* and the *data*.
3. Interpolate the values: constant or linear method.
**Creating the dataset:**
```
data1 <- read.table(text="1 2012-01-21 18:41:50 47.7 14.12 13.870 1005.70 -0.05277778
1 2012-01-21 18:46:43 44.5 15.37 15.100 1005.20 0.02861111
1 2012-01-21 18:51:35 43.2 15.88 15.576 1005.10 0.10972222
1 2012-01-21 18:56:28 42.5 16.17 15.833 1004.90 0.19111111
1 2012-01-21 19:01:21 42.2 16.31 15.986 1004.80 0.27250000
1 2012-01-21 19:06:14 41.8 16.47 16.118 1004.60 0.35388889
1 2012-01-21 19:11:07 41.6 16.51 16.177 1004.60 0.43527778",
col.names=c("Id","date","Time","humid","humtemp","prtemp","press","t1"))
data1$datetime <- strptime(as.character(paste(data1$date, data1$Time, sep=" ")),"%Y-%m-%d %H:%M:%S")
```
Library zoo:
```
library(zoo)
```
**Step 1:**
```
# sequence interval 5 seconds
seq1 <- zoo(order.by=(as.POSIXlt( seq(min(data1$datetime), max(data1$datetime), by=5) )))
```
**Step 2:**
```
mer1 <- merge(zoo(x=data1[4:7],order.by=data1$datetime), seq1)
```
**Step 3:**
```
#Constant interpolation
dataC <- na.approx(mer1, method="constant")
#Linear interpolation
dataL <- na.approx(mer1)
```
**Visualizing**
```
head(dataC)
humid humtemp prtemp press
2012-01-21 18:41:50 47.7 14.12 13.87 1005.7
2012-01-21 18:41:55 47.7 14.12 13.87 1005.7
2012-01-21 18:42:00 47.7 14.12 13.87 1005.7
2012-01-21 18:42:05 47.7 14.12 13.87 1005.7
2012-01-21 18:42:10 47.7 14.12 13.87 1005.7
2012-01-21 18:42:15 47.7 14.12 13.87 1005.7
head(dataL)
humid humtemp prtemp press
2012-01-21 18:41:50 47.70000 14.12000 13.87000 1005.700
2012-01-21 18:41:55 47.64539 14.14133 13.89099 1005.691
2012-01-21 18:42:00 47.59078 14.16266 13.91198 1005.683
2012-01-21 18:42:05 47.53618 14.18399 13.93297 1005.674
2012-01-21 18:42:10 47.48157 14.20532 13.95396 1005.666
2012-01-21 18:42:15 47.42696 14.22666 13.97495 1005.657
``` | I can't find a function in the xts package (or zoo) that approximates the ts at given dates.
So, my idea is to insert NA into the original ts for the given dates.
```
ids <- as.POSIXct( align.time(index(dat.xts),60*5)) # range dates
# I create an xts with NA
y <- xts(x=matrix(data=NA,nrow=dim(dat.xts)[1],
ncol=dim(dat.xts)[2]),
order.by=ids)
rbind(y,dat.xts)
```
```
humid humtemp prtemp press t
2012-01-21 18:41:50 47.7 14.12 13.870 1005.7 -0.05277778
2012-01-21 18:45:00 NA NA NA NA NA
2012-01-21 18:46:43 44.5 15.37 15.100 1005.2 0.02861111
2012-01-21 18:50:00 NA NA NA NA NA
2012-01-21 18:51:35 43.2 15.88 15.576 1005.1 0.10972222
2012-01-21 18:55:00 NA NA NA NA NA
```
Now you can use na.approx or na.spline like this
```
na.approx(rbind(y,dat.xts))[index(y)]
humid humtemp prtemp press t
2012-01-21 18:45:00 45.62 14.93 14.67 1005.38 0.00
2012-01-21 18:50:00 43.62 15.71 15.42 1005.13 0.08
2012-01-21 18:55:00 42.71 16.08 15.76 1004.96 0.17
2012-01-21 19:00:00 42.28 16.27 15.94 1004.83 0.25
2012-01-21 19:05:00 41.90 16.43 16.08 1004.65 0.33
2012-01-21 19:10:00 41.65 16.50 16.16 1004.60 0.42
``` |
13,919,535 | I am using following VB.NET code to copy an excel sheet in a same workbook, but the sheet names are written with (2) every time, what is wrong here?
```
Dim inp as Integer
inp=Val(Textbox1.Text)
oWB = oXL.Workbooks.Open("D:\testfile.xlsx")
oSheet = oWB.Worksheets("base")
With oWB
For i = 0 To inp - 1
oSheet.Copy(Before:=.Worksheets(i + 1))
With oSheet
.Name = "INP" & i + 1
End With
Next
End With
```
How to get rid of "(2)" on the sheet name?
Thanks | 2012/12/17 | [
"https://Stackoverflow.com/questions/13919535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1461150/"
] | Try this (assuming your time index is POSIXct):
```
library(zoo)
st <- as.POSIXct("2012-01-21 18:45")
g <- seq(st, end(z), by = "15 min") # grid
na.approx(z, xout = g)
```
See `?na.approx.zoo` for more info.
**Note:** Since the question did not provide the data in reproducible form we do so here:
```
Lines <- "Id date Time humid humtemp prtemp press t1
1 2012-01-21 18:41:50 47.7 14.12 13.870 1005.70 -0.05277778
1 2012-01-21 18:46:43 44.5 15.37 15.100 1005.20 0.02861111
1 2012-01-21 18:51:35 43.2 15.88 15.576 1005.10 0.10972222
1 2012-01-21 18:56:28 42.5 16.17 15.833 1004.90 0.19111111
1 2012-01-21 19:01:21 42.2 16.31 15.986 1004.80 0.27250000
1 2012-01-21 19:06:14 41.8 16.47 16.118 1004.60 0.35388889
1 2012-01-21 19:11:07 41.6 16.51 16.177 1004.60 0.43527778"
library(zoo)
z <- read.zoo(text = Lines, header = TRUE, index = 2:3, tz = "")
st <- as.POSIXct("2012-01-21 18:45")
g <- seq(st, end(z), by = "15 min") # grid
na.approx(z, xout = g)
```
giving:
```
Id humid humtemp prtemp press t1
2012-01-21 18:45:00 1 45.62491 14.93058 14.66761 1005.376 -1.501706e-09
2012-01-21 19:00:00 1 42.28294 16.27130 15.94370 1004.828 2.500000e-01
``` | You can see the process as follows:
1. Build a *sequence* based on the *data* ranges.
2. Merge the *sequence* and the *data*.
3. Interpolate the values: constant or linear method.
**Creating the dataset:**
```
data1 <- read.table(text="1 2012-01-21 18:41:50 47.7 14.12 13.870 1005.70 -0.05277778
1 2012-01-21 18:46:43 44.5 15.37 15.100 1005.20 0.02861111
1 2012-01-21 18:51:35 43.2 15.88 15.576 1005.10 0.10972222
1 2012-01-21 18:56:28 42.5 16.17 15.833 1004.90 0.19111111
1 2012-01-21 19:01:21 42.2 16.31 15.986 1004.80 0.27250000
1 2012-01-21 19:06:14 41.8 16.47 16.118 1004.60 0.35388889
1 2012-01-21 19:11:07 41.6 16.51 16.177 1004.60 0.43527778",
col.names=c("Id","date","Time","humid","humtemp","prtemp","press","t1"))
data1$datetime <- strptime(as.character(paste(data1$date, data1$Time, sep=" ")),"%Y-%m-%d %H:%M:%S")
```
Library zoo:
```
library(zoo)
```
**Step 1:**
```
# sequence interval 5 seconds
seq1 <- zoo(order.by=(as.POSIXlt( seq(min(data1$datetime), max(data1$datetime), by=5) )))
```
**Step 2:**
```
mer1 <- merge(zoo(x=data1[4:7],order.by=data1$datetime), seq1)
```
**Step 3:**
```
#Constant interpolation
dataC <- na.approx(mer1, method="constant")
#Linear interpolation
dataL <- na.approx(mer1)
```
**Visualizing**
```
head(dataC)
humid humtemp prtemp press
2012-01-21 18:41:50 47.7 14.12 13.87 1005.7
2012-01-21 18:41:55 47.7 14.12 13.87 1005.7
2012-01-21 18:42:00 47.7 14.12 13.87 1005.7
2012-01-21 18:42:05 47.7 14.12 13.87 1005.7
2012-01-21 18:42:10 47.7 14.12 13.87 1005.7
2012-01-21 18:42:15 47.7 14.12 13.87 1005.7
head(dataL)
humid humtemp prtemp press
2012-01-21 18:41:50 47.70000 14.12000 13.87000 1005.700
2012-01-21 18:41:55 47.64539 14.14133 13.89099 1005.691
2012-01-21 18:42:00 47.59078 14.16266 13.91198 1005.683
2012-01-21 18:42:05 47.53618 14.18399 13.93297 1005.674
2012-01-21 18:42:10 47.48157 14.20532 13.95396 1005.666
2012-01-21 18:42:15 47.42696 14.22666 13.97495 1005.657
``` |
1,728,629 | I've got the following objects:
```
CREATE FUNCTION CONSTFUNC RETURN INT
DETERMINISTIC
AS
BEGIN
RETURN 1;
END;
CREATE TABLE "FUNCTABLE" (
"ID" NUMBER(*,0) NOT NULL,
"VIRT" NUMBER GENERATED ALWAYS AS ("CONSTFUNC"()) NULL
);
```
However, the functable => constfunc dependency is not listed in all\_ or user\_ dependencies. Is there anywhere I can access this dependency information in the dictionary? | 2009/11/13 | [
"https://Stackoverflow.com/questions/1728629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/79439/"
] | I just created your function and table in 11G (11.1) and can confirm your findings. I couldn't find anything in the Oracle docs either.
If you drop the function, the table status remains "VALID", but when you select from the table you get ORA-00904: "CHAMP"."CONSTFUNC": invalid identifier. This suggests that Oracle itself isn't aware of the dependency.
It might be worth asking this question on asktom.oracle.com, because Tom Kyte will have access to more information - he may even raise a bug about it if need be. | The expression used to generate the virtual column is listed in the DATA\_DEFAULT column of the [DBA|ALL|USER]\_TAB\_COLUMNS views. |
23,216,963 | ```
double similarity = matcher.Match(features1, features2);
if (similarity == ??) // What should I write here
{
Application.Exit();
}
```
If feature1 and feature2 match, then the application should exit. Please help me. | 2014/04/22 | [
"https://Stackoverflow.com/questions/23216963",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2560424/"
] | Since `Double` is a *floating point type* we usually compare `Double` using ***tolerance***, e.g.
```
Double tolerance = 0.001;
// Instead of just features1 == features2
if (Math.Abs(features1 - features2) <= tolerance) {
Application.Exit();
}
``` | You need a boolean not a double.
```
bool similarity = matcher.Match(features1, features2);
if (similarity)
{
Application.Exit();
}
```
Make sure your matcher.Match method returns a bool.
If you really need matcher.Match to return a double then please share your code with us so we can understand why you need this and then help you. |
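As a language-agnostic aside (not part of either answer), the tolerance caveat from the first answer is why many languages ship a helper for approximate comparison; Python's `math.isclose` is a minimal illustration:

```python
import math

similarity = 0.1 + 0.2          # a classic floating-point surprise
print(similarity == 0.3)        # False: 0.1 + 0.2 is 0.30000000000000004
print(math.isclose(similarity, 0.3, rel_tol=1e-9))  # True
print(abs(similarity - 0.3) <= 0.001)               # the explicit-tolerance pattern above
```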
50,354,606 | I want to use a LINQ Where clause for the code below. I have tried a foreach loop and it works perfectly, but with a large amount of data the foreach loop takes more time to process. Please give your valuable suggestions on how to get the result using a LINQ Where clause.
```
string path = @"D:/NewFolder";
var _path = Directory.GetFiles(path);
foreach (var file in _path)
{
foundFile = Path.GetFileNameWithoutExtension(file);
if (foundFile == PumpSelectedItem.PumpGuid)
{
fileName = foundFile;
isValidFileFound = true;
}
}
``` | 2018/05/15 | [
"https://Stackoverflow.com/questions/50354606",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9610968/"
] | Depends on what is a "better way" for you. An alternative is a [`FOR` loop with an implicit cursor over a `SELECT`](https://docs.oracle.com/cloud/latest/db112/LNPLS/cursor_for_loop_statement.htm#LNPLS1155) .
```
FOR R IN (SELECT COUNT(VALUE_TX) AS COUNTE
FROM VALUE
WHERE TRUNC(DATE) = TRUNC(SYSDATE)
GROUP BY HR_UTC) LOOP
IF R.COUNTE > 0 THEN
DBMS_OUTPUT.PUT_LINE(R.COUNTE);
END IF;
END LOOP;
```
It's a convenient syntactical shortcut, if that's something you count as "better" here. | >
> Is there a better way to do this?
>
>
>
Sure it is.
The problem is this condition: `where trunc(date) = ....`. This prevents the RDBMS from using an index on the `date` column. If the table is big, this can cause performance problems. I am not going to explain the reason; you can find an explanation elsewhere, for example here: [Why do functions on columns prevent the use of indexes?](https://stackoverflow.com/questions/37927069/why-do-functions-on-columns-prevent-the-use-of-indexes)
You need to replace this condition with:
```
where date >= trunc(sysdate) AND date < trunc(sysdate) + 1
```
or
`where date >= trunc(sysdate) AND date < trunc(sysdate) + interval '1' day` |
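Although the responses paired with this row drifted into Oracle territory, the question's foreach loop itself boils down to a single filtered query over a directory listing. A minimal Python sketch of that idea (the directory path and target stem are placeholders):

```python
from pathlib import Path

def find_file(directory, target_stem):
    """Return the first file in `directory` whose name without
    extension equals `target_stem`, or None if there is none."""
    return next(
        (p for p in Path(directory).iterdir()
         if p.is_file() and p.stem == target_stem),
        None,
    )
```

`next` with a default stops at the first match, which mirrors what a LINQ `FirstOrDefault` over a `Where` clause would do and avoids scanning the rest of the directory.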
7,352,856 | What are `:a` and `ta` in sed?
Example:
```
sed -e :a -e '/\\$/N; s/\\\n//; ta'
``` | 2011/09/08 | [
"https://Stackoverflow.com/questions/7352856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/935509/"
] | The best sed [manual](http://www.grymoire.com/Unix/Sed.html#uh-59).
`t label` is [testing](http://www.grymoire.com/Unix/Sed.html#toc-uh-59), which goes to `label` (in your case label is "a") is substitute was performed. | :a and ta are pair. ":a" fist makes a lable . if the sequence is completed before lable "ta",goto ":a" keep executing.Just like the loop. |
5,872,745 | Could anyone explain to me why this does not render "VALUE IS DEFAULT"?
```
<TextBlock Text="{Binding Fail, StringFormat=VALUE IS {0}, FallbackValue=DEFAULT}" />
```
There is something tricky about this syntax I am missing. Thank you in advance. | 2011/05/03 | [
"https://Stackoverflow.com/questions/5872745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/265706/"
] | Binding in WPF does not consider *StringFormat* while falling back to *FallbackValue* in case it fails.
You can use what **leon** suggested or go with *PriorityBinding*.
--EDIT--
This should work:
```
<TextBlock DataContext="{Binding Fail, FallbackValue=DEFAULT}" Text="{Binding StringFormat=VALUE IS {0}}" />
``` | The default fallback value is used for priority bindings, if you'd like to display "VALUE IS DEFAULT" for a fallback value, try the following.
```
<TextBlock Text="{Binding Fail, StringFormat=VALUE IS {0}, FallbackValue='VALUE IS DEFAULT'}" />
``` |
5,872,745 | Could anyone explain to me why this does not render "VALUE IS DEFAULT"?
```
<TextBlock Text="{Binding Fail, StringFormat=VALUE IS {0}, FallbackValue=DEFAULT}" />
```
There is something tricky about this syntax I am missing. Thank you in advance. | 2011/05/03 | [
"https://Stackoverflow.com/questions/5872745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/265706/"
] | I think it could also work using the runs inside the TextBlock :
```
<TextBlock>
<Run Text="Value is : "/>
<Run Text="{Binding Fail,FallbackValue=Default}"/>
</TextBlock>
```
? | The default fallback value is used for priority bindings, if you'd like to display "VALUE IS DEFAULT" for a fallback value, try the following.
```
<TextBlock Text="{Binding Fail, StringFormat=VALUE IS {0}, FallbackValue='VALUE IS DEFAULT'}" />
``` |
5,872,745 | Could anyone explain to me why this does not render "VALUE IS DEFAULT"?
```
<TextBlock Text="{Binding Fail, StringFormat=VALUE IS {0}, FallbackValue=DEFAULT}" />
```
There is something tricky about this syntax I am missing. Thank you in advance. | 2011/05/03 | [
"https://Stackoverflow.com/questions/5872745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/265706/"
] | Binding in WPF does not consider *StringFormat* while falling back to *FallbackValue* in case it fails.
You can use what **leon** suggested or go with *PriorityBinding*.
--EDIT--
This should work:
```
<TextBlock DataContext="{Binding Fail, FallbackValue=DEFAULT}" Text="{Binding StringFormat=VALUE IS {0}}" />
``` | I think it could also work using the runs inside the TextBlock :
```
<TextBlock>
<Run Text="Value is : "/>
<Run Text="{Binding Fail,FallbackValue=Default}"/>
</TextBlock>
```
? |
18,518,455 | I'm running several free, single-dyno apps on Heroku that go to "sleep" when idle for some time. I have a master site that is always awake and links/utilises these other apps that sometimes go to sleep. When I link to the sleeping apps from the main site there is that long loading wait whilst the other app wakes up.
What I want to do is wake all of the potentially sleeping apps the moment a user lands on the main page, sort of like pre-loading them, so it's quick to respond when the user needs it.
I am currently using a simple `$.get(...)` in the background but of course the console throws the 'Access-Control-Allow-Origin' error. I don't need any data or anything, just some sort of response to indicate the app is up and running. Anyone know how I can do this with an AJAX call without errors?
(Or if there's any other way, like through the API or something) | 2013/08/29 | [
"https://Stackoverflow.com/questions/18518455",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/465388/"
] | Here is possible solution: <https://coderwall.com/p/u0x3nw>
As answers containing only a link are not welcome here, I'll just duplicate link contents.
---
A common way to work around Heroku's idling policy is to set up a script to send a ping once an hour to keep the dyno alive.
You can use the following to add New Relic's free plan to your account.
```
$ heroku addons:add newrelic:standard
```
Open the New Relic interface:
```
$ heroku addons:open newrelic
```
Under Menu, inside the Reports section, find Availability.
Add your URL, set a ping time of < 1 hour, and you're all set to go.
Using Scheduler
Alternatively, if you don't like or want to use New Relic, you can actually set up a keep-alive dyno ping through Heroku itself, using the Heroku Scheduler.
For instance, if you're using Ruby, you could use a Rake task like:
```
desc "Pings PING_URL to keep a dyno alive"
task :dyno_ping do
require "net/http"
if ENV['PING_URL']
uri = URI(ENV['PING_URL'])
Net::HTTP.get_response(uri)
end
end
```
Add PING\_URL to your Heroku environment:
```
$ heroku config:add PING_URL=http://my-app.herokuapp.com
```
Set up Scheduler:
```
$ heroku addons:add scheduler:standard
$ heroku addons:open scheduler
```
That last command should open the Scheduler interface in your browser. You can now set up your dyno\_ping task to run once an hour:
```
$ rake dyno_ping
```
---
(c) Original blog post by [Aupajo](https://stackoverflow.com/users/10407/aupajo) | The error is because heroku is on a different domain, right? If so, one option is to send a request from your server to Heroku. Even better would be to setup an automated task via cron to regularly poll the server. I recommend curl, because it's already installed in most linux hosts. This would do the same as what Edward recommended, but it wouldn't require using outside systems. |
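The keep-alive idea in both answers reduces to periodically issuing an HTTP request from a server-side script, where the same-origin policy does not apply. A minimal Python sketch using only the standard library — the URL is a placeholder, and the hourly scheduling (cron, Heroku Scheduler, etc.) is left to the platform:

```python
import urllib.request

def ping(url, timeout=10.0):
    """Issue a GET request to wake a sleeping dyno.
    Returns the HTTP status code, or None if the request failed."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except Exception:
        return None

# e.g. run hourly from cron:  ping("http://my-app.herokuapp.com")
```

Swallowing the exception is deliberate here: for a keep-alive ping you only care whether the dyno answered, not why a particular attempt failed.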
18,518,455 | I'm running several free, single-dyno apps on Heroku that go to "sleep" when idle for some time. I have a master site that is always awake and links/utilises these other apps that sometimes go to sleep. When I link to the sleeping apps from the main site there is that long loading wait whilst the other app wakes up.
What I want to do is wake all of the potentially sleeping apps the moment a user lands on the main page, sort of like pre-loading them, so it's quick to respond when the user needs it.
I am currently using a simple `$.get(...)` in the background but of course the console throws the 'Access-Control-Allow-Origin' error. I don't need any data or anything, just some sort of response to indicate the app is up and running. Anyone know how I can do this with an AJAX call without errors?
(Or if there's any other way, like through the API or something) | 2013/08/29 | [
"https://Stackoverflow.com/questions/18518455",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/465388/"
] | Here is possible solution: <https://coderwall.com/p/u0x3nw>
As answers containing only a link are not welcome here, I'll just duplicate link contents.
---
A common way to work around Heroku's idling policy is to set up a script to send a ping once an hour to keep the dyno alive.
You can use the following to add New Relic's free plan to your account.
```
$ heroku addons:add newrelic:standard
```
Open the New Relic interface:
```
$ heroku addons:open newrelic
```
Under Menu, inside the Reports section, find Availability.
Add your URL, set a ping time of < 1 hour, and you're all set to go.
Using Scheduler
Alternatively, if you don't like or want to use New Relic, you can actually set up a keep-alive dyno ping through Heroku itself, using the Heroku Scheduler.
For instance, if you're using Ruby, you could use a Rake task like:
```
desc "Pings PING_URL to keep a dyno alive"
task :dyno_ping do
require "net/http"
if ENV['PING_URL']
uri = URI(ENV['PING_URL'])
Net::HTTP.get_response(uri)
end
end
```
Add PING\_URL to your Heroku environment:
```
$ heroku config:add PING_URL=http://my-app.herokuapp.com
```
Set up Scheduler:
```
$ heroku addons:add scheduler:standard
$ heroku addons:open scheduler
```
That last command should open the Scheduler interface in your browser. You can now set up your dyno\_ping task to run once an hour:
```
$ rake dyno_ping
```
---
(c) Original blog post by [Aupajo](https://stackoverflow.com/users/10407/aupajo) | just do `curl -I <name of project>.herokuapp.com/<some file like index.html or sometext.txt>`
then add this as a cronjob |
18,518,455 | I'm running several free, single-dyno apps on Heroku that go to "sleep" when idle for some time. I have a master site that is always awake and links/utilises these other apps that sometimes go to sleep. When I link to the sleeping apps from the main site there is that long loading wait whilst the other app wakes up.
What I want to do is wake all of the potentially sleeping apps the moment a user lands on the main page, sort of like pre-loading them, so it's quick to respond when the user needs it.
I am currently using a simple `$.get(...)` in the background but of course the console throws the 'Access-Control-Allow-Origin' error. I don't need any data or anything, just some sort of response to indicate the app is up and running. Anyone know how I can do this with an AJAX call without errors?
(Or if there's any other way, like through the API or something) | 2013/08/29 | [
"https://Stackoverflow.com/questions/18518455",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/465388/"
] | The error is because heroku is on a different domain, right? If so, one option is to send a request from your server to Heroku. Even better would be to setup an automated task via cron to regularly poll the server. I recommend curl, because it's already installed in most linux hosts. This would do the same as what Edward recommended, but it wouldn't require using outside systems. | just do `curl -I <name of project>.herokuapp.com/<some file like index.html or sometext.txt>`
then add this as a cronjob |
42,339,173 | I have an issue - with the following code I am trying to find out what is stored at a certain address and how long my static variable is stored at this specific position. (I read that static variables are stored infinitely and was quite surprised - wanted to test if this was true).
The code defines a static variable (its address on my system is 0x1000020c0 - this is probably rather arbitrary, but it was consistently the case).
If I now want to find out what integer value is stored at this address, I first have to print out the address with &number, which gives 0x1000020c0. Reinterpreting/recasting the address (0x1000020c0) gives 100 only if the address was printed beforehand, or if I use &number in the reinterpret_cast.
Can someone explain why this is the case?
```
int static number = 100;
// std::cout << number << std::endl; <- prints 100
// prints 100 and the address 0x1000020c0 in my case
// std::cout << number << " " << &number << std::endl;
// this does not work unless &number is printed previously
// std::cout << "Value is : " << *reinterpret_cast<int*>(0x1000020c0) << std::endl;
// this does work and show the correct value (100)
std::cout << "Value is : " << *reinterpret_cast<int*>(&number) << std::endl;
``` | 2017/02/20 | [
"https://Stackoverflow.com/questions/42339173",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7384720/"
] | In any given program, the object might, or might not be stored in the address 0x1000020c0. There are no guarantees either way. The address of the object is decided at compile (or possibly at link) time. A change to the program can change the address.
If you never take the address of the local static object, and never modify it, the compiler may optimize the variable away, so that no memory is used. If the object doesn't exist at all, then it definitely doesn't exist at the memory location 0x1000020c0.
If you use the object in a way that requires the object to exist, it will be in some memory location. Taking the address of the object usually triggers such requirement. This is strikingly similar to [observer effect](https://en.wikipedia.org/wiki/Observer_effect_(physics)) in physics.
If you dereference a pointer which does not point to an object (of appropriate type), the behaviour is undefined.
---
>
> when I print recast/reinterpret the value that is at 0x1000020c0 it prints nothing
>
>
>
As I explained above, the object is not guaranteed to exist at the memory location 0x1000020c0.
>
> even though the object was used since i printed its value via std::cout << number;
>
>
>
Accessing the value of an object doesn't necessarily require the object to exist. The compiler may be able to prove that the value of the object is 100, so it can store that value as a constant and not store the static object at all.
Besides, even if the static object did exist, it wouldn't necessarily exist in the address 0x1000020c0, unless you take the address and observe it to be so.
---
As a consequence: Don't ever cast an arbitrary number to a pointer (unless you work on some embedded platform that has hardcoded memory mappings). Seeing that the address of an object in one program is 0x1000020c0, doesn't make 0x1000020c0 non-arbitrary in another program. | 1. Assuming specific address of number is not the best idea.
2. In C++, `static` inside a function body is created at first invocation or when first time C++ program flow encounters the variable. They are never created if never used.
3. If possible, a compiler may choose to optimize the `static` and replace it with the value. |
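Point 2 above — a function-local static being created on first use and then persisting across calls — can be illustrated by loose analogy in Python, where a closure cell survives across calls the way a C++ function-local static's storage does. This is only an analogy; Python has no true equivalent of C++ storage classes:

```python
def make_counter(initial=100):
    state = {"value": None}  # created once; persists like a static's storage

    def counter():
        if state["value"] is None:   # first call: initialize lazily
            state["value"] = initial
        state["value"] += 1          # later calls reuse the same storage
        return state["value"]

    return counter
```

Each `make_counter()` call creates independent storage, whereas a C++ function-local static is shared by all callers of that function — a deliberate difference worth keeping in mind when using the analogy.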
42,339,173 | I have an issue - with the following code I am trying to find out what is stored at a certain address and how long my static variable is stored at this specific position. (I read that static variables are stored infinitely and was quite surprised - wanted to test if this was true).
The code defines a static variable (its address on my system is 0x1000020c0 - this is probably rather arbitrary, but it was consistently the case).
If I now want to find out what integer value is stored at this address, I first have to print out the address with &number, which gives 0x1000020c0. Reinterpreting/recasting the address (0x1000020c0) gives 100 only if the address was printed beforehand, or if I use &number in the reinterpret_cast.
Can someone explain why this is the case?
```
int static number = 100;
// std::cout << number << std::endl; <- prints 100
// prints 100 and the address 0x1000020c0 in my case
// std::cout << number << " " << &number << std::endl;
// this does not work unless &number is printed previously
// std::cout << "Value is : " << *reinterpret_cast<int*>(0x1000020c0) << std::endl;
// this does work and show the correct value (100)
std::cout << "Value is : " << *reinterpret_cast<int*>(&number) << std::endl;
``` | 2017/02/20 | [
"https://Stackoverflow.com/questions/42339173",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7384720/"
1. Assuming a specific address for `number` is not the best idea.
2. In C++, a `static` inside a function body is created on its first invocation, when the program flow first encounters the variable. It is never created if never used.
3. If possible, a compiler may choose to optimize the `static` and replace it with the value. | You're missing one very major statement here - the platform you're running this code on.
A static variable isn't stored "infinitely"; it's stored in that location for the duration of the program execution. If you're running on an embedded platform where your code jumps to main() at power-up, then you don't have anything else which will get in the way. If you're running on any other platform (Windows, Linux or whatever), that location becomes free when the program completes, and is immediately available for anything else to use.
It's possible that if you run your program, it completes, and you run it again, then maybe you'll get the same chunk of memory for your program to run in. In that case you'll get the same addresses for static variables. If something else has asked for a chunk of memory between your first run finishing and the next run starting (e.g. Chrome needed a bit more space for the pictures you were browsing), then your program won't be given the same chunk of memory and the variable won't be in the same place.
It gets more fun for DLLs. The same kind of rules apply, except they apply for the duration of the DLL being loaded instead of for the duration of program execution. A DLL could be loaded on startup and stay loaded all the way through, or it could be loaded and unloaded by applications as needed.
All this means that you're making some very strange assumptions. If you get the address of a static variable in your program, and then your program checks the contents of that address, you'll always get whatever's in that static variable. That's how static variables work. If you run your code twice, you'll be getting the address of the location for that static variable at your next run, as set up by your program when you run it that second time. In between runs of your program, that address is 100% free for anything else to use.
As others have already pointed out, after this you may also be seeing effects of compiler optimisation in the specific behaviour you're asking about. But the reason you're asking about this specific behaviour is that you seem to have misunderstood something fundamental to how static variables work. |
42,339,173 | I have an issue - with the following code I am trying to find out what is stored at a certain address and how long my static variable is stored at this specific position. (I read that static variables are stored infinitely and was quite surprised - wanted to test if this was true).
The code defines a static variable (its address on my system is 0x1000020c0 - this is probably rather arbitrary, but it was consistently the case).
If I now want to find out what integer value is stored at this address, I first have to print out the address with &number, which gives 0x1000020c0. Reinterpreting/recasting the address (0x1000020c0) gives 100 only if the address was printed beforehand, or if I use &number in the reinterpret_cast.
Can someone explain why this is the case?
```
int static number = 100;
// std::cout << number << std::endl; <- prints 100
// prints 100 and the address 0x1000020c0 in my case
// std::cout << number << " " << &number << std::endl;
// this does not work unless &number is printed previously
// std::cout << "Value is : " << *reinterpret_cast<int*>(0x1000020c0) << std::endl;
// this does work and show the correct value (100)
std::cout << "Value is : " << *reinterpret_cast<int*>(&number) << std::endl;
``` | 2017/02/20 | [
"https://Stackoverflow.com/questions/42339173",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7384720/"
] | In any given program, the object might, or might not be stored in the address 0x1000020c0. There are no guarantees either way. The address of the object is decided at compile (or possibly at link) time. A change to the program can change the address.
If you never take the address of the local static object, and never modify it, the compiler may optimize the variable away, so that no memory is used. If the object doesn't exist at all, then it definitely doesn't exist at the memory location 0x1000020c0.
If you use the object in a way that requires the object to exist, it will be in some memory location. Taking the address of the object usually triggers such requirement. This is strikingly similar to [observer effect](https://en.wikipedia.org/wiki/Observer_effect_(physics)) in physics.
If you dereference a pointer which does not point to an object (of appropriate type), the behaviour is undefined.
---
>
> when I print recast/reinterpret the value that is at 0x1000020c0 it prints nothing
>
>
>
As I explained above, the object is not guaranteed to exist at the memory location 0x1000020c0.
>
> even though the object was used since i printed its value via std::cout << number;
>
>
>
Accessing the value of an object doesn't necessarily require the object to exist. The compiler may be able to prove that the value of the object is 100, so it can store that value as a constant and not store the static object at all.
Besides, even if the static object did exist, it wouldn't necessarily exist in the address 0x1000020c0, unless you take the address and observe it to be so.
---
As a consequence: Don't ever cast an arbitrary number to a pointer (unless you work on some embedded platform that has hardcoded memory mappings). Seeing that the address of an object in one program is 0x1000020c0, doesn't make 0x1000020c0 non-arbitrary in another program. | You're missing one very major statement here - the platform you're running this code on.
A static variable isn't stored "infinitely"; it's stored in that location for the duration of the program execution. If you're running on an embedded platform where your code jumps to main() at power-up, then you don't have anything else which will get in the way. If you're running on any other platform (Windows, Linux or whatever), that location becomes free when the program completes, and is immediately available for anything else to use.
It's possible that if you run your program, it completes, and you run it again, then maybe you'll get the same chunk of memory for your program to run in. In that case you'll get the same addresses for static variables. If something else has asked for a chunk of memory between your first run finishing and the next run starting (e.g. Chrome needed a bit more space for the pictures you were browsing), then your program won't be given the same chunk of memory and the variable won't be in the same place.
It gets more fun for DLLs. The same kind of rules apply, except they apply for the duration of the DLL being loaded instead of for the duration of program execution. A DLL could be loaded on startup and stay loaded all the way through, or it could be loaded and unloaded by applications as needed.
All this means that you're making some very strange assumptions. If you get the address of a static variable in your program, and then your program checks the contents of that address, you'll always get whatever's in that static variable. That's how static variables work. If you run your code twice, you'll be getting the address of the location for that static variable at your next run, as set up by your program when you run it that second time. In between runs of your program, that address is 100% free for anything else to use.
As others have already pointed out, after this you may also be seeing effects of compiler optimisation in the specific behaviour you're asking about. But the reason you're asking about this specific behaviour is that you seem to have misunderstood something fundamental to how static variables work. |
305,320 | I used the following command to name my tables in the supplementary file. A question is how can I remove the space in 'Table S 1', so that it becomes 'Table S1'?
```
\documentclass[12pt,a4paper]{article}
\begin{document}
\renewcommand{\tablename}{Table S}
\begin{table}
\centering
\caption{Remove the space in 'Table S 1', so that it becomes 'Table S1'}
\begin{tabular*}
\hline
& p.adjusted & Gene Ratio \\ \hline
TBA & 3.28E-04 & 12/89 \\
\end{tabular*}
\end{table}
\end{document}
``` | 2016/04/20 | [
"https://tex.stackexchange.com/questions/305320",
"https://tex.stackexchange.com",
"https://tex.stackexchange.com/users/91455/"
] | The proper way to do this should not be to change the *table name* but the *table number*. Replace `\renewcommand{\tablename}{Table S}` with
```
\renewcommand{\thetable}{S\arabic{table}}
```
If you change the appearance of the regular table numbers, then the following code retains that look and just adds an S, which might or might not be what you want.
```
\let\oldthetable\thetable
\renewcommand{\thetable}{S\oldthetable}
```
Changing the table number also affects `\ref`, giving S1, while changing the table name gives just the number 1.
---
Another method: Replace `\renewcommand{\tablename}{Table S}` with
```
\makeatletter
\renewcommand{\tablename}{Table S\@gobble}
\makeatother
```
The command `\@gobble` removes the next token (which in this case is the space).
---
A problem with @samcarter's version is that if the caption is longer than one line the space is stretched somewhat, and the negative space doesn't account for that, leaving a small gap between S and 1 (negative space in red, `\@gobble` in black).
[](https://i.stack.imgur.com/ZJXMO.png) | Add a negative space of the size of a space:
```
\documentclass[12pt,a4paper]{article}
\begin{document}
\renewcommand{\tablename}{Table S\hskip-\the\fontdimen2\font\space }
\begin{table}
\centering
\caption{Remove the space in 'Table S 1', so that it becomes 'Table S1'}
\begin{tabular*}{\textwidth}{lcc}
\hline
& p.adjusted & Gene Ratio \\ \hline
TBA & 3.28E-04 & 12/89 \\
\end{tabular*}
\end{table}
\end{document}
```
[](https://i.stack.imgur.com/SS1z9.png) |
305,320 | I used the following command to name my tables in the supplementary file. A question is how can I remove the space in 'Table S 1', so that it becomes 'Table S1'?
```
\documentclass[12pt,a4paper]{article}
\begin{document}
\renewcommand{\tablename}{Table S}
\begin{table}
\centering
\caption{Remove the space in 'Table S 1', so that it becomes 'Table S1'}
\begin{tabular*}
\hline
& p.adjusted & Gene Ratio \\ \hline
TBA & 3.28E-04 & 12/89 \\
\end{tabular*}
\end{table}
\end{document}
``` | 2016/04/20 | [
"https://tex.stackexchange.com/questions/305320",
"https://tex.stackexchange.com",
"https://tex.stackexchange.com/users/91455/"
] | If you want that the “S” also appears in cross references, the correct way is to add it to `\thetable`:
```
\renewcommand{\thetable}{S\arabic{table}}
```
would work fine in `article`, but not in `book`. For a “class independent” solution, add
```
\usepackage{etoolbox}
```
to your set of packages and
```
\preto\thetable{S}
```
in the settings section of the preamble.
If you don't need the “S” in cross references, you can use the `caption` package:
```
\documentclass[12pt,a4paper]{article}
\usepackage{caption}
\DeclareCaptionLabelFormat{addS}{#1 S#2}
\captionsetup[table]{labelformat=addS}
\begin{document}
\begin{table}
\centering
\caption{Remove the space in 'Table S 1', so that it becomes 'Table S1'}
\begin{tabular}{lcc}
\hline
& p.adjusted & Gene Ratio \\ \hline
TBA & 3.28E-04 & 12/89 \\
\end{tabular}
\end{table}
\end{document}
```
[](https://i.stack.imgur.com/ZAEQu.png) | Add a negative space of the size of a space:
```
\documentclass[12pt,a4paper]{article}
\begin{document}
\renewcommand{\tablename}{Table S\hskip-\the\fontdimen2\font\space }
\begin{table}
\centering
\caption{Remove the space in 'Table S 1', so that it becomes 'Table S1'}
\begin{tabular*}{\textwidth}{lcc}
\hline
& p.adjusted & Gene Ratio \\ \hline
TBA & 3.28E-04 & 12/89 \\
\end{tabular*}
\end{table}
\end{document}
```
[](https://i.stack.imgur.com/SS1z9.png) |
305,320 | I used the following command to name my tables in the supplementary file. A question is how can I remove the space in 'Table S 1', so that it becomes 'Table S1'?
```
\documentclass[12pt,a4paper]{article}
\begin{document}
\renewcommand{\tablename}{Table S}
\begin{table}
\centering
\caption{Remove the space in 'Table S 1', so that it becomes 'Table S1'}
\begin{tabular*}
\hline
& p.adjusted & Gene Ratio \\ \hline
TBA & 3.28E-04 & 12/89 \\
\end{tabular*}
\end{table}
\end{document}
``` | 2016/04/20 | [
"https://tex.stackexchange.com/questions/305320",
"https://tex.stackexchange.com",
"https://tex.stackexchange.com/users/91455/"
] | The proper way to do this should not be to change the *table name* but the *table number*. Replace `\renewcommand{\tablename}{Table S}` with
```
\renewcommand{\thetable}{S\arabic{table}}
```
If you change the appearance of the regular table numbers, then the following code retains that look and just adds an S, which might or might not be what you want.
```
\let\oldthetable\thetable
\renewcommand{\thetable}{S\oldthetable}
```
Changing the table number also affects `\ref`, giving S1, while changing the table name gives just the number 1.
---
Another method: Replace `\renewcommand{\tablename}{Table S}` with
```
\makeatletter
\renewcommand{\tablename}{Table S\@gobble}
\makeatother
```
The command `\@gobble` removes the next token (which in this case is the space).
---
A problem with @samcarters version is that if the caption is longer than one line the space is stretched somewhat, and the negative space doesn't account for that, leaving a small gap between S and 1 (negative space in red, `\@gobble` in black).
[](https://i.stack.imgur.com/ZJXMO.png) | If you want that the “S” also appears in cross references, the correct way is to add it to `\thetable`:
```
\renewcommand{\thetable}{S\arabic{table}}
```
would do good in `article`, but it would not be good in `book`. For a “class independent” solution, add
```
\usepackage{etoolbox}
```
to your set of packages and
```
\preto\thetable{S}
```
in the settings section of the preamble.
If you don't need the “S” in cross references, you can use the `caption` package:
```
\documentclass[12pt,a4paper]{article}
\usepackage{caption}
\DeclareCaptionLabelFormat{addS}{#1 S#2}
\captionsetup[table]{labelformat=addS}
\begin{document}
\begin{table}
\centering
\caption{Remove the space in 'Table S 1', so that it becomes 'Table S1'}
\begin{tabular}{lcc}
\hline
& p.adjusted & Gene Ratio \\ \hline
TBA & 3.28E-04 & 12/89 \\
\end{tabular}
\end{table}
\end{document}
```
[](https://i.stack.imgur.com/ZAEQu.png) |
The following code is unable to send emails to customers and it does not throw any exception; it executes, but no email is sent and no error appears. I am completely new to ASP.NET. Can someone help me resolve the problem?
**Code:**
```
try
{
String userName = "ramesh";
String passWord = "123456";
String sendr = "ramesh@gmail.com";
String recer = "customer@yahoo.com";
String subject = "Comformation ";
String body = "Dear Customer";
MailMessage msgMail = new MailMessage(sendr, recer, subject, body);
int PortNumber = 25;
SmtpClient smtp = new SmtpClient("smtp.test.com", PortNumber);
msgMail.IsBodyHtml = true;
smtp.DeliveryMethod = SmtpDeliveryMethod.Network;
smtp.Credentials = new System.Net.NetworkCredential(userName, passWord);
smtp.Send(msgMail);
MsgLP.Text = "Emailed to Customer..";
LogInLink.Visible = true;
}
catch (Exception ex){
AuditLog.LogError("ErrorE-mail " + ex.Message);
}
``` | 2014/08/28 | [
"https://Stackoverflow.com/questions/25543233",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2830120/"
] | You have to set `smtp.EnableSsl=true` and use port number `587`. Your final code will be this:
```
try
{
String userName = "ramesh";
String passWord = "123456";
String sendr = "ramesh@gmail.com";
String recer = "customer@yahoo.com";
String subject = "Comformation ";
String body = "Dear Customer";
MailMessage msgMail = new MailMessage(sendr, recer, subject, body);
int PortNumber = 587; //change port number to 587
SmtpClient smtp = new SmtpClient("smtp.gmail.com", PortNumber); //change from test to gmail
smtp.EnableSsl = true; //set EnableSsl to true
msgMail.IsBodyHtml = true;
smtp.DeliveryMethod = SmtpDeliveryMethod.Network;
smtp.Credentials = new System.Net.NetworkCredential(userName, passWord);
smtp.Send(msgMail);
MsgLP.Text = "Emailed to Customer..";
LogInLink.Visible = true;
}
catch (Exception ex){
AuditLog.LogError("ErrorE-mail " + ex.Message);
}
```
I tested this code with my credentials and it works fine. | ```
System.Net.Mail.MailMessage mm = new System.Net.Mail.MailMessage();
mm.From = new MailAddress("email@gmail.com");
mm.To.Add("email@gmail.com");
System.Net.Mail.Attachment attachment;
string strFileName;
strFileName = "Uploadfile/" + "200814062455PM_Admin_Screenshot (10).JPEG";
attachment = new System.Net.Mail.Attachment(Server.MapPath(strFileName));
mm.Attachments.Add(attachment);
mm.Body = ("<html><head><body><table><tr><td>Hi</td></tr></table></body></html><br/>"); ;
mm.IsBodyHtml = true;
mm.Subject = "Candidate " + Name + " for your Requirement " + Jobtt + " ";
System.Net.Mail.SmtpClient client = new System.Net.Mail.SmtpClient("smtp.gmail.com", 587);
client.UseDefaultCredentials = false;
client.Credentials = new System.Net.NetworkCredential("email@gmail.com", "password");
client.Port = 587;
client.Host = "smtp.gmail.com";
client.EnableSsl = true;
object userstate = mm;
client.Send(mm);
``` |
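For readers on other stacks, the same fix (authenticated submission on port 587 with TLS enabled before login) can be sketched with Python's standard library; the host name and credentials below are placeholders taken from the question, not working accounts:

```python
import smtplib
from email.message import EmailMessage

def build_message(sender, recipient, subject, html_body):
    # Mirrors MailMessage(...) with IsBodyHtml = true in the C# code.
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(html_body, subtype="html")
    return msg

def send_message(msg, host="smtp.gmail.com", user="ramesh", password="123456"):
    # Port 587 is the submission port: issue STARTTLS (the EnableSsl
    # equivalent) before authenticating, or the server rejects AUTH.
    with smtplib.SMTP(host, 587) as smtp:
        smtp.starttls()
        smtp.login(user, password)
        smtp.send_message(msg)
```

`build_message` can be exercised without a mail server, which is a convenient way to sanity-check the headers and body before wiring up real credentials.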
As we all know, when you have a configurable product such as a T-shirt in blue and red, you create 2 simple products and 1 configurable product, and each simple product has its own stock level, since you have a certain number of red T-shirts and a certain number of blue T-shirts.
My issue is that I want to sell cans either as singles or in multiples of 10, but deduct from the same stock. Therefore, ideally, I would like to set up a configurable product in the normal way but have the stock level set at the configurable-product level rather than the simple-product level. Does anyone know if this is possible? | 2015/09/03 | [
"https://magento.stackexchange.com/questions/80851",
"https://magento.stackexchange.com",
"https://magento.stackexchange.com/users/6414/"
] | Change
```
setLocation('<?php echo $this->getAddToCartUrl($_product) ?>')
```
to
```
setLocation('<?php echo Mage::helper('checkout/cart')->getAddUrl($_product) ?>')
``` | **app/design/frontend/default/ezzy/template/catalog/product/view/addtocart.phtml**
```
<?php $_product = $this->getProduct(); ?>
<?php $buttonTitle = $this->__('Add to cart'); ?>
<?php if($_product->isSaleable()): ?>
<div class="add-to-cart">
<?php if(!$_product->isGrouped()): ?>
<div class="qty-block">
<label for="qty"><?php echo $this->__('Qty:') ?></label>
<input type="text" name="qty" id="qty" maxlength="12" value="1" title="<?php echo $this->__('Qty') ?>" class="input-text qty" />
</div>
<?php endif; ?>
<button type="button" title="<?php echo $buttonTitle ?>" class="button btn-cart" onclick="productAddToCartForm.submit(this)"><span><span><?php echo $buttonTitle ?></span></span></button>
<?php echo $this->getChildHtml('', true, true) ?>
</div>
<?php endif; ?>
``` |
15,262,121 | My iOS app crashes when pressing a button found in a custom view for the rightBarButtonItem. A custom view is used because the barButtonItem design requires more than just a button.
Here is the output of the crash:
```
[UIViewControllerWrapperView buttonPressed:]: unrecognized selector sent to instance 0x7669430]
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[UIViewControllerWrapperView buttonPressed:]: unrecognized selector sent to instance 0x7669430'
```
The custom view is defined in a separate view controller's xib, RightBarButtonItemVC, which also contains this linked method:
```
- (IBAction)buttonPressed:(id)sender {
NSLog(@"button pressed");
}
```
The rightBarButtonItemVC is used in viewDidLoad, for all views controllers that need the item:
```
- (void)viewDidLoad
{
[super viewDidLoad];
RightBarButtonItemVC *rightBarButtonItemVC = [[RightBarButtonItemVC alloc] initWithNibName:@"RightBarButtonItemVC" bundle:nil];
UIBarButtonItem *rightBarButtonItem = [[UIBarButtonItem alloc] initWithCustomView:rightBarButtonItemVC.view];
self.navigationItem.rightBarButtonItem = rightBarButtonItem;
}
```
Notice how I am assigning rightBarButtonItemVC's view as the view for rightBarButtonItem.
Question
========
1. Why is an instance of UIViewControllerWrapperView calling my selector instead of my instance of rightBarButtonItemVC?
2. How can I prevent this from happening and get the button to work? Should I write a category for UIViewControllerWrapperView? If so, where to import the file? | 2013/03/07 | [
"https://Stackoverflow.com/questions/15262121",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/869936/"
] | `UIViewControllerWrapperView` is not calling your selector; your button is calling `-buttonPressed:` on the `UIViewControllerWrapperView`. Try [enabling zombies](https://stackoverflow.com/questions/2190227/how-do-i-set-up-nszombieenabled-in-xcode-4).
It looks like you're using `RightBarButtonItemVC` simply as a view loader (I assume you're using ARC, or it would leak). This is expensive, and strange things can happen unless you set `rightBarButtonItemVC.view = nil` before using the view elsewhere (I forget exactly what). I present a better way to load views from nibs [here](https://stackoverflow.com/questions/3524051/load-view-from-nib-file-from-several-different-uiviewcontrollers/3525335#3525335) (I don't know if Interface Builder supports nibs owned by a protocol, which would be ideal).
There are two main reasons your code might be crashing:
* In the NIB, `-buttonPressed:` is connected to the wrong thing. I don't think this is likely.
* `-buttonPressed:` would get sent to the `RightBarButtonItemVC`, except the `RightBarButtonItemVC` is not retained by anything so it gets dealloced. It gets sent to the next object that is allocated at the same address, which happens to be a `UIViewControllerWrapperView`.
There are two easy fixes:
* Remove the connection in Interface Builder and add it programmatically with [`-addTarget:action:forControlEvents:`](http://developer.apple.com/library/ios/documentation/UIKit/Reference/UIControl_Class/Reference/Reference.html#//apple_ref/occ/instm/UIControl/addTarget:action:forControlEvents:). This requires finding the button in the view hierarchy.
* Create it programmatically in the first place.
I prefer the latter; in the long run it seems to be *far* easier to maintain UI in code, and is much easier to localize since you only need to translate a single strings file. | Direct Answers:
---------------
1. As suggested by @tc.'s answer, there is a disconnect somewhere between defining the view in a xib and using a View Controller (RightBarButtonItemVC) to define a custom view on a UIBarButtonItem, which is evident in the fact that UIViewControllerWrapperView receives the buttonPressed call instead of RightBarButtonItemVC. It looks like something is not being retained, although I'm not sure what.
2. What follows is the specific working solution that I implemented. I did make a category, but not for UIViewControllerWrapperView as previously mentioned.
Specific Solution:
------------------
First create BarButtonItemLoader, an Objective-C category on UIViewController:
```
@interface UIViewController (BarButtonItemLoader)
```
In UIViewController+BarButtonItemLoader.h, define this method:
```
- (UIBarButtonItem *) rightBarButtonItem;
```
Since you can't keep track of state in a category, define a UIBarButtonItem in AppDelegate.h:
```
@property (strong, nonatomic) UIBarButtonItem *rightBarButtonItem;
```
Next, start implementing the category's rightBarButtonItem method by lazy loading the rightBarButtonItem from the AppDelegate (don't forget to #import "AppDelegate.h"). This ensures only one rightBarButtonItem will be created and retained in the AppDelegate:
```
- (UIBarButtonItem *) rightBarButtonItem {
AppDelegate *appDelegate = (AppDelegate *)[[UIApplication sharedApplication] delegate];
if(!appDelegate.rightBarButtonItem) {
//create a rightBarButtonItem (see below)
}
return appDelegate.rightBarButtonItem;
}
```
Start assembling a UIView/UIBarButtonItem that will be set to the rightBarButtonItem. Transfer each element/configuration from the old Interface Builder / xib implementation. Most importantly take note of the frame information in the Size inspector so you can programmatically position your subviews just how you had them manually positioned in the .xib file.
```
- (UIBarButtonItem *) rightBarButtonItem {
AppDelegate *appDelegate = (AppDelegate *)[[UIApplication sharedApplication] delegate];
if(!appDelegate.rightBarButtonItem) {
UIView *rightBarView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 264, 44)];
UIBarButtonItem *rightBarButtonItem = [[UIBarButtonItem alloc] initWithCustomView:rightBarView];
UIImageView *textHeader = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"textHeader.png"]];
textHeader.frame = CGRectMake(2, 14, 114, 20);
[rightBarView addSubview:textHeader];
UIButton *button1 = [[UIButton alloc] initWithFrame:CGRectMake(100, 2, 70, 44)];
[button1 setImage:[UIImage imageNamed:@"button1.png"] forState:UIControlStateNormal];
[button1 setImage:[UIImage imageNamed:@"button1Highlighted.png"] forState:UIControlStateHighlighted];
[button1 addTarget:self action:@selector(button1Pressed) forControlEvents:UIControlEventTouchUpInside];
[rightBarView addSubview:button1];
UIButton *button2 = [[UIButton alloc] initWithFrame:CGRectMake(194, 2, 70, 44)];
[button2 setImage:[UIImage imageNamed:@"button2.png"] forState:UIControlStateNormal];
[button2 setImage:[UIImage imageNamed:@"button2Highlighted.png"] forState:UIControlStateHighlighted];
[button2 addTarget:self action:@selector(button2Pressed) forControlEvents:UIControlEventTouchUpInside];
[rightBarView addSubview:button2];
appDelegate.rightBarButtonItem = rightBarButtonItem;
}
return appDelegate.rightBarButtonItem;
}
```
Finally, implement the buttonXPressed methods in UIViewController+BarButtonItemLoader.m to your purpose:
```
- (void) button1Pressed {
NSLog(@"button1 Pressed");
}
- (void) button2Pressed {
NSLog(@"button2 Pressed");
}
```
...
Use the category by adding this code to any UIViewController or subclass thereof:
```
#import "UIViewController+BarButtonItemLoader.h"
- (void)viewDidLoad {
[super viewDidLoad];
self.navigationItem.rightBarButtonItem = [self rightBarButtonItem];
}
```
Summary
-------
This approach allows you to add a UIBarButtonItem on-the-fly to any UIViewController. The drawback is that you must add the above code to all UIViewControllers you create.
Another Option
--------------
If you want to further encapsulate the addition of UIBarButtonItems (or anything else), avoiding the need to add code in each View Controller, you should create a BaseViewController from which you then subclass all of your other View Controllers. From there you can consider other items that you want to include in all your View Controllers. Choosing the Category or Subclass route then becomes a question of granularity. |
44,287,903 | Currently I have my viewcontroller as below
```
--highestView--
--topView--
--tableView--
```
I would like to make the `topView` dissappear when I scroll down which means `tableView` will be exactly below the `highestView`.
Upon scrolling back up, I would like them to return to the original layout shown above.
My code is as below:-
```
-(void)scrollViewDidScroll:(UIScrollView *)scrollView
{
CGFloat scrollPos = self.tableView.contentOffset.y ;
if(scrollPos >= self.currentOffset ){
//Fully hide your toolbar
[UIView animateWithDuration:2.25 animations:^{
self.topView.hidden = YES;
self.topViewTopConstraint.active = NO;
self.theNewConstraint2.active = NO;
self.theNewConstraint = [NSLayoutConstraint constraintWithItem:self.tableView attribute:NSLayoutAttributeTop relatedBy:NSLayoutRelationEqual toItem:self.highestView attribute:NSLayoutAttributeBottom multiplier:1.0 constant:0.0];
self.theNewConstraint.active = YES;
}];
} else {
//Slide it up incrementally, etc.
self.theNewConstraint.active = NO;
self.topView.hidden = NO;
self.topViewTopConstraint.active = YES;
self.theNewConstraint2 = [NSLayoutConstraint constraintWithItem:self.tableView attribute:NSLayoutAttributeTop relatedBy:NSLayoutRelationEqual toItem:self.topView attribute:NSLayoutAttributeBottom multiplier:1.0 constant:0.0];
self.theNewConstraint2.active = YES;
//self.topView.hidden = NO;
}
}
```
Scrolling down works exactly like how I want but scrolling up fails. Currently, the `topView` shows up at the back of `tableView` upon scrolling up. How can I fix this ? | 2017/05/31 | [
"https://Stackoverflow.com/questions/44287903",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6259538/"
] | You simply cannot. The installation process of [`uwp`](https://stackoverflow.com/questions/tagged/uwp) apps is standardized to work on all devices supported by the OS. Custom install actions do not make sense when writing the app for e.g. HoloLens, PC, mobile and Xbox. You'll get nowhere.
>
> detect the installed .net version
>
>
>
Why would you worry about this? You ship your application compiled for a certain CLR version. If a PC is missing this version, the administrators are at fault.
>
> uninstalling of an older build before installing the new one
>
>
>
This will happen automatically when installing a newer version.
>
> allow system administrators to do a mass install to PCs within their network
>
>
>
This is possible, but not an integrated part of the installation process. Your administrators have to apply a certain deployment process in order to role out the app to all computers. | You're essentially going to have to understand the ins-and-outs of sideloading your UWP LOB App. Once that is understood, you can simply write an installer the way you used to, and the installer would call all of the appropriate powershell commands for most of the workflows, and setup scheduled tasks for when the user logs in to accomplish the other workflows. (I recommend wix for your msi, msiexec to run that msi, psexec to run some of the msiexec commands headlessly under the system account, and a vm of win10 to test your msi)
First off, you'll need to ensure your LOB App is packaged with a "signing certificate" issued to your organization by a certificate authority. That tells Windows that your application was actually built by the company it says it was.
Second, you need to make sure the target machines are in sideload mode.
>
> Configure PCs for Sideloading Requirements:
> <https://learn.microsoft.com/en-us/windows-hardware/manufacture/desktop/sideload-apps-with-dism-s14#span-idsideloadingrequirementsspanspan-idsideloadingrequirementsspanspan-idsideloadingrequirementsspanconfigure-pcs-for-sideloading-requirements>
>
>
>
Since you want to "allow system administrators to do a mass install to PCs within their network", they're going to want to be able to do the installs headlessly and for all users. So you need to include provisioning the application in the installer.
Provisioning LOB Apps
=====================
It's important to understand that you can provision the application to the "online" Windows 10 image of the target machine(s), or to the "offline" Windows 10 image as it is prepared for creation. Online edits to the image are what the admins will want, since the machines they deploy to will already be running in this scenario.
Provisioning will provide the UWP LOB App to a Windows user as they log on, if the application isn't already there in the first place. This falls short when updates need to happen though -- leaving updating the app up to another party. It is simply a way to provide a single version of a LOB App to a user, one time. It also has restrictions, one being that when the provisioning is done there cannot be any users actively logged onto the machine, so it must be done headlessly with tools like SCCM or PsExec and must use the SYSTEM account. Another restriction is that the image can only have a total of 24 provisioned apps.
Adding a provisioned LOB App
----------------------------
Provisioning can be done via a PowerShell cmdlet (it must be the 64-bit version of PowerShell on a 64-bit system):
```
Add-AppxProvisionedPackage -Online -PackagePath <yourpackagepath> -DependencyPackagePath <yourdependencypackagepath> -SkipLicense
```
^ Be logged out from all users - so run this via psexec as SYSTEM or from SCCM
>
> <https://learn.microsoft.com/en-us/windows-hardware/manufacture/desktop/sideload-apps-with-dism-s14>
>
>
>
Updating a provisioned LOB App
------------------------------
A newer version of the provisioned LOB App
***can only be applied*** by the following cmdlet (via PowerShell), ***for each user*** that has signed into the PC running the Windows image.
```
> Add-AppxPackage
```
^ be logged in as the target user when running that
Removing a provisioned LOB App
------------------------------
Remove the provisioned LOB App from the image:
```
> Remove-AppxProvisionedPackage -Online -PackageName MyAppxPkg
```
^ Be logged out from all users - so run this via psexec as SYSTEM or from SCCM
Uninstall occurrences of the old version of the application for each user that has been active:
```
> Remove-AppxPackage MyAppxPkg
```
^ be logged in as the target user when running that
**tl;dr** - *Lots of challenges to overcome for a typical installer of a UWP LOB App, but it can be done if you want!* |
5,440,168 | I have below code and I am getting exception:
>
> There is already an open `DataReader` associated with this `Connection` which must be closed first.
>
>
>
I am using Visual Studio 2010/.Net 4.0 and MySQL for this project. Basically I am trying to run another SQL statement while using data reader to do my other task. I am getting exception at line `cmdInserttblProductFrance.ExecuteNonQuery();`
```
SQL = "Select * from tblProduct";
//Create Connection/Command/MySQLDataReader
MySqlConnection myConnection = new MySqlConnection(cf.GetConnectionString());
myConnection.Open();
MySqlCommand myCommand = new MySqlCommand(SQL, myConnection);
MySqlDataReader myReader = myCommand.ExecuteReader();
myCommand.Dispose();
if (myReader.HasRows)
{
int i = 0;
// Always call Read before accessing data.
while (myReader.Read())
{
if (myReader["frProductid"].ToString() == "") //there is no productid exist for this item
{
strInsertSQL = "Insert Into tblProduct_temp (Productid) Values('this istest') ";
MySqlCommand cmdInserttblProductFrance = new MySqlCommand(strInsertSQL, myConnection);
cmdInserttblProductFrance.ExecuteNonQuery(); //<=====THIS LINE THROWS "C# mySQL There is already an open DataReader associated with this Connection which must be closed first."
}
}
}
``` | 2011/03/26 | [
"https://Stackoverflow.com/questions/5440168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/552918/"
] | Just use `MultipleActiveResultSets=True` in your connection string. | Add `MultipleActiveResultSets=true` to the provider part of your connection string
example in the file appsettings.json
```
"ConnectionStrings": {
"EmployeeDBConnection": "server=(localdb)\\MSSQLLocalDB;database=YourDatabasename;Trusted_Connection=true;MultipleActiveResultSets=true"}
``` |
5,440,168 | I have below code and I am getting exception:
>
> There is already an open `DataReader` associated with this `Connection` which must be closed first.
>
>
>
I am using Visual Studio 2010/.Net 4.0 and MySQL for this project. Basically I am trying to run another SQL statement while using data reader to do my other task. I am getting exception at line `cmdInserttblProductFrance.ExecuteNonQuery();`
```
SQL = "Select * from tblProduct";
//Create Connection/Command/MySQLDataReader
MySqlConnection myConnection = new MySqlConnection(cf.GetConnectionString());
myConnection.Open();
MySqlCommand myCommand = new MySqlCommand(SQL, myConnection);
MySqlDataReader myReader = myCommand.ExecuteReader();
myCommand.Dispose();
if (myReader.HasRows)
{
int i = 0;
// Always call Read before accessing data.
while (myReader.Read())
{
if (myReader["frProductid"].ToString() == "") //there is no productid exist for this item
{
strInsertSQL = "Insert Into tblProduct_temp (Productid) Values('this istest') ";
MySqlCommand cmdInserttblProductFrance = new MySqlCommand(strInsertSQL, myConnection);
cmdInserttblProductFrance.ExecuteNonQuery(); //<=====THIS LINE THROWS "C# mySQL There is already an open DataReader associated with this Connection which must be closed first."
}
}
}
``` | 2011/03/26 | [
"https://Stackoverflow.com/questions/5440168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/552918/"
] | Add `MultipleActiveResultSets=true` to the provider part of your connection string
example in the file appsettings.json
```
"ConnectionStrings": {
"EmployeeDBConnection": "server=(localdb)\\MSSQLLocalDB;database=YourDatabasename;Trusted_Connection=true;MultipleActiveResultSets=true"}
``` | You have to close the reader on top of your else condition. |
5,440,168 | I have below code and I am getting exception:
>
> There is already an open `DataReader` associated with this `Connection` which must be closed first.
>
>
>
I am using Visual Studio 2010/.Net 4.0 and MySQL for this project. Basically I am trying to run another SQL statement while using data reader to do my other task. I am getting exception at line `cmdInserttblProductFrance.ExecuteNonQuery();`
```
SQL = "Select * from tblProduct";
//Create Connection/Command/MySQLDataReader
MySqlConnection myConnection = new MySqlConnection(cf.GetConnectionString());
myConnection.Open();
MySqlCommand myCommand = new MySqlCommand(SQL, myConnection);
MySqlDataReader myReader = myCommand.ExecuteReader();
myCommand.Dispose();
if (myReader.HasRows)
{
int i = 0;
// Always call Read before accessing data.
while (myReader.Read())
{
if (myReader["frProductid"].ToString() == "") //there is no productid exist for this item
{
strInsertSQL = "Insert Into tblProduct_temp (Productid) Values('this istest') ";
MySqlCommand cmdInserttblProductFrance = new MySqlCommand(strInsertSQL, myConnection);
cmdInserttblProductFrance.ExecuteNonQuery(); //<=====THIS LINE THROWS "C# mySQL There is already an open DataReader associated with this Connection which must be closed first."
}
}
}
``` | 2011/03/26 | [
"https://Stackoverflow.com/questions/5440168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/552918/"
] | Add `MultipleActiveResultSets=true` to the provider part of your connection string
example in the file appsettings.json
```
"ConnectionStrings": {
"EmployeeDBConnection": "server=(localdb)\\MSSQLLocalDB;database=YourDatabasename;Trusted_Connection=true;MultipleActiveResultSets=true"}
``` | There is another potential reason for this - missing `await` keyword. |
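The missing-`await` failure mode is easy to reproduce in miniature. A hedged Python/asyncio analogue (the C# case has the same shape: the un-awaited command has not finished, so the shared connection still has a reader open when the next command starts):

```python
import asyncio

async def fetch_rows():
    # Stands in for ExecuteReaderAsync + ReadAsync against a database.
    await asyncio.sleep(0)
    return [1, 2, 3]

async def handler_buggy():
    rows = fetch_rows()        # missing await: a coroutine object, no data yet
    return rows

async def handler_fixed():
    rows = await fetch_rows()  # the query has actually completed here
    return rows
```

Calling the buggy handler hands back an unfinished coroutine instead of the rows, which is exactly the kind of still-in-flight work that leaves the connection busy.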
5,440,168 | I have below code and I am getting exception:
>
> There is already an open `DataReader` associated with this `Connection` which must be closed first.
>
>
>
I am using Visual Studio 2010/.Net 4.0 and MySQL for this project. Basically I am trying to run another SQL statement while using data reader to do my other task. I am getting exception at line `cmdInserttblProductFrance.ExecuteNonQuery();`
```
SQL = "Select * from tblProduct";
//Create Connection/Command/MySQLDataReader
MySqlConnection myConnection = new MySqlConnection(cf.GetConnectionString());
myConnection.Open();
MySqlCommand myCommand = new MySqlCommand(SQL, myConnection);
MySqlDataReader myReader = myCommand.ExecuteReader();
myCommand.Dispose();
if (myReader.HasRows)
{
int i = 0;
// Always call Read before accessing data.
while (myReader.Read())
{
if (myReader["frProductid"].ToString() == "") //there is no productid exist for this item
{
strInsertSQL = "Insert Into tblProduct_temp (Productid) Values('this istest') ";
MySqlCommand cmdInserttblProductFrance = new MySqlCommand(strInsertSQL, myConnection);
cmdInserttblProductFrance.ExecuteNonQuery(); //<=====THIS LINE THROWS "C# mySQL There is already an open DataReader associated with this Connection which must be closed first."
}
}
}
``` | 2011/03/26 | [
"https://Stackoverflow.com/questions/5440168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/552918/"
] | You are using the same connection for the `DataReader` and the `ExecuteNonQuery`. This is not supported, [according to MSDN](http://msdn.microsoft.com/en-us/library/haa3afyz(v=vs.80).aspx):
>
> Note that while a DataReader is open, the Connection is in use
> exclusively by that DataReader. You cannot execute any commands for
> the Connection, including creating another DataReader, until the
> original DataReader is closed.
>
>
>
**Updated 2018**: link to [MSDN](https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/retrieving-data-using-a-datareader) | Add `MultipleActiveResultSets=true` to the provider part of your connection string
example in the file appsettings.json
```
"ConnectionStrings": {
"EmployeeDBConnection": "server=(localdb)\\MSSQLLocalDB;database=YourDatabasename;Trusted_Connection=true;MultipleActiveResultSets=true"}
``` |
5,440,168 | I have below code and I am getting exception:
>
> There is already an open `DataReader` associated with this `Connection` which must be closed first.
>
>
>
I am using Visual Studio 2010/.Net 4.0 and MySQL for this project. Basically I am trying to run another SQL statement while using data reader to do my other task. I am getting exception at line `cmdInserttblProductFrance.ExecuteNonQuery();`
```
SQL = "Select * from tblProduct";
//Create Connection/Command/MySQLDataReader
MySqlConnection myConnection = new MySqlConnection(cf.GetConnectionString());
myConnection.Open();
MySqlCommand myCommand = new MySqlCommand(SQL, myConnection);
MySqlDataReader myReader = myCommand.ExecuteReader();
myCommand.Dispose();
if (myReader.HasRows)
{
int i = 0;
// Always call Read before accessing data.
while (myReader.Read())
{
if (myReader["frProductid"].ToString() == "") //there is no productid exist for this item
{
strInsertSQL = "Insert Into tblProduct_temp (Productid) Values('this istest') ";
MySqlCommand cmdInserttblProductFrance = new MySqlCommand(strInsertSQL, myConnection);
cmdInserttblProductFrance.ExecuteNonQuery(); //<=====THIS LINE THROWS "C# mySQL There is already an open DataReader associated with this Connection which must be closed first."
}
}
}
``` | 2011/03/26 | [
"https://Stackoverflow.com/questions/5440168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/552918/"
] | You are trying to do an Insert (with `ExecuteNonQuery()`) on a SQL connection that is already in use by this reader:
```
while (myReader.Read())
```
Either read all the values in a list first, close the reader and then do the insert, or use a new SQL connection. | You have to close the reader on top of your else condition. |
5,440,168 | I have below code and I am getting exception:
>
> There is already an open `DataReader` associated with this `Connection` which must be closed first.
>
>
>
I am using Visual Studio 2010/.Net 4.0 and MySQL for this project. Basically I am trying to run another SQL statement while using data reader to do my other task. I am getting exception at line `cmdInserttblProductFrance.ExecuteNonQuery();`
```
SQL = "Select * from tblProduct";
//Create Connection/Command/MySQLDataReader
MySqlConnection myConnection = new MySqlConnection(cf.GetConnectionString());
myConnection.Open();
MySqlCommand myCommand = new MySqlCommand(SQL, myConnection);
MySqlDataReader myReader = myCommand.ExecuteReader();
myCommand.Dispose();
if (myReader.HasRows)
{
int i = 0;
// Always call Read before accessing data.
while (myReader.Read())
{
if (myReader["frProductid"].ToString() == "") //there is no productid exist for this item
{
strInsertSQL = "Insert Into tblProduct_temp (Productid) Values('this istest') ";
MySqlCommand cmdInserttblProductFrance = new MySqlCommand(strInsertSQL, myConnection);
cmdInserttblProductFrance.ExecuteNonQuery(); //<=====THIS LINE THROWS "C# mySQL There is already an open DataReader associated with this Connection which must be closed first."
}
}
}
``` | 2011/03/26 | [
"https://Stackoverflow.com/questions/5440168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/552918/"
] | You are trying to do an Insert (with `ExecuteNonQuery()`) on a SQL connection that is already in use by this reader:
```
while (myReader.Read())
```
Either read all the values in a list first, close the reader and then do the insert, or use a new SQL connection. | The issue you are running into is that you are starting up a second `MySqlCommand` while still reading back data with the `DataReader`. The MySQL connector only allows one concurrent query. You need to read the data into some structure, then close the reader, then process the data. Unfortunately you can't process the data as it is read if your processing involves further SQL queries. |
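The "read into some structure, then close the reader, then process" advice translates to any driver. A sketch of the buffer-then-write pattern using Python's bundled sqlite3 module (the table and column names are invented for the demo, loosely echoing the question):

```python
import sqlite3

def copy_missing_products(conn):
    # Drain the reader completely first, then do the writes: this avoids
    # interleaving an INSERT with an open cursor iteration, which some
    # drivers (like the MySQL connector above) forbid on one connection.
    rows = conn.execute(
        "SELECT id, product_id FROM tblProduct"
    ).fetchall()                          # reader fully consumed here
    missing = [r[0] for r in rows if r[1] is None]
    conn.executemany(
        "INSERT INTO tblProduct_temp (source_id) VALUES (?)",
        [(m,) for m in missing],
    )
    conn.commit()
    return len(missing)
```

`fetchall()` plays the role of reading everything into a list before the second command runs; with a streaming cursor the same bug would be reproducible by inserting inside the iteration loop.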
5,440,168 | I have below code and I am getting exception:
>
> There is already an open `DataReader` associated with this `Connection` which must be closed first.
>
>
>
I am using Visual Studio 2010/.Net 4.0 and MySQL for this project. Basically I am trying to run another SQL statement while using data reader to do my other task. I am getting exception at line `cmdInserttblProductFrance.ExecuteNonQuery();`
```
SQL = "Select * from tblProduct";
//Create Connection/Command/MySQLDataReader
MySqlConnection myConnection = new MySqlConnection(cf.GetConnectionString());
myConnection.Open();
MySqlCommand myCommand = new MySqlCommand(SQL, myConnection);
MySqlDataReader myReader = myCommand.ExecuteReader();
myCommand.Dispose();
if (myReader.HasRows)
{
int i = 0;
// Always call Read before accessing data.
while (myReader.Read())
{
if (myReader["frProductid"].ToString() == "") //there is no productid exist for this item
{
strInsertSQL = "Insert Into tblProduct_temp (Productid) Values('this istest') ";
MySqlCommand cmdInserttblProductFrance = new MySqlCommand(strInsertSQL, myConnection);
cmdInserttblProductFrance.ExecuteNonQuery(); //<=====THIS LINE THROWS "C# mySQL There is already an open DataReader associated with this Connection which must be closed first."
}
}
}
``` | 2011/03/26 | [
"https://Stackoverflow.com/questions/5440168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/552918/"
] | You are using the same connection for the `DataReader` and the `ExecuteNonQuery`. This is not supported, [according to MSDN](http://msdn.microsoft.com/en-us/library/haa3afyz(v=vs.80).aspx):
>
> Note that while a DataReader is open, the Connection is in use
> exclusively by that DataReader. You cannot execute any commands for
> the Connection, including creating another DataReader, until the
> original DataReader is closed.
>
>
>
**Updated 2018**: link to [MSDN](https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/retrieving-data-using-a-datareader) | This exception also happens if you don't use transaction properly. In my case, I put `transaction.Commit()` right after `command.ExecuteReaderAsync()`, did not wait with the transaction commiting until `reader.ReadAsync()` was called. The proper order:
1. Create transaction.
2. Create reader.
3. Read the data.
4. Commit the transaction. |
5,440,168 | I have the code below and I am getting an exception:
>
> There is already an open `DataReader` associated with this `Connection` which must be closed first.
>
>
>
I am using Visual Studio 2010/.Net 4.0 and MySQL for this project. Basically I am trying to run another SQL statement while using a data reader to do my other task. I am getting an exception at line `cmdInserttblProductFrance.ExecuteNonQuery();`
```
SQL = "Select * from tblProduct";
//Create Connection/Command/MySQLDataReader
MySqlConnection myConnection = new MySqlConnection(cf.GetConnectionString());
myConnection.Open();
MySqlCommand myCommand = new MySqlCommand(SQL, myConnection);
MySqlDataReader myReader = myCommand.ExecuteReader();
myCommand.Dispose();
if (myReader.HasRows)
{
int i = 0;
// Always call Read before accessing data.
while (myReader.Read())
{
if (myReader["frProductid"].ToString() == "") //there is no productid exist for this item
{
strInsertSQL = "Insert Into tblProduct_temp (Productid) Values('this istest') ";
MySqlCommand cmdInserttblProductFrance = new MySqlCommand(strInsertSQL, myConnection);
cmdInserttblProductFrance.ExecuteNonQuery(); //<=====THIS LINE THROWS "C# mySQL There is already an open DataReader associated with this Connection which must be closed first."
}
}
}
``` | 2011/03/26 | [
"https://Stackoverflow.com/questions/5440168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/552918/"
] | You are using the same connection for the `DataReader` and the `ExecuteNonQuery`. This is not supported, [according to MSDN](http://msdn.microsoft.com/en-us/library/haa3afyz(v=vs.80).aspx):
>
> Note that while a DataReader is open, the Connection is in use
> exclusively by that DataReader. You cannot execute any commands for
> the Connection, including creating another DataReader, until the
> original DataReader is closed.
>
>
>
**Updated 2018**: link to [MSDN](https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/retrieving-data-using-a-datareader) | There is another potential reason for this - missing `await` keyword. |
5,440,168 | I have the code below and I am getting an exception:
>
> There is already an open `DataReader` associated with this `Connection` which must be closed first.
>
>
>
I am using Visual Studio 2010/.Net 4.0 and MySQL for this project. Basically I am trying to run another SQL statement while using a data reader to do my other task. I am getting an exception at line `cmdInserttblProductFrance.ExecuteNonQuery();`
```
SQL = "Select * from tblProduct";
//Create Connection/Command/MySQLDataReader
MySqlConnection myConnection = new MySqlConnection(cf.GetConnectionString());
myConnection.Open();
MySqlCommand myCommand = new MySqlCommand(SQL, myConnection);
MySqlDataReader myReader = myCommand.ExecuteReader();
myCommand.Dispose();
if (myReader.HasRows)
{
int i = 0;
// Always call Read before accessing data.
while (myReader.Read())
{
if (myReader["frProductid"].ToString() == "") //there is no productid exist for this item
{
strInsertSQL = "Insert Into tblProduct_temp (Productid) Values('this istest') ";
MySqlCommand cmdInserttblProductFrance = new MySqlCommand(strInsertSQL, myConnection);
cmdInserttblProductFrance.ExecuteNonQuery(); //<=====THIS LINE THROWS "C# mySQL There is already an open DataReader associated with this Connection which must be closed first."
}
}
}
``` | 2011/03/26 | [
"https://Stackoverflow.com/questions/5440168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/552918/"
] | Always, always, always put disposable objects inside of using statements. I can't see how you've instantiated your DataReader but you should do it like this:
```
using (Connection c = ...)
{
using (DataReader dr = ...)
{
//Work with dr in here.
}
}
//Now the connection and reader have been closed and disposed.
```
Now, to answer your question, the reader is using the same connection as the command you're trying to `ExecuteNonQuery` on. You need to use a separate connection, since the DataReader keeps the connection open and reads data as you need it. | You have to close the reader above your `else` condition. |
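The deterministic cleanup that C#'s `using` blocks provide has a close analogue in other languages via `try`/`finally`. A minimal TypeScript sketch (the `FakeReader` class is invented for illustration, not a real data-access API):

```typescript
class FakeReader {
  closed = false;
  close(): void { this.closed = true; }
}

// Runs `work` with a reader and closes it even if `work` throws,
// mirroring Dispose() running at the end of a C# `using` block.
function withReader<T>(work: (r: FakeReader) => T): { reader: FakeReader; error?: string } {
  const reader = new FakeReader();
  try {
    work(reader);
    return { reader };
  } catch (e) {
    return { reader, error: (e as Error).message };
  } finally {
    reader.close(); // always runs, success or failure
  }
}
```

The point is the same as in the answer above: cleanup of the reader should never depend on the happy path.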
5,440,168 | I have the code below and I am getting an exception:
>
> There is already an open `DataReader` associated with this `Connection` which must be closed first.
>
>
>
I am using Visual Studio 2010/.Net 4.0 and MySQL for this project. Basically I am trying to run another SQL statement while using a data reader to do my other task. I am getting an exception at line `cmdInserttblProductFrance.ExecuteNonQuery();`
```
SQL = "Select * from tblProduct";
//Create Connection/Command/MySQLDataReader
MySqlConnection myConnection = new MySqlConnection(cf.GetConnectionString());
myConnection.Open();
MySqlCommand myCommand = new MySqlCommand(SQL, myConnection);
MySqlDataReader myReader = myCommand.ExecuteReader();
myCommand.Dispose();
if (myReader.HasRows)
{
int i = 0;
// Always call Read before accessing data.
while (myReader.Read())
{
if (myReader["frProductid"].ToString() == "") //there is no productid exist for this item
{
strInsertSQL = "Insert Into tblProduct_temp (Productid) Values('this istest') ";
MySqlCommand cmdInserttblProductFrance = new MySqlCommand(strInsertSQL, myConnection);
cmdInserttblProductFrance.ExecuteNonQuery(); //<=====THIS LINE THROWS "C# mySQL There is already an open DataReader associated with this Connection which must be closed first."
}
}
}
``` | 2011/03/26 | [
"https://Stackoverflow.com/questions/5440168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/552918/"
] | You are trying to do an Insert (with `ExecuteNonQuery()`) on a SQL connection that is already in use by this reader:
```
while (myReader.Read())
```
Either read all the values into a list first, close the reader and then do the insert, or use a new SQL connection. | This exception also happens if you don't use a transaction properly. In my case, I put `transaction.Commit()` right after `command.ExecuteReaderAsync()` and did not wait to commit the transaction until `reader.ReadAsync()` was called. The proper order:
1. Create transaction.
2. Create reader.
3. Read the data.
4. Commit the transaction. |
20,109,788 | I have two queries in C# running in a DataGridView: one shows all the data, the other is set to display in the footer. The footer is showing, just not displaying my query.
**query one (with footer)**
```
protected void Button2_Click(object sender, EventArgs e)
{
MySqlCommand cmd = new MySqlCommand("SELECT * FROM Customer", cs);
cs.Open();
MySqlDataReader dgl = cmd.ExecuteReader();
dg.ShowFooter = true;
dg.DataSource = dgl;
dg.DataBind();
cs.Close();
}
```
**query two (footer query)**
```
protected void dg_DataBound(object sender, EventArgs e)
{
MySqlCommand cmd = new MySqlCommand("SELECT SUM(Donation) AS Total_Donation FROM Customer", cs);
cs.Open();
String totalDonations = Convert.ToString(cmd.ExecuteScalar());
cs.Close();
dg.FooterRow.Cells[3].Text = totalDonations;
}
```
The datagrid shows that query one works well; the footer even shows, but it has no data. | 2013/11/21 | [
"https://Stackoverflow.com/questions/20109788",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2975042/"
] | PHP's XPath Processor only supports XPath 1.0, which does not allow alternations in path steps. A valid **XPath 2.0** expression would have been
```none
//table/tbody/tr/td/(input, textarea)
```
---
**XPath 1.0** requires you to either provide full paths like this:
```none
//table/tbody/tr/td/input | //table/tbody/tr/td/textarea
```
or use a predicate with name-test while using the wildcard node test:
```none
//table/tbody/tr/td/*[local-name() = 'input' or local-name() = 'textarea']
```
The latter version will be probably preferable regarding performance as the XML file will only be scanned once. | Untested in Behat/PHP, but this is how it would look if following the XPath syntax.
```
//table/tbody/tr/td/input | //table/tbody/tr/td/textarea
``` |
7,921,689 | In my application some values internally have their range starting from 0, but the user should see the range starting from 1. I thought it would be appropriate to move this offsetting stuff into the presentation, in this case the JSpinner component, so that I could specify in the constructor whether there is an offset (not all values have one). But if I override `getValue()` of JSpinner, or `getValue()` of the model, to be something like this (+1 is just for a test)
```
public Object getValue() {
Number value = (Number)super.getValue();
Number newValue=value;
if (value instanceof Double){
newValue=value.doubleValue()+1;
}
else if (value instanceof Integer){
newValue = value.intValue()+1;
}
return newValue;
}
```
it goes into an infinite loop. I guess it fires a state-change event for some reason here, calls `getValue` again, increments more, fires the event again, increments, and so on.
How could this be solved? Thanks | 2011/10/27 | [
"https://Stackoverflow.com/questions/7921689",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/966758/"
] | Don't mingle your program's data model and the spinner's number model; keep them separate. Delegate to a private `SpinnerNumberModel` having the correct presentation range, 1..*n*. Provide an accessor that returns values in the desired range, 0..*n*–1.
>
> Provide the method `getAdjustedValue()`, which is basically `getValue()-offset`, that all clients should use instead of `getValue()`?
>
>
>
Yes. The `SpinnerModel` serves the `JSpinner` view. If your application's model uses different units, some transformation must occur. You'll have to decide where that makes most sense. As a concrete example, this [model](https://sites.google.com/site/drjohnbmatthews/kineticmodel)'s [`ControlPanel`](https://sites.google.com/site/drjohnbmatthews/kineticmodel/code#ControlPanel) has a spinner that adjusts a frequency in *Hz*, while the application's `Timer` requires a period in milliseconds. | I think that [CyclingSpinnerListModel](http://download.oracle.com/javase/tutorial/uiswing/examples/components/SpinnerDemoProject/src/components/CyclingSpinnerListModel.java) can do that |
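The delegation idea above, translating between storage units and presentation units at the wrapper's boundary instead of overriding `getValue()` in place, can be sketched outside Swing. A TypeScript illustration (the `NumberModel`/`OffsetModel` names are invented for this sketch):

```typescript
interface NumberModel { value: number; }

// Presents an inner 0-based model with an offset (e.g. 1-based for users)
// without touching the inner getter, so there is no recursive getValue()
// and no event loop: the transform lives only at the boundary.
class OffsetModel {
  constructor(private inner: NumberModel, private offset: number) {}
  get value(): number { return this.inner.value + this.offset; }  // presentation
  set value(v: number) { this.inner.value = v - this.offset; }    // storage
}
```

Clients of the wrapper only ever see offset values, while the underlying model keeps its 0-based range.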
72,761,100 | I want to print a JavaScript array of images in random order except the middle one; I want that one, g.jpg, to stay in its position.
Right now all of them are shuffled; how can I separate g.jpg or keep its position fixed?
I think I need to add a different class name for g.jpg but I don't know how to do it.
```html
<html>
<head>
<meta charset='utf-8'>
<title></title>
<style>
.ppl{
width: 250px;
}
</style>
</head>
<body>
<div id="root"></div>
<script type="text/javascript">
const images = [
'images/1.jpg',
'images/2.jpg',
'images/g.jpg',
'images/3.jpg',
'images/4.jpg'
]
const root = document.querySelector('#root')
const shuffle = ([...array]) => {
let currentIndex = array.length
let temporaryValue
let randomIndex
// While there remain elements to shuffle...
while (currentIndex !== 0) {
// Pick a remaining element...
randomIndex = Math.floor(Math.random() * currentIndex)
currentIndex -= 1
// And swap it with the current element.
temporaryValue = array[currentIndex]
array[currentIndex] = array[randomIndex]
array[randomIndex] = temporaryValue
}
return array
}
const shuffledImages = shuffle(images)
shuffledImages.forEach(src => {
const image = document.createElement('img')
image.src = src
image.alt = src
image.classList.add('ppl')
image.classList.add('pos')
root.appendChild(image)
})
</script>
</body>
</html>
``` | 2022/06/26 | [
"https://Stackoverflow.com/questions/72761100",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15098759/"
] | ```js
curarr.splice(3, 0, ...otherObj)
``` | With the splice method, you can add or delete elements from a specific index.
For more resources:
<https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/splice>
```js
const currarr = [{id:1,name:"abc"},{id:2,name:"efg"},{id:3,name:"hij"},{id:4,name:"klm"},{id:5,name:"nop"}];
const otherObj = [{id:6,name:"fdf"},{id:7,name:"gfg"}]
currarr.splice(3, 0, ...otherObj)
console.log(currarr)
``` |
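Neither response above addresses the question's actual requirement (shuffle every image except the fixed middle one), but `splice` makes that straightforward: take the pinned item out, shuffle the rest with Fisher-Yates, then splice it back at its index. A TypeScript sketch using the question's own image paths:

```typescript
const images = ['images/1.jpg', 'images/2.jpg', 'images/g.jpg', 'images/3.jpg', 'images/4.jpg'];

function shuffleExcept(list: string[], fixedIndex: number): string[] {
  const copy = [...list];
  const [fixed] = copy.splice(fixedIndex, 1);  // take the pinned item out
  for (let i = copy.length - 1; i > 0; i--) {  // Fisher-Yates on the rest
    const j = Math.floor(Math.random() * (i + 1));
    [copy[i], copy[j]] = [copy[j], copy[i]];
  }
  copy.splice(fixedIndex, 0, fixed);           // put it back where it was
  return copy;
}
```

The original array is left untouched; the returned copy always has `g.jpg` at index 2.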
72,761,100 | I want to print a JavaScript array of images in random order except the middle one; I want that one, g.jpg, to stay in its position.
Right now all of them are shuffled; how can I separate g.jpg or keep its position fixed?
I think I need to add a different class name for g.jpg but I don't know how to do it.
```html
<html>
<head>
<meta charset='utf-8'>
<title></title>
<style>
.ppl{
width: 250px;
}
</style>
</head>
<body>
<div id="root"></div>
<script type="text/javascript">
const images = [
'images/1.jpg',
'images/2.jpg',
'images/g.jpg',
'images/3.jpg',
'images/4.jpg'
]
const root = document.querySelector('#root')
const shuffle = ([...array]) => {
let currentIndex = array.length
let temporaryValue
let randomIndex
// While there remain elements to shuffle...
while (currentIndex !== 0) {
// Pick a remaining element...
randomIndex = Math.floor(Math.random() * currentIndex)
currentIndex -= 1
// And swap it with the current element.
temporaryValue = array[currentIndex]
array[currentIndex] = array[randomIndex]
array[randomIndex] = temporaryValue
}
return array
}
const shuffledImages = shuffle(images)
shuffledImages.forEach(src => {
const image = document.createElement('img')
image.src = src
image.alt = src
image.classList.add('ppl')
image.classList.add('pos')
root.appendChild(image)
})
</script>
</body>
</html>
``` | 2022/06/26 | [
"https://Stackoverflow.com/questions/72761100",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15098759/"
] | ```js
curarr.splice(3, 0, ...otherObj)
``` | The slice() method returns a shallow copy of a portion of an array into a new array object selected from start to end (end not included) where start and end represent the index of items in that array. The original array will not be modified.
For more, check out the links below:
[Mozila](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/slice#syntax)
[W3c](https://www.w3schools.com/jsref/jsref_slice_array.asp) |
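Since `slice` and `splice` are easy to confuse, here is a short TypeScript sketch of the difference: `slice` returns a copy and leaves the array alone, while `splice` edits the array in place and returns the removed elements:

```typescript
const original = [1, 2, 3, 4, 5];

// slice(start, end): shallow copy, end index excluded, original untouched.
const middle = original.slice(1, 4);       // [2, 3, 4]

// splice(start, deleteCount, ...items): mutates and returns what was removed.
const mutated = [...original];
const removed = mutated.splice(1, 2, 9);   // removes [2, 3], inserts 9
```

Reaching for `slice` when you need a non-destructive copy, and `splice` when you actually want to edit the array, avoids most surprises with these two methods.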
39,330,609 | I am new to AngularJS and wondering how to check the `token`'s expiry date and time before sending any request.
I googled and found there are concepts like `interceptors` and `decorators` in Angular, but I am a bit confused about which one to use and how. Or is there a better way to do it?
**What am I doing right now?**
I have created a service that has `GET` and `POST` functions taking url, data and config as parameters, and there I am checking the token. I know this is not the right approach. | 2016/09/05 | [
"https://Stackoverflow.com/questions/39330609",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4221433/"
] | Using php's [output buffering](http://php.net/manual/en/book.outcontrol.php)
```
// start output buffering
ob_start();
// create your page
// once the page is ready, measure the size of the output buffer
$length = ob_get_length();
// and emit the page, stop buffering and flush the buffer
ob_get_flush();
```
As usual with php, these functions are pretty well documented in the standard documentation, don't forget to read the user contributed notes. | You can see this in your webserver's access log file.
But you can also code some php to get an answer like this:
```
ob_start();
echo "your content"
$data = ob_get_contents();
$size = strlen($data);
```
see also: [Measure string size in Bytes in php](https://stackoverflow.com/questions/7568949/measure-string-size-in-bytes-in-php) |
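The PHP answers above measure the buffered page in bytes with `strlen`. In a TypeScript/JavaScript runtime the analogous distinction is between `.length` (UTF-16 code units) and the encoded byte count, which diverge as soon as multi-byte UTF-8 characters appear. A small sketch using the standard `TextEncoder` (assumed available, as in modern browsers and Node):

```typescript
const page = "<p>héllo</p>";  // 'é' takes 2 bytes in UTF-8

const chars = page.length;                            // counts code units: 12
const bytes = new TextEncoder().encode(page).length;  // counts bytes: 13
```

When reporting a page's transfer size, the byte count is the number you want, not the character count.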
12,958,749 | I am learning operator overloading. "out" is being used instead of "cout" when overloading "<<" operator. I don't understand why.
```
ostream &operator<<( ostream &out, const IntList &L ) {
out << "[ ";
for (int k=0; k< L.numItems; k++) {
out << L.Items[k] << ' ';
}
out << ']';
}
```
I want to ask differences between cout and out and what happens if I use cout instead of out.
Thanks for answers. | 2012/10/18 | [
"https://Stackoverflow.com/questions/12958749",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1559792/"
] | What you are looking at is an overloaded "stream insertion" operator, allowing some custom class to be written to an `ostream` object using the typical `cout << myObject` syntax.
The variable in this case is called `out` because that's the name they've given to the `ostream` object being passed into the function, which may be *any* output stream, whether it's `cout` or an `fstream` or a `stringstream`. It's just a variable name, and they could have called it `blah` and written:
```
ostream &operator<<( ostream &blah, const IntList &L ) {
blah << "[ ";
// ...
}
```
Typically you choose a variable name which is descriptive, and `out` as a name for an output stream is pretty descriptive.
`cout` would be an especially *bad* variable name, as it is strongly associated with `std::cout`, used for writing specifically to the standard output stream. This code doesn't write specifically to standard output; it writes to *any* `ostream` object via the `<<` operator, so they've chosen a more generic name for their `ostream` argument. | >
> I want to ask differences between cout and out and what happens if I use cout instead of out. Thanks for answers.
>
>
>
In this case, an `ostream&` (`out`) is a parameter passed to the function. This allows the `operator<<` to work on *any* `ostream`.
[`cout`](http://www.cplusplus.com/reference/iostream/cout/) is a *specific* `ostream` instance - the standard output stream. If they used `cout` here, you wouldn't be able to use the `<<` operator on `cerr` (the standard error stream) or any other `ostream`. If you replaced the `out` with `cout` in the body, any time you used this on a different ostream, it'd be written to `cout`. (Of course, if you changed the parameter to be named `cout`, that wouldn't happen - but it would be very misleading to anybody looking at this code, as people would *expect* that the code writes to the standard output stream, not to the stream being passed in.)
In general, you only would want to use `cout` as a name if you are specifically referring to `std::cout` - the standard output stream, as using it in other contexts would be very confusing. |
12,958,749 | I am learning operator overloading. "out" is being used instead of "cout" when overloading "<<" operator. I don't understand why.
```
ostream &operator<<( ostream &out, const IntList &L ) {
out << "[ ";
for (int k=0; k< L.numItems; k++) {
out << L.Items[k] << ' ';
}
out << ']';
}
```
I want to ask differences between cout and out and what happens if I use cout instead of out.
Thanks for answers. | 2012/10/18 | [
"https://Stackoverflow.com/questions/12958749",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1559792/"
] | What you are looking at is an overloaded "stream insertion" operator, allowing some custom class to be written to an `ostream` object using the typical `cout << myObject` syntax.
The variable in this case is called `out` because that's the name they've given to the `ostream` object being passed into the function, which may be *any* output stream, whether it's `cout` or an `fstream` or a `stringstream`. It's just a variable name, and they could have called it `blah` and written:
```
ostream &operator<<( ostream &blah, const IntList &L ) {
blah << "[ ";
// ...
}
```
Typically you choose a variable name which is descriptive, and `out` as a name for an output stream is pretty descriptive.
`cout` would be an especially *bad* variable name, as it is strongly associated with `std::cout`, used for writing specifically to the standard output stream. This code doesn't write specifically to standard output; it writes to *any* `ostream` object via the `<<` operator, so they've chosen a more generic name for their `ostream` argument. | out is the name of the ostream object passed to the overloaded operator (inside the implementation of the operator).
The overloaded operator allows you to write code like this
```
IntList i;
cout<<i;
```
or
```
cerr<<i;
```
In the implementation if you substituted *out* with *cout*, then the second call
```
cerr<<i;
```
would print to standard output whereas it should have printed to standard error. |
12,958,749 | I am learning operator overloading. "out" is being used instead of "cout" when overloading "<<" operator. I don't understand why.
```
ostream &operator<<( ostream &out, const IntList &L ) {
out << "[ ";
for (int k=0; k< L.numItems; k++) {
out << L.Items[k] << ' ';
}
out << ']';
}
```
I want to ask differences between cout and out and what happens if I use cout instead of out.
Thanks for answers. | 2012/10/18 | [
"https://Stackoverflow.com/questions/12958749",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1559792/"
] | What you are looking at is an overloaded "stream insertion" operator, allowing some custom class to be written to an `ostream` object using the typical `cout << myObject` syntax.
The variable in this case is called `out` because that's the name they've given to the `ostream` object being passed into the function, which may be *any* output stream, whether it's `cout` or an `fstream` or a `stringstream`. It's just a variable name, and they could have called it `blah` and written:
```
ostream &operator<<( ostream &blah, const IntList &L ) {
blah << "[ ";
// ...
}
```
Typically you choose a variable name which is descriptive, and `out` as a name for an output stream is pretty descriptive.
`cout` would be an especially *bad* variable name, as it is strongly associated with `std::cout`, used for writing specifically to the standard output stream. This code doesn't write specifically to standard output; it writes to *any* `ostream` object via the `<<` operator, so they've chosen a more generic name for their `ostream` argument. | The critical thing here is really the types in the function signature: as long as it's a freestanding function with two parameters - one of type `std::ostream&` and the other able to be matched by the value to be streamed, then the function body will be invoked. It should return a reference to the stream to allow chaining (as in `if (cout << firstIntList << secondIntList)`).
The actual parameter names are whatever you feel like, as long as they're not reserved words. I tend to use "os", as in output-stream. |
12,958,749 | I am learning operator overloading. "out" is being used instead of "cout" when overloading "<<" operator. I don't understand why.
```
ostream &operator<<( ostream &out, const IntList &L ) {
out << "[ ";
for (int k=0; k< L.numItems; k++) {
out << L.Items[k] << ' ';
}
out << ']';
}
```
I want to ask differences between cout and out and what happens if I use cout instead of out.
Thanks for answers. | 2012/10/18 | [
"https://Stackoverflow.com/questions/12958749",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1559792/"
] | >
> I want to ask differences between cout and out and what happens if I use cout instead of out. Thanks for answers.
>
>
>
In this case, an `ostream&` (`out`) is a parameter passed to the function. This allows the `operator<<` to work on *any* `ostream`.
[`cout`](http://www.cplusplus.com/reference/iostream/cout/) is a *specific* `ostream` instance - the standard output stream. If they used `cout` here, you wouldn't be able to use the `<<` operator on `cerr` (the standard error stream) or any other `ostream`. If you replaced the `out` with `cout` in the body, any time you used this on a different ostream, it'd be written to `cout`. (Of course, if you changed the parameter to be named `cout`, that wouldn't happen - but it would be very misleading to anybody looking at this code, as people would *expect* that the code writes to the standard output stream, not to the stream being passed in.)
In general, you only would want to use `cout` as a name if you are specifically referring to `std::cout` - the standard output stream, as using it in other contexts would be very confusing. | out is the name of the ostream object passed to the overloaded operator (inside the implementation of the operator).
The overloaded operator allows you to write code like this
```
IntList i;
cout<<i;
```
or
```
cerr<<i;
```
In the implementation if you substituted *out* with *cout*, then the second call
```
cerr<<i;
```
would print to standard output whereas it should have printed to standard error. |
12,958,749 | I am learning operator overloading. "out" is being used instead of "cout" when overloading "<<" operator. I don't understand why.
```
ostream &operator<<( ostream &out, const IntList &L ) {
out << "[ ";
for (int k=0; k< L.numItems; k++) {
out << L.Items[k] << ' ';
}
out << ']';
}
```
I want to ask differences between cout and out and what happens if I use cout instead of out.
Thanks for answers. | 2012/10/18 | [
"https://Stackoverflow.com/questions/12958749",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1559792/"
] | >
> I want to ask differences between cout and out and what happens if I use cout instead of out. Thanks for answers.
>
>
>
In this case, an `ostream&` (`out`) is a parameter passed to the function. This allows the `operator<<` to work on *any* `ostream`.
[`cout`](http://www.cplusplus.com/reference/iostream/cout/) is a *specific* `ostream` instance - the standard output stream. If they used `cout` here, you wouldn't be able to use the `<<` operator on `cerr` (the standard error stream) or any other `ostream`. If you replaced the `out` with `cout` in the body, any time you used this on a different ostream, it'd be written to `cout`. (Of course, if you changed the parameter to be named `cout`, that wouldn't happen - but it would be very misleading to anybody looking at this code, as people would *expect* that the code writes to the standard output stream, not to the stream being passed in.)
In general, you only would want to use `cout` as a name if you are specifically referring to `std::cout` - the standard output stream, as using it in other contexts would be very confusing. | The critical thing here is really the types in the function signature: as long as it's a freestanding function with two parameters - one of type `std::ostream&` and the other able to be matched by the value to be streamed, then the function body will be invoked. It should return a reference to the stream to allow chaining (as in `if (cout << firstIntList << secondIntList)`).
The actual parameter names are whatever you feel like, as long as they're not reserved words. I tend to use "os", as in output-stream. |
12,958,749 | I am learning operator overloading. "out" is being used instead of "cout" when overloading "<<" operator. I don't understand why.
```
ostream &operator<<( ostream &out, const IntList &L ) {
out << "[ ";
for (int k=0; k< L.numItems; k++) {
out << L.Items[k] << ' ';
}
out << ']';
}
```
I want to ask differences between cout and out and what happens if I use cout instead of out.
Thanks for answers. | 2012/10/18 | [
"https://Stackoverflow.com/questions/12958749",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1559792/"
] | out is the name of the ostream object passed to the overloaded operator (inside the implementation of the operator).
The overloaded operator allows you to write code like this
```
IntList i;
cout<<i;
```
or
```
cerr<<i;
```
In the implementation if you substituted *out* with *cout*, then the second call
```
cerr<<i;
```
would print to standard output whereas it should have printed to standard error. | The critical thing here is really the types in the function signature: as long as it's a freestanding function with two parameters - one of type `std::ostream&` and the other able to be matched by the value to be streamed, then the function body will be invoked. It should return a reference to the stream to allow chaining (as in `if (cout << firstIntList << secondIntList)`).
The actual parameter names are whatever you feel like, as long as they're not reserved words. I tend to use "os", as in output-stream. |
4,875,075 | I'm working on a program that does heavy random-access reads and writes on huge files (up to 64 GB). The files are specifically structured, and to access them I've created a framework; after a while I tried to test its performance and noticed that on a preallocated file, sequential write operations are too slow to be acceptable.
After many tests I replicated the behavior without my framework (only FileStream methods); here's the portion of code that (with my hardware) replicates the issue:
```
FileStream fs = new FileStream("test1.vhd", FileMode.Open);
byte[] buffer = new byte[256 * 1024];
Random rand = new Random();
rand.NextBytes(buffer);
DateTime start, end;
double ellapsed = 0.0;
long startPos, endPos;
BinaryReader br = new BinaryReader(fs);
br.ReadUInt32();
br.ReadUInt32();
for (int i = 0; i < 65536; i++)
br.ReadUInt16();
br = null;
startPos = 0; // 0
endPos = 4294967296; // 4GB
for (long index = startPos; index < endPos; index += buffer.Length)
{
start = DateTime.Now;
fs.Write(buffer, 0, buffer.Length);
end = DateTime.Now;
ellapsed += (end - start).TotalMilliseconds;
}
```
Unfortunately the issue seems to be unpredictable, so sometimes it "works", sometimes it doesn't.
However, using Process Monitor I've caught the following events:
```
Operation Result Detail
WriteFile SUCCESS Offset: 1.905.655.816, Length: 262.144
WriteFile SUCCESS Offset: 1.905.917.960, Length: 262.144
WriteFile SUCCESS Offset: 1.906.180.104, Length: 262.144
WriteFile SUCCESS Offset: 1.906.442.248, Length: 262.144
WriteFile SUCCESS Offset: 1.906.704.392, Length: 262.144
WriteFile SUCCESS Offset: 1.906.966.536, Length: 262.144
ReadFile SUCCESS Offset: 1.907.228.672, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
WriteFile SUCCESS Offset: 1.907.228.680, Length: 262.144
ReadFile SUCCESS Offset: 1.907.355.648, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
ReadFile SUCCESS Offset: 1.907.490.816, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
WriteFile SUCCESS Offset: 1.907.490.824, Length: 262.144
ReadFile SUCCESS Offset: 1.907.617.792, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
ReadFile SUCCESS Offset: 1.907.752.960, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
WriteFile SUCCESS Offset: 1.907.752.968, Length: 262.144
```
That is, after over-writing almost 2 GB, `FileStream.Write` starts to call `ReadFile` after every `WriteFile`, and this issue continues until the end of the process; also, the offset at which the issue begins seems to be random.
I've debugged step-by-step inside the `FileStream.Write` method and I've verified that it is actually `WriteFile` (the Win32 API) that internally calls `ReadFile`.
Last note: I don't think it is a file fragmentation issue; I've defragmented the file personally with contig! | 2011/02/02 | [
"https://Stackoverflow.com/questions/4875075",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/600055/"
] | Overrides must have the same signature as what they are overriding, so you can't change the type of a field. | An alternative to overriding in this fashion is to make your class generic:
```
public abstract class MyClass<T>
{
public T MyValue{ get; set;}
}
public class MyIntClass : MyClass<int>
{}
public class MyLongClass : MyClass<long>
{}
``` |
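The generic-class suggestion above maps directly onto TypeScript generics; here is a sketch of the same shape (the class names mirror the answer's hypothetical `MyClass`, with `number`/`string` standing in for C#'s `int`/`long` since TypeScript has a single number type):

```typescript
// Generic base: the value's type is fixed per subclass, not per override.
abstract class MyClass<T> {
  constructor(public myValue: T) {}
}

class MyNumberClass extends MyClass<number> {}
class MyStringClass extends MyClass<string> {}
```

Each subclass pins `T` once, so no override ever needs to change a member's type.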
4,875,075 | I'm working on a program that does heavy random-access reads and writes on huge files (up to 64 GB). The files are specifically structured, and to access them I've created a framework; after a while I tried to test its performance and noticed that on a preallocated file, sequential write operations are too slow to be acceptable.
After many tests I replicated the behavior without my framework (only FileStream methods); here's the portion of code that (with my hardware) replicates the issue:
```
FileStream fs = new FileStream("test1.vhd", FileMode.Open);
byte[] buffer = new byte[256 * 1024];
Random rand = new Random();
rand.NextBytes(buffer);
DateTime start, end;
double ellapsed = 0.0;
long startPos, endPos;
BinaryReader br = new BinaryReader(fs);
br.ReadUInt32();
br.ReadUInt32();
for (int i = 0; i < 65536; i++)
br.ReadUInt16();
br = null;
startPos = 0; // 0
endPos = 4294967296; // 4GB
for (long index = startPos; index < endPos; index += buffer.Length)
{
start = DateTime.Now;
fs.Write(buffer, 0, buffer.Length);
end = DateTime.Now;
ellapsed += (end - start).TotalMilliseconds;
}
```
Unfortunately the issue seems to be unpredictable, so sometimes it "works", sometimes it doesn't.
However, using Process Monitor I've caught the following events:
```
Operation Result Detail
WriteFile SUCCESS Offset: 1.905.655.816, Length: 262.144
WriteFile SUCCESS Offset: 1.905.917.960, Length: 262.144
WriteFile SUCCESS Offset: 1.906.180.104, Length: 262.144
WriteFile SUCCESS Offset: 1.906.442.248, Length: 262.144
WriteFile SUCCESS Offset: 1.906.704.392, Length: 262.144
WriteFile SUCCESS Offset: 1.906.966.536, Length: 262.144
ReadFile SUCCESS Offset: 1.907.228.672, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
WriteFile SUCCESS Offset: 1.907.228.680, Length: 262.144
ReadFile SUCCESS Offset: 1.907.355.648, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
ReadFile SUCCESS Offset: 1.907.490.816, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
WriteFile SUCCESS Offset: 1.907.490.824, Length: 262.144
ReadFile SUCCESS Offset: 1.907.617.792, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
ReadFile SUCCESS Offset: 1.907.752.960, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
WriteFile SUCCESS Offset: 1.907.752.968, Length: 262.144
```
That is, after over-writing almost 2 GB, `FileStream.Write` starts to call `ReadFile` after every `WriteFile`, and this issue continues until the end of the process; also, the offset at which the issue begins seems to be random.
I've debugged step by step inside the `FileStream.Write` method and verified that it is actually the `WriteFile` Win32 API that internally calls `ReadFile`.
Last note: I don't think it is a file fragmentation issue, since I've defragmented the file myself with contig! | 2011/02/02 | [
"https://Stackoverflow.com/questions/4875075",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/600055/"
] | Overrides must have the same signature as what they are overriding, so you can't change the type of a field. | You're not overriding the `MyInt` field, you're creating a new field, for which you have to specify `new`:
```
protected new long MyInt = 0;
```
If your code accesses the class as an instance of your base class, it will access it as `Int32`; if you use your subclass directly, it will access it as `long`:
```
public class MyClass
{
protected int MyValue = 0;
}
public class MySubclass : MyClass
{
protected new long MyValue = 0;
}
void Test()
{
MyClass instance = new MyClass();
instance.MyValue = 10; // int
MySubclass instance2 = new MySubclass();
instance2.MyValue = 10; // long
MyClass instance3 = (MyClass)instance2;
int value = instance3.MyValue; // int - value is 0.
}
``` |
4,875,075 | I'm working on a program that does heavy random-access reads and writes on huge files (up to 64 GB). The files have a specific structure, and to access them I've created a framework; after a while I tried to test its performance and noticed that sequential write operations on a preallocated file are too slow to be acceptable.
After many tests I replicated the behavior without my framework (only FileStream methods); here's the portion of code that (with my hardware) replicates the issue:
```
FileStream fs = new FileStream("test1.vhd", FileMode.Open);
byte[] buffer = new byte[256 * 1024];
Random rand = new Random();
rand.NextBytes(buffer);
DateTime start, end;
double ellapsed = 0.0;
long startPos, endPos;
BinaryReader br = new BinaryReader(fs);
br.ReadUInt32();
br.ReadUInt32();
for (int i = 0; i < 65536; i++)
br.ReadUInt16();
br = null;
startPos = 0; // 0
endPos = 4294967296; // 4GB
for (long index = startPos; index < endPos; index += buffer.Length)
{
start = DateTime.Now;
fs.Write(buffer, 0, buffer.Length);
end = DateTime.Now;
ellapsed += (end - start).TotalMilliseconds;
}
```
Unfortunately the issue seems to be unpredictable, so sometimes it "works", sometimes it doesn't.
However, using Process Monitor I've caught the following events:
```
Operation Result Detail
WriteFile SUCCESS Offset: 1.905.655.816, Length: 262.144
WriteFile SUCCESS Offset: 1.905.917.960, Length: 262.144
WriteFile SUCCESS Offset: 1.906.180.104, Length: 262.144
WriteFile SUCCESS Offset: 1.906.442.248, Length: 262.144
WriteFile SUCCESS Offset: 1.906.704.392, Length: 262.144
WriteFile SUCCESS Offset: 1.906.966.536, Length: 262.144
ReadFile SUCCESS Offset: 1.907.228.672, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
WriteFile SUCCESS Offset: 1.907.228.680, Length: 262.144
ReadFile SUCCESS Offset: 1.907.355.648, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
ReadFile SUCCESS Offset: 1.907.490.816, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
WriteFile SUCCESS Offset: 1.907.490.824, Length: 262.144
ReadFile SUCCESS Offset: 1.907.617.792, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
ReadFile SUCCESS Offset: 1.907.752.960, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
WriteFile SUCCESS Offset: 1.907.752.968, Length: 262.144
```
That is, after over-writing almost 2 GB, `FileStream.Write` starts to call `ReadFile` after every `WriteFile`, and this issue continues until the end of the process; also, the offset at which the issue begins seems to be random.
I've debugged step by step inside the `FileStream.Write` method and verified that it is actually the `WriteFile` Win32 API that internally calls `ReadFile`.
Last note: I don't think it is a file fragmentation issue, since I've defragmented the file myself with contig! | 2011/02/02 | [
"https://Stackoverflow.com/questions/4875075",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/600055/"
] | Overrides must have the same signature as what they are overriding, so you can't change the type of a field. | You can use either the [override](http://msdn.microsoft.com/de-de/library/ebca9ah3%28v=vs.80%29.aspx) or the [new](http://msdn.microsoft.com/en-us/library/435f1dw2%28v=vs.80%29.aspx) modifier for the property. Still, the two properties need to be of the same type. The difference is that an `override` member is also called when you access the property through a `Base` reference that actually points to a `Derived` instance, while a `new` member is only used when you access it through the `Derived` class directly. See [this](http://blogs.msdn.com/b/csharpfaq/archive/2004/03/12/what-s-the-difference-between-code-override-code-and-code-new-code.aspx) interesting article by Jon Skeet explaining the difference. As Jon says there, if you write
```
Base b = new Derived();
b.SomeMethod();
```
and `SomeMethod` was overridden using the `override` keyword, the method of the derived class would be called. If you had used the `new` keyword instead, the `SomeMethod` method from the base class would be called.
4,875,075 | I'm working on a program that does heavy random-access reads and writes on huge files (up to 64 GB). The files have a specific structure, and to access them I've created a framework; after a while I tried to test its performance and noticed that sequential write operations on a preallocated file are too slow to be acceptable.
After many tests I replicated the behavior without my framework (only FileStream methods); here's the portion of code that (with my hardware) replicates the issue:
```
FileStream fs = new FileStream("test1.vhd", FileMode.Open);
byte[] buffer = new byte[256 * 1024];
Random rand = new Random();
rand.NextBytes(buffer);
DateTime start, end;
double ellapsed = 0.0;
long startPos, endPos;
BinaryReader br = new BinaryReader(fs);
br.ReadUInt32();
br.ReadUInt32();
for (int i = 0; i < 65536; i++)
br.ReadUInt16();
br = null;
startPos = 0; // 0
endPos = 4294967296; // 4GB
for (long index = startPos; index < endPos; index += buffer.Length)
{
start = DateTime.Now;
fs.Write(buffer, 0, buffer.Length);
end = DateTime.Now;
ellapsed += (end - start).TotalMilliseconds;
}
```
Unfortunately the issue seems to be unpredictable, so sometimes it "works", sometimes it doesn't.
However, using Process Monitor I've caught the following events:
```
Operation Result Detail
WriteFile SUCCESS Offset: 1.905.655.816, Length: 262.144
WriteFile SUCCESS Offset: 1.905.917.960, Length: 262.144
WriteFile SUCCESS Offset: 1.906.180.104, Length: 262.144
WriteFile SUCCESS Offset: 1.906.442.248, Length: 262.144
WriteFile SUCCESS Offset: 1.906.704.392, Length: 262.144
WriteFile SUCCESS Offset: 1.906.966.536, Length: 262.144
ReadFile SUCCESS Offset: 1.907.228.672, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
WriteFile SUCCESS Offset: 1.907.228.680, Length: 262.144
ReadFile SUCCESS Offset: 1.907.355.648, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
ReadFile SUCCESS Offset: 1.907.490.816, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
WriteFile SUCCESS Offset: 1.907.490.824, Length: 262.144
ReadFile SUCCESS Offset: 1.907.617.792, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
ReadFile SUCCESS Offset: 1.907.752.960, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
WriteFile SUCCESS Offset: 1.907.752.968, Length: 262.144
```
That is, after over-writing almost 2 GB, `FileStream.Write` starts to call `ReadFile` after every `WriteFile`, and this issue continues until the end of the process; also, the offset at which the issue begins seems to be random.
I've debugged step by step inside the `FileStream.Write` method and verified that it is actually the `WriteFile` Win32 API that internally calls `ReadFile`.
Last note: I don't think it is a file fragmentation issue, since I've defragmented the file myself with contig! | 2011/02/02 | [
"https://Stackoverflow.com/questions/4875075",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/600055/"
] | Another option, rather than trying to override in this fashion, is to make your class generic:
```
public abstract class MyClass<T>
{
public T MyValue{ get; set;}
}
public class MyIntClass : MyClass<int>
{}
public class MyLongClass : MyClass<long>
{}
``` | You can use either the [override](http://msdn.microsoft.com/de-de/library/ebca9ah3%28v=vs.80%29.aspx) or the [new](http://msdn.microsoft.com/en-us/library/435f1dw2%28v=vs.80%29.aspx) modifier for the property. Still, the two properties need to be of the same type. The difference is that an `override` member is also called when you access the property through a `Base` reference that actually points to a `Derived` instance, while a `new` member is only used when you access it through the `Derived` class directly. See [this](http://blogs.msdn.com/b/csharpfaq/archive/2004/03/12/what-s-the-difference-between-code-override-code-and-code-new-code.aspx) interesting article by Jon Skeet explaining the difference. As Jon says there, if you write
```
Base b = new Derived();
b.SomeMethod();
```
and `SomeMethod` was overridden using the `override` keyword, the method of the derived class would be called. If you had used the `new` keyword instead, the `SomeMethod` method from the base class would be called.
4,875,075 | I'm working on a program that does heavy random-access reads and writes on huge files (up to 64 GB). The files have a specific structure, and to access them I've created a framework; after a while I tried to test its performance and noticed that sequential write operations on a preallocated file are too slow to be acceptable.
After many tests I replicated the behavior without my framework (only FileStream methods); here's the portion of code that (with my hardware) replicates the issue:
```
FileStream fs = new FileStream("test1.vhd", FileMode.Open);
byte[] buffer = new byte[256 * 1024];
Random rand = new Random();
rand.NextBytes(buffer);
DateTime start, end;
double ellapsed = 0.0;
long startPos, endPos;
BinaryReader br = new BinaryReader(fs);
br.ReadUInt32();
br.ReadUInt32();
for (int i = 0; i < 65536; i++)
br.ReadUInt16();
br = null;
startPos = 0; // 0
endPos = 4294967296; // 4GB
for (long index = startPos; index < endPos; index += buffer.Length)
{
start = DateTime.Now;
fs.Write(buffer, 0, buffer.Length);
end = DateTime.Now;
ellapsed += (end - start).TotalMilliseconds;
}
```
Unfortunately the issue seems to be unpredictable, so sometimes it "works", sometimes it doesn't.
However, using Process Monitor I've caught the following events:
```
Operation Result Detail
WriteFile SUCCESS Offset: 1.905.655.816, Length: 262.144
WriteFile SUCCESS Offset: 1.905.917.960, Length: 262.144
WriteFile SUCCESS Offset: 1.906.180.104, Length: 262.144
WriteFile SUCCESS Offset: 1.906.442.248, Length: 262.144
WriteFile SUCCESS Offset: 1.906.704.392, Length: 262.144
WriteFile SUCCESS Offset: 1.906.966.536, Length: 262.144
ReadFile SUCCESS Offset: 1.907.228.672, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
WriteFile SUCCESS Offset: 1.907.228.680, Length: 262.144
ReadFile SUCCESS Offset: 1.907.355.648, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
ReadFile SUCCESS Offset: 1.907.490.816, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
WriteFile SUCCESS Offset: 1.907.490.824, Length: 262.144
ReadFile SUCCESS Offset: 1.907.617.792, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
ReadFile SUCCESS Offset: 1.907.752.960, Length: 32.768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O, Priority: Normal
WriteFile SUCCESS Offset: 1.907.752.968, Length: 262.144
```
That is, after over-writing almost 2 GB, `FileStream.Write` starts to call `ReadFile` after every `WriteFile`, and this issue continues until the end of the process; also, the offset at which the issue begins seems to be random.
I've debugged step by step inside the `FileStream.Write` method and verified that it is actually the `WriteFile` Win32 API that internally calls `ReadFile`.
Last note: I don't think it is a file fragmentation issue, since I've defragmented the file myself with contig! | 2011/02/02 | [
"https://Stackoverflow.com/questions/4875075",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/600055/"
] | You're not overriding the `MyInt` field, you're creating a new field, for which you have to specify `new`:
```
protected new long MyInt = 0;
```
If your code accesses the class as an instance of your base class, it will access it as `Int32`; if you use your subclass directly, it will access it as `long`:
```
public class MyClass
{
protected int MyValue = 0;
}
public class MySubclass : MyClass
{
protected new long MyValue = 0;
}
void Test()
{
MyClass instance = new MyClass();
instance.MyValue = 10; // int
MySubclass instance2 = new MySubclass();
instance2.MyValue = 10; // long
MyClass instance3 = (MyClass)instance2;
int value = instance3.MyValue; // int - value is 0.
}
``` | You can use either the [override](http://msdn.microsoft.com/de-de/library/ebca9ah3%28v=vs.80%29.aspx) or the [new](http://msdn.microsoft.com/en-us/library/435f1dw2%28v=vs.80%29.aspx) modifier for the property. Still, the two properties need to be of the same type. The difference is that an `override` member is also called when you access the property through a `Base` reference that actually points to a `Derived` instance, while a `new` member is only used when you access it through the `Derived` class directly. See [this](http://blogs.msdn.com/b/csharpfaq/archive/2004/03/12/what-s-the-difference-between-code-override-code-and-code-new-code.aspx) interesting article by Jon Skeet explaining the difference. As Jon says there, if you write
```
Base b = new Derived();
b.SomeMethod();
```
and `SomeMethod` was overridden using the `override` keyword, the method of the derived class would be called. If you had used the `new` keyword instead, the `SomeMethod` method from the base class would be called.
39,051,066 | I have been using `display: table` for my `html` along with `display: table-cell` for my `body` lately, to make all of the contents of my page appear at the very center of the screen. While, this works with paragraphs, headings, inputs and labels etc.(as far as I have tested at least), it doesn't seem to work with `div` elements.
Is there any way to make this work for `div` elements like it does for the other ones? Pure CSS solution would be best.
**Example** ([also on Codepen](http://codepen.io/chalarangelo/pen/pbBkbk))
```css
html {
display: table;
height: 100%;
width: 100%;
text-align: center;
}
body {
display: table-cell;
vertical-align: middle;
}
.hidden {
display: none;
}
.leaf {
border-radius: 15px 2px 15px 2px;
border: 1px solid green;
background-color: green;
width: 250px;
height: 80px;
}
```
```html
<div class="leaf">Why am I not centered?</div>
<p>And why am I centered?</p>
``` | 2016/08/20 | [
"https://Stackoverflow.com/questions/39051066",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1650200/"
] | Just add `margin: 0 auto;`. Works right.
```css
html {
display: table;
height: 100%;
width: 100%;
text-align: center;
}
body {
display: table-cell;
vertical-align: middle;
}
.hidden {
display: none;
}
.leaf {
border-radius: 15px 2px 15px 2px;
border: 1px solid green;
background-color: green;
width: 250px;
height: 80px;
margin:0 auto;
}
```
```html
<div class="leaf">Why am I not centered?</div>
<p>And why am I centered?</p>
``` | Just add *margin: 0 auto;* to the `.leaf` class.
```css
html {
display: table;
height: 100%;
width: 100%;
text-align: center;
}
body {
display: table-cell;
vertical-align: middle;
}
.hidden {
display: none;
}
.leaf {
margin: 0 auto;
border-radius: 15px 2px 15px 2px;
border: 1px solid green;
background-color: green;
width: 250px;
height: 80px;
}
```
```html
<div class="leaf">Why am I not centered?</div>
<p>And why am I centered?</p>
``` |
9,618,820 | I'm working on an asp.net project which has a form with a file attachment capability. Since I'm uploading the form on a server, I can't use a physical path. How do I use the asp:FileUpload with a virtual path? | 2012/03/08 | [
"https://Stackoverflow.com/questions/9618820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971115/"
] | If it's an offline app (ie, you've defined a cache manifest) be sure to allow the network request.
See [HTML5 Appcache causing problems with Google Analytics](https://stackoverflow.com/questions/14410974/html5-appcache-causing-problems-with-google-analytics) | I noticed the same thing in my browser some time ago.
Did you sign in to Chrome with your Google account, maybe? Or did you choose in some way to opt out of data collection on Google Analytics?
Maybe Google remembers that option and applies it in Chrome when you are signed in.
BTW, I can open <http://www.google-analytics.com/ga.js> in the browser normally; it just doesn't work when loaded automatically.
9,618,820 | I'm working on an asp.net project which has a form with a file attachment capability. Since I'm uploading the form on a server, I can't use a physical path. How do I use the asp:FileUpload with a virtual path? | 2012/03/08 | [
"https://Stackoverflow.com/questions/9618820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971115/"
] | It could also be your hosts file, here's mine:
```
$ grep -ni "google-analytics.com" /etc/hosts
6203:# 127.0.0.1 ssl.google-analytics.com #[disabled = Firefox issues]
6204:127.0.0.1 www.google-analytics.com #[Google Analytics]
``` | I noticed the same thing in my browser some time ago.
Did you sign in to Chrome with your Google account, maybe? Or did you choose in some way to opt out of data collection on Google Analytics?
Maybe Google remembers that option and applies it in Chrome when you are signed in.
BTW, I can open <http://www.google-analytics.com/ga.js> in the browser normally; it just doesn't work when loaded automatically.
9,618,820 | I'm working on an asp.net project which has a form with a file attachment capability. Since I'm uploading the form on a server, I can't use a physical path. How do I use the asp:FileUpload with a virtual path? | 2012/03/08 | [
"https://Stackoverflow.com/questions/9618820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971115/"
] | It could also be your hosts file, here's mine:
```
$ grep -ni "google-analytics.com" /etc/hosts
6203:# 127.0.0.1 ssl.google-analytics.com #[disabled = Firefox issues]
6204:127.0.0.1 www.google-analytics.com #[Google Analytics]
``` | The reason you are running into problems is because AdBlock will block this script if and only if it does not go through `https`. Notice the error you get it contains an `http:` protocol reference.
All you need to do is change the snippet to force it to go through an ssl connection by adding an explicit protocol instead of the protocol relative url that is the default.
```html
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-XXXXX-XX', 'auto');
ga('send', 'pageview');
</script>
``` |
9,618,820 | I'm working on an asp.net project which has a form with a file attachment capability. Since I'm uploading the form on a server, I can't use a physical path. How do I use the asp:FileUpload with a virtual path? | 2012/03/08 | [
"https://Stackoverflow.com/questions/9618820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971115/"
] | **2019 update**
This has become very widespread now.
**Solutions**
1. Ask people to unblock your website, (bad idea from personal experience)
2. Host google analytics script locally (bad idea) because google says so [HERE](https://support.google.com/analytics/answer/1032389?hl=en)
>
> Referencing the JavaScript file from Google's servers (i.e.,
> <https://www.googletagmanager.com/gtag/js>) ensures that you get access
> to new features and product updates as they become available, giving
> you the most accurate data in your reports.
>
>
>
3. Use Server side analytics. This is what people are doing nowadays. If you are on node.js, use a library such as [analytics](https://www.npmjs.com/package/analytics) or [universal-analytics](https://www.npmjs.com/package/universal-analytics) | Ensure [Fiddler](http://www.telerik.com/fiddler) (or similar proxy) is not active. |
9,618,820 | I'm working on an asp.net project which has a form with a file attachment capability. Since I'm uploading the form on a server, I can't use a physical path. How do I use the asp:FileUpload with a virtual path? | 2012/03/08 | [
"https://Stackoverflow.com/questions/9618820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971115/"
] | **2019 update**
This has become very widespread now.
**Solutions**
1. Ask people to unblock your website, (bad idea from personal experience)
2. Host google analytics script locally (bad idea) because google says so [HERE](https://support.google.com/analytics/answer/1032389?hl=en)
>
> Referencing the JavaScript file from Google's servers (i.e.,
> <https://www.googletagmanager.com/gtag/js>) ensures that you get access
> to new features and product updates as they become available, giving
> you the most accurate data in your reports.
>
>
>
3. Use Server side analytics. This is what people are doing nowadays. If you are on node.js, use a library such as [analytics](https://www.npmjs.com/package/analytics) or [universal-analytics](https://www.npmjs.com/package/universal-analytics) | I noticed the same thing in my browser some time ago.
Did you sign in to Chrome with your Google account, maybe? Or did you choose in some way to opt out of data collection on Google Analytics?
Maybe Google remembers that option and applies it in Chrome when you are signed in.
BTW, I can open <http://www.google-analytics.com/ga.js> in the browser normally; it just doesn't work when loaded automatically.
9,618,820 | I'm working on an asp.net project which has a form with a file attachment capability. Since I'm uploading the form on a server, I can't use a physical path. How do I use the asp:FileUpload with a virtual path? | 2012/03/08 | [
"https://Stackoverflow.com/questions/9618820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971115/"
] | The reason you are running into problems is because AdBlock will block this script if and only if it does not go through `https`. Notice the error you get it contains an `http:` protocol reference.
All you need to do is change the snippet to force it to go through an ssl connection by adding an explicit protocol instead of the protocol relative url that is the default.
```html
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-XXXXX-XX', 'auto');
ga('send', 'pageview');
</script>
``` | Ensure [Fiddler](http://www.telerik.com/fiddler) (or similar proxy) is not active. |
9,618,820 | I'm working on an asp.net project which has a form with a file attachment capability. Since I'm uploading the form on a server, I can't use a physical path. How do I use the asp:FileUpload with a virtual path? | 2012/03/08 | [
"https://Stackoverflow.com/questions/9618820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971115/"
] | It was a problem with AdBlock. I disabled it and now it loads it normally.
**yagudaev** suggests (read answers below) that in order to keep AdBlock from blocking Google Analytics, you need to edit the snippet provided and explicitly use `https://` instead of the protocol-relative URL by default. This means changing
`'//www.google-analytics.com/analytics.js'`
into
`'https://www.google-analytics.com/analytics.js'`
Example:
```
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-XXXXX-XX', 'auto');
ga('send', 'pageview');
</script>
``` | If it's an offline app (ie, you've defined a cache manifest) be sure to allow the network request.
See [HTML5 Appcache causing problems with Google Analytics](https://stackoverflow.com/questions/14410974/html5-appcache-causing-problems-with-google-analytics) |
9,618,820 | I'm working on an asp.net project which has a form with a file attachment capability. Since I'm uploading the form on a server, I can't use a physical path. How do I use the asp:FileUpload with a virtual path? | 2012/03/08 | [
"https://Stackoverflow.com/questions/9618820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971115/"
] | It was a problem with AdBlock. I disabled it and now it loads it normally.
**yagudaev** suggests (read answers below) that in order to keep AdBlock from blocking Google Analytics, you need to edit the snippet provided and explicitly use `https://` instead of the protocol-relative URL by default. This means changing
`'//www.google-analytics.com/analytics.js'`
into
`'https://www.google-analytics.com/analytics.js'`
Example:
```
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-XXXXX-XX', 'auto');
ga('send', 'pageview');
</script>
``` | I noticed the same thing in my browser some time ago.
Did you sign in to Chrome with your Google account, maybe? Or did you choose in some way to opt out of data collection on Google Analytics?
Maybe Google remembers that option and applies it in Chrome when you are signed in.
BTW, I can open <http://www.google-analytics.com/ga.js> in the browser normally; it just doesn't work when loaded automatically.
9,618,820 | I'm working on an asp.net project which has a form with a file attachment capability. Since I'm uploading the form on a server, I can't use a physical path. How do I use the asp:FileUpload with a virtual path? | 2012/03/08 | [
"https://Stackoverflow.com/questions/9618820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971115/"
] | This error is commonly caused by one of [*the extensions installed*](https://stackoverflow.com/a/50151073/4058484) within Chrome.
There are a few ways to debug and solve an ERR\_BLOCKED\_BY\_CLIENT message.
>
> * Disable the extension.
> * Whitelist the domain.
> * Debug the issue.
>
>
>
I would recommend finding more detail at [*How to Solve ERR\_BLOCKED\_BY\_CLIENT*](https://www.keycdn.com/support/how-to-solve-err-blocked-by-client/) | I noticed the same thing in my browser some time ago.
Did you sign in to Chrome with your Google account, maybe? Or did you choose in some way to opt out of data collection on Google Analytics?
Maybe Google remembers that option and applies it in Chrome when you are signed in.
BTW, I can open <http://www.google-analytics.com/ga.js> in the browser normally; it just doesn't work when loaded automatically.
9,618,820 | I'm working on an asp.net project which has a form with a file attachment capability. Since I'm uploading the form on a server, I can't use a physical path. How do I use the asp:FileUpload with a virtual path? | 2012/03/08 | [
"https://Stackoverflow.com/questions/9618820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971115/"
] | It was a problem with AdBlock. I disabled it and now it loads it normally.
**yagudaev** suggests (read answers below) that in order to keep AdBlock from blocking Google Analytics, you need to edit the snippet provided and explicitly use `https://` instead of the protocol-relative URL by default. This means changing
`'//www.google-analytics.com/analytics.js'`
into
`'https://www.google-analytics.com/analytics.js'`
Example:
```
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-XXXXX-XX', 'auto');
ga('send', 'pageview');
</script>
``` | **2019 update**
This has become very widespread now.
**Solutions**
1. Ask people to unblock your website, (bad idea from personal experience)
2. Host google analytics script locally (bad idea) because google says so [HERE](https://support.google.com/analytics/answer/1032389?hl=en)
>
> Referencing the JavaScript file from Google's servers (i.e.,
> <https://www.googletagmanager.com/gtag/js>) ensures that you get access
> to new features and product updates as they become available, giving
> you the most accurate data in your reports.
>
>
>
3. Use Server side analytics. This is what people are doing nowadays. If you are on node.js, use a library such as [analytics](https://www.npmjs.com/package/analytics) or [universal-analytics](https://www.npmjs.com/package/universal-analytics) |
13,000,008 | I've been trying to make this program in c++ with opencv that converts the image to greyscale and rotates the image afterwards, but the output I get is all kinds of messed up.
I have been searching for solutions and looking for help everywhere, but I haven't been able to find out what the heck I have done wrong so if any of you could help me it'd be great
Code:
<http://pastebin.com/FSJKyaeU>
Also, here's a picture of the output I get
<http://i.imgur.com/qpYm1.jpg> | 2012/10/21 | [
"https://Stackoverflow.com/questions/13000008",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1755692/"
] | Please replace:
```
Mat imRotate(im.cols * 2, im.rows * 2, im.type());
```
to use this one:
```
int size = static_cast<int>(sqrt(im.cols * im.cols/4 + im.rows * im.rows/4) * 2) + 1;
Mat imRotate(size, size , im.type());
```
Then update FuncRotate. I think you need to do something like this:
```
void FuncRotate(Mat im, Mat imRotate, int q, int x, int y, int rows, int columns)
{
double radians = (q * 3.1415)/180; // or try use M_PI instead of 3.1415
double cosr = cos(radians);
double sinr = sin(radians);
for(int i=0; i<columns; i++) //columns of the original image
{
for(int j=0; j<rows; j++) //rows of the original image
{
int NewXPixel = imRotate.cols/2 + ((i-x) * cosr) - ((j-y) * sinr);
int NewYPixel = imRotate.rows/2 + ((i-x) * sinr) + ((j-y) * cosr);
if(NewXPixel < 0 || NewYPixel < 0 || NewXPixel >= imRotate.cols || NewYPixel >= imRotate.rows)
continue;
imRotate.at<unsigned char>(NewYPixel,NewXPixel) = im.at<unsigned char>(j,i);
}
}
}
``` | I think at least there is something wrong on this line:
```
imRotate.at<unsigned char>(i,j) = im.at<unsigned char>(NewXPixel,NewYPixel);
```
... because `i` and `j` are supposed to loop over the original image, right?
Perhaps you meant something like:
```
imRotate.at<unsigned char>(NewXPixel,NewYPixel) = im.at<unsigned char>(i,j);
```
Also, `x` and `y` are not used in `FuncRotate`.
Moreover, I believe you should initialize `imRotate` to zero. |
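The coordinate mapping suggested in both answers can be sketched outside C++ as well. Here is a minimal Python sketch (treating a grayscale image as a plain nested list — an assumption for illustration, not the OP's OpenCV setup) that applies the same rotate-about-the-centre formula and forward-maps each source pixel into a double-sized destination canvas:

```python
import math

def rotate_image(im, degrees):
    """Forward-map each source pixel into a destination canvas twice the
    source size, rotating about the source image's centre."""
    rows, cols = len(im), len(im[0])
    out_rows, out_cols = rows * 2, cols * 2
    out = [[0] * out_cols for _ in range(out_rows)]
    r = math.radians(degrees)
    cosr, sinr = math.cos(r), math.sin(r)
    cx, cy = cols / 2.0, rows / 2.0  # rotation centre of the source
    for j in range(rows):
        for i in range(cols):
            nx = int(round(out_cols / 2 + (i - cx) * cosr - (j - cy) * sinr))
            ny = int(round(out_rows / 2 + (i - cx) * sinr + (j - cy) * cosr))
            if 0 <= nx < out_cols and 0 <= ny < out_rows:
                out[ny][nx] = im[j][i]
    return out

print(rotate_image([[1, 2], [3, 4]], 90))
```

Forward mapping like this can leave holes in the destination for angles that are not multiples of 90°; the usual fix is to iterate over destination pixels and sample the source with the inverse rotation instead.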
70,189 | Using a cisco ASA is it possible manually bring up a lan to lan VPN tunnel & SA from the device, rather than having one of the systems that is part of the VPN initiate traffic to start the VPN?
I'd like to avoid having to trigger a ping on one of the systems in a VPN to start the VPN, to make troubleshooting a bit quicker. | 2009/10/01 | [
"https://serverfault.com/questions/70189",
"https://serverfault.com",
"https://serverfault.com/users/11495/"
] | You have to present "interesting traffic" to the ASA. There's no command that would bring up the tunnel without traffic. | I second the advice of ynguldyn.
On the ISR series router you can test the VPN by having the router generate traffic for you, but there is no such option on the ASA platform. |
70,189 | Using a cisco ASA is it possible manually bring up a lan to lan VPN tunnel & SA from the device, rather than having one of the systems that is part of the VPN initiate traffic to start the VPN?
I'd like to avoid having to trigger a ping on one of the systems in a VPN to start the VPN, to make troubleshooting a bit quicker. | 2009/10/01 | [
"https://serverfault.com/questions/70189",
"https://serverfault.com",
"https://serverfault.com/users/11495/"
] | You have to present "interesting traffic" to the ASA. There's no command that would bring up the tunnel without traffic. | ping inside "ip address at the other end of the tunnel"
Inside interface will have to be in the encryption domain.
This requires that the management-interface command is set to the inside interface - like "management-interface inside".
Let's say you have a bunch of interface mappings in your VPN tunnel to the other end. To test each of them do the following - if you want to test as an example from the dmz interface:

```
management-interface dmz
ping dmz a.b.c.d
```
where a.b.c.d is on the other end of the tunnel end-point.
Tested on an ASA v.8.3 to ASA 8.2.
By the way, if you have multiple network mappings in the same crypto acl, don't use set reverse-route on the crypto map entry. This may cause issues with the way the ASA uses the crypto ACL to create new tunnel mappings. |
70,189 | Using a cisco ASA is it possible manually bring up a lan to lan VPN tunnel & SA from the device, rather than having one of the systems that is part of the VPN initiate traffic to start the VPN?
I'd like to avoid having to trigger a ping on one of the systems in a VPN to start the VPN, to make troubleshooting a bit quicker. | 2009/10/01 | [
"https://serverfault.com/questions/70189",
"https://serverfault.com",
"https://serverfault.com/users/11495/"
] | You have to present "interesting traffic" to the ASA. There's no command that would bring up the tunnel without traffic. | Using 8.4+ we just added a Meinberg Windows NTP server for network time on the receiving end of the tunnel and added this to the remote ASA config:
```
ntp server xxx.xxx.xxx.xxx source inside prefer
```
(where xxx.xxx.xxx.xxx is the ip address of the ntp server) - that keeps our tunnels up indefinitely due to NTP generating the interesting traffic right on the remote ASA 5505 |
70,189 | Using a cisco ASA is it possible manually bring up a lan to lan VPN tunnel & SA from the device, rather than having one of the systems that is part of the VPN initiate traffic to start the VPN?
I'd like to avoid having to trigger a ping on one of the systems in a VPN to start the VPN, to make troubleshooting a bit quicker. | 2009/10/01 | [
"https://serverfault.com/questions/70189",
"https://serverfault.com",
"https://serverfault.com/users/11495/"
] | I second the advice of ynguldyn.
On the ISR series router you can test the VPN by having the router generate traffic for you, but there is no such option on the ASA platform. | ping inside "ip address at the other end of the tunnel"
Inside interface will have to be in the encryption domain.
This requires that the management-interface command is set to the inside interface - like "management-interface inside".
Let's say you have a bunch of interface mappings in your VPN tunnel to the other end. To test each of them do the following - if you want to test as an example from the dmz interface:

```
management-interface dmz
ping dmz a.b.c.d
```
where a.b.c.d is on the other end of the tunnel end-point.
Tested on an ASA v.8.3 to ASA 8.2.
By the way, if you have multiple network mappings in the same crypto acl, don't use set reverse-route on the crypto map entry. This may cause issues with the way the ASA uses the crypto ACL to create new tunnel mappings. |
70,189 | Using a cisco ASA is it possible manually bring up a lan to lan VPN tunnel & SA from the device, rather than having one of the systems that is part of the VPN initiate traffic to start the VPN?
I'd like to avoid having to trigger a ping on one of the systems in a VPN to start the VPN, to make troubleshooting a bit quicker. | 2009/10/01 | [
"https://serverfault.com/questions/70189",
"https://serverfault.com",
"https://serverfault.com/users/11495/"
] | In Cisco ASA7.0 or greater OS, you can establish the tunnel by simulating interesting traffic with the `packet-tracer` command. Here's an example - substitute IP addresses from your networks:
```
packet-tracer input inside tcp 10.100.0.50 1250 10.200.0.100 80
! syntax: packet-tracer input <source-interface> <protocol> <src-ip> <src-port> <dst-ip> <dst-port>
```
You can use the output of the command to help diagnose any issues as to why traffic didn't successfully pass as well, but the command itself will actually stimulate the VPN and establish both the ISAKMP and IPSec sa's. | I second the advice of ynguldyn.
On the ISR series router you can test the VPN by having the router generate traffic for you, but there is no such option on the ASA platform. |
70,189 | Using a cisco ASA is it possible manually bring up a lan to lan VPN tunnel & SA from the device, rather than having one of the systems that is part of the VPN initiate traffic to start the VPN?
I'd like to avoid having to trigger a ping on one of the systems in a VPN to start the VPN, to make troubleshooting a bit quicker. | 2009/10/01 | [
"https://serverfault.com/questions/70189",
"https://serverfault.com",
"https://serverfault.com/users/11495/"
] | I second the advice of ynguldyn.
On the ISR series router you can test the VPN by having the router generate traffic for you, but there is no such option on the ASA platform. | Using 8.4+ we just added a Meinberg Windows NTP server for network time on the receiving end of the tunnel and added this to the remote ASA config:
```
ntp server xxx.xxx.xxx.xxx source inside prefer
```
(where xxx.xxx.xxx.xxx is the ip address of the ntp server) - that keeps our tunnels up indefinitely due to NTP generating the interesting traffic right on the remote ASA 5505 |
70,189 | Using a cisco ASA is it possible manually bring up a lan to lan VPN tunnel & SA from the device, rather than having one of the systems that is part of the VPN initiate traffic to start the VPN?
I'd like to avoid having to trigger a ping on one of the systems in a VPN to start the VPN, to make troubleshooting a bit quicker. | 2009/10/01 | [
"https://serverfault.com/questions/70189",
"https://serverfault.com",
"https://serverfault.com/users/11495/"
] | In Cisco ASA7.0 or greater OS, you can establish the tunnel by simulating interesting traffic with the `packet-tracer` command. Here's an example - substitute IP addresses from your networks:
```
packet-tracer input inside tcp 10.100.0.50 1250 10.200.0.100 80
! syntax: packet-tracer input <source-interface> <protocol> <src-ip> <src-port> <dst-ip> <dst-port>
```
You can use the output of the command to help diagnose any issues as to why traffic didn't successfully pass as well, but the command itself will actually stimulate the VPN and establish both the ISAKMP and IPSec sa's. | ping inside "ip address at the other end of the tunnel"
Inside interface will have to be in the encryption domain.
This requires that the management-interface command is set to the inside interface - like "management-interface inside".
Let's say you have a bunch of interface mappings in your VPN tunnel to the other end. To test each of them do the following - if you want to test as an example from the dmz interface:

```
management-interface dmz
ping dmz a.b.c.d
```
where a.b.c.d is on the other end of the tunnel end-point.
Tested on an ASA v.8.3 to ASA 8.2.
By the way, if you have multiple network mappings in the same crypto acl, don't use set reverse-route on the crypto map entry. This may cause issues with the way the ASA uses the crypto ACL to create new tunnel mappings. |
70,189 | Using a cisco ASA is it possible manually bring up a lan to lan VPN tunnel & SA from the device, rather than having one of the systems that is part of the VPN initiate traffic to start the VPN?
I'd like to avoid having to trigger a ping on one of the systems in a VPN to start the VPN, to make troubleshooting a bit quicker. | 2009/10/01 | [
"https://serverfault.com/questions/70189",
"https://serverfault.com",
"https://serverfault.com/users/11495/"
] | In Cisco ASA7.0 or greater OS, you can establish the tunnel by simulating interesting traffic with the `packet-tracer` command. Here's an example - substitute IP addresses from your networks:
```
packet-tracer input inside tcp 10.100.0.50 1250 10.200.0.100 80
! syntax: packet-tracer input <source-interface> <protocol> <src-ip> <src-port> <dst-ip> <dst-port>
```
You can use the output of the command to help diagnose any issues as to why traffic didn't successfully pass as well, but the command itself will actually stimulate the VPN and establish both the ISAKMP and IPSec sa's. | Using 8.4+ we just added a Meinberg Windows NTP server for network time on the receiving end of the tunnel and added this to the remote ASA config:
```
ntp server xxx.xxx.xxx.xxx source inside prefer
```
(where xxx.xxx.xxx.xxx is the ip address of the ntp server) - that keeps our tunnels up indefinitely due to NTP generating the interesting traffic right on the remote ASA 5505 |
48,039,835 | I am building an algebra calculator and I'm working on a recursive function to filter like terms from a polynomial. The function below works in that it produces the desired array of arrays of like terms. I can verify this by adding a console.log statement to the function. However, for some reason, the function won't return the output. It returns "undefined".
My thinking is that the chain of recursive calls should terminate with the end condition indicated below, and then pass the returned argument[1] array through the stack.
I've read similar questions on here where the person forgets to put a return statement in one or more places. However, in my code, I have a return statement with the end condition and with the recursive function call. It's probably something simple I'm missing.
```js
var filterLikeTerms = function (terms) { //takes an array of terms, optional second argument is an array of arrays of similar terms
if (!arguments[1]) arguments[1] = []; //Initilizes the second argument if none is given
if (terms.length == 0) return arguments[1]; //End condition
arguments[1].push(terms.filter(term => terms[0].toString() === term.toString())); //Adds similar terms to the 2nd argument array
terms = terms.filter (term => terms[0].toString() !== term.toString()); //shortens the terms array to exclude the like terms filtered above
return filterLikeTerms(terms, arguments[1]); //recursive function call
}
``` | 2017/12/31 | [
"https://Stackoverflow.com/questions/48039835",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9157562/"
] | First off, remember that `const` applies to a specific variable. It prevents assignment of a different value to that variable. The value itself is not `const`. The `const` aspect only applies to the variable itself that is declared `const` and what value it holds. So, in your example, it only applies to the actual `myValue` variable inside that module, not to whatever value is in the variable.
So, with this:
```
const myValue = { … }
export default myValue
```
It is the `myValue` variable (not its value) that is `const` and the `const` aspect means that you cannot assign something different to the `myValue` variable. If you copy that same value to a different non-const variable, then one can freely assign anything you want to that other variable.
When you export the value of that variable, it is being assigned to another variable (in whatever is importing it) and that is not `const` unless it is also declared `const`. The `const` in this module does not have any affect on some other variable in some other module that imports it.
You can logically think of exporting and importing kind of like this as an assignment of the value to another variable (in the importing module):
```
// exported value
const myValue = { … }; // exporting makes it available so others can import it
// imported value
let importedValue = myValue; // importing assigns the exported value to a new variable
// further assignment
importedValue = "foo"; // this is allowed because importedValue is not
// declared as const
```
And, as I presume you already realize, the const-ness of `myValue` does not make `importedValue` const at all. It contains a copy of whatever is in `myValue` and `importedValue` can be assigned any other value you want. It is not declared `const` itself so it is not `const`.
>
> Is there any difference in terms of performance between declaring a constant and export it as default or declaring it directly as a default export?
>
>
>
There is no difference for the exported value because values themselves are not `const` in Javascript, only variables. The difference applies only to the local variable that is declared as `const`, which is not something the importing module can access, so it makes no difference to the importing module.
>
> Or the same using a function
>
>
>
It doesn't matter what the value of the variable is (function, object, primitive, etc...). It's the same with all types. If a variable is declared `const`, then you cannot assign a different value to that variable. But if you copy that value to another variable that is not declared `const`, then you can further assign anything else you want to that non-const variable. It's the variable that is `const`, not the value. You can think of `const` like declaring a read-only variable. | It makes a BIG difference if your implementation fully complies to the ES6 module specification. Modules export bindings, not references. This is explained here:
<https://ponyfoo.com/articles/es6-modules-in-depth#bindings-not-values>
In (almost) all other aspects, Javascript purely uses references. A variable is a pointer to the actual data in memory. Copying one variable to another copies the pointer, not the value. Assigning a new value to a variable creates a new data chunk and moves the variable's pointer to the new data, and the old data is garbage collected. There is a common misperception that primitives are passed by value to functions and objects by reference; they are in fact all passed by reference, and primitives appear to be passed by value because they are immutable -- changing a primitive discards the old value in favor of the new value, rather than changing the original value in-place.
Bindings, however, are the *same* variable. If you export something, you export IT, not a reference to it. If the exported value is changed later in the original module, then this change is reflected in modules that consume it. Even worse, if another module changes the binding, it's reflected back into the original and all other consuming modules.
If you're using a third-party module importer or a rollup tool, then you might not get this behavior, because it's very hard to replicate outside of the engine itself. So you might not see repercussions from this for months or years to come, but it will be a problem in the future.
So it's best practice to ALWAYS EXPORT CONSTANTS to prevent any nasty surprises. |
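The binding-versus-copy distinction argued over in the two answers above can be illustrated by analogy in Python (this is only an analogy, not ES6 semantics; the module and `bump` function are invented for the demo). Reading through the module object behaves like a live ES6 binding, while grabbing the value into a local name behaves like the copy many pre-ES6 bundlers produced:

```python
import types

# A tiny throwaway module standing in for an ES6 module.
mod = types.ModuleType("mod")
exec(
    "value = {'n': 1}\n"
    "def bump():\n"
    "    global value\n"
    "    value = {'n': 2}\n",
    mod.__dict__,
)

copied = mod.value   # like importing a *copy* of the exported value
mod.bump()           # the module reassigns its own top-level name

print(copied)        # {'n': 1} -- the copy did not follow the rebinding
print(mod.value)     # {'n': 2} -- a live read through the module sees the change
```

The first answer describes the `copied` behaviour; the second answer describes the `mod.value` behaviour, which is what a spec-compliant ES6 module system does.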
211,695 | I know the Airdrop uses WiFi direct to send the packet, but what is the radio frequency of it so I can capture the traffic.
Is the WiFi 2.4Ghz, 5Ghz or both? | 2015/10/19 | [
"https://apple.stackexchange.com/questions/211695",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/153173/"
] | You made some mistakes in your plist:
* /Applications/eXist-db/bin/startup.sh probably doesn't exist if you have installed eXist-db 2.2
A valid path is /Applications/eXist-db.app/Contents/Resources/eXist-db/bin/startup.sh
* StandardErorPath and StandardOutputPath are not valid keys
Valid keys are *StandardErrorPath* and *StandardOutPath*
* probably the root <-> launchd problem already addressed by patrix
* the plist doesn't have to be executable
To start the app after logging in with your user simply add it to System Preferences -> Users & Groups -> Your user -> Login Items
---
To start eXist-db 2.0 at boot time and jetty after logging in your user you have to do the following:
If you haven't done this already, first enter:
```
sudo /Applications/eXist-db/tools/wrapper/bin/exist.sh install
```
to install a LaunchDaemon org.tanukisoftware.wrapper.eXist-db.plist in /Library/LaunchDaemons/. If you want to add a StandardErrorPath and StandardOutPath modify the file with `sudo nano /Library/LaunchDaemons/org.tanukisoftware.wrapper.eXist-db.plist`.
It should look like this finally:
```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Disabled</key>
<true/>
<key>Label</key>
<string>org.tanukisoftware.wrapper.eXist-db</string>
<key>ProgramArguments</key>
<array>
<string>/Applications/eXist-db/tools/wrapper/bin/exist.sh</string>
<string>launchdinternal</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>StandardErrorPath</key>
<string>/tmp/org.tanukisoftware.wrapper.eXist-db.stderr</string>
<key>StandardOutPath</key>
<string>/tmp/org.tanukisoftware.wrapper.eXist-db.stdout</string>
</dict>
</plist>
```
Load the daemon permanently with:
```
sudo launchctl load -w /Library/LaunchDaemons/org.tanukisoftware.wrapper.eXist-db.plist
```
Now create a second file in ~/Library/LaunchAgents/ named *com.eXist.plist* with nano. It should look like this finally:
```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.eXist</string>
<key>Program</key>
<string>/Applications/eXist-db/bin/startup.sh</string>
<key>RunAtLoad</key>
<true/>
<key>StandardErrorPath</key>
<string>/tmp/com.eXist.stderr</string>
<key>StandardOutPath</key>
<string>/tmp/com.eXist.stdout</string>
</dict>
</plist>
```
A StandardErrorPath and StandardOutPath was added.
Load the agent permanently with:
```
launchctl load -w ~/Library/LaunchAgents/com.eXist.plist
```
Done.
---
Don't forget to set your (or the) JAVA_HOME variable properly. If you use a newer eXist-db release (e.g. 2.2) you have to add at least /Contents/Resources/ to the paths of exist.sh and startup.sh in the plist (please check the proper paths by opening the app bundle).
---
***Hint: Don't use TextEdit to modify the plists: otherwise the plist files might be malformed.*** | You can't launch stuff as user `root` from your personal `LaunchAgents` folder as this would create a rather big security hole. There should be a message about this in `/var/log/system.log`.
From [man launchd.plist](https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man5/launchd.plist.5.html):
>
> UserName *string*
>
>
> This optional key specifies the user to run the job as. This key is only applicable when launchd is running as root.
>
>
> |
16,999,763 | I was to understand that MVC templating was used as a means of locking down a view from using any substantial programming logic. Testing the Blade system for Laravel 4, I notice that I am still able to include PHP content into the view.
Can I disable PHP in a Blade template? Essentially turning the file into a .html file with some additional possibilities (eg, Blade looping and sections/includes). | 2013/06/08 | [
"https://Stackoverflow.com/questions/16999763",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/446793/"
] | Blade templates compile to php, so you won't be able to eliminate php altogether.
There is something you can work around your project by creating your own compiler, or extending Blade's.
You will need to work out your own solution for handling the raw PHP that appears in your templates. I would just render it as-is: for example,
- if someone left an `<?php echo $example; ?>` in the view, that is exactly what the engine would render. | If you need programming logic in your views you're probably doing something wrong. Try to do the logic in your controller and inject the variables into your views. |
74,230,306 | I can match `#include<stdio.h>` using the following regular expression in c++.
```
regex ("( )*#( )*include( )*<(stdio.h)( )*>( )*")
```
But if I design a regular expression like `regex("( )*#( )*include( )*<(.)*.h( )*>( )*")`
in cpp then I find any type of header file.
But if I want to get a sub string from a header file
like,
```
Suppose I have some header file like,
#include<string.h>
#include<math.h>
#include<stdlib.h>
#include<time.h>
```
```
And from those header file, I just want to get the sub string like,
string.h
math.h
stdlib.h
time.h
```
In simply,
I want to get the string inside this symbol `< >`
Now my Question is how to design a regular expression and write a c++ code so that I can get my expected sub string from any header file?
or
Write a c++ code to print the string inside this symbol `< >`
using this regular expression `regex("( )*#( )*include( )*<(.)*.h( )*>( )*")` ?
I just design the regular expression `regex("( )*#( )*include( )*<(.)*.h( )*>( )*")`.
I can't find any idea to print the string inside this symbol `< >` | 2022/10/28 | [
"https://Stackoverflow.com/questions/74230306",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20255740/"
] | If `(item_id, property_type)` is defined as UNIQUE (which is reasonable) then joining 2 Item tables copies is more simple and maybe even more fast:
```sql
SELECT item_id,
i1.property_value fk_prod_id,
i2.property_value fk_cat_id,
p.prod_name,
c.cat_name
FROM Item i1
JOIN Item i2 USING (item_id)
JOIN Product p ON p.prod_id = i1.property_value
JOIN Category c ON c.cat_id = i2.property_value
WHERE i1.property_type = 'fk_prod_id'
AND i2.property_type = 'fk_cat_id'
``` | ```
SELECT
item_id,
fk_prod_id,
fk_cat_id,
prod_name,
cat_name
FROM (
SELECT
item_id,
MAX(CASE WHEN property_type = 'fk_prod_id' THEN property_value END) AS fk_prod_id,
MAX(CASE WHEN property_type = 'fk_cat_id' THEN property_value END) AS fk_cat_id
FROM item AS i
GROUP BY item_id
) AS t1
LEFT JOIN product AS p ON p.prod_id = t1.fk_prod_id
LEFT JOIN category AS c ON c.cat_id = t1.fk_cat_id;
``` |
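Both answers above rely on the same EAV-to-columns pivot, which is easy to check end-to-end. A runnable Python/sqlite3 sketch (table shapes inferred from the queries above; the sample data is invented):

```python
import sqlite3

# Minimal tables matching the shape assumed by the answers above.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE item (item_id INT, property_type TEXT, property_value INT);
CREATE TABLE product (prod_id INT, prod_name TEXT);
CREATE TABLE category (cat_id INT, cat_name TEXT);
INSERT INTO item VALUES (1, 'fk_prod_id', 10), (1, 'fk_cat_id', 20);
INSERT INTO product VALUES (10, 'widget');
INSERT INTO category VALUES (20, 'tools');
""")

# Conditional-aggregation pivot (second answer's technique).
row = con.execute("""
SELECT item_id,
       MAX(CASE WHEN property_type = 'fk_prod_id' THEN property_value END),
       MAX(CASE WHEN property_type = 'fk_cat_id' THEN property_value END)
FROM item GROUP BY item_id
""").fetchone()
print(row)  # (1, 10, 20)

# Self-join pivot (first answer's technique), resolving the names.
name_row = con.execute("""
SELECT p.prod_name, c.cat_name
FROM item i1 JOIN item i2 USING (item_id)
JOIN product p ON p.prod_id = i1.property_value
JOIN category c ON c.cat_id = i2.property_value
WHERE i1.property_type = 'fk_prod_id' AND i2.property_type = 'fk_cat_id'
""").fetchone()
print(name_row)  # ('widget', 'tools')
```

Both queries return the same logical result here; the self-join form depends on `(item_id, property_type)` being unique, as the first answer notes.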
10,073,261 | I am developing a VB application in which i need to know the native resolution of the monitor and not the one set by the user(current resolution). So i need to read the EDID (extended display identification data) directly from the monitor.
I did try to find the resolution of monitor through some programs...but all it returns is the current resolution. Any help to read the info directly from EDID of monitor is appriciable.
Thanks in advance | 2012/04/09 | [
"https://Stackoverflow.com/questions/10073261",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1321866/"
] | After a lot of research i was able to fix my problem..
Thanks for the valuable info, Yahia.
First, we need to find the EDID data. The physical display information is in fact available to the OS, via Extended Display Identification Data (EDID). A copy of the EDID block is kept in the windows registry. But the problem was to obtain the correct EDID, as the registry has information stored about all the monitors which have been, at any point of time, attached to the system. So, first we use a WMI class “Win32_DesktopMonitor”, and through a simple SQL query grab the PNP device id to find a monitor that is available (not offline). We can then dig into the registry to find the data.

```
for monitor in wmiquery('Select * from Win32_DesktopMonitor'):
    regkey = ('HKLM\SYSTEM\CurrentControlSet\Enum\' +
              monitor.PNPDeviceID + '\Device Parameters\EDID')
    edid = get_regval(regkey)
```
Second, it is necessary to parse the data. The base EDID information of a display is conveyed within a 128-byte data structure that contains pertinent manufacturer and operation-related data. Most of this information is uninteresting to us.
To know the NATIVE resolution we need to start looking in the DTD ( Detailed timing descriptor ) which starts at byte = 54.
Following is the logic for finding the maximum resolution from the EDID
```
dtd = 54  # start byte of detailed timing desc.
horizontalRes = ((edid[dtd+4] >> 4) << 8) | edid[dtd+2]
verticalRes = ((edid[dtd+7] >> 4) << 8) | edid[dtd+5]
res = (horizontalRes, verticalRes)
```
The values obtained are Hex values which can be converted to Decimal to find the NATIVE RESOLUTION in pixels.
Thanks
Hope it helps
Sachin | For some source code (although C/C++) to read the EDID block see Point 5 at [this link](http://thetweaker.wordpress.com/2011/11/13/reading-monitor-physical-dimensions-or-getting-the-edid-the-right-way/). The only official means to retrieve this information is through the [Windows Setup API](http://msdn.microsoft.com/en-us/library/cc185682%28VS.85%29.aspx).
For an EDID format description see for example [here](http://en.wikipedia.org/wiki/Extended_display_identification_data). |
10,073,261 | I am developing a VB application in which i need to know the native resolution of the monitor and not the one set by the user(current resolution). So i need to read the EDID (extended display identification data) directly from the monitor.
I did try to find the resolution of monitor through some programs...but all it returns is the current resolution. Any help to read the info directly from EDID of monitor is appriciable.
Thanks in advance | 2012/04/09 | [
"https://Stackoverflow.com/questions/10073261",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1321866/"
] | For some source code (although C/C++) to read the EDID block see Point 5 at [this link](http://thetweaker.wordpress.com/2011/11/13/reading-monitor-physical-dimensions-or-getting-the-edid-the-right-way/). The only official means to retrieve this information is through the [Windows Setup API](http://msdn.microsoft.com/en-us/library/cc185682%28VS.85%29.aspx).
For an EDID format description see for example [here](http://en.wikipedia.org/wiki/Extended_display_identification_data). | 'Here is a complete solution for everything except for actually setting the resolution. This will read out the native resolution settings from the EDID of the active monitor.
```
Set WshShell = WScript.CreateObject("WScript.Shell")
Const HKEY_LOCAL_MACHINE = &H80000002
Const DTD_INDEX = 54
strComputer = "."
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
Set oReg = GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & strComputer & "\root\default:StdRegProv")
Set colItems = objWMIService.ExecQuery("Select * from Win32_DesktopMonitor",,48)
For Each objItem in colItems 'Gets active monitor EDID registry path
    strKeyPath = "SYSTEM\CurrentControlSet\Enum\" & objItem.PNPDeviceID & "\Device Parameters"
Next
oReg.GetBinaryValue HKEY_LOCAL_MACHINE, strKeyPath, "EDID", arrRawEDID
hor_resolution = arrRawEDID(DTD_INDEX + 2) + (arrRawEDID(DTD_INDEX + 4) And 240) * 16
vert_resolution = arrRawEDID(DTD_INDEX + 5) + (arrRawEDID(DTD_INDEX + 7) And 240) * 16
WshShell.Run "res.exe " & hor_resolution & " " & vert_resolution
``` |
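The answers above extract the native mode with the same bit arithmetic (note that `(b And 240) * 16` in the VBScript equals `(b >> 4) << 8` in the pseudocode). A self-contained Python sketch that checks the decoding against a synthetic EDID block (the 1920x1080 sample bytes are fabricated for the test):

```python
DTD = 54  # offset of the first detailed timing descriptor in a 128-byte EDID

def native_resolution(edid):
    """Decode the native resolution from an EDID byte sequence, using the
    same DTD byte layout described in the answers above."""
    h = ((edid[DTD + 4] >> 4) << 8) | edid[DTD + 2]
    v = ((edid[DTD + 7] >> 4) << 8) | edid[DTD + 5]
    return h, v

# Synthetic EDID: all zeros except the DTD bytes encoding 1920x1080.
edid = bytearray(128)
edid[DTD + 2] = 1920 & 0xFF       # horizontal active, low 8 bits
edid[DTD + 4] = (1920 >> 8) << 4  # horizontal active, high 4 bits in the upper nibble
edid[DTD + 5] = 1080 & 0xFF       # vertical active, low 8 bits
edid[DTD + 7] = (1080 >> 8) << 4  # vertical active, high 4 bits in the upper nibble
print(native_resolution(edid))    # (1920, 1080)
```

The same function works on the raw `EDID` registry value once it has been read as bytes.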
10,073,261 | I am developing a VB application in which i need to know the native resolution of the monitor and not the one set by the user(current resolution). So i need to read the EDID (extended display identification data) directly from the monitor.
I did try to find the resolution of monitor through some programs...but all it returns is the current resolution. Any help to read the info directly from EDID of monitor is appriciable.
Thanks in advance | 2012/04/09 | [
"https://Stackoverflow.com/questions/10073261",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1321866/"
] | After a lot of research i was able to fix my problem..
Thanks for the valuable info, Yahia.
First, we need to find the EDID data. The physical display information is in fact available to the OS, via Extended Display Identification Data (EDID). A copy of the EDID block is kept in the windows registry. But the problem was to obtain the correct EDID, as the registry has information stored about all the monitors which have been, at any point of time, attached to the system. So, first we use a WMI class “Win32_DesktopMonitor”, and through a simple SQL query grab the PNP device id to find a monitor that is available (not offline). We can then dig into the registry to find the data.

```
for monitor in wmiquery('Select * from Win32_DesktopMonitor'):
    regkey = ('HKLM\SYSTEM\CurrentControlSet\Enum\' +
              monitor.PNPDeviceID + '\Device Parameters\EDID')
    edid = get_regval(regkey)
```
Second, it is necessary to parse the data. The base EDID information of a display is conveyed within a 128-byte data structure that contains pertinent manufacturer and operation-related data. Most of this information is uninteresting to us.
To know the NATIVE resolution we need to start looking in the DTD ( Detailed timing descriptor ) which starts at byte = 54.
Following is the logic for finding the maximum resolution from the EDID
```
dtd = 54  # start byte of detailed timing desc.
horizontalRes = ((edid[dtd+4] >> 4) << 8) | edid[dtd+2]
verticalRes = ((edid[dtd+7] >> 4) << 8) | edid[dtd+5]
res = (horizontalRes, verticalRes)
```
The values obtained are Hex values which can be converted to Decimal to find the NATIVE RESOLUTION in pixels.
Thanks
Hope it helps
Sachin | Here is a complete solution for everything except for actually setting the resolution. It reads the native resolution out of the EDID of the active monitor:

```
Set WshShell = WScript.CreateObject("WScript.Shell")

Const HKEY_LOCAL_MACHINE = &H80000002
Const DTD_INDEX = 54

strComputer = "."
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
Set oReg = GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & strComputer & "\root\default:StdRegProv")

Set colItems = objWMIService.ExecQuery("Select * from Win32_DesktopMonitor",,48)

For Each objItem in colItems ' Gets the active monitor's EDID registry path
    strKeyPath = "SYSTEM\CurrentControlSet\Enum\" & objItem.PNPDeviceID & "\Device Parameters"
Next

oReg.GetBinaryValue HKEY_LOCAL_MACHINE, strKeyPath, "EDID", arrRawEDID

' Recombine each dimension: low byte plus the upper nibble shifted left 8 bits.
hor_resolution = arrRawEDID(DTD_INDEX + 2) + (arrRawEDID(DTD_INDEX + 4) And 240) * 16
vert_resolution = arrRawEDID(DTD_INDEX + 5) + (arrRawEDID(DTD_INDEX + 7) And 240) * 16

WshShell.Run "res.exe " & hor_resolution & " " & vert_resolution
``` |
45,272,138 | In `MyComponent`, I am trying to emit another event from an event handler. (This new event will be used by the parent component to take a few actions). I created an event emitter as a member of `MyComponent`, but the event handler method is not able to access the event emitter. It throws `ERROR TypeError: Cannot read property 'emit' of undefined`. I found some related questions on StackOverflow, but could not comprehend much due to being new to `Angular2`.
```
import { Component, Input, Output, OnChanges, SimpleChanges, OnInit, EventEmitter } from '@angular/core';
import YouTubePlayer from 'youtube-player';
@Component({
selector: 'app-my-component',
templateUrl: './my-component.component.html',
styleUrls: ['./my-component.component.css']
})
export class MyComponent implements OnChanges, OnInit {
@Input()
videoURL = '';
player : any;
videoId : any;
@Output()
myEmitter: EventEmitter<number> = new EventEmitter();
ngOnInit(): void {
this.player = YouTubePlayer('video-player', {
videoId: this.videoId,
width: "100%"
});
this.registerEvents();
}
private registerEvents() {
this.player.on("stateChange", this.onStateChangeEvent);
}
private onStateChangeEvent(event: any) {
console.log("reached here: " + event);
this.myEmitter.emit(1); //throws `ERROR TypeError: Cannot read property 'emit' of undefined`
}
}
```
Could someone help me out? Please note that I have to emit events only from `onStateChangeEvent`, because later I will have different types of event emitters for different types of events. So I will put a switch-case inside `onStateChangeEvent` and will use different emitters - one for each type. | 2017/07/24 | [
"https://Stackoverflow.com/questions/45272138",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1892348/"
] | >
> Cannot read property 'emit' of undefined
>
>
>
Commonly caused by a wrong `this` binding: the player invokes your method as a detached callback. Add the arrow lambda syntax `=>`, which captures `this` lexically.
Fix
```
private onStateChangeEvent = (event: any) => {
console.log("reached here: " + event);
this.myEmitter.emit(1); // now correct this
}
```
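The underlying JavaScript behaviour can be demonstrated outside Angular; in this sketch `Emitter` and the component are stand-ins (not the real Angular APIs), showing why the detached plain method loses `this` while the arrow property keeps it:

```javascript
class Emitter {
  emit(v) { console.log("emitted", v); }
}

class MyComponent {
  constructor() { this.myEmitter = new Emitter(); }

  // Plain method: `this` depends on how the callback is invoked.
  onStateChangeBroken(event) { return this.myEmitter; }

  // Arrow property: `this` is captured from the instance at creation.
  onStateChangeFixed = (event) => this.myEmitter;
}

const c = new MyComponent();
// Passing a method as a callback (like player.on("stateChange", cb)) detaches it:
const broken = c.onStateChangeBroken;
const fixed = c.onStateChangeFixed;

console.log(typeof fixed(1)); // → "object": this.myEmitter is still reachable
try {
  broken(1);
} catch (e) {
  console.log(e.constructor.name); // → "TypeError": `this` is undefined here
}
```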
More
====
<https://basarat.gitbooks.io/typescript/docs/arrow-functions.html> | For anyone that came here to find this: if Basarat's solution did not work, check to make sure that the line above the event emitter does not have any typos. I had an unnamed ViewChild above my undefined EventEmitter. |
20,905,925 | I have defined some custom constraints like that:
```
constraint(a,something).
constraint(a,something1).
constraint(a,something2).
```
and I need the logical conjunction of them all as a result.
(If one constraint fails, the result should fail.)
```
result(X) :-
constraint(X,something),
constraint(X,something1),
constraint(X,somethingElse).
```
I'm looking for a more convenient way to avoid this explicit coding of all constraints.
```
result(X) :- ????
``` | 2014/01/03 | [
"https://Stackoverflow.com/questions/20905925",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2468041/"
] | At some point, you need a predicate somewhere to actually list all the constraints you wish to apply. You could do something like this:
```
result(X) :-
constraints(X, [something, something1, something2]).
constraints(X, [H|T]) :-
constraint(X, H),
constraints(X, T).
constraints(_, []).
```
This mechanism allows you to generate the constraints dynamically as a list, if desired. You could also have the list of constraints be a fact:
```
constraint_list(a, [something, something1, something2]).
```
And then use that in the `result` predicate:
```
result(X) :-
constraint_list(X, List),
constraints(X, List).
``` | Consider using `maplist/2`:
```
all_true(X) :- maplist(constraint(X), [something, something1, something2]).
``` |
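For comparison only, the same conjunction-over-a-list idea can be sketched outside Prolog; this Python analogue (names mirror the question; nothing here is part of the original answers) fails as soon as one constraint in the list fails:

```python
# The constraint/2 facts from the question, as a set of tuples.
constraints = {
    ("a", "something"),
    ("a", "something1"),
    ("a", "something2"),
}

def constraint(x, c):
    return (x, c) in constraints

def result(x, required=("something", "something1", "something2")):
    # Logical conjunction over the list, like maplist(constraint(X), List).
    return all(constraint(x, c) for c in required)

print(result("a"))  # → True
print(result("b"))  # → False
```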
20,388,953 | I have an issue here: inside my modal window I have a DropDownList, as below
```
<div class="modal-body">
<div class="row">
<div class="col-lg-8">
<div class="form-horizontal" role="form">
<div class="form-group that">
<asp:Label ID="lblBrand" CssClass="col-sm-2 this" runat="server">Brand</asp:Label>
<div class="col-sm-5">
<asp:DropDownList BackColor="#FFFFFF" CssClass="ddl" runat="server" ID="dropDownListVendor" DataValueField="brandID" DataTextField="brandName" AutoPostBack="true">
<asp:ListItem Selected="True" Text="Choose a Brand..">
</asp:DropDownList>
```
**When I select an item in the DropDownList, the modal window dismisses automatically.
How can I prevent this?** | 2013/12/05 | [
"https://Stackoverflow.com/questions/20388953",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2412351/"
] | Partitioning is a rather general concept and can be applied in many contexts. When it considers the partitioning of relational **data**, it usually refers to decomposing your tables either row-wise (horizontally) or column-wise (vertically).
Vertical partitioning, aka row splitting, uses the same splitting techniques as database normalization, but usually the term (vertical / horizontal) data partitioning refers to a *physical optimization* whereas normalization is an optimization on the *conceptual* level.
Since you ask for a simple demonstration - assume you have a table like this:
```
create table data (
id integer primary key,
status char(1) not null,
data1 varchar2(10) not null,
data2 varchar2(10) not null);
```
One way to partition `data` **vertically**: Split it as follows:
```
create table data_main (
id integer primary key,
status char(1) not null,
data1 varchar2(10) not null );
create table data_rarely_used (
id integer primary key,
data2 varchar2(10) not null,
foreign key (id) references data_main (id) );
```
This kind of partitioning can be applied, for example, when you rarely need column data2 in your queries. Partition data\_main will take less space, hence full table scans will be faster and it is more likely that it fits into the DBMS' page cache. The downside: When you have to query all columns of `data`, you obviously have to join the tables, which will be more expensive than querying the original table.
Notice you are splitting the columns in the same way as you would when you normalize tables. However, in this case `data` could already be normalized to 3NF (and even BCNF and 4NF), but you decide to further split it for the reason of physical optimization.
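To see the trade-off concretely, the vertical split above can be reproduced in SQLite (a sketch with made-up row values; the Python driver is used only as a convenient harness and is not part of the original answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The two vertical partitions from the example above.
cur.execute("""create table data_main (
                 id integer primary key,
                 status char(1) not null,
                 data1 varchar2(10) not null)""")
cur.execute("""create table data_rarely_used (
                 id integer primary key,
                 data2 varchar2(10) not null,
                 foreign key (id) references data_main (id))""")

cur.execute("insert into data_main values (1, 'A', 'hot')")
cur.execute("insert into data_rarely_used values (1, 'cold')")

# Queries touching only the frequent columns stay on the small partition...
cur.execute("select status, data1 from data_main where id = 1")
print(cur.fetchone())  # → ('A', 'hot')

# ...while reading all columns now requires a join.
cur.execute("""select m.id, m.status, m.data1, r.data2
               from data_main m join data_rarely_used r on r.id = m.id""")
print(cur.fetchone())  # → (1, 'A', 'hot', 'cold')
```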
One way to partition `data` **horizontally**, using Oracle syntax:
```
create table data (
id integer primary key,
status char(1),
data1 varchar2(10),
data2 varchar2(10) )
partition by list (status) (
partition active_data values ( 'A' ),
partition other_data values(default)
);
```
This would tell the DBMS to internally store the table `data` in two segments (like two tables), depending on the value of the column `status`. This way of partitioning `data` can be applied, for example, when you usually query only rows of one partition, e.g., the status 'A' rows (let's call them active rows). Like before, full scans will be faster (particularly if there are only a few active rows), the active rows (and the other rows resp.) are stored contiguously (they won't be scattered around pages that they share with rows of a different status value), and it is more likely that the active rows will be in the page cache. | The difference between Normalization and splitting lies in the purpose of doing so.
The main purpose of Normalization is to remove redundant data, whereas the purpose of row splitting is to separate less frequently required data.
E.g.: Suppose you have a table All\_Details with columns id, Emp\_name, Emp\_address, Emp\_phNo, Emp\_other\_data, Company\_Name, Company\_Address, Company\_revenue.
Now if you want to normalize the table, you would create two new tables, Employee\_Details and Company\_Details, and keep a foreign key of company\_id in table Employee\_Details. This way the redundant company data will be removed.
Now let's talk about row splitting. Say even after normalization you are only accessing employee\_name and emp\_phNo, but you are not accessing emp\_address and emp\_other\_data so frequently. So to improve performance you split the Employee\_Details table into two tables: table1 containing the frequently needed data (employee\_name and emp\_phNo) and table2 containing the less frequently needed data (Emp\_address, Emp\_other\_data). Both tables will have the same unique\_key column so that you can recreate any row of table Employee\_Details with unique\_key. This can improve your system performance drastically. |
20,388,953 | I have an issue here: inside my modal window I have a DropDownList, as below
```
<div class="modal-body">
<div class="row">
<div class="col-lg-8">
<div class="form-horizontal" role="form">
<div class="form-group that">
<asp:Label ID="lblBrand" CssClass="col-sm-2 this" runat="server">Brand</asp:Label>
<div class="col-sm-5">
<asp:DropDownList BackColor="#FFFFFF" CssClass="ddl" runat="server" ID="dropDownListVendor" DataValueField="brandID" DataTextField="brandName" AutoPostBack="true">
<asp:ListItem Selected="True" Text="Choose a Brand..">
</asp:DropDownList>
```
**When I select an item in the DropDownList, the modal window dismisses automatically.
How can I prevent this?** | 2013/12/05 | [
"https://Stackoverflow.com/questions/20388953",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2412351/"
] | Partitioning is a rather general concept and can be applied in many contexts. When it considers the partitioning of relational **data**, it usually refers to decomposing your tables either row-wise (horizontally) or column-wise (vertically).
Vertical partitioning, aka row splitting, uses the same splitting techniques as database normalization, but usually the term (vertical / horizontal) data partitioning refers to a *physical optimization* whereas normalization is an optimization on the *conceptual* level.
Since you ask for a simple demonstration - assume you have a table like this:
```
create table data (
id integer primary key,
status char(1) not null,
data1 varchar2(10) not null,
data2 varchar2(10) not null);
```
One way to partition `data` **vertically**: Split it as follows:
```
create table data_main (
id integer primary key,
status char(1) not null,
data1 varchar2(10) not null );
create table data_rarely_used (
id integer primary key,
data2 varchar2(10) not null,
foreign key (id) references data_main (id) );
```
This kind of partitioning can be applied, for example, when you rarely need column data2 in your queries. Partition data\_main will take less space, hence full table scans will be faster and it is more likely that it fits into the DBMS' page cache. The downside: When you have to query all columns of `data`, you obviously have to join the tables, which will be more expensive than querying the original table.
Notice you are splitting the columns in the same way as you would when you normalize tables. However, in this case `data` could already be normalized to 3NF (and even BCNF and 4NF), but you decide to further split it for the reason of physical optimization.
One way to partition `data` **horizontally**, using Oracle syntax:
```
create table data (
id integer primary key,
status char(1),
data1 varchar2(10),
data2 varchar2(10) )
partition by list (status) (
partition active_data values ( 'A' ),
partition other_data values(default)
);
```
This would tell the DBMS to internally store the table `data` in two segments (like two tables), depending on the value of the column `status`. This way of partitioning `data` can be applied, for example, when you usually query only rows of one partition, e.g., the status 'A' rows (let's call them active rows). Like before, full scans will be faster (particularly if there are only a few active rows), the active rows (and the other rows resp.) are stored contiguously (they won't be scattered around pages that they share with rows of a different status value), and it is more likely that the active rows will be in the page cache. | The problems with a single database arise when it starts getting huge, so it must be partitioned to reduce the search space and execute the required actions faster. There are various partitioning strategies available, e.g. horizontal partitioning, vertical partitioning, hash-based partitioning, and lookup-based partitioning. Horizontal/vertical scaling is a different concept from these strategies.
1. **Horizontal partitioning**: It splits a given table/collection into multiple tables/collections based on some key information, which helps in locating the right table; horizontal partitioning will have multiple tables on different nodes/machines. E.g.: region-wise user information.
2. **Vertical partitioning**: It divides columns into multiple parts, as mentioned in one of the above answers. E.g.: columns related to user info, likes, comments, friends etc. in a social networking application.
3. **Hash-based partitioning**: It uses a hash function to decide the table/node, taking key elements as input when generating the hash. If we change the number of tables, it requires a rearrangement of data, which is costly. So there is a problem when you want to add more tables/nodes.
4. **Lookup-based partitioning**: It uses a lookup table which helps in redirecting to different tables/nodes based on the given input fields. We can easily add a new table/node in this approach.
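To make the hash-based rearrangement cost from point 3 concrete, here is a small sketch (the key names and node counts are made up for illustration; this is not from the original answer):

```python
import zlib

def node_for(key: str, n_nodes: int) -> int:
    # Hash-based partitioning: a stable hash of the key picks the node.
    return zlib.crc32(key.encode()) % n_nodes

keys = [f"user{i}" for i in range(1000)]
before = {k: node_for(k, 4) for k in keys}  # 4 nodes
after = {k: node_for(k, 5) for k in keys}   # add a 5th node

# With plain modulo hashing, most keys change node when n_nodes changes,
# which is exactly the costly rearrangement described above.
moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved} of {len(keys)} keys must move to a different node")
```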
**Horizontal scaling vs vertical scaling**:
When we design any application, we need to think of scaling as well: how are we going to handle a huge amount of traffic in the future? We need to think in terms of memory consumption, latency, CPU usage, fault tolerance, and resiliency. Vertical scaling adds more resources, e.g. CPU and memory, to a single machine so that it can handle the incoming traffic; but there are limitations with this approach, as you can't add more resources than a certain limit. Horizontal scaling allows incoming traffic to be distributed across multiple nodes. It needs a load balancer in front, which handles the traffic and routes it to any one node. Horizontal scaling allows you to add as many servers as you need, but you would also need that many nodes. |
20,388,953 | I have an issue here: inside my modal window I have a DropDownList, as below
```
<div class="modal-body">
<div class="row">
<div class="col-lg-8">
<div class="form-horizontal" role="form">
<div class="form-group that">
<asp:Label ID="lblBrand" CssClass="col-sm-2 this" runat="server">Brand</asp:Label>
<div class="col-sm-5">
<asp:DropDownList BackColor="#FFFFFF" CssClass="ddl" runat="server" ID="dropDownListVendor" DataValueField="brandID" DataTextField="brandName" AutoPostBack="true">
<asp:ListItem Selected="True" Text="Choose a Brand..">
</asp:DropDownList>
```
**When I select an item in the DropDownList, the modal window dismisses automatically.
How can I prevent this?** | 2013/12/05 | [
"https://Stackoverflow.com/questions/20388953",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2412351/"
] | **Horizontal Partitioning in data base**
----------------------------------------
Keeping all the fields. E.g., table `Employees` has:
* id,
* name,
* Geographical location ,
* email,
* designation,
* phone
E.g. 1: Keeping all the fields and distributing records across multiple machines, say id = 1-100000 or 100000-200000 records in one machine each, distributing over multiple machines.
E.g. 2: Keeping separate databases for regions, e.g. Asia Pacific, North America.
**Key: Picking a set of rows based on a criterion**
**Vertical Partitioning in data base**
--------------------------------------
It is similar to Normalization, where the same table is divided into multiple tables and used with joins if required.
EG:
`id`, `name`, `designation` is put in one table and
`phone` , `email` which may not be frequently accessed are put in another.
**Key:Picking set of columns based on a criteria.**
* ***Horizontal/Vertical Scaling is different from partitioning***
**Horizontal Scaling:**
-----------------------
is about **adding more machines** to enable improved responsiveness and availability of any system, including the database. The idea is to distribute the workload to multiple machines.
**Vertical Scaling:**
---------------------
is about adding more capability, in the form of CPU and memory, to an existing machine or machines to enable improved responsiveness and availability of any system, including the database. In a virtual machine setup it can be configured virtually instead of adding real physical machines.
Sameer Sukumaran | The difference between Normalization and splitting lies in the purpose of doing so.
The main purpose of Normalization is to remove redundant data, whereas the purpose of row splitting is to separate less frequently required data.
E.g.: Suppose you have a table All\_Details with columns id, Emp\_name, Emp\_address, Emp\_phNo, Emp\_other\_data, Company\_Name, Company\_Address, Company\_revenue.
Now if you want to normalize the table, you would create two new tables, Employee\_Details and Company\_Details, and keep a foreign key of company\_id in table Employee\_Details. This way the redundant company data will be removed.
Now let's talk about row splitting. Say even after normalization you are only accessing employee\_name and emp\_phNo, but you are not accessing emp\_address and emp\_other\_data so frequently. So to improve performance you split the Employee\_Details table into two tables: table1 containing the frequently needed data (employee\_name and emp\_phNo) and table2 containing the less frequently needed data (Emp\_address, Emp\_other\_data). Both tables will have the same unique\_key column so that you can recreate any row of table Employee\_Details with unique\_key. This can improve your system performance drastically. |
20,388,953 | I have an issue here: inside my modal window I have a DropDownList, as below
```
<div class="modal-body">
<div class="row">
<div class="col-lg-8">
<div class="form-horizontal" role="form">
<div class="form-group that">
<asp:Label ID="lblBrand" CssClass="col-sm-2 this" runat="server">Brand</asp:Label>
<div class="col-sm-5">
<asp:DropDownList BackColor="#FFFFFF" CssClass="ddl" runat="server" ID="dropDownListVendor" DataValueField="brandID" DataTextField="brandName" AutoPostBack="true">
<asp:ListItem Selected="True" Text="Choose a Brand..">
</asp:DropDownList>
```
**When I select an item in the DropDownList, the modal window dismisses automatically.
How can I prevent this?** | 2013/12/05 | [
"https://Stackoverflow.com/questions/20388953",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2412351/"
] | **Horizontal Partitioning in data base**
----------------------------------------
Keeping all the fields. E.g., table `Employees` has:
* id,
* name,
* Geographical location ,
* email,
* designation,
* phone
E.g. 1: Keeping all the fields and distributing records across multiple machines, say id = 1-100000 or 100000-200000 records in one machine each, distributing over multiple machines.
E.g. 2: Keeping separate databases for regions, e.g. Asia Pacific, North America.
**Key: Picking a set of rows based on a criterion**
**Vertical Partitioning in data base**
--------------------------------------
It is similar to Normalization, where the same table is divided into multiple tables and used with joins if required.
EG:
`id`, `name`, `designation` is put in one table and
`phone` , `email` which may not be frequently accessed are put in another.
**Key:Picking set of columns based on a criteria.**
* ***Horizontal/Vertical Scaling is different from partitioning***
**Horizontal Scaling:**
-----------------------
is about **adding more machines** to enable improved responsiveness and availability of any system, including the database. The idea is to distribute the workload to multiple machines.
**Vertical Scaling:**
---------------------
is about adding more capability, in the form of CPU and memory, to an existing machine or machines to enable improved responsiveness and availability of any system, including the database. In a virtual machine setup it can be configured virtually instead of adding real physical machines.
Sameer Sukumaran | The problems with a single database arise when it starts getting huge, so it must be partitioned to reduce the search space and execute the required actions faster. There are various partitioning strategies available, e.g. horizontal partitioning, vertical partitioning, hash-based partitioning, and lookup-based partitioning. Horizontal/vertical scaling is a different concept from these strategies.
1. **Horizontal partitioning**: It splits a given table/collection into multiple tables/collections based on some key information, which helps in locating the right table; horizontal partitioning will have multiple tables on different nodes/machines. E.g.: region-wise user information.
2. **Vertical partitioning**: It divides columns into multiple parts, as mentioned in one of the above answers. E.g.: columns related to user info, likes, comments, friends etc. in a social networking application.
3. **Hash-based partitioning**: It uses a hash function to decide the table/node, taking key elements as input when generating the hash. If we change the number of tables, it requires a rearrangement of data, which is costly. So there is a problem when you want to add more tables/nodes.
4. **Lookup-based partitioning**: It uses a lookup table which helps in redirecting to different tables/nodes based on the given input fields. We can easily add a new table/node in this approach.
**Horizontal scaling vs vertical scaling**:
When we design any application, we need to think of scaling as well: how are we going to handle a huge amount of traffic in the future? We need to think in terms of memory consumption, latency, CPU usage, fault tolerance, and resiliency. Vertical scaling adds more resources, e.g. CPU and memory, to a single machine so that it can handle the incoming traffic; but there are limitations with this approach, as you can't add more resources than a certain limit. Horizontal scaling allows incoming traffic to be distributed across multiple nodes. It needs a load balancer in front, which handles the traffic and routes it to any one node. Horizontal scaling allows you to add as many servers as you need, but you would also need that many nodes. |
20,388,953 | I have an issue here: inside my modal window I have a DropDownList, as below
```
<div class="modal-body">
<div class="row">
<div class="col-lg-8">
<div class="form-horizontal" role="form">
<div class="form-group that">
<asp:Label ID="lblBrand" CssClass="col-sm-2 this" runat="server">Brand</asp:Label>
<div class="col-sm-5">
<asp:DropDownList BackColor="#FFFFFF" CssClass="ddl" runat="server" ID="dropDownListVendor" DataValueField="brandID" DataTextField="brandName" AutoPostBack="true">
<asp:ListItem Selected="True" Text="Choose a Brand..">
</asp:DropDownList>
```
**When I select an item in the DropDownList, the modal window dismisses automatically.
How can I prevent this?** | 2013/12/05 | [
"https://Stackoverflow.com/questions/20388953",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2412351/"
] | The problems with a single database arise when it starts getting huge, so it must be partitioned to reduce the search space and execute the required actions faster. There are various partitioning strategies available, e.g. horizontal partitioning, vertical partitioning, hash-based partitioning, and lookup-based partitioning. Horizontal/vertical scaling is a different concept from these strategies.
1. **Horizontal partitioning**: It splits a given table/collection into multiple tables/collections based on some key information, which helps in locating the right table; horizontal partitioning will have multiple tables on different nodes/machines. E.g.: region-wise user information.
2. **Vertical partitioning**: It divides columns into multiple parts, as mentioned in one of the above answers. E.g.: columns related to user info, likes, comments, friends etc. in a social networking application.
3. **Hash-based partitioning**: It uses a hash function to decide the table/node, taking key elements as input when generating the hash. If we change the number of tables, it requires a rearrangement of data, which is costly. So there is a problem when you want to add more tables/nodes.
4. **Lookup-based partitioning**: It uses a lookup table which helps in redirecting to different tables/nodes based on the given input fields. We can easily add a new table/node in this approach.
**Horizontal scaling vs vertical scaling**:
When we design any application, we need to think of scaling as well: how are we going to handle a huge amount of traffic in the future? We need to think in terms of memory consumption, latency, CPU usage, fault tolerance, and resiliency. Vertical scaling adds more resources, e.g. CPU and memory, to a single machine so that it can handle the incoming traffic; but there are limitations with this approach, as you can't add more resources than a certain limit. Horizontal scaling allows incoming traffic to be distributed across multiple nodes. It needs a load balancer in front, which handles the traffic and routes it to any one node. Horizontal scaling allows you to add as many servers as you need, but you would also need that many nodes. | The difference between Normalization and splitting lies in the purpose of doing so.
The main purpose of Normalization is to remove redundant data, whereas the purpose of row splitting is to separate less frequently required data.
E.g.: Suppose you have a table All\_Details with columns id, Emp\_name, Emp\_address, Emp\_phNo, Emp\_other\_data, Company\_Name, Company\_Address, Company\_revenue.
Now if you want to normalize the table, you would create two new tables, Employee\_Details and Company\_Details, and keep a foreign key of company\_id in table Employee\_Details. This way the redundant company data will be removed.
Now let's talk about row splitting. Say even after normalization you are only accessing employee\_name and emp\_phNo, but you are not accessing emp\_address and emp\_other\_data so frequently. So to improve performance you split the Employee\_Details table into two tables: table1 containing the frequently needed data (employee\_name and emp\_phNo) and table2 containing the less frequently needed data (Emp\_address, Emp\_other\_data). Both tables will have the same unique\_key column so that you can recreate any row of table Employee\_Details with unique\_key. This can improve your system performance drastically. |
21,588,692 | I'm new to Assembly Programming (x86), and cannot figure out where I am going wrong in my program. After I redisplay the value that was moved into the array, I then want to display the current 'SUM'. I thought that by using the 'ebx' register, since it is used nowhere else in the program except in Loop2, the value would not be overwritten and thus each 'add' statement would add the new array position value to my 'SUM'.
Can anyone spot what I'm doing wrong?

```
INCLUDE Irvine32.inc
COUNT = 3
.data
inputMsg BYTE "Input an integer: ", 0
outputMsg BYTE "Redisplaying the integers: ", 0dh, 0ah, 0
sumMsg BYTE " Sum is now: ", 0
strArray SDWORD COUNT DUP(?)
.code
main PROC
; Read Integers from User
mov ebx, 0
mov ecx, COUNT
mov edx, OFFSET inputMsg
mov esi, OFFSET strArray
L1: call WriteString ; Display Prompt
call ReadInt ; Read input from user
mov [esi], eax ; Store value into array
add esi, TYPE strArray ; Move to next array position
loop L1
call Crlf
; Redisplay the integers
mov edx, OFFSET outputMsg ; Display 'outputMsg'
call WriteString
mov ecx, COUNT
mov esi, OFFSET strArray
L2: mov ebx, 0 ; Initialize ebx to 0
mov eax, [esi] ; Get integer from array
call WriteInt ; Display integer
mov edx, OFFSET sumMsg ; Display value of 'sumMsg'
call WriteString
; mov eax, ebx
add ebx, [esi]
mov eax, ebx ; <---- MOVED from above add ebx, [esi]
call WriteInt
call Crlf
add esi, TYPE strArray ; Move to next array position
loop L2
exit
main ENDP
END main
``` | 2014/02/05 | [
"https://Stackoverflow.com/questions/21588692",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1956099/"
] | Do this: (taken from: [How to create a passsword protected pdf file](https://stackoverflow.com/questions/2545804/how-to-create-a-passsword-protected-pdf-file))
<http://www.idsecuritysuite.com/blog/wp-content/uploads/fpdi.zip>
```
<?php
function pdfEncrypt ($origFile, $password, $destFile){
require_once('FPDI_Protection.php');
$pdf = new FPDI_Protection(); // '=& new' was removed in PHP 7; plain assignment works
$pdf->FPDF('P', 'in');
//Calculate the number of pages from the original document.
$pagecount = $pdf->setSourceFile($origFile);
//Copy all pages from the old unprotected pdf in the new one.
for ($loop = 1; $loop <= $pagecount; $loop++) {
$tplidx = $pdf->importPage($loop);
$pdf->addPage();
$pdf->useTemplate($tplidx);
}
//Protect the new pdf file, and allow no printing, copy, etc. and
//leave only reading allowed.
$pdf->SetProtection(array(), $password);
$pdf->Output($destFile, 'F');
return $destFile;
}
//Password for the PDF file (I suggest using the email adress of the purchaser).
$password = "testpassword";
//Name of the original file (unprotected).
$origFile = "sample.pdf";
//Name of the destination file (password protected and printing rights removed).
$destFile ="sample_protected.pdf";
//Encrypt the book and create the protected file.
pdfEncrypt($origFile, $password, $destFile );
?>
``` | It's not a very good idea because in that way you should:
1. Store a key or password inside the generated PDF file, which is obviously not reliable;
2. OR: send the typed key to some server, but the PDF format doesn't allow that;
3. OR: implement your own PDF Reader with support for some custom non-standard extensions, which is also not the best decision.
I think it's better to do some web-based document viewer with [encryption and keys](http://knowyourmeme.com/memes/im-going-to-build-my-own-theme-park-with-blackjack-and-hookers) or something else.
UPD: proprietary [PDFlib](http://www.pdflib.com/knowledge-base/pdf-security/encryption/) allows a compatible encryption
UPD2: open-source [iSafePDF](http://isafepdf.eurekaa.org/) allows it too |
2,386,101 | I define Intermediate Value Property (IVP) and Extreme Value Property (EVP) as follows:
* IVP: If $I$ is an interval, and $f:I\rightarrow\mathbb{R}$, we say that
$f$ has the intermediate value property iff whenever $a<b$ are points in $I$ and $f(a)\leq c\leq f(b)$, there is a $d\in(a,b)$ such that $f(d)=c$.
* EVP: If $I$ is an interval, and $f:I\rightarrow\mathbb{R}$, we say that $f$ has the extreme value property iff $f$ has maximum and minimum value, each at least once. That is, $\exists a,b\in I$ such that $f(a)\leq f(x) \leq f(b)$ for all $x\in I$.
My question is, does IVP imply EVP? I don't think so, but I still can't find the counter example. Cheers! | 2017/08/07 | [
"https://math.stackexchange.com/questions/2386101",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/389367/"
] | Non-explicit example: Let $f$ be any differentiable function on an interval $I$ with unbounded derivative. Then $f'$ has the IVP by [Darboux's theorem](https://en.wikipedia.org/wiki/Darboux%27s_theorem_(analysis)), but misses either a maximum or minimum. | On the interval $0 < x < 1$, consider $f(x) = x$. It has the intermediate value property, but not the extreme value property. |
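The "non-explicit example" via Darboux's theorem can be made concrete; the following is a standard construction and not part of the original answers:

```latex
Take $f(x) = x^{2}\sin\!\left(1/x^{2}\right)$ for $x \neq 0$ and $f(0) = 0$ on $I = [-1, 1]$.
Then $f$ is differentiable everywhere, with
\[
  f'(x) = 2x\sin\!\left(\frac{1}{x^{2}}\right) - \frac{2}{x}\cos\!\left(\frac{1}{x^{2}}\right)
  \quad (x \neq 0), \qquad f'(0) = 0,
\]
so $f'$ has the IVP on $I$ by Darboux's theorem. Yet at $x_{n} = 1/\sqrt{2\pi n}$ one gets
$f'(x_{n}) = -2\sqrt{2\pi n}$, so $f'$ is unbounded below, attains no minimum, and the EVP fails.
```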
2,386,101 | I define Intermediate Value Property (IVP) and Extreme Value Property (EVP) as follows:
* IVP: If $I$ is an interval, and $f:I\rightarrow\mathbb{R}$, we say that
$f$ has the intermediate value property iff whenever $a<b$ are points in $I$ and $f(a)\leq c\leq f(b)$, there is a $d\in(a,b)$ such that $f(d)=c$.
* EVP: If $I$ is an interval, and $f:I\rightarrow\mathbb{R}$, we say that $f$ has the extreme value property iff $f$ attains a maximum and a minimum value, each at least once. That is, $\exists a,b\in I$ such that $f(a)\leq f(x) \leq f(b)$ for all $x\in I$.
My question is, does IVP imply EVP? I don't think so, but I still can't find the counter example. Cheers! | 2017/08/07 | [
"https://math.stackexchange.com/questions/2386101",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/389367/"
] | Non-explicit example: Let $f$ be any differentiable function on an interval $I$ with unbounded derivative. Then $f'$ has the IVP by [Darboux's theorem](https://en.wikipedia.org/wiki/Darboux%27s_theorem_(analysis)), but misses either a maximum or minimum. | No, not at all.
Even if you tighten up your definition to avoid the simplest counterexamples, by borrowing from the IVT and EVT and replacing the requirement of continuity with the IVP and EVP respectively. For example:
>
> A function $f$ is said to have the IVP in $E$ if for every closed interval $[a,b]\subseteq E$ and for every $c$ between $f(a)$ and $f(b)$ there's a $\xi\in I$ such that $f(\xi) = c$.
>
>
>
This one rules out counterexamples where you just insert a discontinuity, since you would then normally have a jump and could find an interval where the IVP fails. Also, since $E$ is a subset of the domain of $f$, the function must be defined everywhere in the interval, making it harder to use unbounded functions.
However, there are functions that defeat such attempts at closing the loopholes. One way is to use an everywhere surjective function, i.e. a function whose range on every non-trivial interval is all of $\mathbb R$. One such function is:
$$\phi(x) = \begin{cases}
\lim\_{n\to\infty} \tan(n!x) & \text{ if the limit exists} \\
0 & \text{ otherwise}
\end{cases}$$
This trivially means that $\phi$ has the IVP (in fact it takes every value on every interval), but since it is unbounded on every interval, the function does not have the EVP (anywhere).
We can even make a bounded counterexample by composing such a function to produce an arbitrary range. For example, $f(x) = 1/(1+\phi(x)^2)$ has range $(0,1]$ on every non-trivial interval. It has the IVP and is bounded, yet it does not have the EVP, since $\inf\_I f(x)=0$ but $f(x)\ne 0$. |
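The last claim can be spelled out step by step (my own verification, using the answer's notation):

```latex
% On any non-trivial interval I, \phi takes every real value, so
% \phi(x)^2 takes every value in [0,\infty), and hence
f(x) = \frac{1}{1+\phi(x)^2}
\quad\text{takes every value in } (0,1] \text{ on } I.
% Therefore f has the IVP and is bounded: 0 < f(x) \le 1.  But
\inf_{x \in I} f(x) = 0
\quad\text{while}\quad
f(x) \neq 0 \ \text{for all } x \in I,
% so the infimum is never attained and f fails the EVP on every interval.
```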