qid int64 1 74.7M | question stringlengths 15 58.3k | date stringlengths 10 10 | metadata list | response_j stringlengths 4 30.2k | response_k stringlengths 11 36.5k |
|---|---|---|---|---|---|
647,156 | Can the following integral be computed?
 | 2014/01/22 | [
"https://math.stackexchange.com/questions/647156",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/121418/"
] | Hint: consider $$A\_n=\int\_{1/(n+1)}^{1/n}\{1/x\}^{4}dx.$$ This can be computed. Does the series $\sum\_{i=1}^\infty A\_i$ converge? | With some preliminary transformations (in the attachment) the integral is reduced to the integral of a polygamma function, which is known (I let WolframAlpha do the known part of the job).
 |
647,156 | Can the following integral be computed?
 | 2014/01/22 | [
"https://math.stackexchange.com/questions/647156",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/121418/"
] | With some preliminary transformations (in the attachment) the integral is reduced to the integral of a polygamma function, which is known (I let WolframAlpha do the known part of the job).
 | First, let's prove that it converges:
$$\int\_0^1\bigg\{\frac1x\bigg\}^4dx=\int\_1^\infty\frac{\{t\}^4}{t^2}dt\color{red}<\int\_1^\infty\frac{1^4}{t^2}dt=\bigg[-\frac1t\bigg]\_1^\infty=1,\qquad\text{since }0\le\{t\}<1.$$
---
$$\int\_0^1\bigg\{\frac1x\bigg\}^4dx=\int\_1^\infty\frac{\{t\}^4}{t^2}dt=\sum\_1^\infty\int\_k^{k+1}\frac{(t-k)^4}{t^2}dt=\sum\_1^\infty\int\_0^1\frac{u^4}{(u+k)^2}du=$$
$$=\sum\_1^\infty\bigg(\frac43-2k+4k^2-\frac1{k+1}+4k^3\ln\frac k{k+1}\bigg)=\ldots<1$$ |
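A quick numerical sanity check of the per-term closed form above (a Python sketch; the midpoint-rule integrator is my own scaffolding, the summand is the one derived in the answer):

```python
import math

def term_numeric(k, n=20000):
    # midpoint rule for A_k = integral of u^4 / (u + k)^2 over u in [0, 1]
    h = 1.0 / n
    return h * sum(((i + 0.5) * h) ** 4 / ((i + 0.5) * h + k) ** 2 for i in range(n))

def term_closed(k):
    # the summand from the answer: 4/3 - 2k + 4k^2 - 1/(k+1) + 4k^3 ln(k/(k+1))
    return 4/3 - 2*k + 4*k**2 - 1/(k + 1) + 4*k**3 * math.log(k / (k + 1))

# the closed form matches the direct integral term by term,
# and the partial sums stay below the bound of 1
partial = sum(term_closed(k) for k in range(1, 100))
```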
68,982,152 | I would like to check in KQL (Kusto Query Language) if a string starts with any prefix that is contained in a list.
Something like:
```
let MaxAge = ago(30d);
let prefix_list = pack_array(
'Mr',
'Ms',
'Mister',
'Miss'
);
| where Name startswith(prefix_list)
```
I know this example can be done with `startswith("Mr","Ms","Mister","Miss")` but this is not scalable. | 2021/08/30 | [
"https://Stackoverflow.com/questions/68982152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7845878/"
] | An *inefficient* but functional option would be to use `matches regex` - this can work well if the input data set is not too large:
```
let T = datatable(Name:string)
[
"hello" ,'world', "Mra", "Ms 2", "Miz", 'Missed'
]
;
let prefix_list = pack_array(
'Mr',
'Ms',
'Mister',
'Miss'
);
let prefix_regex = strcat("^(", strcat_array(prefix_list, ")|("), ")");
T
| where Name matches regex prefix_regex
```
| Name |
| --- |
| Mra |
| Ms 2 |
| Missed | | This function is not available in the Kusto query language; you are welcome to open a suggestion for it in the [user feedback form](http://aka.ms/kustouservoice) |
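For intuition, the regex that the `strcat` answer builds can be sketched in Python (illustrative only, not Kusto API); note that `^` binds only to the first alternative, so a fully anchored variant would be `^(?:Mr|Ms|Mister|Miss)`:

```python
import re

prefix_list = ["Mr", "Ms", "Mister", "Miss"]
# mirror of: strcat("^(", strcat_array(prefix_list, ")|("), ")")
prefix_regex = "^(" + ")|(".join(prefix_list) + ")"   # ^(Mr)|(Ms)|(Mister)|(Miss)

names = ["hello", "world", "Mra", "Ms 2", "Miz", "Missed"]
matched = [n for n in names if re.search(prefix_regex, n)]
# → ['Mra', 'Ms 2', 'Missed'], the same rows the Kusto query returns
```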
59,675,629 | **\* I tried to update a field of my database from my page, but when I try it doesn't update the element, it just deletes it. I don't know why the element gets destroyed instead of updated; how can I solve it?\***
* Below I will include the code that doesn't update the information and my `index function()` (because the code is inside of that function), as well as my model `Entrada` and my table `Entrada`.
**\* `index function` \***
```php
public function index(Request $request,$id_entrada,$id_venta_entrada,$id_costo,$correo)
{
$detalle_entrada=new Detalle_venta_entrada();
$detalle_entrada->precio=$id_costo;
$detalle_entrada->fk_venta_entrada=$id_venta_entrada;
$detalle_entrada->fk_entrada=$id_entrada;
$detalle_entrada->save();
$ventas=new Venta();
$ventas->monto_total=$id_costo;
$now = new \DateTime();
$ventas->fecha_venta=$now->format('d-m-Y');
$ventas->fk_cliente_natural=1;
$sub = DB::select(DB::raw("SELECT precio_entrada from entrada WHERE id_entrada='$id_entrada'"));
$subtotal = $sub[0]->precio_entrada;
$monto = DB::select(DB::raw("SELECT precio from detalle_venta_entrada WHERE fk_venta_entrada='$id_venta_entrada'"));
$monto_total = $monto[0]->precio;
$ventas->save();
$id = DB::select(DB::raw("SELECT Max(id_venta) as venta_id from venta"));
$id_venta = $id[0]->venta_id;
Entrada::where(['id_entrada'=>$id_entrada])->update(array(
'disponible'=>'true',
));
$detalle_venta_entrada = DB::select(DB::raw("SELECT id_detalle_entrada,precio,
(select numero_entrada from entrada where id_entrada='$id_entrada'),
(select fecha from venta_entrada where id_venta_entrada='$id_venta_entrada'),
(select monto_total from venta_entrada where id_venta_entrada='$id_venta_entrada' and fk_cliente_natural=1),
(select primer_nombre from cliente_natural where id_cliente_natural=1)
FROM detalle_venta_entrada WHERE fk_venta_entrada='$id_venta_entrada'and fk_entrada='$id_entrada'"));
return view ('home.misOrdenes')->with('detalle_venta_entrada',$detalle_venta_entrada)
->with('id_entrada',$id_entrada)
->with('id_venta_entrada',$id_venta_entrada)
->with('detalle_entrada',$detalle_entrada)
->with('subtotal',$subtotal)
->with('monto_total',$monto_total)
->with('id_venta',$id_venta)
->with('correo',$correo);
}
```
**\* code that doesn't update \***
```php
Entrada::where(['id_entrada'=>$id_entrada])->update(array(
'disponible'=>'true',
));
```
**\*This is my table `Entrada` \***
```sql
create table Entrada(
ID_Entrada serial,
Numero_Entrada integer not null,
Precio_Entrada real not null,
Disponible boolean not null,
FK_Evento integer not null,
constraint pk_ID_Entrada primary key (ID_Entrada),
constraint fk_FK_Evento_Entrada foreign key(FK_Evento) references Evento(ID_Evento) on delete cascade on update cascade
);
```
**\*Model `Entrada` \***
```php
class Entrada extends Model
{
protected $primaryKey = 'id_entrada';
protected $table = 'entrada';
public $incrementing = false;
protected $fillable = ['id_entrada','numero_entrada','precio_entrada','disponible','fk_evento'];
public $timestamps = false;
}
``` | 2020/01/10 | [
"https://Stackoverflow.com/questions/59675629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12506687/"
] | Change your code to the below and see if it works:
```
$detalle_entrada = new Detalle_venta_entrada();
$detalle_entrada->precio = $request->get('id_costo');
$detalle_entrada->fk_venta_entrada = $request->get('id_venta_entrada');
$detalle_entrada->fk_entrada = $request->get('id_entrada');
$detalle_entrada->save();
``` | ```
Entrada::where(['id_entrada'=>$id_entrada])->update(array(
'disponible'=>'true',
));
```
Try the code below:
```
$response = Entrada::where(['id_entrada'=>$id_entrada])->first();
$response->disponible = 'true';
$response->update();
``` |
59,675,629 | **\* I tried to update a field of my database from my page, but when I try it doesn't update the element, it just deletes it. I don't know why the element gets destroyed instead of updated; how can I solve it?\***
* Below I will include the code that doesn't update the information and my `index function()` (because the code is inside of that function), as well as my model `Entrada` and my table `Entrada`.
**\* `index function` \***
```php
public function index(Request $request,$id_entrada,$id_venta_entrada,$id_costo,$correo)
{
$detalle_entrada=new Detalle_venta_entrada();
$detalle_entrada->precio=$id_costo;
$detalle_entrada->fk_venta_entrada=$id_venta_entrada;
$detalle_entrada->fk_entrada=$id_entrada;
$detalle_entrada->save();
$ventas=new Venta();
$ventas->monto_total=$id_costo;
$now = new \DateTime();
$ventas->fecha_venta=$now->format('d-m-Y');
$ventas->fk_cliente_natural=1;
$sub = DB::select(DB::raw("SELECT precio_entrada from entrada WHERE id_entrada='$id_entrada'"));
$subtotal = $sub[0]->precio_entrada;
$monto = DB::select(DB::raw("SELECT precio from detalle_venta_entrada WHERE fk_venta_entrada='$id_venta_entrada'"));
$monto_total = $monto[0]->precio;
$ventas->save();
$id = DB::select(DB::raw("SELECT Max(id_venta) as venta_id from venta"));
$id_venta = $id[0]->venta_id;
Entrada::where(['id_entrada'=>$id_entrada])->update(array(
'disponible'=>'true',
));
$detalle_venta_entrada = DB::select(DB::raw("SELECT id_detalle_entrada,precio,
(select numero_entrada from entrada where id_entrada='$id_entrada'),
(select fecha from venta_entrada where id_venta_entrada='$id_venta_entrada'),
(select monto_total from venta_entrada where id_venta_entrada='$id_venta_entrada' and fk_cliente_natural=1),
(select primer_nombre from cliente_natural where id_cliente_natural=1)
FROM detalle_venta_entrada WHERE fk_venta_entrada='$id_venta_entrada'and fk_entrada='$id_entrada'"));
return view ('home.misOrdenes')->with('detalle_venta_entrada',$detalle_venta_entrada)
->with('id_entrada',$id_entrada)
->with('id_venta_entrada',$id_venta_entrada)
->with('detalle_entrada',$detalle_entrada)
->with('subtotal',$subtotal)
->with('monto_total',$monto_total)
->with('id_venta',$id_venta)
->with('correo',$correo);
}
```
**\* code that doesn't update \***
```php
Entrada::where(['id_entrada'=>$id_entrada])->update(array(
'disponible'=>'true',
));
```
**\*This is my table `Entrada` \***
```sql
create table Entrada(
ID_Entrada serial,
Numero_Entrada integer not null,
Precio_Entrada real not null,
Disponible boolean not null,
FK_Evento integer not null,
constraint pk_ID_Entrada primary key (ID_Entrada),
constraint fk_FK_Evento_Entrada foreign key(FK_Evento) references Evento(ID_Evento) on delete cascade on update cascade
);
```
**\*Model `Entrada` \***
```php
class Entrada extends Model
{
protected $primaryKey = 'id_entrada';
protected $table = 'entrada';
public $incrementing = false;
protected $fillable = ['id_entrada','numero_entrada','precio_entrada','disponible','fk_evento'];
public $timestamps = false;
}
``` | 2020/01/10 | [
"https://Stackoverflow.com/questions/59675629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12506687/"
] | I believe you have imported the "Entrada" model at the top of the class. If not, import it first, because you have explicitly created new instances for "Detalle\_venta\_entrada" and "Venta" but not for "Entrada".
Make sure the $id\_entrada attribute value is properly passed to this method.
Try the following code in tinker first:
```
php artisan tinker
```
`Entrada::where('id_entrada', $id_entrada)->first();` ([] is not needed when you have a single where condition, hence removed).
If you get the result, then update it.
Always have a condition before you update any data.
e.g.:
```
$entradaData = Entrada::where('id_entrada', $id_entrada)->first();
if(filled($entradaData))
{
Entrada::where('id_entrada', $id_entrada)->update(
['disponible'=>'true']
);
}
``` | ```
Entrada::where(['id_entrada'=>$id_entrada])->update(array(
'disponible'=>'true',
));
```
Try the code below:
```
$response = Entrada::where(['id_entrada'=>$id_entrada])->first();
$response->disponible = 'true';
$response->update();
``` |
59,675,629 | **\* I tried to update a field of my database from my page, but when I try it doesn't update the element, it just deletes it. I don't know why the element gets destroyed instead of updated; how can I solve it?\***
* Below I will include the code that doesn't update the information and my `index function()` (because the code is inside of that function), as well as my model `Entrada` and my table `Entrada`.
**\* `index function` \***
```php
public function index(Request $request,$id_entrada,$id_venta_entrada,$id_costo,$correo)
{
$detalle_entrada=new Detalle_venta_entrada();
$detalle_entrada->precio=$id_costo;
$detalle_entrada->fk_venta_entrada=$id_venta_entrada;
$detalle_entrada->fk_entrada=$id_entrada;
$detalle_entrada->save();
$ventas=new Venta();
$ventas->monto_total=$id_costo;
$now = new \DateTime();
$ventas->fecha_venta=$now->format('d-m-Y');
$ventas->fk_cliente_natural=1;
$sub = DB::select(DB::raw("SELECT precio_entrada from entrada WHERE id_entrada='$id_entrada'"));
$subtotal = $sub[0]->precio_entrada;
$monto = DB::select(DB::raw("SELECT precio from detalle_venta_entrada WHERE fk_venta_entrada='$id_venta_entrada'"));
$monto_total = $monto[0]->precio;
$ventas->save();
$id = DB::select(DB::raw("SELECT Max(id_venta) as venta_id from venta"));
$id_venta = $id[0]->venta_id;
Entrada::where(['id_entrada'=>$id_entrada])->update(array(
'disponible'=>'true',
));
$detalle_venta_entrada = DB::select(DB::raw("SELECT id_detalle_entrada,precio,
(select numero_entrada from entrada where id_entrada='$id_entrada'),
(select fecha from venta_entrada where id_venta_entrada='$id_venta_entrada'),
(select monto_total from venta_entrada where id_venta_entrada='$id_venta_entrada' and fk_cliente_natural=1),
(select primer_nombre from cliente_natural where id_cliente_natural=1)
FROM detalle_venta_entrada WHERE fk_venta_entrada='$id_venta_entrada'and fk_entrada='$id_entrada'"));
return view ('home.misOrdenes')->with('detalle_venta_entrada',$detalle_venta_entrada)
->with('id_entrada',$id_entrada)
->with('id_venta_entrada',$id_venta_entrada)
->with('detalle_entrada',$detalle_entrada)
->with('subtotal',$subtotal)
->with('monto_total',$monto_total)
->with('id_venta',$id_venta)
->with('correo',$correo);
}
```
**\* code that doesn't update \***
```php
Entrada::where(['id_entrada'=>$id_entrada])->update(array(
'disponible'=>'true',
));
```
**\*This is my table `Entrada` \***
```sql
create table Entrada(
ID_Entrada serial,
Numero_Entrada integer not null,
Precio_Entrada real not null,
Disponible boolean not null,
FK_Evento integer not null,
constraint pk_ID_Entrada primary key (ID_Entrada),
constraint fk_FK_Evento_Entrada foreign key(FK_Evento) references Evento(ID_Evento) on delete cascade on update cascade
);
```
**\*Model `Entrada` \***
```php
class Entrada extends Model
{
protected $primaryKey = 'id_entrada';
protected $table = 'entrada';
public $incrementing = false;
protected $fillable = ['id_entrada','numero_entrada','precio_entrada','disponible','fk_evento'];
public $timestamps = false;
}
``` | 2020/01/10 | [
"https://Stackoverflow.com/questions/59675629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12506687/"
] | The code works; the thing was I didn't notice that when I update the Entrada it gets sent to the bottom of the list in my DB, so I thought it didn't work, because I have a lot of records and the update changes the position of the updated row | ```
Entrada::where(['id_entrada'=>$id_entrada])->update(array(
'disponible'=>'true',
));
```
Try the code below:
```
$response = Entrada::where(['id_entrada'=>$id_entrada])->first();
$response->disponible = 'true';
$response->update();
``` |
59,675,629 | **\* I tried to update a field of my database from my page, but when I try it doesn't update the element, it just deletes it. I don't know why the element gets destroyed instead of updated; how can I solve it?\***
* Below I will include the code that doesn't update the information and my `index function()` (because the code is inside of that function), as well as my model `Entrada` and my table `Entrada`.
**\* `index function` \***
```php
public function index(Request $request,$id_entrada,$id_venta_entrada,$id_costo,$correo)
{
$detalle_entrada=new Detalle_venta_entrada();
$detalle_entrada->precio=$id_costo;
$detalle_entrada->fk_venta_entrada=$id_venta_entrada;
$detalle_entrada->fk_entrada=$id_entrada;
$detalle_entrada->save();
$ventas=new Venta();
$ventas->monto_total=$id_costo;
$now = new \DateTime();
$ventas->fecha_venta=$now->format('d-m-Y');
$ventas->fk_cliente_natural=1;
$sub = DB::select(DB::raw("SELECT precio_entrada from entrada WHERE id_entrada='$id_entrada'"));
$subtotal = $sub[0]->precio_entrada;
$monto = DB::select(DB::raw("SELECT precio from detalle_venta_entrada WHERE fk_venta_entrada='$id_venta_entrada'"));
$monto_total = $monto[0]->precio;
$ventas->save();
$id = DB::select(DB::raw("SELECT Max(id_venta) as venta_id from venta"));
$id_venta = $id[0]->venta_id;
Entrada::where(['id_entrada'=>$id_entrada])->update(array(
'disponible'=>'true',
));
$detalle_venta_entrada = DB::select(DB::raw("SELECT id_detalle_entrada,precio,
(select numero_entrada from entrada where id_entrada='$id_entrada'),
(select fecha from venta_entrada where id_venta_entrada='$id_venta_entrada'),
(select monto_total from venta_entrada where id_venta_entrada='$id_venta_entrada' and fk_cliente_natural=1),
(select primer_nombre from cliente_natural where id_cliente_natural=1)
FROM detalle_venta_entrada WHERE fk_venta_entrada='$id_venta_entrada'and fk_entrada='$id_entrada'"));
return view ('home.misOrdenes')->with('detalle_venta_entrada',$detalle_venta_entrada)
->with('id_entrada',$id_entrada)
->with('id_venta_entrada',$id_venta_entrada)
->with('detalle_entrada',$detalle_entrada)
->with('subtotal',$subtotal)
->with('monto_total',$monto_total)
->with('id_venta',$id_venta)
->with('correo',$correo);
}
```
**\* code that doesn't update \***
```php
Entrada::where(['id_entrada'=>$id_entrada])->update(array(
'disponible'=>'true',
));
```
**\*This is my table `Entrada` \***
```sql
create table Entrada(
ID_Entrada serial,
Numero_Entrada integer not null,
Precio_Entrada real not null,
Disponible boolean not null,
FK_Evento integer not null,
constraint pk_ID_Entrada primary key (ID_Entrada),
constraint fk_FK_Evento_Entrada foreign key(FK_Evento) references Evento(ID_Evento) on delete cascade on update cascade
);
```
**\*Model `Entrada` \***
```php
class Entrada extends Model
{
protected $primaryKey = 'id_entrada';
protected $table = 'entrada';
public $incrementing = false;
protected $fillable = ['id_entrada','numero_entrada','precio_entrada','disponible','fk_evento'];
public $timestamps = false;
}
``` | 2020/01/10 | [
"https://Stackoverflow.com/questions/59675629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12506687/"
] | The code works; the thing was I didn't notice that when I update the Entrada it gets sent to the bottom of the list in my DB, so I thought it didn't work, because I have a lot of records and the update changes the position of the updated row | Change your code to the below and see if it works:
```
$detalle_entrada = new Detalle_venta_entrada();
$detalle_entrada->precio = $request->get('id_costo');
$detalle_entrada->fk_venta_entrada = $request->get('id_venta_entrada');
$detalle_entrada->fk_entrada = $request->get('id_entrada');
$detalle_entrada->save();
``` |
59,675,629 | **\* I tried to update a field of my database from my page, but when I try it doesn't update the element, it just deletes it. I don't know why the element gets destroyed instead of updated; how can I solve it?\***
* Below I will include the code that doesn't update the information and my `index function()` (because the code is inside of that function), as well as my model `Entrada` and my table `Entrada`.
**\* `index function` \***
```php
public function index(Request $request,$id_entrada,$id_venta_entrada,$id_costo,$correo)
{
$detalle_entrada=new Detalle_venta_entrada();
$detalle_entrada->precio=$id_costo;
$detalle_entrada->fk_venta_entrada=$id_venta_entrada;
$detalle_entrada->fk_entrada=$id_entrada;
$detalle_entrada->save();
$ventas=new Venta();
$ventas->monto_total=$id_costo;
$now = new \DateTime();
$ventas->fecha_venta=$now->format('d-m-Y');
$ventas->fk_cliente_natural=1;
$sub = DB::select(DB::raw("SELECT precio_entrada from entrada WHERE id_entrada='$id_entrada'"));
$subtotal = $sub[0]->precio_entrada;
$monto = DB::select(DB::raw("SELECT precio from detalle_venta_entrada WHERE fk_venta_entrada='$id_venta_entrada'"));
$monto_total = $monto[0]->precio;
$ventas->save();
$id = DB::select(DB::raw("SELECT Max(id_venta) as venta_id from venta"));
$id_venta = $id[0]->venta_id;
Entrada::where(['id_entrada'=>$id_entrada])->update(array(
'disponible'=>'true',
));
$detalle_venta_entrada = DB::select(DB::raw("SELECT id_detalle_entrada,precio,
(select numero_entrada from entrada where id_entrada='$id_entrada'),
(select fecha from venta_entrada where id_venta_entrada='$id_venta_entrada'),
(select monto_total from venta_entrada where id_venta_entrada='$id_venta_entrada' and fk_cliente_natural=1),
(select primer_nombre from cliente_natural where id_cliente_natural=1)
FROM detalle_venta_entrada WHERE fk_venta_entrada='$id_venta_entrada'and fk_entrada='$id_entrada'"));
return view ('home.misOrdenes')->with('detalle_venta_entrada',$detalle_venta_entrada)
->with('id_entrada',$id_entrada)
->with('id_venta_entrada',$id_venta_entrada)
->with('detalle_entrada',$detalle_entrada)
->with('subtotal',$subtotal)
->with('monto_total',$monto_total)
->with('id_venta',$id_venta)
->with('correo',$correo);
}
```
**\* code that doesn't update \***
```php
Entrada::where(['id_entrada'=>$id_entrada])->update(array(
'disponible'=>'true',
));
```
**\*This is my table `Entrada` \***
```sql
create table Entrada(
ID_Entrada serial,
Numero_Entrada integer not null,
Precio_Entrada real not null,
Disponible boolean not null,
FK_Evento integer not null,
constraint pk_ID_Entrada primary key (ID_Entrada),
constraint fk_FK_Evento_Entrada foreign key(FK_Evento) references Evento(ID_Evento) on delete cascade on update cascade
);
```
**\*Model `Entrada` \***
```php
class Entrada extends Model
{
protected $primaryKey = 'id_entrada';
protected $table = 'entrada';
public $incrementing = false;
protected $fillable = ['id_entrada','numero_entrada','precio_entrada','disponible','fk_evento'];
public $timestamps = false;
}
``` | 2020/01/10 | [
"https://Stackoverflow.com/questions/59675629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12506687/"
] | The code works; the thing was I didn't notice that when I update the Entrada it gets sent to the bottom of the list in my DB, so I thought it didn't work, because I have a lot of records and the update changes the position of the updated row | I believe you have imported the "Entrada" model at the top of the class. If not, import it first, because you have explicitly created new instances for "Detalle\_venta\_entrada" and "Venta" but not for "Entrada".
Make sure the $id\_entrada attribute value is properly passed to this method.
Try the following code in tinker first:
```
php artisan tinker
```
`Entrada::where('id_entrada', $id_entrada)->first();` ([] is not needed when you have a single where condition, hence removed).
If you get the result, then update it.
Always have a condition before you update any data.
e.g.:
```
$entradaData = Entrada::where('id_entrada', $id_entrada)->first();
if(filled($entradaData))
{
Entrada::where('id_entrada', $id_entrada)->update(
['disponible'=>'true']
);
}
``` |
25,382,690 | What I am trying to do is create a temp table in SQL Server 2008 that uses the column names from a result set.
For example: this is what I get from my result set:
```
Account weight zone
22      5      1
23      3      2
22      5      1
23      3      2
24      7      3
```
From this result set, the `Zone` column values should be converted to dynamic columns based on the zone count, such as:
```
Account weight zone 1 zone 2 zone 3
22      5      2
23      3             2
24      7                    1
```
Please help me? | 2014/08/19 | [
"https://Stackoverflow.com/questions/25382690",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2302158/"
] | You could use [PIVOT](http://technet.microsoft.com/en-us/library/ms177410(v=sql.105).aspx):
```
SELECT Account, Weight, [1] AS Zone1, [2] AS Zone2, [3] AS Zone3, [4] AS Zone4
FROM AccountWeight
PIVOT
(
COUNT(Zone) FOR Zone IN ([1], [2], [3], [4])
) AS ResultTable
ORDER BY Account
```
See [SQL Demo](http://rextester.com/CJJOY4431).
Also, you can find that interesting: [Efficiently convert rows to columns in sql server](https://stackoverflow.com/questions/15745042/efficiently-convert-rows-to-columns-in-sql-server) | ```
select account, weight,
sum(case when zone = 1 then 1 end) as zone1,
sum(case when zone = 2 then 1 end) as zone2,
sum(case when zone = 3 then 1 end) as zone3
from your_table
group by account, weight
``` |
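The conditional-aggregation answer above can be sanity-checked with Python's built-in `sqlite3` as a stand-in for SQL Server (same query shape; `sum(case when ...)` behaves identically):

```python
import sqlite3

# in-memory stand-in for the question's table
con = sqlite3.connect(":memory:")
con.execute("create table your_table (account int, weight int, zone int)")
con.executemany("insert into your_table values (?, ?, ?)",
                [(22, 5, 1), (23, 3, 2), (22, 5, 1), (23, 3, 2), (24, 7, 3)])

rows = con.execute("""
    select account, weight,
           sum(case when zone = 1 then 1 end) as zone1,
           sum(case when zone = 2 then 1 end) as zone2,
           sum(case when zone = 3 then 1 end) as zone3
    from your_table
    group by account, weight
    order by account
""").fetchall()
# → [(22, 5, 2, None, None), (23, 3, None, 2, None), (24, 7, None, None, 1)]
```

The `None` cells are the SQL `NULL`s produced when a `case` has no `else` branch; add `else 0` if zero counts are preferred.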
25,382,690 | What I am trying to do is create a temp table in SQL Server 2008 that uses the column names from a result set.
For example: this is what I get from my result set:
```
Account weight zone
22      5      1
23      3      2
22      5      1
23      3      2
24      7      3
```
From this result set, the `Zone` column values should be converted to dynamic columns based on the zone count, such as:
```
Account weight zone 1 zone 2 zone 3
22      5      2
23      3             2
24      7                    1
```
Please help me? | 2014/08/19 | [
"https://Stackoverflow.com/questions/25382690",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2302158/"
] | You could use [PIVOT](http://technet.microsoft.com/en-us/library/ms177410(v=sql.105).aspx):
```
SELECT Account, Weight, [1] AS Zone1, [2] AS Zone2, [3] AS Zone3, [4] AS Zone4
FROM AccountWeight
PIVOT
(
COUNT(Zone) FOR Zone IN ([1], [2], [3], [4])
) AS ResultTable
ORDER BY Account
```
See [SQL Demo](http://rextester.com/CJJOY4431).
Also, you can find that interesting: [Efficiently convert rows to columns in sql server](https://stackoverflow.com/questions/15745042/efficiently-convert-rows-to-columns-in-sql-server) | You can solve this problem by using dynamic sql query. Go through some very good posts on dynamic pivoting...
[SQL Server 2005 Pivot on Unknown Number of Columns](https://stackoverflow.com/questions/213702/sql-server-2005-pivot-on-unknown-number-of-columns)
[Pivot Table and Concatenate Columns](https://stackoverflow.com/questions/159456/pivot-table-and-concatenate-columns#159803) |
43,705,734 | I have a function that, given two integers A and B, returns the number of whole squares within the interval [A..B] (both ends included).
For example, given $A = 4 and $B = 17, the function should return 3, because there are three squares of integers in the interval [4..17], namely 4 = 2^2, 9 = 3^2 and 16 = 4^2.
How would I get the number of square numbers up to a given number? | 2017/04/30 | [
"https://Stackoverflow.com/questions/43705734",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7383720/"
] | This function loops through all integer numbers between `$start` and `$end`. If the square root of the number is equal to its integer part, then `$count` is increased by 1.
```
function countSquares($start, $end)
{
$count = 0;
for($n=$start;$n<=$end;$n++)
{
if(pow($n, 0.5) == intval(pow($n, 0.5)))
{
//echo "$n<br>";
$count++;
}
}
return $count;
}
echo countSquares(4, 17);
```
However this first function is quite slow when used with large numbers.
This other function does the job much more quickly, and the code is also much shorter.
The number of perfect squares between `$start` and `$end` is obtained by subtracting the number of perfect squares between `0` and `$start-1` from the number of perfect squares between `0` and `$end`. (I use `$start-1` because your interval includes the starting number.)
```
function countSquares2($start, $end)
{
return floor(pow($end, 0.5)) - floor(pow($start-1, 0.5));
}
echo countSquares2(1000000, 10000000);
``` | Or you could do it by using sqrt function:
```
function getSquaresInRange($a, $b){
return floor(sqrt($b)) - ceil(sqrt($a)) + 1;
}
``` |
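A Python cross-check of the closed form (using integer square roots, which sidestep the floating-point edge cases that `floor(pow($end, 0.5))` can hit for very large inputs):

```python
import math

def count_squares(a, b):
    # perfect squares <= b, minus perfect squares < a
    return math.isqrt(b) - math.isqrt(a - 1)

def count_squares_brute(a, b):
    # reference implementation: test every number in [a, b]
    return sum(1 for n in range(a, b + 1) if math.isqrt(n) ** 2 == n)

# count_squares(4, 17) → 3   (namely 4, 9, 16)
```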
62,328,378 | I get `Notice: Undefined index: i` during the first reload of page with cookies
```
if( (isset($_COOKIE["i"])) && !empty($_COOKIE["i"]) ){
setcookie("i",$_COOKIE["i"]+1);
}
else{
setcookie("i",1);
}
echo $_COOKIE["i"]; //here is the error
```
but after 2nd reload,it's OK. | 2020/06/11 | [
"https://Stackoverflow.com/questions/62328378",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13383185/"
] | The solution is to not use the `$_COOKIE` array, but a variable
```php
<?php
// Use a variable
$cookieValue = 1;
// Check the cookie
if ((isset($_COOKIE["i"])) && !empty($_COOKIE["i"])) {
$cookieValue = (int)$_COOKIE["i"] + 1;
}
// Push the cookie
setcookie("i", $cookieValue);
// Use the variable
echo $cookieValue;
``` | ```
else{
setcookie("i",1);
header("Refresh:0");
}
``` |
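The underlying cause in both answers is the same: `setcookie()` only emits a `Set-Cookie` header for the *next* request, while `$_COOKIE` still holds what the browser sent with the *current* one. A minimal Python mirror of the fixed flow (names are illustrative):

```python
def handle_request(cookies):
    # compute the value first, then both set the cookie and output it,
    # instead of reading $_COOKIE back in the same request
    if cookies.get("i"):
        value = int(cookies["i"]) + 1
    else:
        value = 1
    cookies["i"] = str(value)   # stands in for the outgoing Set-Cookie header
    return value                # stands in for: echo $cookieValue

jar = {}                        # the browser's cookie jar between requests
first, second = handle_request(jar), handle_request(jar)
# → first == 1 (no undefined-index notice on the first load), second == 2
```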
20,057,357 | I want to convert both of the following columns to integer (they were stored as text in the SQLite db) as soon as I select them.
```
string sql4 = "select seq, maxLen from abc where maxLen > 30";
```
I think it might be done like this (using cast):
```
string sql4 = "select cast( seq as int, maxLen as int) from abc where maxLen > 30";
```
Not sure if it's right as I seem to be getting a syntax error.
How would I also convert the text to double | 2013/11/18 | [
"https://Stackoverflow.com/questions/20057357",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2966216/"
] | You need to cast in the `where` clause, not where you are selecting it.
```
string sql4 = "select seq, maxLen from abc where CAST(maxLen as INTEGER) > 30";
```
Also, your current `cast` version will not work, since `CAST` works on a single field.
For your question:
>
> How would I also convert the text to double
>
>
>
cast it to REAL like:
```
CAST(maxLen as REAL)
``` | The syntax issue is that you're putting two casts in one cast clause. Try this:
```
string sql4 = "select cast(seq as int), cast(maxLen as int) " +
              "from abc where maxLen > 30";
```
Like Habib pointed out, you should typecast the where clause, otherwise the comparison isn't numeric but textual.
```
string sql4 = "select cast(seq as int), cast(maxLen as int) " +
              "from abc where cast(maxLen as int) > 30";
```
And also, casting to float is simple use float instead of int (or you can use REAL which is same datatype in SQLite)
```
cast(maxLen as float)
``` |
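The reason the cast belongs in the `where` clause can be demonstrated with Python's built-in `sqlite3` (table and column names follow the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table abc (seq text, maxLen text)")   # stored as text, as in the question
con.executemany("insert into abc values (?, ?)",
                [("1", "9"), ("2", "45"), ("3", "100")])

# without a cast, the TEXT-affinity column forces a text comparison,
# so '9' > '30' is true and '100' > '30' is false -- the classic gotcha
uncast = con.execute("select seq from abc where maxLen > 30").fetchall()

casted = con.execute(
    "select cast(seq as int), cast(maxLen as int) from abc "
    "where cast(maxLen as int) > 30").fetchall()
# → [(2, 45), (3, 100)]
```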
14,503,204 | What size and format should a WPF MenuItem Icon be to look right?
Right now I have
```
<ContextMenu>
<MenuItem Header="Camera">
<MenuItem.Icon>
<Image Source="images/camera.png" />
</MenuItem.Icon>
```
But in the menu, the icon spills over margin, which looks bad.
---
Frustratingly, the docs don't give this information. [System.Windows.Controls.MenuItem.Icon](http://msdn.microsoft.com/en-us/library/system.windows.controls.menuitem.icon.aspx) | 2013/01/24 | [
"https://Stackoverflow.com/questions/14503204",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/284795/"
] | I think that the right size of image would be `20px`.
Just specify the `Width` and the `Height` of your image:
Use this:
```
<MenuItem.Icon>
<Image Source="images/camera.png"
Width="20"
Height="20" />
</MenuItem.Icon>
...
``` | this is the code:
```
<MenuItem>
<MenuItem.Header>
<StackPanel>
<Image Width="20" Height="20" Source="PATH to your image" />
</StackPanel>
</MenuItem.Header>
</MenuItem>
```
You can also try stretch
[Image.Stretch Property](http://msdn.microsoft.com/en-us/library/system.windows.controls.image.stretch.aspx) |
68,700,679 | [](https://i.stack.imgur.com/eIXjq.png)
This happens a lot of the time and really makes ui look ugly. Is there a way to fix this? maybe condense it less? | 2021/08/08 | [
"https://Stackoverflow.com/questions/68700679",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12715927/"
] | Setting your DPI awareness to 1 should resolve your issue
```
from ctypes import windll
windll.shcore.SetProcessDpiAwareness(1)
``` | *This happens a lot of the time and really makes ui look ugly.*
According to [wiki.tcl-lang.org](https://wiki.tcl-lang.org/page/Alternative+Canvases)
>
> The Tk canvas lacks modern features such as antialiasing, and an alpha
> channel for transparency/translucency.
>
>
>
which is probably the cause of your elements being jagged. There are alternatives listed, but my understanding is that they are made for the `tcl` language, not `python`. |
1,119,860 | I've kept on reading a lot about this, but I'm pretty confused about what to go with. I have a desktop monitor which supports both `VGA` and `DVI`, but my laptop has `VGA` and `HDMI` ports. What would be the best option in order to get a digital output?
1) Having a HDMI to VGA convertor
2) Having a HDMI to DVI cable
3) Having a VGA to DVI cable
I wanted to know the best option in terms of display quality.
Any help could be appreciated. | 2016/09/01 | [
"https://superuser.com/questions/1119860",
"https://superuser.com",
"https://superuser.com/users/388149/"
] | In case anyone else finds this, there is a workaround... you just need to reformat the tunnel with a specific bind address like this:
```
ssh -L 127.0.0.1:8022:173.22.0.1:22 username@172.11.0.1
```
From reading through the bug listing linked in the other answer, it looks like the issue is in the IPv6 subsystem, so I'm guessing this works by forcing IPv4.
Either way it works for me, using a fully updated Win 10 version 1607 install as of Jan 20, 2017. | It's a known bug and it's tracked here <https://github.com/Microsoft/BashOnWindows/issues/739>
As an alternative you can try using something like <http://sshwindows.sourceforge.net/> |
21,640,357 | I have a data entry form broken into multiple steps. Each step is on its own div. When filling out the form, I would like to have a nice transition between each step where the current step/div fades out, the height of the form is adjusted to the correct height for the next step/div and then the new step/div fades in.
This would work something like "[Lightbox](http://lokeshdhakar.com/projects/lightbox2/)" but would not be in a modal popup.
Also similar to this [Sliding Form](http://tympanus.net/Tutorials/FancySlidingForm/) but there will only be "Next" and "Previous" buttons on the bottom and not a tab for each step.
Does a JavaScript library like this already exist, or is my best option to set each div to style="display: none;" and combine .slideDown() and .fadeIn()? | 2014/02/08 | [
"https://Stackoverflow.com/questions/21640357",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/47226/"
] | If you want `Next` and `Previous` buttons, [here](http://www.jquery-steps.com/Examples#basic-form) is a very good demo.
And here is the [GitHub](https://github.com/rstaib/jquery-steps) link for the above `jquery` plugin | I have used [jQuery Steps](http://www.jquery-steps.com/) with a lot of success
It is extremely polished, provides validation per step, and is very customizable. |
33,178,865 | I know this has been asked a 1000 times and I think I looked through all of them.
I have scheduled tasks running PowerShell Scripts on other servers already, but not on this server. Which has me scratching my head as to why I can't get it to work on this server.
I have a powershell script on a Windows 2008 R2 server. I can run it manually and it all works perfectly, but when I try to run it from a scheduled task the History says it was run, but the PowerShell script does not execute.
PSRemoting is enabled
The server ExecutionPolicy is "RemoteSigned"
I get two entries in the History
1. Action completed
Task Scheduler successfully completed task "\Processing" , instance "{dbbd4924-42d6-4024-a8ed-77494c7f84cf}" , action "C:\Windows\System32\WindowsPowerShell\v1.0\PowerShell.EXE" with return code 0.
2. Task completed
Task Scheduler successfully finished "{dbbd4924-42d6-4024-a8ed-77494c7f84cf}" instance of the "\Processing" task for user "domain\user".
The Scheduled Task looks like this:
1. I set to run under my account while I'm logged on. (since I can run the script manually as myself already)
2. checked Run with highest privileges.
3. trigger is to run every 10 minutes
4. Start a program Action.... Powershell.exe
5. Arguments: -executionpolicy remotesigned -File D:\abc\def\powershell\Processing.ps1
6. Conditions & Settings default settings. | 2015/10/16 | [
"https://Stackoverflow.com/questions/33178865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/919907/"
] | The `export default {...}` construction is just a shortcut for something like this:
```js
const funcs = {
foo() { console.log('foo') },
bar() { console.log('bar') },
baz() { foo(); bar() }
}
export default funcs
```
It must become obvious now that there are no `foo`, `bar` or `baz` functions in the module's scope. But there is an object named `funcs` (though in reality it has no name) that contains these functions as its properties and which will become the module's default export.
So, to fix your code, re-write it without using the shortcut and refer to `foo` and `bar` as properties of `funcs`:
```js
const funcs = {
foo() { console.log('foo') },
bar() { console.log('bar') },
baz() { funcs.foo(); funcs.bar() } // here is the fix
}
export default funcs
```
Another option is to use `this` keyword to refer to `funcs` object without having to declare it explicitly, [as @pawel has pointed out](https://stackoverflow.com/a/33179005/804678).
Yet another option (and the one which I generally prefer) is to declare these functions in the module scope. This allows to refer to them directly:
```js
function foo() { console.log('foo') }
function bar() { console.log('bar') }
function baz() { foo(); bar() }
export default {foo, bar, baz}
```
And if you want the convenience of default export *and* ability to import items individually, you can also export all functions individually:
```js
// util.js
export function foo() { console.log('foo') }
export function bar() { console.log('bar') }
export function baz() { foo(); bar() }
export default {foo, bar, baz}
// a.js, using default export
import util from './util'
util.foo()
// b.js, using named exports
import {bar} from './util'
bar()
```
Or, as @loganfsmyth suggested, you can do without default export and just use `import * as util from './util'` to get all named exports in one object. | tl;dr: `baz() { this.foo(); this.bar() }`
In ES2015 this construct:
```
var obj = {
foo() { console.log('foo') }
}
```
is equal to this ES5 code:
```
var obj = {
foo : function foo() { console.log('foo') }
}
```
`exports.default = {}` is like creating an object, your default export translates to ES5 code like this:
```
exports['default'] = {
foo: function foo() {
console.log('foo');
},
bar: function bar() {
console.log('bar');
},
baz: function baz() {
foo();bar();
}
};
```
now it's kind of obvious (I hope) that `baz` tries to call `foo` and `bar` defined somewhere in the outer scope, which are undefined. But `this.foo` and `this.bar` will resolve to the keys defined in the `exports['default']` object. So the default export referencing its own methods should look like this:
```
export default {
foo() { console.log('foo') },
bar() { console.log('bar') },
baz() { this.foo(); this.bar() }
}
```
See [babel repl transpiled code](https://babeljs.io/repl/#?experimental=false&evaluate=true&loose=false&spec=false&code=export%20default%20%7B%0A%20%20%20%20foo%28%29%20%7B%20console.log%28%27foo%27%29%20%7D%2C%20%0A%20%20%20%20bar%28%29%20%7B%20console.log%28%27bar%27%29%20%7D%2C%0A%20%20%20%20baz%28%29%20%7B%20this.foo%28%29%3B%20this.bar%28%29%20%7D%0A%7D). |
33,178,865 | I know this has been asked a 1000 times and I think I looked through all of them.
I have scheduled tasks running PowerShell Scripts on other servers already, but not on this server. Which has me scratching my head as to why I can't get it to work on this server.
I have a powershell script on a Windows 2008 R2 server. I can run it manually and it all works perfectly, but when I try to run it from a scheduled task the History says it was run, but the PowerShell script does not execute.
PSRemoting is enabled
The server ExecutionPolicy is "RemoteSigned"
I get two entries in the History
1. Action completed
Task Scheduler successfully completed task "\Processing" , instance "{dbbd4924-42d6-4024-a8ed-77494c7f84cf}" , action "C:\Windows\System32\WindowsPowerShell\v1.0\PowerShell.EXE" with return code 0.
2. Task completed
Task Scheduler successfully finished "{dbbd4924-42d6-4024-a8ed-77494c7f84cf}" instance of the "\Processing" task for user "domain\user".
The Scheduled Task looks like this:
1. I set to run under my account while I'm logged on. (since I can run the script manually as myself already)
2. checked Run with highest privileges.
3. trigger is to run every 10 minutes
4. Start a program Action.... Powershell.exe
5. Arguments: -executionpolicy remotesigned -File D:\abc\def\powershell\Processing.ps1
6. Conditions & Settings default settings. | 2015/10/16 | [
"https://Stackoverflow.com/questions/33178865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/919907/"
] | One alternative is to change up your module. Generally if you are exporting an object with a bunch of functions on it, it's easier to export a bunch of named functions, e.g.
```
export function foo() { console.log('foo') }
export function bar() { console.log('bar') }
export function baz() { foo(); bar() }
```
In this case you are exporting all of the functions with names, so you could do
```
import * as fns from './foo';
```
to get an object with properties for each function instead of the import you'd use for your first example:
```
import fns from './foo';
``` | tl;dr: `baz() { this.foo(); this.bar() }`
In ES2015 this construct:
```
var obj = {
foo() { console.log('foo') }
}
```
is equal to this ES5 code:
```
var obj = {
foo : function foo() { console.log('foo') }
}
```
`exports.default = {}` is like creating an object, your default export translates to ES5 code like this:
```
exports['default'] = {
foo: function foo() {
console.log('foo');
},
bar: function bar() {
console.log('bar');
},
baz: function baz() {
foo();bar();
}
};
```
now it's kind of obvious (I hope) that `baz` tries to call `foo` and `bar` defined somewhere in the outer scope, which are undefined. But `this.foo` and `this.bar` will resolve to the keys defined in the `exports['default']` object. So the default export referencing its own methods should look like this:
```
export default {
foo() { console.log('foo') },
bar() { console.log('bar') },
baz() { this.foo(); this.bar() }
}
```
See [babel repl transpiled code](https://babeljs.io/repl/#?experimental=false&evaluate=true&loose=false&spec=false&code=export%20default%20%7B%0A%20%20%20%20foo%28%29%20%7B%20console.log%28%27foo%27%29%20%7D%2C%20%0A%20%20%20%20bar%28%29%20%7B%20console.log%28%27bar%27%29%20%7D%2C%0A%20%20%20%20baz%28%29%20%7B%20this.foo%28%29%3B%20this.bar%28%29%20%7D%0A%7D). |
33,178,865 | I know this has been asked a 1000 times and I think I looked through all of them.
I have scheduled tasks running PowerShell Scripts on other servers already, but not on this server. Which has me scratching my head as to why I can't get it to work on this server.
I have a powershell script on a Windows 2008 R2 server. I can run it manually and it all works perfectly, but when I try to run it from a scheduled task the History says it was run, but the PowerShell script does not execute.
PSRemoting is enabled
The server ExecutionPolicy is "RemoteSigned"
I get two entries in the History
1. Action completed
Task Scheduler successfully completed task "\Processing" , instance "{dbbd4924-42d6-4024-a8ed-77494c7f84cf}" , action "C:\Windows\System32\WindowsPowerShell\v1.0\PowerShell.EXE" with return code 0.
2. Task completed
Task Scheduler successfully finished "{dbbd4924-42d6-4024-a8ed-77494c7f84cf}" instance of the "\Processing" task for user "domain\user".
The Scheduled Task looks like this:
1. I set to run under my account while I'm logged on. (since I can run the script manually as myself already)
2. checked Run with highest privileges.
3. trigger is to run every 10 minutes
4. Start a program Action.... Powershell.exe
5. Arguments: -executionpolicy remotesigned -File D:\abc\def\powershell\Processing.ps1
6. Conditions & Settings default settings. | 2015/10/16 | [
"https://Stackoverflow.com/questions/33178865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/919907/"
] | The `export default {...}` construction is just a shortcut for something like this:
```js
const funcs = {
foo() { console.log('foo') },
bar() { console.log('bar') },
baz() { foo(); bar() }
}
export default funcs
```
It must become obvious now that there are no `foo`, `bar` or `baz` functions in the module's scope. But there is an object named `funcs` (though in reality it has no name) that contains these functions as its properties and which will become the module's default export.
So, to fix your code, re-write it without using the shortcut and refer to `foo` and `bar` as properties of `funcs`:
```js
const funcs = {
foo() { console.log('foo') },
bar() { console.log('bar') },
baz() { funcs.foo(); funcs.bar() } // here is the fix
}
export default funcs
```
Another option is to use `this` keyword to refer to `funcs` object without having to declare it explicitly, [as @pawel has pointed out](https://stackoverflow.com/a/33179005/804678).
Yet another option (and the one which I generally prefer) is to declare these functions in the module scope. This allows to refer to them directly:
```js
function foo() { console.log('foo') }
function bar() { console.log('bar') }
function baz() { foo(); bar() }
export default {foo, bar, baz}
```
And if you want the convenience of default export *and* ability to import items individually, you can also export all functions individually:
```js
// util.js
export function foo() { console.log('foo') }
export function bar() { console.log('bar') }
export function baz() { foo(); bar() }
export default {foo, bar, baz}
// a.js, using default export
import util from './util'
util.foo()
// b.js, using named exports
import {bar} from './util'
bar()
```
Or, as @loganfsmyth suggested, you can do without default export and just use `import * as util from './util'` to get all named exports in one object. | One alternative is to change up your module. Generally if you are exporting an object with a bunch of functions on it, it's easier to export a bunch of named functions, e.g.
```
export function foo() { console.log('foo') }
export function bar() { console.log('bar') }
export function baz() { foo(); bar() }
```
In this case you are exporting all of the functions with names, so you could do
```
import * as fns from './foo';
```
to get an object with properties for each function instead of the import you'd use for your first example:
```
import fns from './foo';
``` |
349,237 | The paper [Field Wiring and Noise Considerations for Analog Signals](http://www.ni.com/white-paper/3344/en/) mentions the following info:
>
> Single-ended input connections can be used when all input signals meet
> the following criteria:
>
>
> Input signals are high level (greater than 1 V)
>
>
>
What is meant by "1V" here, AC or DC? Do they mean by 1V a swing or an offset?
In other words, if the signal offset is 5V but the max swing is 100mV, should I use differential-ended inputs? | 2018/01/10 | [
"https://electronics.stackexchange.com/questions/349237",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/134429/"
] | They are referring to the amplitude of the signal, so the AC component. Whether the signals themselves always stay positive, or have negative peaks, doesn't really matter.
The point is signal to noise ratio. All they are saying is that you shouldn't use single-ended connections, with their inherent susceptibility to common mode noise, when the signal is less than 1 V.
This is clearly meant to be a rough guide. For example, they are saying it would be OK to use single-ended signals for line-level audio, but not for microphone level audio, for example. | The stated reasons should be self-explanatory. **That means you must estimate or measure to find out the environment for errors in ground difference and common mode radiated noise**
>
> justified only if the magnitude of the induced errors is smaller than the required accuracy of the data. Single-ended input connections can be used when all input signals meet the following criteria.
>
>
> |
45,659,246 | I've implemented Microsoft ads on my uwp app, but it always shows same ads such as below:
[](https://i.stack.imgur.com/UATSc.png)
Here are the codes for implementation.
```
void MainWindow::showAd()
{
auto adControl = ref new AdControl();
// Set the application id and ad unit id
// The application id and ad unit id can be obtained from Dev Center.
// See "Monetize with Ads" at https ://msdn.microsoft.com/en-us/library/windows/apps/mt170658.aspx
adControl->ApplicationId = L"------";
adControl->AdUnitId = L"------";
// Set the dimensions
adControl->Width = 50;
adControl->Height = 300;
// Add event handlers if you want
adControl->AdRefreshed += ref new EventHandler<RoutedEventArgs^>(this, &OpenGLESPage::OnAdRefreshed);
adControl->VerticalAlignment = Windows::UI::Xaml::VerticalAlignment::Top;
// adControl->Visibility = Windows::UI::Xaml::Visibility::Visible;
// Add the ad control to the page
// auto parent = mPage->Parent;
// parent->Append(adControl);
swapChainPanel->Children->Append(adControl);
}
```
Why does the Microsoft ad control always show the same image as above? I got these ads in debug mode. Is it related to Debug or Release mode? | 2017/08/13 | [
"https://Stackoverflow.com/questions/45659246",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6292298/"
The ad unit displays the same advertisement because it seems that you are using the test App ID and Ad unit ID. During development you can use the test values to test out the ad control.
Before submission of the application to the Windows Store you need to create your own ad unit (for which you will receive a unique ID) and then use those values in your application to display ads.
Also note that the ***real*** (or actual) ad unit ID will only show ***real*** advertisements when the package is published to the store; when developing you will not be able to see the ads with your ad unit ID.
>
> Every AdControl has a corresponding ad unit that is used by our services to serve ads to the control, and every ad unit consists of an ad unit ID and application ID. In these steps, you assign test ad unit ID and application ID values to your control. These test values can only be used in a test version of your app. Before you publish your app to the Store, you must replace these test values with live values from Windows Dev Center.
>
>
>
More info : [Adcontrol(MSDN)](https://learn.microsoft.com/en-us/windows/uwp/monetize/adcontrol-in-xaml-and--net)
---
**EDIT :**
After testing your application I think this might be the issue that you are facing :
>
>
> >
> > **Test ads are showing in your app instead of live ads.** Test ads can be shown, even when you are expecting live ads. This can happen in the
> > following scenarios:
> >
> >
> >
>
>
> * Our advertising platform cannot verify or find the live application ID used in the Store. In this case, when an ad unit is created by a
> user, its status can start as live (non-test) but will move to test
> status within 6 hours after the first ad request. It will change back
> to live if there are no requests from test apps for 10 days.
> * Side-loaded apps or apps that are running in the emulator will not show live ads.
>
>
> When a live ad unit is serving test ads, the ad unit’s status shows Active and serving test ads in Windows Dev Center. This does not currently apply to phone apps.
>
>
>
Known Issues for the UWP Advertising Libraries: [**Link Here**](https://learn.microsoft.com/en-us/windows/uwp/monetize/known-issues-for-the-advertising-libraries). | Some fixes were rolled out by Microsoft not long ago, and this issue should be resolved now.
You can check whether the issue is gone for you.
If it still exists, then a [**support ticket**](https://developer.microsoft.com/en-us/windows/support) will be needed to get it reviewed. |
27,962,006 | I have an array :
```
$results =@()
```
Then i loop with custom logic through wmi and create custom objects that i add to the array like this:
```
$item= @{}
$item.freePercent = $freePercent
$item.freeGB = $freeGB
$item.system = $system
$item.disk = $disk
$results += $item
```
I now want to do some stuff on the results array, like converting to HTML.
I can do it with a foreach and custom HTML writing, but I want to use ConvertTo-Html...
P.S. I can print out data like this, but only this:
```
foreach($result in $results) {
$result.freeGB
}
``` | 2015/01/15 | [
"https://Stackoverflow.com/questions/27962006",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1296313/"
] | Custom object creation doesn't work like you seem to think. The code
```psh
$item= @{}
$item.freePercent = $freePercent
$item.freeGB = $freeGB
$item.system = $system
$item.disk = $disk
```
creates a hashtable, not a custom object, so you're building a list of hashtables.
Demonstration:
```none
PS C:\> **$results = @()**
PS C:\> **1..3 | % {**
>> **$item = @{}**
>> **$item.A = $\_ + 2**
>> **$item.B = $\_ - 5**
>> **$results += $item**
>> **}**
>>
PS C:\> **$results**
Name Value
---- -----
A 3
B -4
A 4
B -3
A 5
B -2
PS C:\> **$results[0]**
Name Value
---- -----
A 3
B -4
```
Change your object creation to this:
```psh
$item = New-Object -Type PSCustomObject -Property @{
'freePercent' = $freePercent
'freeGB' = $freeGB
'system' = $system
'disk' = $disk
}
$results += $item
```
so you get the desired list of objects:
```none
PS C:\> **$results = @()**
PS C:\> **1..3 | % {**
>> **$item = New-Object -Type PSCustomObject -Property @{**
>> **'A' = $\_ + 2**
>> **'B' = $\_ - 5**
>> **}**
>> **$results += $item**
>> **}**
>>
PS C:\> **$results**
A B
- -
3 -4
4 -3
5 -2
PS C:\> **$results[0]**
A B
- -
3 -4
```
Also, appending to an array in a loop is bound to perform poorly. It's better to just "echo" the objects inside the loop and assign the result to the list variable:
```psh
$results = foreach (...) {
New-Object -Type PSCustomObject -Property @{
'freePercent' = $freePercent
'freeGB' = $freeGB
'system' = $system
'disk' = $disk
}
}
```
Pipe `$results` into `ConvertTo-Html` to convert the list to an HTML page (use the parameter `-Fragment` if you want to create just an HTML table instead of an entire HTML page).
```psh
$results | ConvertTo-Html
```
An even better approach would be to pipeline your whole processing like this:
```psh
... | ForEach-Object {
New-Object -Type PSCustomObject -Property @{
'freePercent' = $freePercent
'freeGB' = $freeGB
'system' = $system
'disk' = $disk
}
} | ConvertTo-Html
``` | You aren't creating a custom object, you're creating a hash table.
Assuming you've got at least V3:
```
[PSCustomObject]@{
freePercent = $freePercent
freeGB = $freeGB
system = $system
disk = $disk
}
``` |
57,742,776 | Is there a more pythonic way to write this code? This `field_split` variable becomes part of a MySQL statement. I need to reformat these 5 time fields using `dateutil.parser.parse` and make the field `None` if the timestamp value was empty in the incoming CSV file that I parsed into `field_split`. [6],[7],[8],[9],[44] are the columns that are timestamps in this table. I feel like there should be a way to consolidate this code more, but I'm not sure how.
```
if field_split[6]:
field_split[6]= dateutil.parser.parse(field_split[6])
else:
field_split[6]=None
if field_split[7]:
field_split[7]= dateutil.parser.parse(field_split[7])
else:
field_split[7] = None
if field_split[8]:
field_split[8]= dateutil.parser.parse(field_split[8])
else:
field_split[8] = None
if field_split[9]:
field_split[9]= dateutil.parser.parse(field_split[9])
else:
field_split[9] = None
if field_split[44]:
field_split[44] = dateutil.parser.parse(field_split[44])
else:
field_split[44] = None
``` | 2019/09/01 | [
"https://Stackoverflow.com/questions/57742776",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/439350/"
] | It feels like a loop would be appropriate here:
```py
time_fields = [6,7,8,9,44]
for v in time_fields:
if field_split[v]:
field_split[v] = dateutil.parser.parse(field_split[v])
else:
field_split[v] = None
```
Another, more condensed, way of doing it...
```py
set_ = field_split.__setitem__; parse_ = dateutil.parser.parse
[set_(i, parse_(field_split[i])) if field_split[i] else set_(i, None) for i in time_fields]
``` | Reuse `field_split[timestamp]` as `data` in the next if-else statement.
```
timestamps = [6, 7, 8, 9, 44]
for timestamp in timestamps:
data = field_split[timestamp]
field_split[timestamp] = dateutil.parser.parse(data) if data else None
``` |
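As a self-contained sketch of the loop pattern in the answers above — note it uses `datetime.fromisoformat` as a stand-in for `dateutil.parser.parse` so the snippet runs without the third-party `dateutil` package, and the column indices are the ones from the question:

```python
from datetime import datetime

# Stand-in for dateutil.parser.parse, used here only so the sketch is
# self-contained; swap dateutil back in for non-ISO timestamp formats.
def parse(ts):
    return datetime.fromisoformat(ts)

TIMESTAMP_COLUMNS = (6, 7, 8, 9, 44)  # timestamp columns from the question

def normalize_timestamps(field_split, columns=TIMESTAMP_COLUMNS):
    """Replace each timestamp field with a parsed datetime, or None if empty."""
    for i in columns:
        field_split[i] = parse(field_split[i]) if field_split[i] else None
    return field_split

row = [""] * 45
row[6] = "2019-09-01 12:30:00"
normalize_timestamps(row)
print(row[6], row[7])  # a datetime object, then None for the empty field
```

The same pattern generalizes to any per-column transformation by mapping column indices to converter functions.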
169,856 | Find all $k\_1, k\_2$ that satisfy $k\_1 a = k\_2 b + c$ where everything are integers. It feels like there should be some easy way to describe this in terms of congruence and gcd. | 2012/07/12 | [
"https://math.stackexchange.com/questions/169856",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/21556/"
] | Let $d=\gcd(a,b)$. If $d$ does not divide $c$, there is no solution. So assume from now on that $d$ divides $c$.
Suppose that we have found one particular solution $(x\_0,y\_0)$ of the equation $ax=by+c$. Then **all** solutions $(x,y)$ are given by
$$x=x\_0 +\frac{b}{d}t, \qquad y=y\_0+\frac{a}{d}t,\tag{$1$}$$
where $t$ ranges over the integers, positive, negative, and $0$.
So now look for a particular solution $(x\_0,y\_0)$. In "small" cases, a particular solution can be found by experimentation. In other cases, use the [*Extended Euclidean Algorithm*](http://en.wikipedia.org/wiki/Extended_Euclidean_algorithm) to find integers $s$ and $t$ such that $as=bt+d$. Then a particular solution $(x\_0,y\_0)$ of our original equation is given by
$$x\_0=\frac{c}{d}s,\qquad y\_0=\frac{c}{d}t.$$
Now using $(1)$ we can generate all the solutions. | This is the simplest kind of Diophantine equation, i.e. a linear Diophantine equation in two variables.$$ax+by=c$$
The condition for solvability is: $ax+by=c$ admits a solution if and only if $\gcd(a,b)\mid c$.
And if $(x\_0,y\_0)$ is any particular solution of this equation and $d=\gcd(a,b)$, then all other solutions are given by$$x=x\_0+\frac{b}{d}t\quad \quad y=y\_0-\frac{a}{d}t$$
For example consider the linear Diophantine equation$$172x+20y=1000$$
So applying Euclid's algorithm to find the gcd:
$$\begin{align\*}172&=8.20+12 \\
20&=1.12+8\\
12&=1.8+4\\
8&=2.4\end{align\*}$$
So the $\text{gcd}(172,20)=4$. And since $4\mid1000$, a solution to this equation exists. So working backward:
$$\begin{align\*}4&=12-8\\
&=12-(20-12)\\
&=2.12-20\\
&=2(172-8.20)-20\\
&=2.172+(-17)20
\end{align\*}$$
Multiplying by $250$ we get$$1000=500.172+(-4250)20$$
So $x=500$ and $y=-4250$. Then putting these values into the above formula you can get the general solution. |
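A programmatic sketch of the procedure both answers describe — the extended Euclidean algorithm for a particular solution, then the parametrized family — illustrated on $ax+by=c$. This is an added illustration, not part of either original answer:

```python
def extended_gcd(a, b):
    """Return (g, s, t) such that a*s + b*t == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, s, t = extended_gcd(b, a % b)
    return (g, t, s - (a // b) * t)

def solve_linear_diophantine(a, b, c, n_solutions=3):
    """A few integer solutions (x, y) of a*x + b*y == c, or [] if none exist."""
    g, s, t = extended_gcd(a, b)
    if c % g != 0:           # solvable iff gcd(a, b) divides c
        return []
    x0, y0 = s * (c // g), t * (c // g)   # particular solution
    # general solution: x = x0 + (b/g)*k, y = y0 - (a/g)*k
    return [(x0 + (b // g) * k, y0 - (a // g) * k) for k in range(n_solutions)]

# The worked example from the second answer: 172x + 20y = 1000
for x, y in solve_linear_diophantine(172, 20, 1000):
    assert 172 * x + 20 * y == 1000
print(solve_linear_diophantine(172, 20, 1000)[0])  # (500, -4250)
```

Here the particular solution $(500,-4250)$ matches the one derived by hand in the answer.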
118,851 | How do I add a dot after `thechapter` in the ToC, but have no dot after `thechapter` in the body?
Here's an MWE:
```
\documentclass{book}
\usepackage{fontspec} % enables loading of OpenType fonts
\usepackage{polyglossia} % support for languages
% fonts:
\defaultfontfeatures{Scale=MatchLowercase,Mapping=tex-text} % without this XeLaTeX won't turn "--" into dashes
\setmainfont{DejaVu Sans}
\setsansfont{DejaVu Sans}
\setmonofont{DejaVu Sans Mono}
% toc:
\usepackage{titletoc}
\titlecontents{chapter}
[1.5em]{\addvspace{\baselineskip}}
{\contentslabel{1.5em}\hspace*{0em}}
{}
{\titlerule*[1pc]{.}\contentspage}
\begin{document}
\tableofcontents
\chapter{foo}
\chapter{bar}
\chapter{baz}
\end{document}
```
I tried to define `thechapter` with dot before the ToC, and then redifine it with no dot after the ToC, but it doesn't work:
```
\renewcommand\thechapter{\Roman{chapter}.}
\tableofcontents
\renewcommand\thechapter{\Roman{chapter}}
```
**Edit**:
Thanks to egreg, the solution is:
```
\usepackage[dotinlabels]{titletoc}
```
I also had to:
```
\renewcommand{\theequation}{\thechapter.\arabic{equation}}
\renewcommand\thefigure{\thechapter.\arabic{figure}}
\renewcommand\theproblem{\thechapter.\arabic{problem}.}
```
in order to have dot in the name of the figures, problems, etc. | 2013/06/12 | [
"https://tex.stackexchange.com/questions/118851",
"https://tex.stackexchange.com",
"https://tex.stackexchange.com/users/8992/"
] | At the end of section 6.2 in the documentation of `titlesec`/`titletoc` you find the solution:
```
\usepackage[dotinlabels]{titletoc}
```
Here's the code:
```
\documentclass{book}
\usepackage{fontspec} % enables loading of OpenType fonts
\usepackage{polyglossia} % support for languages
% fonts:
\defaultfontfeatures{Scale=MatchLowercase,Mapping=tex-text} % without this XeLaTeX won't turn "--" into dashes
\setmainfont{DejaVu Sans}
\setsansfont{DejaVu Sans}
\setmonofont{DejaVu Sans Mono}
% toc:
\usepackage[dotinlabels]{titletoc}
\titlecontents{chapter}
[1.5em]
{\addvspace{\baselineskip}}
{\contentslabel{1.5em}\hspace*{0em}}
{}
{\titlerule*[1pc]{.}\contentspage}
\begin{document}
\tableofcontents
\chapter{foo}
\chapter{bar}
\chapter{baz}
\end{document}
```
If you want to use the period only for chapters and not for sections, you can do it differently
```
\usepackage{titletoc}
\titlecontents{chapter}
[1.5em]
{\addvspace{\baselineskip}}
{\contentslabel[\thecontentslabel.\hfill]{1.5em}\hspace*{0em}}
{}
{\titlerule*[1pc]{.}\contentspage}
```
 | Just in case someone prefers using the [tocloft](http://www.ctan.org/tex-archive/macros/latex/contrib/tocloft) package rather than the `titletoc` package: To achieve the OP's objective, it suffices to load the `tocloft` package and issue the command
```
\renewcommand{\cftchapaftersnum}{.}
```
in the preamble. Separately, if you wanted a set of "dot leaders" between the chapter name and the associated page number, you can get LaTeX to do so by issuing the command
```
\renewcommand{\cftchapleader}{\cftdotfill{\cftdotsep}}
```
The following MWE illustrates the effect of `\renewcommand{\cftchapaftersnum}{.}`:
```
\documentclass{book}
\usepackage{tocloft}
\renewcommand{\cftchapaftersnum}{.}
\begin{document}
\tableofcontents
\chapter{ABC}
\chapter{DEF}
\end{document}
```
 |
8,928,808 | Is there a tool out there that can analyse SQL Server databases for potential problems?
For example:
* [a foreign key column that is not indexed](https://stackoverflow.com/questions/836167/does-a-foreign-key-automatically-create-an-index)
* an index on a `uniqueidentifier` column that has no `FILL FACTOR`
* a `LastModifiedDate DATETIME` column that has no `UPDATE` trigger to update the datetime
* a large index with "high" fragmentation
* a non-fragmented index that exists in multiple extents
* a trigger that does not contain `SET NOCOUNT ON` (leaving it susceptible to *"A trigger returned a resultset and/or was running with SET NOCOUNT OFF while another outstanding result set was active."*)
* a database, table, stored procedure, trigger, view, created with `SET ANSI_NULLS OFF`
* a [database or table with `SET ANSI_PADDING OFF`](https://stackoverflow.com/questions/1415726/why-is-sql-server-deprecating-set-ansi-padding-off)
* a database or table created with `SET CONCAT_NULL_YIELDS_NULL OFF`
* a highly fragmented index that might benefit from a lower `FILLFACTOR` (i.e. more padding)
* a table with a very wide clustered index (e.g. uniqueidentifier+uniqueidentifier)
* a table with a non-unique clustered index
* use of `text/ntext` rather than `varchar(max)/nvarchar(max)`
* use of `varchar` in columns that could likely contain localized strings and should be `nvarchar` (e.g. Name, FirstName, LastName, BusinessName, CountryName, City)
* use of `*=`, `=*`, `*=*` rather than `LEFT OUTER JOIN`, `RIGHT OUTER JOIN`, `FULL OUTER JOIN`
* [trigger that returns a results set](http://msdn.microsoft.com/en-us/library/ms143729%28v=sql.110%29.aspx)
* any column declared as `timestamp` rather than `rowversion`
* a nullable `timestamp` column
* use of `image` rather than `varbinary(max)`
* databases not in simple mode (or a log file more than 100x the size of the data file)
Is there an FxCop for SQL Server?
**Note:** The Microsoft SQL Server 2008 R2 Best Practices Analyzer [doesn't fit the bill](http://www.bradmcgehee.com/2010/11/sql-server-2008-r2-best-practices-analyzer/). | 2012/01/19 | [
"https://Stackoverflow.com/questions/8928808",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12597/"
] | There's [SQLCop](http://sqlcop.lessthandot.com/) - free, and quite an interesting tool, too!
 | There is a tool called [Static Code Analysis](http://msdn.microsoft.com/en-us/library/dd172133.aspx) (not exactly a great name given its collision with VS-integrated FxCop) that is included with Visual Studio Premium and Ultimate that can cover at least the design-time subset of your rules. You can also [add your own rules](http://msdn.microsoft.com/en-us/library/dd193244.aspx) if the in-box rule set doesn't do everything you want. |
8,928,808 | Is there a tool out there that can analyse SQL Server databases for potential problems?
For example:
* [a foreign key column that is not indexed](https://stackoverflow.com/questions/836167/does-a-foreign-key-automatically-create-an-index)
* an index on a `uniqueidentifier` column that has no `FILL FACTOR`
* a `LastModifiedDate DATETIME` column that has no `UPDATE` trigger to update the datetime
* a large index with "high" fragmentation
* a non-fragmented index that exists in multiple extents
* a trigger that does not contain `SET NOCOUNT ON` (leaving it susceptible to *"A trigger returned a resultset and/or was running with SET NOCOUNT OFF while another outstanding result set was active."*)
* a database, table, stored procedure, trigger, view, created with `SET ANSI_NULLS OFF`
* a [database or table with `SET ANSI_PADDING OFF`](https://stackoverflow.com/questions/1415726/why-is-sql-server-deprecating-set-ansi-padding-off)
* a database or table created with `SET CONCAT_NULL_YIELDS_NULL OFF`
* a highly fragmented index that might benefit from a lower `FILLFACTOR` (i.e. more padding)
* a table with a very wide clustered index (e.g. uniqueidentifier+uniqueidentifier)
* a table with a non-unique clustered index
* use of `text/ntext` rather than `varchar(max)/nvarchar(max)`
* use of `varchar` in columns that could likely contain localized strings and should be `nvarchar` (e.g. Name, FirstName, LastName, BusinessName, CountryName, City)
* use of `*=`, `=*`, `*=*` rather than `LEFT OUTER JOIN`, `RIGHT OUTER JOIN`, `FULL OUTER JOIN`
* [trigger that returns a results set](http://msdn.microsoft.com/en-us/library/ms143729%28v=sql.110%29.aspx)
* any column declared as `timestamp` rather than `rowversion`
* a nullable `timestamp` column
* use of `image` rather than `varbinary(max)`
* databases not in simple mode (or a log file more than 100x the size of the data file)
Is there an FxCop for SQL Server?
**Note:** The Microsoft SQL Server 2008 R2 Best Practices Analyzer [doesn't fit the bill](http://www.bradmcgehee.com/2010/11/sql-server-2008-r2-best-practices-analyzer/). | 2012/01/19 | [
"https://Stackoverflow.com/questions/8928808",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12597/"
] | There is a tool called [Static Code Analysis](http://msdn.microsoft.com/en-us/library/dd172133.aspx) (not exactly a great name given its collision with VS-integrated FxCop) that is included with Visual Studio Premium and Ultimate that can cover at least the design-time subset of your rules. You can also [add your own rules](http://msdn.microsoft.com/en-us/library/dd193244.aspx) if the in-box rule set doesn't do everything you want. | I'm not aware of one. It would be welcome.
I post this as an answer because I actually went a long way toward implementing monitoring of many such things, which can easily be done in straight T-SQL - the majority of the examples you give can be checked by inspecting the metadata.
After writing a large number of "system health" procedures and some organization around them, I wrote a framework for something like this myself, using metadata including extended properties. It allowed objects to be marked to be excluded from warnings using extended properties, and rules could be categorized. I included examples of some rules and their implementations in my metadata presentation. <http://code.google.com/p/caderoux/source/browse/#hg%2FLeversAndTurtles> This also includes a Windows Forms app which will call the system, but the system itself is entirely coded and organized in T-SQL. |
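As a concrete illustration of the metadata-inspection approach described above, here is a minimal T-SQL sketch (my own, not taken from the linked framework) for one of the checks in the question - foreign key columns that are not the leading column of any index on the referencing table. It assumes the SQL Server 2005+ catalog views:

```sql
-- Foreign key columns that no index on the referencing table leads with.
SELECT  OBJECT_NAME(fkc.parent_object_id)                    AS TableName,
        COL_NAME(fkc.parent_object_id, fkc.parent_column_id) AS ColumnName
FROM    sys.foreign_key_columns AS fkc
WHERE   NOT EXISTS (
            SELECT 1
            FROM   sys.index_columns AS ic
            WHERE  ic.object_id   = fkc.parent_object_id
              AND  ic.column_id   = fkc.parent_column_id
              AND  ic.key_ordinal = 1   -- leading key column of some index
        )
ORDER BY TableName, ColumnName;
```

Most of the other design-time checks in the question (ANSI settings, `text`/`ntext` columns, nullable `timestamp` columns) can be written against `sys.tables`, `sys.columns` and `sys.databases` in the same style.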
8,928,808 | Is there a tool out there that can analyse SQL Server databases for potential problems?
For example:
* [a foreign key column that is not indexed](https://stackoverflow.com/questions/836167/does-a-foreign-key-automatically-create-an-index)
* an index on a `uniqueidentifier` column that has no `FILL FACTOR`
* a `LastModifiedDate DATETIME` column that has no `UPDATE` trigger to update the datetime
* a large index with "high" fragmentation
* a non-fragmented index that exists in multiple extents
* a trigger that does not contain `SET NOCOUNT ON` (leaving it susceptible to *"A trigger returned a resultset and/or was running with SET NOCOUNT OFF while another outstanding result set was active."*)
* a database, table, stored procedure, trigger, view, created with `SET ANSI_NULLS OFF`
* a [database or table with `SET ANSI_PADDING OFF`](https://stackoverflow.com/questions/1415726/why-is-sql-server-deprecating-set-ansi-padding-off)
* a database or table created with `SET CONCAT_NULL_YIELDS_NULL OFF`
* a highly fragmented index that might benefit from a lower `FILLFACTOR` (i.e. more padding)
* a table with a very wide clustered index (e.g. uniqueidentifier+uniqueidentifier)
* a table with a non-unique clustered index
* use of `text/ntext` rather than `varchar(max)/nvarchar(max)`
* use of `varchar` in columns that could likely contain localized strings and should be `nvarchar` (e.g. Name, FirstName, LastName, BusinessName, CountryName, City)
* use of `*=`, `=*`, `*=*` rather than `LEFT OUTER JOIN`, `RIGHT OUTER JOIN`, `FULL OUTER JOIN`
* [trigger that returns a results set](http://msdn.microsoft.com/en-us/library/ms143729%28v=sql.110%29.aspx)
* any column declared as `timestamp` rather than `rowversion`
* a nullable `timestamp` column
* use of `image` rather than `varbinary(max)`
* databases not in simple mode (or a log file more than 100x the size of the data file)
Is there an FxCop for SQL Server?
**Note:** The Microsoft SQL Server 2008 R2 Best Practices Analyzer [doesn't fit the bill](http://www.bradmcgehee.com/2010/11/sql-server-2008-r2-best-practices-analyzer/). | 2012/01/19 | [
"https://Stackoverflow.com/questions/8928808",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12597/"
] | There is a tool called [Static Code Analysis](http://msdn.microsoft.com/en-us/library/dd172133.aspx) (not exactly a great name given its collision with VS-integrated FxCop) that is included with Visual Studio Premium and Ultimate that can cover at least the design-time subset of your rules. You can also [add your own rules](http://msdn.microsoft.com/en-us/library/dd193244.aspx) if the in-box rule set doesn't do everything you want. | Take a look at [SQLCop](http://sqlcop.lessthandot.com/). It's the closest I've seen to FXCop. |
8,928,808 | Is there a tool out there that can analyse SQL Server databases for potential problems?
For example:
* [a foreign key column that is not indexed](https://stackoverflow.com/questions/836167/does-a-foreign-key-automatically-create-an-index)
* an index on a `uniqueidentifier` column that has no `FILL FACTOR`
* a `LastModifiedDate DATETIME` column that has no `UPDATE` trigger to update the datetime
* a large index with "high" fragmentation
* a non-fragmented index that exists in multiple extents
* a trigger that does not contain `SET NOCOUNT ON` (leaving it susceptible to *"A trigger returned a resultset and/or was running with SET NOCOUNT OFF while another outstanding result set was active."*)
* a database, table, stored procedure, trigger, view, created with `SET ANSI_NULLS OFF`
* a [database or table with `SET ANSI_PADDING OFF`](https://stackoverflow.com/questions/1415726/why-is-sql-server-deprecating-set-ansi-padding-off)
* a database or table created with `SET CONCAT_NULL_YIELDS_NULL OFF`
* a highly fragmented index that might benefit from a lower `FILLFACTOR` (i.e. more padding)
* a table with a very wide clustered index (e.g. uniqueidentifier+uniqueidentifier)
* a table with a non-unique clustered index
* use of `text/ntext` rather than `varchar(max)/nvarchar(max)`
* use of `varchar` in columns that could likely contain localized strings and should be `nvarchar` (e.g. Name, FirstName, LastName, BusinessName, CountryName, City)
* use of `*=`, `=*`, `*=*` rather than `LEFT OUTER JOIN`, `RIGHT OUTER JOIN`, `FULL OUTER JOIN`
* [trigger that returns a results set](http://msdn.microsoft.com/en-us/library/ms143729%28v=sql.110%29.aspx)
* any column declared as `timestamp` rather than `rowversion`
* a nullable `timestamp` column
* use of `image` rather than `varbinary(max)`
* databases not in simple mode (or a log file more than 100x the size of the data file)
Is there an FxCop for SQL Server?
**Note:** The Microsoft SQL Server 2008 R2 Best Practices Analyzer [doesn't fit the bill](http://www.bradmcgehee.com/2010/11/sql-server-2008-r2-best-practices-analyzer/). | 2012/01/19 | [
"https://Stackoverflow.com/questions/8928808",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12597/"
] | There's [SQLCop](http://sqlcop.lessthandot.com/) - free, and quite an interesting tool, too!
 | I'm not aware of one. It would be welcome.
I post this as an answer because I actually went a long way toward implementing monitoring of many such things, which can easily be done in straight T-SQL - the majority of the examples you give can be checked by inspecting the metadata.
After writing a large number of "system health" procedures and some organization around them, I wrote a framework for something like this myself, using metadata including extended properties. It allowed objects to be marked to be excluded from warnings using extended properties, and rules could be categorized. I included examples of some rules and their implementations in my metadata presentation. <http://code.google.com/p/caderoux/source/browse/#hg%2FLeversAndTurtles> This also includes a Windows Forms app which will call the system, but the system itself is entirely coded and organized in T-SQL. |
8,928,808 | Is there a tool out there that can analyse SQL Server databases for potential problems?
For example:
* [a foreign key column that is not indexed](https://stackoverflow.com/questions/836167/does-a-foreign-key-automatically-create-an-index)
* an index on a `uniqueidentifier` column that has no `FILL FACTOR`
* a `LastModifiedDate DATETIME` column that has no `UPDATE` trigger to update the datetime
* a large index with "high" fragmentation
* a non-fragmented index that exists in multiple extents
* a trigger that does not contain `SET NOCOUNT ON` (leaving it susceptible to *"A trigger returned a resultset and/or was running with SET NOCOUNT OFF while another outstanding result set was active."*)
* a database, table, stored procedure, trigger, view, created with `SET ANSI_NULLS OFF`
* a [database or table with `SET ANSI_PADDING OFF`](https://stackoverflow.com/questions/1415726/why-is-sql-server-deprecating-set-ansi-padding-off)
* a database or table created with `SET CONCAT_NULL_YIELDS_NULL OFF`
* a highly fragmented index that might benefit from a lower `FILLFACTOR` (i.e. more padding)
* a table with a very wide clustered index (e.g. uniqueidentifier+uniqueidentifier)
* a table with a non-unique clustered index
* use of `text/ntext` rather than `varchar(max)/nvarchar(max)`
* use of `varchar` in columns that could likely contain localized strings and should be `nvarchar` (e.g. Name, FirstName, LastName, BusinessName, CountryName, City)
* use of `*=`, `=*`, `*=*` rather than `LEFT OUTER JOIN`, `RIGHT OUTER JOIN`, `FULL OUTER JOIN`
* [trigger that returns a results set](http://msdn.microsoft.com/en-us/library/ms143729%28v=sql.110%29.aspx)
* any column declared as `timestamp` rather than `rowversion`
* a nullable `timestamp` column
* use of `image` rather than `varbinary(max)`
* databases not in simple mode (or a log file more than 100x the size of the data file)
Is there an FxCop for SQL Server?
**Note:** The Microsoft SQL Server 2008 R2 Best Practices Analyzer [doesn't fit the bill](http://www.bradmcgehee.com/2010/11/sql-server-2008-r2-best-practices-analyzer/). | 2012/01/19 | [
"https://Stackoverflow.com/questions/8928808",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12597/"
] | There's [SQLCop](http://sqlcop.lessthandot.com/) - free, and quite an interesting tool, too!
 | Take a look at [SQLCop](http://sqlcop.lessthandot.com/). It's the closest I've seen to FXCop. |
8,928,808 | Is there a tool out there that can analyse SQL Server databases for potential problems?
For example:
* [a foreign key column that is not indexed](https://stackoverflow.com/questions/836167/does-a-foreign-key-automatically-create-an-index)
* an index on a `uniqueidentifier` column that has no `FILL FACTOR`
* a `LastModifiedDate DATETIME` column that has no `UPDATE` trigger to update the datetime
* a large index with "high" fragmentation
* a non-fragmented index that exists in multiple extents
* a trigger that does not contain `SET NOCOUNT ON` (leaving it susceptible to *"A trigger returned a resultset and/or was running with SET NOCOUNT OFF while another outstanding result set was active."*)
* a database, table, stored procedure, trigger, view, created with `SET ANSI_NULLS OFF`
* a [database or table with `SET ANSI_PADDING OFF`](https://stackoverflow.com/questions/1415726/why-is-sql-server-deprecating-set-ansi-padding-off)
* a database or table created with `SET CONCAT_NULL_YIELDS_NULL OFF`
* a highly fragmented index that might benefit from a lower `FILLFACTOR` (i.e. more padding)
* a table with a very wide clustered index (e.g. uniqueidentifier+uniqueidentifier)
* a table with a non-unique clustered index
* use of `text/ntext` rather than `varchar(max)/nvarchar(max)`
* use of `varchar` in columns that could likely contain localized strings and should be `nvarchar` (e.g. Name, FirstName, LastName, BusinessName, CountryName, City)
* use of `*=`, `=*`, `*=*` rather than `LEFT OUTER JOIN`, `RIGHT OUTER JOIN`, `FULL OUTER JOIN`
* [trigger that returns a results set](http://msdn.microsoft.com/en-us/library/ms143729%28v=sql.110%29.aspx)
* any column declared as `timestamp` rather than `rowversion`
* a nullable `timestamp` column
* use of `image` rather than `varbinary(max)`
* databases not in simple mode (or a log file more than 100x the size of the data file)
Is there an FxCop for SQL Server?
**Note:** The Microsoft SQL Server 2008 R2 Best Practices Analyzer [doesn't fit the bill](http://www.bradmcgehee.com/2010/11/sql-server-2008-r2-best-practices-analyzer/). | 2012/01/19 | [
"https://Stackoverflow.com/questions/8928808",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12597/"
] | There's [SQLCop](http://sqlcop.lessthandot.com/) - free, and quite an interesting tool, too!
 | Check out SQL Enlight - <http://www.ubitsoft.com/products/sqlenlight/sqlenlight.php> |
8,928,808 | Is there a tool out there that can analyse SQL Server databases for potential problems?
For example:
* [a foreign key column that is not indexed](https://stackoverflow.com/questions/836167/does-a-foreign-key-automatically-create-an-index)
* an index on a `uniqueidentifier` column that has no `FILL FACTOR`
* a `LastModifiedDate DATETIME` column that has no `UPDATE` trigger to update the datetime
* a large index with "high" fragmentation
* a non-fragmented index that exists in multiple extents
* a trigger that does not contain `SET NOCOUNT ON` (leaving it susceptible to *"A trigger returned a resultset and/or was running with SET NOCOUNT OFF while another outstanding result set was active."*)
* a database, table, stored procedure, trigger, view, created with `SET ANSI_NULLS OFF`
* a [database or table with `SET ANSI_PADDING OFF`](https://stackoverflow.com/questions/1415726/why-is-sql-server-deprecating-set-ansi-padding-off)
* a database or table created with `SET CONCAT_NULL_YIELDS_NULL OFF`
* a highly fragmented index that might benefit from a lower `FILLFACTOR` (i.e. more padding)
* a table with a very wide clustered index (e.g. uniqueidentifier+uniqueidentifier)
* a table with a non-unique clustered index
* use of `text/ntext` rather than `varchar(max)/nvarchar(max)`
* use of `varchar` in columns that could likely contain localized strings and should be `nvarchar` (e.g. Name, FirstName, LastName, BusinessName, CountryName, City)
* use of `*=`, `=*`, `*=*` rather than `LEFT OUTER JOIN`, `RIGHT OUTER JOIN`, `FULL OUTER JOIN`
* [trigger that returns a results set](http://msdn.microsoft.com/en-us/library/ms143729%28v=sql.110%29.aspx)
* any column declared as `timestamp` rather than `rowversion`
* a nullable `timestamp` column
* use of `image` rather than `varbinary(max)`
* databases not in simple mode (or a log file more than 100x the size of the data file)
Is there an FxCop for SQL Server?
**Note:** The Microsoft SQL Server 2008 R2 Best Practices Analyzer [doesn't fit the bill](http://www.bradmcgehee.com/2010/11/sql-server-2008-r2-best-practices-analyzer/). | 2012/01/19 | [
"https://Stackoverflow.com/questions/8928808",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12597/"
] | Check out SQL Enlight - <http://www.ubitsoft.com/products/sqlenlight/sqlenlight.php> | I'm not aware of one. It would be welcome.
I post this as an answer because I actually went a long way toward implementing monitoring of many such things, which can easily be done in straight T-SQL - the majority of the examples you give can be checked by inspecting the metadata.
After writing a large number of "system health" procedures and some organization around them, I wrote a framework for something like this myself, using metadata including extended properties. It allowed objects to be marked to be excluded from warnings using extended properties, and rules could be categorized. I included examples of some rules and their implementations in my metadata presentation. <http://code.google.com/p/caderoux/source/browse/#hg%2FLeversAndTurtles> This also includes a Windows Forms app which will call the system, but the system itself is entirely coded and organized in T-SQL. |
8,928,808 | Is there a tool out there that can analyse SQL Server databases for potential problems?
For example:
* [a foreign key column that is not indexed](https://stackoverflow.com/questions/836167/does-a-foreign-key-automatically-create-an-index)
* an index on a `uniqueidentifier` column that has no `FILL FACTOR`
* a `LastModifiedDate DATETIME` column that has no `UPDATE` trigger to update the datetime
* a large index with "high" fragmentation
* a non-fragmented index that exists in multiple extents
* a trigger that does not contain `SET NOCOUNT ON` (leaving it susceptible to *"A trigger returned a resultset and/or was running with SET NOCOUNT OFF while another outstanding result set was active."*)
* a database, table, stored procedure, trigger, view, created with `SET ANSI_NULLS OFF`
* a [database or table with `SET ANSI_PADDING OFF`](https://stackoverflow.com/questions/1415726/why-is-sql-server-deprecating-set-ansi-padding-off)
* a database or table created with `SET CONCAT_NULL_YIELDS_NULL OFF`
* a highly fragmented index that might benefit from a lower `FILLFACTOR` (i.e. more padding)
* a table with a very wide clustered index (e.g. uniqueidentifier+uniqueidentifier)
* a table with a non-unique clustered index
* use of `text/ntext` rather than `varchar(max)/nvarchar(max)`
* use of `varchar` in columns that could likely contain localized strings and should be `nvarchar` (e.g. Name, FirstName, LastName, BusinessName, CountryName, City)
* use of `*=`, `=*`, `*=*` rather than `LEFT OUTER JOIN`, `RIGHT OUTER JOIN`, `FULL OUTER JOIN`
* [trigger that returns a results set](http://msdn.microsoft.com/en-us/library/ms143729%28v=sql.110%29.aspx)
* any column declared as `timestamp` rather than `rowversion`
* a nullable `timestamp` column
* use of `image` rather than `varbinary(max)`
* databases not in simple mode (or a log file more than 100x the size of the data file)
Is there an FxCop for SQL Server?
**Note:** The Microsoft SQL Server 2008 R2 Best Practices Analyzer [doesn't fit the bill](http://www.bradmcgehee.com/2010/11/sql-server-2008-r2-best-practices-analyzer/). | 2012/01/19 | [
"https://Stackoverflow.com/questions/8928808",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12597/"
] | Check out SQL Enlight - <http://www.ubitsoft.com/products/sqlenlight/sqlenlight.php> | Take a look at [SQLCop](http://sqlcop.lessthandot.com/). It's the closest I've seen to FXCop. |
3,307,162 | Ten lockers are in a row. The lockers are numbered in order with the positive integers 1 to 10. Each locker is to be painted either blue, red or green subject to the following rules:
* Two lockers numbered $m$ and $n$ are painted different colours whenever $m−n$ is odd.
* It is not required that all 3 colours be used.
In how many ways can the collection of lockers be painted?
---
**Attempt:**
Notice that the total number of coloring schemes without the rules is $3^{10}$. The total number of coloring schemes where every two side-by-side lockers have different color is $3 \cdot 2^{9}$ (this corresponds to $m-n=1$). The total number of coloring schemes where $m-n = 3$ is $(3 \cdot 2)^{7}$, without considering the others $m-n$. Also for $m-n = 5$, the total number of schemes is $(3 \cdot 2)^{5}$. For $m-n=7$, $(3 \cdot 2)^{3}$. For $m-n=9$, $(3 \cdot 2)$.
How to continue? | 2019/07/29 | [
"https://math.stackexchange.com/questions/3307162",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/97835/"
] | We can first divide the lockers into two groups, one for the odd-numbered lockers and the other for the even-numbered lockers. We know that if a colour is used in one group, then it can't be used in the other group. Then we can divide into $2$ cases.
**First case:** only two colours are used.
>
> First, we choose two colours, then one of them will be used for the first group and the other will be used for the second group. So the number of ways for this case is $P^3\_2=6$.
>
>
>
**Second case:** all three colours are used.
>
> We can have two colours in a group and one colour for the other group. WLOG, we can assume that red and blue are used for the first group and green for the second group, then multiply the result by $C^3\_2\times2=6$, then we will have the answer for this case.
>
>
> We have $2^5-2=30$ choices for the first group, because there are $2$ choices for each of the $5$ lockers in the first group, minus the $2$ cases in which all of the $5$ lockers in the first group are red or all are blue. Then, there is only $1$ choice for the second group. That means there are $30\times6=180$ ways for the second case.
>
>
>
So, $180+6=186$ is the answer to the problem. | Note that $m-n$ is odd iff $m$ and $n$ have different parity. So two lockers having different parity must have different colors. So we partition the set of given colors $\{R, G, B\}$ in two non-empty disjoint sets $O$ and $E$. The lockers with odd parity can only be colored with colors in $O$, and lockers with even parity can only be colored with colors in $E$.
Let's suppose $|O|=1$ and $|E|=2$. Number of ways to partition will be $\binom{3}{1}=3$. All odd lockers will have to choose the same color. All even lockers will have to choose one from two available colors. So, number of ways to color in this case is $3 \cdot 1^5 \cdot 2^5 = 96$.
Another case will be $|O|=2$ and $|E|=1$. Proceeding in the same manner, we have our answer as $ \binom{3}{2} \cdot 2^5 \cdot 1^5 = 96$.
Our answer is $96+96=192$.
As mentioned in comments by [Culver Kwan](https://math.stackexchange.com/users/686157/culver-kwan), I have double-counted the case when $|O|=|E|=1$. In this case, we can choose $O$ in $3$ ways, and we can choose $E$ from the remaining colors in $2$ ways. So, the number of colorings in this case is $3 \cdot 2 \cdot 1^5 \cdot 1^5 = 6$.
We get $192-6=186$ as our answer. |
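Both answers above arrive at $186$; since $3^{10}=59049$ colourings is a tiny search space, the count can be double-checked by brute force. The sketch below is mine, not part of either answer, and uses the condition "$m-n$ odd" in its equivalent form: the set of colours on odd-numbered lockers must be disjoint from the set on even-numbered lockers.

```python
from itertools import product

def count_valid_colourings(n_lockers=10, colours=3):
    """Count colourings where lockers m and n get different colours
    whenever m - n is odd (i.e. m and n have different parity)."""
    count = 0
    for assignment in product(range(colours), repeat=n_lockers):
        # index i stands for locker i+1, so even indices are the
        # odd-numbered lockers and odd indices the even-numbered ones
        odd_colours = {c for i, c in enumerate(assignment) if i % 2 == 0}
        even_colours = {c for i, c in enumerate(assignment) if i % 2 == 1}
        if odd_colours.isdisjoint(even_colours):
            count += 1
    return count

print(count_valid_colourings())  # → 186
```

Running it prints `186`, matching both derivations.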
7,233,865 | My understanding is that the Java bytecode produced by invoking `javac` is independent of the underlying operating system, but the HotSpot compiler will perform platform-specific JIT optimizations and compilations as the program is running.
However, I compiled code on Windows under a 32 bit JDK and executed it on Solaris under a 32 bit JVM (neither OS is a 64 bit operating system). The Solaris x86 box, to the best of my knowledge (working to confirm the specs on it) should outperform the Windows box in all regards (number of cores, amount of RAM, hard disk latency, processor speed, and so on). However, the same code is running measurably faster on Windows (a single data point would be a 7.5 second operation on Windows taking over 10 seconds on Solaris) on a consistent basis. My next test would be to compile on Solaris and note performance differences, but that just doesn't make sense to me, and I couldn't find any Oracle documentation that would explain what I'm seeing.
Given the same version (major, minor, release, etc.) of the JVM on two different operating systems, would invoking `javac` on the same source files result in different optimizations within the Java bytecode (the `.class` files produced)? Is there any documentation that explains this behavior? | 2011/08/29 | [
"https://Stackoverflow.com/questions/7233865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/572/"
] | No. `javac` does not do any optimizations on different platforms.
See the oracle ["tools"](http://download.oracle.com/javase/1.4.2/docs/tooldocs/tools.html) page (where `javac` and other tools are described):
>
> Each of the development tools comes in a Microsoft Windows version (Windows) and a Solaris or Linux version. **There is virtually no difference in features between versions.** However, there are minor differences in configuration and usage to accommodate the special requirements of each operating system. (*For example, the way you specify directory separators depends on the OS.*)
>
>
>
---
(Maybe the Solaris JVM is slower than the Windows JVM?) | To my understanding javac only considers the `-target` argument to decide what bytecode to emit, hence there is nothing platform-specific in the bytecode generation.
All the optimization is done by the JVM, not the compiler, when interpreting the byte codes. This is specific to the individual platform.
Also I've read somewhere that the Solaris JVM is the reference implementation, and then it is ported to Windows. Hence the Windows version is more optimized than the Solaris one. |
7,233,865 | My understanding is that the Java bytecode produced by invoking `javac` is independent of the underlying operating system, but the HotSpot compiler will perform platform-specific JIT optimizations and compilations as the program is running.
However, I compiled code on Windows under a 32 bit JDK and executed it on Solaris under a 32 bit JVM (neither OS is a 64 bit operating system). The Solaris x86 box, to the best of my knowledge (working to confirm the specs on it) should outperform the Windows box in all regards (number of cores, amount of RAM, hard disk latency, processor speed, and so on). However, the same code is running measurably faster on Windows (a single data point would be a 7.5 second operation on Windows taking over 10 seconds on Solaris) on a consistent basis. My next test would be to compile on Solaris and note performance differences, but that just doesn't make sense to me, and I couldn't find any Oracle documentation that would explain what I'm seeing.
Given the same version (major, minor, release, etc.) of the JVM on two different operating systems, would invoking `javac` on the same source files result in different optimizations within the Java bytecode (the `.class` files produced)? Is there any documentation that explains this behavior? | 2011/08/29 | [
"https://Stackoverflow.com/questions/7233865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/572/"
] | The compilation output should not depend on the OS on which javac was called.
If you want to verify it, try:
```
me@windows@ javac Main.java
me@windows@ javap -c Main.class > Main.win.txt
me@linux@ javac Main.java
me@linux@ javap -c Main.class > Main.lin.txt
me@linux@ diff Main.win.txt Main.lin.txt
``` | To my understanding javac only considers the `-target` argument to decide what bytecode to emit, hence there is nothing platform-specific in the bytecode generation.
All the optimization is done by the JVM, not the compiler, when interpreting the byte codes. This is specific to the individual platform.
Also I've read somewhere that the Solaris JVM is the reference implementation, and then it is ported to Windows. Hence the Windows version is more optimized than the Solaris one. |
7,233,865 | My understanding is that the Java bytecode produced by invoking `javac` is independent of the underlying operating system, but the HotSpot compiler will perform platform-specific JIT optimizations and compilations as the program is running.
However, I compiled code on Windows under a 32 bit JDK and executed it on Solaris under a 32 bit JVM (neither OS is a 64 bit operating system). The Solaris x86 box, to the best of my knowledge (working to confirm the specs on it) should outperform the Windows box in all regards (number of cores, amount of RAM, hard disk latency, processor speed, and so on). However, the same code is running measurably faster on Windows (a single data point would be a 7.5 second operation on Windows taking over 10 seconds on Solaris) on a consistent basis. My next test would be to compile on Solaris and note performance differences, but that just doesn't make sense to me, and I couldn't find any Oracle documentation that would explain what I'm seeing.
Given the same version (major, minor, release, etc.) of the JVM on two different operating systems, would invoking `javac` on the same source files result in different optimizations within the Java bytecode (the `.class` files produced)? Is there any documentation that explains this behavior? | 2011/08/29 | [
"https://Stackoverflow.com/questions/7233865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/572/"
] | The compilation output should not depend on the OS on which javac was called.
If you want to verify it, try:
```
me@windows@ javac Main.java
me@windows@ javap -c Main.class > Main.win.txt
me@linux@ javac Main.java
me@linux@ javap -c Main.class > Main.lin.txt
me@linux@ diff Main.win.txt Main.lin.txt
``` | >
> Does javac perform any bytecode level optimizations depending on the
> underlying operating system?
>
>
>
No.
Determining why the performance characteristics of your program are different on two platforms requires profiling them under the same workload, and careful analysis of method execution times and memory allocation/GC behavior. Does your program do any I/O?
7,233,865 | My understanding is that the Java bytecode produced by invoking `javac` is independent of the underlying operating system, but the HotSpot compiler will perform platform-specific JIT optimizations and compilations as the program is running.
However, I compiled code on Windows under a 32 bit JDK and executed it on Solaris under a 32 bit JVM (neither OS is a 64 bit operating system). The Solaris x86 box, to the best of my knowledge (working to confirm the specs on it) should outperform the Windows box in all regards (number of cores, amount of RAM, hard disk latency, processor speed, and so on). However, the same code is running measurably faster on Windows (a single data point would be a 7.5 second operation on Windows taking over 10 seconds on Solaris) on a consistent basis. My next test would be to compile on Solaris and note performance differences, but that just doesn't make sense to me, and I couldn't find any Oracle documentation that would explain what I'm seeing.
Given the same version (major, minor, release, etc.) of the JVM on two different operating systems, would invoking `javac` on the same source files result in different optimizations within the Java bytecode (the `.class` files produced)? Is there any documentation that explains this behavior? | 2011/08/29 | [
"https://Stackoverflow.com/questions/7233865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/572/"
] | I decided to google it anyway. ;)
<http://java.sun.com/docs/white/platform/javaplatform.doc1.html>
>
> The Java Platform is a new software platform for delivering and running highly interactive, dynamic, and secure applets and applications on networked computer systems. But what sets the Java Platform apart is that it sits on top of these other platforms, and executes bytecodes, which are not specific to any physical machine, but are machine instructions for a virtual machine. A program written in the Java Language compiles to a bytecode file that can run wherever the Java Platform is present, on any underlying operating system. In other words, the same exact file can run on any operating system that is running the Java Platform. This portability is possible because at the core of the Java Platform is the Java Virtual Machine.
>
>
>
Written April 30, 1996.
---
A common mistake, especially if you have developed for C/C++, is to assume that the compiler optimises the code. It does one and only one optimisation, which is to evaluate compile-time known constants.
It is certainly true that the compiler is nowhere near as powerful as you might imagine, because it just validates the code and produces byte-code which matches your code as closely as possible.
This is because the byte-code is for an idealised virtual machine which in theory doesn't need any optimisations. Hopefully when you think about it that way it makes sense that the compiler doesn't do anything much: it doesn't know how the code will actually be used.
Instead all the optimisation is performed by the JIT in the JVM. This is entirely platform-dependent and can produce 32-bit or 64-bit code and use the exact instructions of the processor running the code. It will also optimise the code based on how it is actually used, something a static compiler cannot do. It means the code can be re-compiled more than once based on different usage patterns. ;) | To my understanding javac only considers the `-target` argument to decide what bytecode to emit, hence there is nothing platform-specific in the bytecode generation.
All the optimization is done by the JVM, not the compiler, when interpreting the bytecodes. This is specific to the individual platform.
Also I've read somewhere that the Solaris JVM is the reference implementation, and then it is ported to Windows. Hence the Windows version is more optimized than the Solaris one. |
7,233,865 | My understanding is that the Java bytecode produced by invoking `javac` is independent of the underlying operating system, but the HotSpot compiler will perform platform-specific JIT optimizations and compilations as the program is running.
However, I compiled code on Windows under a 32 bit JDK and executed it on Solaris under a 32 bit JVM (neither OS is a 64 bit operating system). The Solaris x86 box, to the best of my knowledge (working to confirm the specs on it) should outperform the Windows box in all regards (number of cores, amount of RAM, hard disk latency, processor speed, and so on). However, the same code is running measurably faster on Windows (a single data point would be a 7.5 second operation on Windows taking over 10 seconds on Solaris) on a consistent basis. My next test would be to compile on Solaris and note performance differences, but that just doesn't make sense to me, and I couldn't find any Oracle documentation that would explain what I'm seeing.
Given the same version (major, minor, release, etc.) of the JVM on two different operating systems, would invoking `javac` on the same source files result in different optimizations within the Java bytecode (the `.class` files produced)? Is there any documentation that explains this behavior? | 2011/08/29 | [
"https://Stackoverflow.com/questions/7233865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/572/"
] | No. `javac` does not do any optimizations on different platforms.
See the Oracle ["tools"](http://download.oracle.com/javase/1.4.2/docs/tooldocs/tools.html) page (where `javac` and other tools are described):
>
> Each of the development tools comes in a Microsoft Windows version (Windows) and a Solaris or Linux version. **There is virtually no difference in features between versions.** However, there are minor differences in configuration and usage to accommodate the special requirements of each operating system. (*For example, the way you specify directory separators depends on the OS.*)
>
>
>
---
(Maybe the Solaris JVM is slower than the Windows JVM?) | To extend on dacwe's point "Maybe the Solaris JVM is slower than the Windows JVM?"
There are configuration options (e.g., whether to use the client or server vm [[link]](http://download.oracle.com/javase/6/docs/technotes/guides/vm/server-class.html), and probably others as well), whose defaults differ depending on the OS. So that might be a reason why the Solaris VM is slower here. |
7,233,865 | My understanding is that the Java bytecode produced by invoking `javac` is independent of the underlying operating system, but the HotSpot compiler will perform platform-specific JIT optimizations and compilations as the program is running.
However, I compiled code on Windows under a 32 bit JDK and executed it on Solaris under a 32 bit JVM (neither OS is a 64 bit operating system). The Solaris x86 box, to the best of my knowledge (working to confirm the specs on it) should outperform the Windows box in all regards (number of cores, amount of RAM, hard disk latency, processor speed, and so on). However, the same code is running measurably faster on Windows (a single data point would be a 7.5 second operation on Windows taking over 10 seconds on Solaris) on a consistent basis. My next test would be to compile on Solaris and note performance differences, but that just doesn't make sense to me, and I couldn't find any Oracle documentation that would explain what I'm seeing.
Given the same version (major, minor, release, etc.) of the JVM on two different operating systems, would invoking `javac` on the same source files result in different optimizations within the Java bytecode (the `.class` files produced)? Is there any documentation that explains this behavior? | 2011/08/29 | [
"https://Stackoverflow.com/questions/7233865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/572/"
] | I decided to google it anyway. ;)
<http://java.sun.com/docs/white/platform/javaplatform.doc1.html>
>
> The Java Platform is a new software platform for delivering and running highly interactive, dynamic, and secure applets and applications on networked computer systems. But what sets the Java Platform apart is that it sits on top of these other platforms, and executes bytecodes, which are not specific to any physical machine, but are machine instructions for a virtual machine. A program written in the Java Language compiles to a bytecode file that can run wherever the Java Platform is present, on any underlying operating system. In other words, the same exact file can run on any operating system that is running the Java Platform. This portability is possible because at the core of the Java Platform is the Java Virtual Machine.
>
>
>
Written April 30, 1996.
---
A common mistake, especially if you have developed for C/C++, is to assume that the compiler optimises the code. It does one and only one optimisation, which is to evaluate compile-time known constants.
It is certainly true that the compiler is nowhere near as powerful as you might imagine, because it just validates the code and produces byte-code which matches your code as closely as possible.
This is because the byte-code is for an idealised virtual machine which in theory doesn't need any optimisations. Hopefully when you think about it that way it makes sense that the compiler doesn't do anything much: it doesn't know how the code will actually be used.
Instead all the optimisation is performed by the JIT in the JVM. This is entirely platform-dependent and can produce 32-bit or 64-bit code and use the exact instructions of the processor running the code. It will also optimise the code based on how it is actually used, something a static compiler cannot do. It means the code can be re-compiled more than once based on different usage patterns. ;) | >
> Does javac perform any bytecode level optimizations depending on the
> underlying operating system?
>
>
>
No.
Determining why the performance characteristics of your program are different on two platforms requires profiling them under the same workload, and careful analysis of method execution times and memory allocation/gc behavior. Does your program do any I/O? |
7,233,865 | My understanding is that the Java bytecode produced by invoking `javac` is independent of the underlying operating system, but the HotSpot compiler will perform platform-specific JIT optimizations and compilations as the program is running.
However, I compiled code on Windows under a 32 bit JDK and executed it on Solaris under a 32 bit JVM (neither OS is a 64 bit operating system). The Solaris x86 box, to the best of my knowledge (working to confirm the specs on it) should outperform the Windows box in all regards (number of cores, amount of RAM, hard disk latency, processor speed, and so on). However, the same code is running measurably faster on Windows (a single data point would be a 7.5 second operation on Windows taking over 10 seconds on Solaris) on a consistent basis. My next test would be to compile on Solaris and note performance differences, but that just doesn't make sense to me, and I couldn't find any Oracle documentation that would explain what I'm seeing.
Given the same version (major, minor, release, etc.) of the JVM on two different operating systems, would invoking `javac` on the same source files result in different optimizations within the Java bytecode (the `.class` files produced)? Is there any documentation that explains this behavior? | 2011/08/29 | [
"https://Stackoverflow.com/questions/7233865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/572/"
] | No. `javac` does not do any optimizations on different platforms.
See the Oracle ["tools"](http://download.oracle.com/javase/1.4.2/docs/tooldocs/tools.html) page (where `javac` and other tools are described):
>
> Each of the development tools comes in a Microsoft Windows version (Windows) and a Solaris or Linux version. **There is virtually no difference in features between versions.** However, there are minor differences in configuration and usage to accommodate the special requirements of each operating system. (*For example, the way you specify directory separators depends on the OS.*)
>
>
>
---
(Maybe the Solaris JVM is slower than the Windows JVM?) | I decided to google it anyway. ;)
<http://java.sun.com/docs/white/platform/javaplatform.doc1.html>
>
> The Java Platform is a new software platform for delivering and running highly interactive, dynamic, and secure applets and applications on networked computer systems. But what sets the Java Platform apart is that it sits on top of these other platforms, and executes bytecodes, which are not specific to any physical machine, but are machine instructions for a virtual machine. A program written in the Java Language compiles to a bytecode file that can run wherever the Java Platform is present, on any underlying operating system. In other words, the same exact file can run on any operating system that is running the Java Platform. This portability is possible because at the core of the Java Platform is the Java Virtual Machine.
>
>
>
Written April 30, 1996.
---
A common mistake, especially if you have developed for C/C++, is to assume that the compiler optimises the code. It does one and only one optimisation, which is to evaluate compile-time known constants.
It is certainly true that the compiler is nowhere near as powerful as you might imagine, because it just validates the code and produces byte-code which matches your code as closely as possible.
This is because the byte-code is for an idealised virtual machine which in theory doesn't need any optimisations. Hopefully when you think about it that way it makes sense that the compiler doesn't do anything much: it doesn't know how the code will actually be used.
Instead all the optimisation is performed by the JIT in the JVM. This is entirely platform-dependent and can produce 32-bit or 64-bit code and use the exact instructions of the processor running the code. It will also optimise the code based on how it is actually used, something a static compiler cannot do. It means the code can be re-compiled more than once based on different usage patterns. ;) |
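The constant-folding point above is easy to observe in any bytecode compiler. As a hedged illustration (CPython is used here instead of javac purely because the folded constants can be inspected from within the program; the function name is made up), the compiler evaluates a compile-time-known expression once, before the code ever runs:

```python
import dis

def shipping_cost():
    # every operand below is a compile-time constant, so the bytecode
    # compiler folds the whole expression into the single constant 42
    return 2 * 3 * 7

# the folded value sits directly in the function's constant pool...
print(42 in shipping_cost.__code__.co_consts)  # True

# ...and no multiply instruction survives in the compiled bytecode
opnames = {ins.opname for ins in dis.get_instructions(shipping_cost)}
print("BINARY_MULTIPLY" in opnames or "BINARY_OP" in opnames)  # False
```

The JIT-versus-static-compiler argument is the same in both worlds: everything beyond this kind of folding is deferred to run time.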
7,233,865 | My understanding is that the Java bytecode produced by invoking `javac` is independent of the underlying operating system, but the HotSpot compiler will perform platform-specific JIT optimizations and compilations as the program is running.
However, I compiled code on Windows under a 32 bit JDK and executed it on Solaris under a 32 bit JVM (neither OS is a 64 bit operating system). The Solaris x86 box, to the best of my knowledge (working to confirm the specs on it) should outperform the Windows box in all regards (number of cores, amount of RAM, hard disk latency, processor speed, and so on). However, the same code is running measurably faster on Windows (a single data point would be a 7.5 second operation on Windows taking over 10 seconds on Solaris) on a consistent basis. My next test would be to compile on Solaris and note performance differences, but that just doesn't make sense to me, and I couldn't find any Oracle documentation that would explain what I'm seeing.
Given the same version (major, minor, release, etc.) of the JVM on two different operating systems, would invoking `javac` on the same source files result in different optimizations within the Java bytecode (the `.class` files produced)? Is there any documentation that explains this behavior? | 2011/08/29 | [
"https://Stackoverflow.com/questions/7233865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/572/"
] | The compilation output should not depend on the OS on which javac was called.
If you want to verify it, try:
```
# -c disassembles the bytecode, so method bodies are compared too
me@windows@ javac Main.java
me@windows@ javap -c Main.class > Main.win.txt
me@linux@ javac Main.java
me@linux@ javap -c Main.class > Main.lin.txt
diff Main.win.txt Main.lin.txt
``` | To extend on dacwe's point "Maybe the Solaris JVM is slower than the Windows JVM?"
There are configuration options (e.g., whether to use the client or server vm [[link]](http://download.oracle.com/javase/6/docs/technotes/guides/vm/server-class.html), and probably others as well), whose defaults differ depending on the OS. So that might be a reason why the Solaris VM is slower here. |
7,233,865 | My understanding is that the Java bytecode produced by invoking `javac` is independent of the underlying operating system, but the HotSpot compiler will perform platform-specific JIT optimizations and compilations as the program is running.
However, I compiled code on Windows under a 32 bit JDK and executed it on Solaris under a 32 bit JVM (neither OS is a 64 bit operating system). The Solaris x86 box, to the best of my knowledge (working to confirm the specs on it) should outperform the Windows box in all regards (number of cores, amount of RAM, hard disk latency, processor speed, and so on). However, the same code is running measurably faster on Windows (a single data point would be a 7.5 second operation on Windows taking over 10 seconds on Solaris) on a consistent basis. My next test would be to compile on Solaris and note performance differences, but that just doesn't make sense to me, and I couldn't find any Oracle documentation that would explain what I'm seeing.
Given the same version (major, minor, release, etc.) of the JVM on two different operating systems, would invoking `javac` on the same source files result in different optimizations within the Java bytecode (the `.class` files produced)? Is there any documentation that explains this behavior? | 2011/08/29 | [
"https://Stackoverflow.com/questions/7233865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/572/"
] | No. `javac` does not do any optimizations on different platforms.
See the Oracle ["tools"](http://download.oracle.com/javase/1.4.2/docs/tooldocs/tools.html) page (where `javac` and other tools are described):
>
> Each of the development tools comes in a Microsoft Windows version (Windows) and a Solaris or Linux version. **There is virtually no difference in features between versions.** However, there are minor differences in configuration and usage to accommodate the special requirements of each operating system. (*For example, the way you specify directory separators depends on the OS.*)
>
>
>
---
(Maybe the Solaris JVM is slower than the Windows JVM?) | The compilation output should not depend on the OS on which javac was called.
If you want to verify it, try:
```
# -c disassembles the bytecode, so method bodies are compared too
me@windows@ javac Main.java
me@windows@ javap -c Main.class > Main.win.txt
me@linux@ javac Main.java
me@linux@ javap -c Main.class > Main.lin.txt
diff Main.win.txt Main.lin.txt
``` |
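If eyeballing `javap` output is tedious, the same verification can be scripted by hashing the raw `.class` bytes. A minimal sketch in Python, assuming the two compiled files have been copied to one machine under the hypothetical names shown in the comment:

```python
import hashlib

def file_digest(path):
    """Return the SHA-256 hex digest of a file's raw bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # read in chunks so arbitrarily large files work too
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# hypothetical paths: the same source compiled on two machines
# print(file_digest("Main.windows.class") == file_digest("Main.linux.class"))
```

Identical digests mean byte-for-byte identical compilation output, which is a stronger check than comparing disassembly listings.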
7,233,865 | My understanding is that the Java bytecode produced by invoking `javac` is independent of the underlying operating system, but the HotSpot compiler will perform platform-specific JIT optimizations and compilations as the program is running.
However, I compiled code on Windows under a 32 bit JDK and executed it on Solaris under a 32 bit JVM (neither OS is a 64 bit operating system). The Solaris x86 box, to the best of my knowledge (working to confirm the specs on it) should outperform the Windows box in all regards (number of cores, amount of RAM, hard disk latency, processor speed, and so on). However, the same code is running measurably faster on Windows (a single data point would be a 7.5 second operation on Windows taking over 10 seconds on Solaris) on a consistent basis. My next test would be to compile on Solaris and note performance differences, but that just doesn't make sense to me, and I couldn't find any Oracle documentation that would explain what I'm seeing.
Given the same version (major, minor, release, etc.) of the JVM on two different operating systems, would invoking `javac` on the same source files result in different optimizations within the Java bytecode (the `.class` files produced)? Is there any documentation that explains this behavior? | 2011/08/29 | [
"https://Stackoverflow.com/questions/7233865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/572/"
] | No. `javac` does not do any optimizations on different platforms.
See the Oracle ["tools"](http://download.oracle.com/javase/1.4.2/docs/tooldocs/tools.html) page (where `javac` and other tools are described):
>
> Each of the development tools comes in a Microsoft Windows version (Windows) and a Solaris or Linux version. **There is virtually no difference in features between versions.** However, there are minor differences in configuration and usage to accommodate the special requirements of each operating system. (*For example, the way you specify directory separators depends on the OS.*)
>
>
>
---
(Maybe the Solaris JVM is slower than the Windows JVM?) | >
> Does javac perform any bytecode level optimizations depending on the
> underlying operating system?
>
>
>
No.
Determining why the performance characteristics of your program are different on two platforms requires profiling them under the same workload, and careful analysis of method execution times and memory allocation/gc behavior. Does your program do any I/O? |
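The profiling advice above can be made concrete with a small harness that times one workload repeatedly and compares medians instead of single runs (sketched in Python for brevity; `workload` is a stand-in for the real multi-second operation):

```python
import statistics
import time

def time_workload(fn, repeats=5):
    """Run fn several times and return the median wall-clock seconds."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    # median is less sensitive to one-off spikes (GC, warm-up) than min/mean
    return statistics.median(samples)

def workload():
    # stand-in for the real 7.5-10 s operation from the question
    sum(i * i for i in range(100_000))

print(f"median: {time_workload(workload):.6f} s")
```

Running the identical harness on both machines removes single-run noise; remaining differences then point at the JIT, the OS, or I/O rather than the bytecode.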
24,943,706 | I am converting some VB.Net code over to C# and I am stuck on this linq query:
```
(From query In dtx.WebQueries Join _
wt In dtx.WebTasks On wt.TaskReportCategory Equals query.QueryCategory _
Where wt.TaskKey = taskKey _
//In the legacy code they reuse thekey below
Select query, thekey = query.QueryKey Order By query.QueryTitle _
Where Not (From qry In dtx.WebQueries Join _
qg In dtx.WebQueryGroups On qry.QueryKey Equals qg.QueryKey Join _
wp In dtx.WebPermissions On qg.QueryGroupNameKey Equals wp.TaskGroupNameKey Join _
wugn In dtx.WebUserGroupNames On wp.UserGroupNameKey Equals wugn.UserGroupNameKey Join _
wug In dtx.WebUserGroups On wugn.UserGroupNameKey Equals wug.UserGroupNameKey Join _
wt In dtx.WebTasks On wt.TaskReportCategory Equals qry.QueryCategory _
Where wp.ResourceKey = 4 _
And wt.TaskKey = taskKey _
And wug.UserKey = userKey _
//Here they reuse thekey and I am not sure how to assign it
Select qry.QueryKey).Contains(thekey))
```
I have converted all but one small piece of code:
```
(from query in dtx.WebQueries join
wt in dtx.WebTasks on query.QueryCategory equals wt.TaskReportCategory
where wt.TaskKey == taskKey
//I am not sure how to assign a variable here to use later
select new { query, var key = query.QueryKey}).OrderBy(x => x.QueryTitle)
.Where(!from qry in dtx.WebQueries join
qg in dtx.WebQueryGroups on qry.QueryKey equals qg.QueryKey join
wp in dtx.WebPermissions on qg.QueryGroupNameKey equals wp.TaskGroupNameKey join
wugn in dtx.WebUserGroupNames on wp.UserGroupNameKey equals wugn.UserGroupNameKey join
wug in dtx.WebUserGroups on wugn.UserGroupNameKey equals wug.UserGroupNameKey join
wt in dtx.WebTasks on qry.QueryCategory equals wt.TaskReportCategory
where wp.ResourceKey == 4
&& wt.TaskKey == taskKey
&& wug.UserKey == userKey
select qry.QueryKey) == key //I need to put the variable here (see above comment);
```
I am not sure how to do the part where the comments are in C#. | 2014/07/24 | [
"https://Stackoverflow.com/questions/24943706",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/512915/"
] | If you need to assign a variable you can use the LINQ [let clause](http://msdn.microsoft.com/en-us/library/bb383976.aspx). Something like
```
let key = query.QueryKey
```
that can later be used in the same scope. In your case, though, I would say that just removing the var keyword from the anonymous type should allow you to reference it later in your where clause. Something like:
```
select new { query, key = query.QueryKey}).OrderBy(x => x.query.QueryTitle)
.Where(q => !(from qry in dtx.WebQueries join
qg in dtx.WebQueryGroups on qry.QueryKey equals qg.QueryKey join
wp in dtx.WebPermissions on qg.QueryGroupNameKey equals wp.TaskGroupNameKey join
wugn in dtx.WebUserGroupNames on wp.UserGroupNameKey equals wugn.UserGroupNameKey join
wug in dtx.WebUserGroups on wugn.UserGroupNameKey equals wug.UserGroupNameKey join
wt in dtx.WebTasks on qry.QueryCategory equals wt.TaskReportCategory
where wp.ResourceKey == 4
&& wt.TaskKey == taskKey
&& wug.UserKey == userKey
select qry.QueryKey).Contains(q.key))
``` | Try the following:
```
(from x in
     (from query in dtx.WebQueries
      join wt in dtx.WebTasks on query.QueryCategory equals wt.TaskReportCategory
      where wt.TaskKey == taskKey
      orderby query.QueryTitle
      select new { query, thekey = query.QueryKey })
 where !(
     from qry in dtx.WebQueries
     join qg in dtx.WebQueryGroups on qry.QueryKey equals qg.QueryKey
     join wp in dtx.WebPermissions on qg.QueryGroupNameKey equals wp.TaskGroupNameKey
     join wugn in dtx.WebUserGroupNames on wp.UserGroupNameKey equals wugn.UserGroupNameKey
     join wug in dtx.WebUserGroups on wugn.UserGroupNameKey equals wug.UserGroupNameKey
     join wt in dtx.WebTasks on qry.QueryCategory equals wt.TaskReportCategory
     where wp.ResourceKey == 4 && wt.TaskKey == taskKey && wug.UserKey == userKey
     select qry.QueryKey).Contains(x.thekey)
 select x);
``` |
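The idea both answers lean on — binding an intermediate value such as `thekey` once so a later clause can reuse it — is not LINQ-specific. As a loose analogy in Python (all names here are invented for illustration), an assignment expression inside a comprehension plays the role of the `let` clause / anonymous-type member:

```python
queries = [
    {"key": 1, "title": "b", "category": "x"},
    {"key": 2, "title": "a", "category": "y"},
    {"key": 3, "title": "c", "category": "x"},
]
excluded_keys = {2}  # stand-in for the permission subquery's result

# bind each query's key once (the role LINQ's `let` plays)
# and filter on it, then order by title
visible = sorted(
    (q for q in queries if (k := q["key"]) not in excluded_keys),
    key=lambda q: q["title"],
)
print([q["key"] for q in visible])  # → [1, 3]
```

As in the C# version, the binding exists so the exclusion test and the final projection don't have to recompute the key expression.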
7,667,876 | I have some PHP AJAX code that is supposed to validate some parameters sent by jQuery and return some values. Currently, it consistently invokes the jQuery error case, and I am not sure why.
Here is my jQuery code:
```
$('.vote_up').click(function()
{
alert ( "test: " + $(this).attr("data-problem_id") );
problem_id = $(this).attr("data-problem_id");
var dataString = 'problem_id='+ problem_id + '&vote=+';
$.ajax({
type: "POST",
url: "/problems/vote.php",
dataType: "json",
data: dataString,
success: function(json)
{
// ? :)
alert (json);
},
error : function(json)
{
alert("ajax error, json: " + json);
//for (var i = 0, l = json.length; i < l; ++i)
//{
// alert (json[i]);
//}
}
});
//Return false to prevent page navigation
return false;
});
```
and here is the PHP code. The validation errors in PHP do occur, but I see no sign that the error that is happening on the PHP side is the one that is invoking the jQuery error case.
This is the snippet that gets invoked:
```
if ( empty ( $member_id ) || !isset ( $member_id ) )
{
error_log ( ".......error validating the problem - no member id");
$error = "not_logged_in";
echo json_encode ($error);
}
```
But how do I get the "not\_logged\_in" to show up in my JavaScript/jQuery code so that I know it is the bit that got returned? And if it isn't, how do I make it so that that error is what comes back to the jQuery?
Thanks! | 2011/10/05 | [
"https://Stackoverflow.com/questions/7667876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/731255/"
] | Don't echo $error through the json\_encode() method; just echo $error like so. Also, don't use the variable json; use the variable data. Edited code below:
### PHP
```
if ( empty ( $member_id ) || !isset ( $member_id ) )
{
error_log ( ".......error validating the problem - no member id");
$error = "not_logged_in";
echo $error;
}
```
### jQuery
```
$('.vote_up').click(function()
{
alert ( "test: " + $(this).attr("data-problem_id") );
problem_id = $(this).attr("data-problem_id");
var dataString = 'problem_id='+ problem_id + '&vote=+';
$.ajax({
type: "POST",
url: "/problems/vote.php",
dataType: "json",
data: dataString,
success: function(data)
{
// ? :)
alert (data);
},
error : function(data)
{
alert("ajax error, json: " + data);
//for (var i = 0, l = json.length; i < l; ++i)
//{
// alert (json[i]);
//}
}
});
//Return false to prevent page navigation
return false;
});
``` | jQuery uses the `.success(...)` method when the response status is `200` (OK); any other status, like `404` or `500`, is considered an error, so jQuery would use `.error(...)`. |
7,667,876 | I have some PHP AJAX code that is supposed to validate some parameters sent by jQuery and return some values. Currently, it consistently invokes the jQuery error case, and I am not sure why.
Here is my jQuery code:
```
$('.vote_up').click(function()
{
alert ( "test: " + $(this).attr("data-problem_id") );
problem_id = $(this).attr("data-problem_id");
var dataString = 'problem_id='+ problem_id + '&vote=+';
$.ajax({
type: "POST",
url: "/problems/vote.php",
dataType: "json",
data: dataString,
success: function(json)
{
// ? :)
alert (json);
},
error : function(json)
{
alert("ajax error, json: " + json);
//for (var i = 0, l = json.length; i < l; ++i)
//{
// alert (json[i]);
//}
}
});
//Return false to prevent page navigation
return false;
});
```
and here is the PHP code. The validation errors in PHP do occur, but I see no sign that the error that is happening on the PHP side is the one that is invoking the jQuery error case.
This is the snippet that gets invoked:
```
if ( empty ( $member_id ) || !isset ( $member_id ) )
{
error_log ( ".......error validating the problem - no member id");
$error = "not_logged_in";
echo json_encode ($error);
}
```
But how do I get the "not\_logged\_in" to show up in my JavaScript/jQuery code so that I know it is the bit that got returned? And if it isn't, how do I make it so that that error is what comes back to the jQuery?
Thanks! | 2011/10/05 | [
"https://Stackoverflow.com/questions/7667876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/731255/"
] | jQuery uses the `.success(...)` method when the response status is `200` (OK); any other status, like `404` or `500`, is considered an error, so jQuery would use `.error(...)`. | You must handle all output returned from the PHP script in the `success` handler in JavaScript. So a not-logged-in user in PHP can still (and normally should) result in a successful AJAX call.
If you are consistently getting the `error` handler in your JavaScript call, your PHP script was not run or is returning a real error instead of a JSON object.
According to the [manual](http://api.jquery.com/jQuery.ajax/), you have 3 variables available in the error handler, so just checking these will tell you exactly what the problem is:
```
// success
success: function(data)
{
if (data == 'not_logged_in') {
// not logged in
} else {
// data contains some json object
}
},
// ajax error
error: function(jqXHR, textStatus, errorThrown)
{
console.log(jqXHR);
console.log(textStatus);
console.log(errorThrown);
}
//
``` |
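The convention the answers describe is worth spelling out: an application-level failure such as "not logged in" should still travel back as HTTP 200 with a JSON body, so the client's `success` handler (not `error`) fires and can inspect the payload. A sketch of that server-side shape, using Python as a stand-in for the PHP endpoint (function and field names are illustrative):

```python
import json

def vote_endpoint(member_id):
    """Return (http_status, body). Application errors still use status 200."""
    if not member_id:
        # same shape the PHP snippet produces with json_encode($error)
        return 200, json.dumps({"error": "not_logged_in"})
    return 200, json.dumps({"voted": True})

status, body = vote_endpoint(member_id=None)
payload = json.loads(body)
print(status, payload.get("error"))  # 200 not_logged_in
```

Reserving non-200 statuses for genuine transport or server failures keeps jQuery's `error` callback meaningful as a "something actually broke" signal.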
7,667,876 | I have some PHP AJAX code that is supposed to validate some parameters sent by jQuery and return some values. Currently, it consistently invokes the jQuery error case, and I am not sure why.
Here is my jQuery code:
```
$('.vote_up').click(function()
{
alert ( "test: " + $(this).attr("data-problem_id") );
problem_id = $(this).attr("data-problem_id");
var dataString = 'problem_id='+ problem_id + '&vote=+';
$.ajax({
type: "POST",
url: "/problems/vote.php",
dataType: "json",
data: dataString,
success: function(json)
{
// ? :)
alert (json);
},
error : function(json)
{
alert("ajax error, json: " + json);
//for (var i = 0, l = json.length; i < l; ++i)
//{
// alert (json[i]);
//}
}
});
//Return false to prevent page navigation
return false;
});
```
and here is the PHP code. The validation errors in PHP do occur, but I see no sign that the error that is happening on the PHP side is the one that is invoking the jQuery error case.
This is the snippet that gets invoked:
```
if ( empty ( $member_id ) || !isset ( $member_id ) )
{
error_log ( ".......error validating the problem - no member id");
$error = "not_logged_in";
echo json_encode ($error);
}
```
But how do I get the "not\_logged\_in" to show up in my JavaScript/jQuery code so that I know it is the bit that got returned? And if it isn't, how do I make it so that that error is what comes back to the jQuery?
Thanks! | 2011/10/05 | [
"https://Stackoverflow.com/questions/7667876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/731255/"
] | Don't echo $error through the json\_encode() method; just echo $error like so. Also, don't use the variable json; use the variable data. Edited code below:
### PHP
```
if ( empty ( $member_id ) || !isset ( $member_id ) )
{
error_log ( ".......error validating the problem - no member id");
$error = "not_logged_in";
echo $error;
}
```
### jQuery
```
$('.vote_up').click(function()
{
alert ( "test: " + $(this).attr("data-problem_id") );
problem_id = $(this).attr("data-problem_id");
var dataString = 'problem_id='+ problem_id + '&vote=+';
$.ajax({
type: "POST",
url: "/problems/vote.php",
dataType: "json",
data: dataString,
success: function(data)
{
// ? :)
alert (data);
},
error : function(data)
{
alert("ajax error, json: " + data);
//for (var i = 0, l = json.length; i < l; ++i)
//{
// alert (json[i]);
//}
}
});
//Return false to prevent page navigation
return false;
});
``` | You must handle all output returned from the PHP script in the `success` handler in JavaScript. So a not-logged-in user in PHP can still (and normally should) result in a successful AJAX call.
If you are consistently getting the `error` handler in your JavaScript call, your PHP script was not run or is returning a real error instead of a JSON object.
According to the [manual](http://api.jquery.com/jQuery.ajax/), you have 3 variables available in the error handler, so just checking these will tell you exactly what the problem is:
```
// success
success: function(data)
{
if (data == 'not_logged_in') {
// not logged in
} else {
// data contains some json object
}
},
// ajax error
error: function(jqXHR, textStatus, errorThrown)
{
console.log(jqXHR);
console.log(textStatus);
console.log(errorThrown);
}
//
``` |
35,873,933 | I'm trying to add prints inside a nosetests run that show how much of the test has passed, but I do not want to use a full carriage return.
It should look like:
my\_test\_module.MyTestCase.test\_somthing 10%
my\_test\_module.MyTestCase.test\_somthing 20%
...
my\_test\_module.MyTestCase.test\_somthing 100%
my\_test\_module.MyTestCase.test\_somthing ok
all in the same line.
I cannot use "\r" because it will overwrite the entire line. I need a way to use some "partial carriage return" of a given number of letters.
How can I do it? | 2016/03/08 | [
"https://Stackoverflow.com/questions/35873933",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4615411/"
] | You could just apply `numpy.floor`:
```
import numpy as np
tempDF['int_measure'] = tempDF['measure'].apply(np.floor)
id measure int_measure
0 12 3.2 3
1 12 4.2 4
2 12 6.8 6
...
9 51 2.1 2
10 51 NaN NaN
11 51 3.5 3
...
19 91 7.3 7
``` | You could also try:
```
df.apply(lambda s: s // 1)
```
Using `np.floor` is faster, however. |
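For the two suggestions above, a quick sketch confirming they agree and that both leave `NaN` alone (requires `pandas` and `numpy`; the column name is taken from the example data):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"measure": [3.2, 4.2, 6.8, np.nan, 2.1]})

floored = df["measure"].apply(np.floor)  # first answer's approach
divided = df["measure"] // 1             # second answer's s // 1, per column

# identical results, NaN preserved in both, dtype stays float64
print(floored.equals(divided))  # True
print(floored.tolist())
```

Note that both results remain floats; a true integer dtype would require a cast, which is where the `NaN` caveat discussed below in the thread comes in.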
35,873,933 | I'm trying to add prints inside a nosetests run that show how much of the test has passed, but I do not want to use a full carriage return.
It should look like:
my\_test\_module.MyTestCase.test\_something 10%
my\_test\_module.MyTestCase.test\_something 20%
...
my\_test\_module.MyTestCase.test\_something 100%
my\_test\_module.MyTestCase.test\_something ok
all on the same line.
I cannot use "\r" because it would overwrite the entire line. I need a way to do a partial carriage return over a given number of characters.
How can I do it? | 2016/03/08 | [
"https://Stackoverflow.com/questions/35873933",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4615411/"
] | You could just apply `numpy.floor`;
```
import numpy as np
tempDF['int_measure'] = tempDF['measure'].apply(np.floor)
id measure int_measure
0 12 3.2 3
1 12 4.2 4
2 12 6.8 6
...
9 51 2.1 2
10 51 NaN NaN
11 51 3.5 3
...
19 91 7.3 7
``` | The answers here are pretty dated; as of pandas 0.25.2 (perhaps earlier), the older forms can raise the warning
```
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
```
A form that avoids the warning would be
```
df.iloc[:,0] = df.iloc[:,0].astype(int)
```
for one particular column. |
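A minimal, hypothetical illustration of the `.iloc` form above (requires pandas; the frame stands in for the `tempDF` of the earlier answers, and note the warning itself only appears when the frame is a slice of another one):

```python
import pandas as pd

# Hypothetical frame mirroring tempDF from the earlier answers.
df = pd.DataFrame({"id": [12, 12, 51], "measure": [3.2, 4.2, 6.8]})

# Positional assignment back into the same column, as in the answer above.
df.iloc[:, 1] = df.iloc[:, 1].astype(int)

print(df["measure"].tolist())
```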
35,873,933 | I'm trying to add prints inside a nosetests run that show how much of the test has passed, but I do not want to use a full carriage return.
It should look like:
my\_test\_module.MyTestCase.test\_something 10%
my\_test\_module.MyTestCase.test\_something 20%
...
my\_test\_module.MyTestCase.test\_something 100%
my\_test\_module.MyTestCase.test\_something ok
all on the same line.
I cannot use "\r" because it would overwrite the entire line. I need a way to do a partial carriage return over a given number of characters.
How can I do it? | 2016/03/08 | [
"https://Stackoverflow.com/questions/35873933",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4615411/"
] | You could also try:
```
df.apply(lambda s: s // 1)
```
Using `np.floor` is faster, however. | The answers here are pretty dated; as of pandas 0.25.2 (perhaps earlier), the older forms can raise the warning
```
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
```
A form that avoids the warning would be
```
df.iloc[:,0] = df.iloc[:,0].astype(int)
```
for one particular column. |
11,762,696 | I am getting the following error when trying to retrieve a float from the database:
**The 'Hours' property on 'WorkHours' could not be set to a 'Double' value. You must set this property to a non-null value of type 'Single'.**
The Hours property in the WorkHours Entity is:
**public Single? Hours {get; set;}**
Table design type:
**Hours Float**
When I store a value it is stored as a double (16 digits after the point); I believe a float has 7 digits after the point.
Any ideas why I am getting that error?
Thanks
MVC3, EF4, Sql Server 8 (2000) | 2012/08/01 | [
"https://Stackoverflow.com/questions/11762696",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1172730/"
] | Google does not call this method in their Jelly Bean speech app (QuickSearchBox). It's simply not in the code. Unless there is an official comment from a Google engineer, I cannot give a definite answer as to "why" they did this. I did search the developer forums but did not see any commentary about this decision.
The ICS default for speech recognition comes from Google's VoiceSearch.apk. You can decompile this apk and find that there is an Activity to handle an intent of action \*android.speech.action.RECOGNIZE\_SPEECH\*. In this apk I searched for "onBufferReceived" and found a reference to it in *com.google.android.voicesearch.GoogleRecognitionService$RecognitionCallback*.
With Jelly Bean, Google renamed VoiceSearch.apk to QuickSearch.apk and made a lot of new additions to the app (e.g. offline dictation). You would expect to still find an onBufferReceived call, but for some reason it is completely gone. | I too was using the onBufferReceived method and was disappointed that the (non-guaranteed) call to the method was dropped in Jelly Bean. Well, if we can't grab the audio with onBufferReceived(), maybe there is a possibility of running an AudioRecord simultaneously with voice recognition. Has anyone tried this? If not, I'll give it a whirl and report back. |
11,762,696 | I am getting the following error when trying to retrieve a float from the database:
**The 'Hours' property on 'WorkHours' could not be set to a 'Double' value. You must set this property to a non-null value of type 'Single'.**
The Hours property in the WorkHours Entity is:
**public Single? Hours {get; set;}**
Table design type:
**Hours Float**
When I store a value it is stored as a double (16 digits after the point); I believe a float has 7 digits after the point.
Any ideas why I am getting that error?
Thanks
MVC3, EF4, Sql Server 8 (2000) | 2012/08/01 | [
"https://Stackoverflow.com/questions/11762696",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1172730/"
] | Google does not call this method in their Jelly Bean speech app (QuickSearchBox). It's simply not in the code. Unless there is an official comment from a Google engineer, I cannot give a definite answer as to "why" they did this. I did search the developer forums but did not see any commentary about this decision.
The ICS default for speech recognition comes from Google's VoiceSearch.apk. You can decompile this apk and find that there is an Activity to handle an intent of action \*android.speech.action.RECOGNIZE\_SPEECH\*. In this apk I searched for "onBufferReceived" and found a reference to it in *com.google.android.voicesearch.GoogleRecognitionService$RecognitionCallback*.
With Jelly Bean, Google renamed VoiceSearch.apk to QuickSearch.apk and made a lot of new additions to the app (e.g. offline dictation). You would expect to still find an onBufferReceived call, but for some reason it is completely gone. | I ran into the same problem. The reason why I didn't just accept that "this does not work" is that Google Now's "note-to-self" records the audio and sends it to you. What I found in logcat while running the "note-to-self" operation was:
```
02-20 14:04:59.664: I/AudioService(525): AudioFocus requestAudioFocus() from android.media.AudioManager@42439ca8com.google.android.voicesearch.audio.ByteArrayPlayer$1@424cca50
02-20 14:04:59.754: I/AbstractCardController.SelfNoteController(8675): #attach
02-20 14:05:01.006: I/AudioService(525): AudioFocus abandonAudioFocus() from android.media.AudioManager@42439ca8com.google.android.voicesearch.audio.ByteArrayPlayer$1@424cca50
02-20 14:05:05.791: I/ActivityManager(525): START u0 {act=com.google.android.gm.action.AUTO_SEND typ=text/plain cmp=com.google.android.gm/.AutoSendActivity (has extras)} from pid 8675
02-20 14:05:05.821: I/AbstractCardView.SelfNoteCard(8675): #onViewDetachedFromWindow
```
This makes me believe that Google disposes of the audio focus from Google Now (the recognizerIntent), and that they use an audio recorder or something similar when the note-to-self tag appears in onPartialResults. I cannot confirm this; has anyone else tried to make this work? |
11,762,696 | I am getting the following error when trying to retrieve a float from the database:
**The 'Hours' property on 'WorkHours' could not be set to a 'Double' value. You must set this property to a non-null value of type 'Single'.**
The Hours property in the WorkHours Entity is:
**public Single? Hours {get; set;}**
Table design type:
**Hours Float**
When I store a value it is stored as a double (16 digits after the point); I believe a float has 7 digits after the point.
Any ideas why I am getting that error?
Thanks
MVC3, EF4, Sql Server 8 (2000) | 2012/08/01 | [
"https://Stackoverflow.com/questions/11762696",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1172730/"
] | Google does not call this method in their Jelly Bean speech app (QuickSearchBox). It's simply not in the code. Unless there is an official comment from a Google engineer, I cannot give a definite answer as to "why" they did this. I did search the developer forums but did not see any commentary about this decision.
The ICS default for speech recognition comes from Google's VoiceSearch.apk. You can decompile this apk and find that there is an Activity to handle an intent of action \*android.speech.action.RECOGNIZE\_SPEECH\*. In this apk I searched for "onBufferReceived" and found a reference to it in *com.google.android.voicesearch.GoogleRecognitionService$RecognitionCallback*.
With Jelly Bean, Google renamed VoiceSearch.apk to QuickSearch.apk and made a lot of new additions to the app (e.g. offline dictation). You would expect to still find an onBufferReceived call, but for some reason it is completely gone. | I have a service that implements RecognitionListener, and I also override the onBufferReceived(byte[]) method. I was investigating why speech recognition is much slower to call onResults() on <= ICS. The only difference I could find was that onBufferReceived is called on phones <= ICS. On Jelly Bean, onBufferReceived() is never called and onResults() is called significantly faster; I think it's because of the overhead of calling onBufferReceived every second or millisecond. Maybe that's why they did away with onBufferReceived()? |
11,762,696 | I am getting the following error when trying to retrieve a float from the database:
**The 'Hours' property on 'WorkHours' could not be set to a 'Double' value. You must set this property to a non-null value of type 'Single'.**
The Hours property in the WorkHours Entity is:
**public Single? Hours {get; set;}**
Table design type:
**Hours Float**
When I store a value it is stored as a double (16 digits after the point); I believe a float has 7 digits after the point.
Any ideas why I am getting that error?
Thanks
MVC3, EF4, Sql Server 8 (2000) | 2012/08/01 | [
"https://Stackoverflow.com/questions/11762696",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1172730/"
] | I too was using the onBufferReceived method and was disappointed that the (non-guaranteed) call to the method was dropped in Jelly Bean. Well, if we can't grab the audio with onBufferReceived(), maybe there is a possibility of running an AudioRecord simultaneously with voice recognition. Has anyone tried this? If not, I'll give it a whirl and report back. | I ran into the same problem. The reason why I didn't just accept that "this does not work" is that Google Now's "note-to-self" records the audio and sends it to you. What I found in logcat while running the "note-to-self" operation was:
```
02-20 14:04:59.664: I/AudioService(525): AudioFocus requestAudioFocus() from android.media.AudioManager@42439ca8com.google.android.voicesearch.audio.ByteArrayPlayer$1@424cca50
02-20 14:04:59.754: I/AbstractCardController.SelfNoteController(8675): #attach
02-20 14:05:01.006: I/AudioService(525): AudioFocus abandonAudioFocus() from android.media.AudioManager@42439ca8com.google.android.voicesearch.audio.ByteArrayPlayer$1@424cca50
02-20 14:05:05.791: I/ActivityManager(525): START u0 {act=com.google.android.gm.action.AUTO_SEND typ=text/plain cmp=com.google.android.gm/.AutoSendActivity (has extras)} from pid 8675
02-20 14:05:05.821: I/AbstractCardView.SelfNoteCard(8675): #onViewDetachedFromWindow
```
This makes me believe that Google disposes of the audio focus from Google Now (the recognizerIntent), and that they use an audio recorder or something similar when the note-to-self tag appears in onPartialResults. I cannot confirm this; has anyone else tried to make this work? |
11,762,696 | I am getting the following error when trying to retrieve a float from the database:
**The 'Hours' property on 'WorkHours' could not be set to a 'Double' value. You must set this property to a non-null value of type 'Single'.**
The Hours property in the WorkHours Entity is:
**public Single? Hours {get; set;}**
Table design type:
**Hours Float**
When I store a value it is stored as a double (16 digits after the point); I believe a float has 7 digits after the point.
Any ideas why I am getting that error?
Thanks
MVC3, EF4, Sql Server 8 (2000) | 2012/08/01 | [
"https://Stackoverflow.com/questions/11762696",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1172730/"
] | I too was using the onBufferReceived method and was disappointed that the (non-guaranteed) call to the method was dropped in Jelly Bean. Well, if we can't grab the audio with onBufferReceived(), maybe there is a possibility of running an AudioRecord simultaneously with voice recognition. Has anyone tried this? If not, I'll give it a whirl and report back. | I have a service that implements RecognitionListener, and I also override the onBufferReceived(byte[]) method. I was investigating why speech recognition is much slower to call onResults() on <= ICS. The only difference I could find was that onBufferReceived is called on phones <= ICS. On Jelly Bean, onBufferReceived() is never called and onResults() is called significantly faster; I think it's because of the overhead of calling onBufferReceived every second or millisecond. Maybe that's why they did away with onBufferReceived()? |
6,291,577 | I am using XSL to transform my XML file into a smaller XML file. My code fragments are as follows:
```
public class MessageTransformer {
public static void main(String[] args) {
try {
TransformerFactory transformerFactory = TransformerFactory.newInstance();
Transformer transformer = transformerFactory.newTransformer (new StreamSource("sample.xsl"));
transformer.setOutputProperty(OutputKeys.ENCODING, "UTF-8");
transformer.setOutputProperty(OutputKeys.INDENT, "yes");
transformer.setOutputProperty(OutputKeys.METHOD, "xml");
transformer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "2");
transformer.transform(new StreamSource ("sample.xml"),
new StreamResult( new FileOutputStream("sample.xml"))
);
}
catch (Exception e) {
e.printStackTrace( );
}
}
}
```
I got this error
```
ERROR: 'Premature end of file.'
ERROR: 'com.sun.org.apache.xml.internal.utils.WrappedRuntimeException: Premature end of file.'
```
When I use the XSL file to transform the XML manually I don't have any problem. However, with this Java file I cannot transform it.
What would be the problem? | 2011/06/09 | [
"https://Stackoverflow.com/questions/6291577",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/517811/"
] | You are streaming from and to the same file. Try changing it to something like this:
```
transformer.transform(new StreamSource ("sample.xml"),
new StreamResult( new FileOutputStream("sample_result.xml"))
);
``` | Make sure your XML file is complete and well-formed; otherwise it will give the error: **Premature end of file.**
```
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Customer>
<person>
<Customer_ID>1</Customer_ID>
<name>Nirav Modi</name>
</person>
<person>
<Customer_ID>2</Customer_ID>
<name>Nihar dave</name>
</person>
</Customer>
```
Like this and try again. |
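The first answer's diagnosis (streaming from and to the same file) reflects a general hazard: opening a path for writing truncates it before any read happens, so the parser sees an empty document. A quick Python illustration of the hazard (not the Java fix itself):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "sample.xml")
with open(path, "w") as f:
    f.write('<delivery number="2" amount="3" />')

# Opening the same path for writing truncates it immediately on open,
# so a reader that starts afterwards finds an empty file.
sink = open(path, "w")
with open(path) as reader:
    content = reader.read()
sink.close()

print(repr(content))  # '' -- the input vanished before it could be read
```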
54,985,072 | Hi, I am new to the Spring test framework. I have a Spring bean which looks like this:
```
class BeanA {
    @Autowired
    BeanB beanB;

    @Autowired
    BeanC beanC;
}
```
I want to mock Bean A and its internal dependencies as well.
When I try to instantiate a mock instance of Bean A using Mockito, it fails with "UnsatisfiedDependencyException".
So I have to go and find each and every dependency in Bean A and mock them individually.
Is there a way I can mock a spring bean and all its internal dependencies as well in a single go? | 2019/03/04 | [
"https://Stackoverflow.com/questions/54985072",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4159389/"
] | Hooks should always be called outside conditionals, as mentioned in the [Rules of Hooks](https://reactjs.org/docs/hooks-overview.html#rules-of-hooks). You can move your hook to the top-level and move the conditional inside the hook.
```
const Map = (props) => {
const { data } = props
useEffect(() => {
if (data['features'] != null) {
const map = getMap();
map.on('load', function () {
map.addSource('malls', {
type: "geojson",
data: data,
cluster: true,
clusterMaxZoom: 14,
clusterRadius: 50
});
});
}
}, [data]);
return (
<div style={style} id="mapContainer" />
);
}
``` | Instead of writing it as a function, you should use a class and implement the [`shouldComponentUpdate`](https://reactjs.org/docs/react-component.html#shouldcomponentupdate) function. |
41,402,454 | I am trying to deserialize a string to an object. It is XML-node-like syntax, but it is not valid XML (as there is no root node or namespace). This is what I have so far; I am getting this error:
>
> `<delivery xmlns=''>. was not expected`
>
>
>
Deserialize code:
```
var number = 2;
var amount = 3;
var xmlCommand = $"<delivery number=\"{number}\" amount=\"{amount}\" />";
XmlSerializer serializer = new XmlSerializer(typeof(Delivery));
var rdr = new StringReader(xmlCommand);
Delivery delivery = (Delivery)serializer.Deserialize(rdr);
```
Delivery object:
```
using System.Xml.Serialization;
namespace SOMWClient.Events
{
public class Delivery
{
[XmlAttribute(AttributeName = "number")]
public int Number { get; set; }
[XmlAttribute(AttributeName = "amount")]
public string Amount { get; set; }
public Delivery()
{
}
}
}
```
How can I avoid the xmlns error when deserializing? | 2016/12/30 | [
"https://Stackoverflow.com/questions/41402454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/310107/"
] | Summary:
1. check if tensorflow sees your GPU (optional)
2. check if your videocard can work with tensorflow (optional)
3. [find versions of CUDA Toolkit and cuDNN SDK, compatible with your tf version](https://www.tensorflow.org/install/source#linux)
4. [install CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit-archive)
5. [install cuDNN SDK](https://developer.nvidia.com/rdp/cudnn-archive)
6. pip uninstall tensorflow; pip install tensorflow-gpu
7. check if tensorflow sees your GPU
`*` source - <https://www.tensorflow.org/install/gpu>
Detailed instruction:
1. check if tensorflow sees your GPU (optional)
```
from tensorflow.python.client import device_lib
def get_available_devices():
local_device_protos = device_lib.list_local_devices()
return [x.name for x in local_device_protos]
print(get_available_devices())
# my output was => ['/device:CPU:0']
# good output must be => ['/device:CPU:0', '/device:GPU:0']
```
2. check if your card can work with tensorflow (optional)
* my PC: GeForce GTX 1060 notebook (driver version - 419.35), windows 10, jupyter notebook
* tensorflow needs Compute Capability 3.5 or higher. (<https://www.tensorflow.org/install/gpu#hardware_requirements>)
* <https://developer.nvidia.com/cuda-gpus>
* select "CUDA-Enabled GeForce Products"
* result - "GeForce GTX 1060 Compute Capability = 6.1"
* my card can work with tf!
3. find versions of CUDA Toolkit and cuDNN SDK, that you need
a) find your tf version
```
import sys
print (sys.version)
# 3.6.4 |Anaconda custom (64-bit)| (default, Jan 16 2018, 10:22:32) [MSC v.1900 64 bit (AMD64)]
import tensorflow as tf
print(tf.__version__)
# my output was => 1.13.1
```
b) find right versions of CUDA Toolkit and cuDNN SDK for your tf version
```
https://www.tensorflow.org/install/source#linux
* it is written for linux, but worked in my case
see, that tensorflow_gpu-1.13.1 needs: CUDA Toolkit v10.0, cuDNN SDK v7.4
```
4. install CUDA Toolkit
a) install CUDA Toolkit 10.0
```
https://developer.nvidia.com/cuda-toolkit-archive
select: CUDA Toolkit 10.0 and download base installer (2 GB)
installation settings: select only CUDA
(my installation path was: D:\Programs\x64\Nvidia\Cuda_v_10_0\Development)
```
b) add environment variables:
```
system variables / path must have:
D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\bin
D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\libnvvp
D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\extras\CUPTI\libx64
D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\include
```
5. install cuDNN SDK
a) download cuDNN SDK v7.4
```
https://developer.nvidia.com/rdp/cudnn-archive (needs registration, but it is simple)
select "Download cuDNN v7.4.2 (Dec 14, 2018), for CUDA 10.0"
```
b) add path to 'bin' folder into "environment variables / system variables / path":
```
D:\Programs\x64\Nvidia\cudnn_for_cuda_10_0\bin
```
6. pip uninstall tensorflow
pip install tensorflow-gpu
7. check if tensorflow sees your GPU
```
- restart your PC
- print(get_available_devices())
- # now this code should return => ['/device:CPU:0', '/device:GPU:0']
``` | So as of 2022-04, the `tensorflow` package contains both CPU and GPU builds. To install a GPU build, search to see what's available:
```
λ conda search tensorflow
Loading channels: done
# Name Version Build Channel
tensorflow 0.12.1 py35_1 conda-forge
tensorflow 0.12.1 py35_2 conda-forge
tensorflow 1.0.0 py35_0 conda-forge
…
tensorflow 2.5.0 mkl_py39h1fa1df6_0 pkgs/main
tensorflow 2.6.0 eigen_py37h37bbdb1_0 pkgs/main
tensorflow 2.6.0 eigen_py38h63d3545_0 pkgs/main
tensorflow 2.6.0 eigen_py39h855417c_0 pkgs/main
tensorflow 2.6.0 gpu_py37h3e8f0e3_0 pkgs/main
tensorflow 2.6.0 gpu_py38hc0e8100_0 pkgs/main
tensorflow 2.6.0 gpu_py39he88c5ba_0 pkgs/main
tensorflow 2.6.0 mkl_py37h9623b36_0 pkgs/main
tensorflow 2.6.0 mkl_py38hdc16138_0 pkgs/main
tensorflow 2.6.0 mkl_py39h31650da_0 pkgs/main
```
You can see that there are builds of TF 2.6.0 that support Python 3.7, 3.8 and 3.9, and that are built for [MKL](https://en.wikipedia.org/wiki/Math_Kernel_Library) (Intel CPU), [Eigen](https://eigen.tuxfamily.org/), or GPU.
To narrow it down, you can use wildcards in the search. This will find any Tensorflow 2.x version that is built for GPU, for instance:
```
λ conda search tensorflow=2*=gpu*
Loading channels: done
# Name Version Build Channel
tensorflow 2.0.0 gpu_py36hfdd5754_0 pkgs/main
tensorflow 2.0.0 gpu_py37h57d29ca_0 pkgs/main
tensorflow 2.1.0 gpu_py36h3346743_0 pkgs/main
tensorflow 2.1.0 gpu_py37h7db9008_0 pkgs/main
tensorflow 2.5.0 gpu_py37h23de114_0 pkgs/main
tensorflow 2.5.0 gpu_py38h8e8c102_0 pkgs/main
tensorflow 2.5.0 gpu_py39h7dc34a2_0 pkgs/main
tensorflow 2.6.0 gpu_py37h3e8f0e3_0 pkgs/main
tensorflow 2.6.0 gpu_py38hc0e8100_0 pkgs/main
tensorflow 2.6.0 gpu_py39he88c5ba_0 pkgs/main
```
To install a specific version in an otherwise empty environment, you can use a command like:
```
λ conda activate tf
(tf) λ conda install tensorflow=2.6.0=gpu_py39he88c5ba_0
…
The following NEW packages will be INSTALLED:
_tflow_select pkgs/main/win-64::_tflow_select-2.1.0-gpu
…
cudatoolkit pkgs/main/win-64::cudatoolkit-11.3.1-h59b6b97_2
cudnn pkgs/main/win-64::cudnn-8.2.1-cuda11.3_0
…
tensorflow pkgs/main/win-64::tensorflow-2.6.0-gpu_py39he88c5ba_0
tensorflow-base pkgs/main/win-64::tensorflow-base-2.6.0-gpu_py39hb3da07e_0
…
```
As you can see, if you install a GPU build, it will automatically also install compatible `cudatoolkit` and `cudnn` packages. You don't need to manually check versions for compatibility, or manually download several gigabytes from Nvidia's website, or register as a developer, as it says in other answers or on [the official website](https://www.tensorflow.org/install/gpu).
After installation, confirm that it worked and it sees the GPU by running:
```
λ python
Python 3.9.12 (main, Apr 4 2022, 05:22:27) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.__version__
'2.6.0'
>>> tf.config.list_physical_devices()
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
```
Getting conda to install a GPU build *and* other packages you want to use is another story, however, because there are a lot of package incompatibilities for me. I think the best you can do is specify the installation criteria using wildcards and cross your fingers.
This tries to install *any* TF 2.x version that's built for GPU and that has dependencies compatible with Spyder and matplotlib's dependencies, for instance:
```
λ conda install tensorflow=2*=gpu* spyder matplotlib
```
For me, this ended up installing a two year old GPU version of tensorflow:
```
matplotlib pkgs/main/win-64::matplotlib-3.5.1-py37haa95532_1
spyder pkgs/main/win-64::spyder-5.1.5-py37haa95532_1
tensorflow pkgs/main/win-64::tensorflow-2.1.0-gpu_py37h7db9008_0
```
I had previously been using the `tensorflow-gpu` package, but that doesn't work anymore. conda typically grinds forever trying to find compatible packages to install, and even when it's installed, it doesn't actually install a gpu build of tensorflow *or* the CUDA dependencies:
```
λ conda list
…
cookiecutter 1.7.2 pyhd3eb1b0_0
cryptography 3.4.8 py38h71e12ea_0
cycler 0.11.0 pyhd3eb1b0_0
dataclasses 0.8 pyh6d0b6a4_7
…
tensorflow 2.3.0 mkl_py38h8557ec7_0
tensorflow-base 2.3.0 eigen_py38h75a453f_0
tensorflow-estimator 2.6.0 pyh7b7c402_0
tensorflow-gpu 2.3.0 he13fc11_0
``` |
41,402,454 | I am trying to deserialize a string to an object. It is XML-node-like syntax, but it is not valid XML (as there is no root node or namespace). This is what I have so far; I am getting this error:
>
> `<delivery xmlns=''>. was not expected`
>
>
>
Deserialize code:
```
var number = 2;
var amount = 3;
var xmlCommand = $"<delivery number=\"{number}\" amount=\"{amount}\" />";
XmlSerializer serializer = new XmlSerializer(typeof(Delivery));
var rdr = new StringReader(xmlCommand);
Delivery delivery = (Delivery)serializer.Deserialize(rdr);
```
Delivery object:
```
using System.Xml.Serialization;
namespace SOMWClient.Events
{
public class Delivery
{
[XmlAttribute(AttributeName = "number")]
public int Number { get; set; }
[XmlAttribute(AttributeName = "amount")]
public string Amount { get; set; }
public Delivery()
{
}
}
}
```
How can I avoid the xmlns error when deserializing? | 2016/12/30 | [
"https://Stackoverflow.com/questions/41402454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/310107/"
] | I have had an issue where I needed the latest TensorFlow (2.8.0 at the time of writing) with GPU support running in a conda environment. The problem was that it was not available via conda. What I did was
```
conda install cudatoolkit==11.2
pip install tensorflow-gpu==2.8.0
```
Although I had checked that the CUDA toolkit version was compatible with the TensorFlow version, it was still returning an error where `libcudart.so.11.0` was not found. As a result, GPUs were not visible. The remedy was to set the [environment variable](https://forums.developer.nvidia.com/t/path-ld-library-path/48080) `LD_LIBRARY_PATH` to point to your `anaconda3/envs/<your_tensorflow_environment>/lib` with this command
```
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/<user>/anaconda3/envs/<your_tensorflow_environment>/lib
```
Unless you make it permanent, you will need to create this variable every time you start a terminal prior to a session (jupyter notebook). It can be conveniently automated by following this [procedure](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#macos-and-linux) from conda's official website. | In my case, I had a working tensorflow-gpu version 1.14 but suddenly it stopped working. I fixed the problem using:
```
pip uninstall tensorflow-gpu==1.14
pip install tensorflow-gpu==1.14
``` |
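The "make it permanent" step mentioned in the first answer uses conda's per-environment activation hooks: any script dropped into `<env>/etc/conda/activate.d` runs on `conda activate`. A hypothetical sketch of writing such a hook (the file name `env_vars.sh` is conventional, not required):

```python
import os

def write_activation_hook(env_prefix: str) -> str:
    """Write an activate.d script that extends LD_LIBRARY_PATH with <env>/lib."""
    hook_dir = os.path.join(env_prefix, "etc", "conda", "activate.d")
    os.makedirs(hook_dir, exist_ok=True)
    hook_path = os.path.join(hook_dir, "env_vars.sh")
    lib_dir = os.path.join(env_prefix, "lib")
    with open(hook_path, "w") as f:
        f.write('export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:{}"\n'.format(lib_dir))
    return hook_path
```

After this, the variable is set automatically on every `conda activate`, so Jupyter sessions started from the activated environment inherit it.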
41,402,454 | I am trying to deserialize a string to an object. It is XML-node-like syntax, but it is not valid XML (as there is no root node or namespace). This is what I have so far; I am getting this error:
>
> `<delivery xmlns=''>. was not expected`
>
>
>
Deserialize code:
```
var number = 2;
var amount = 3;
var xmlCommand = $"<delivery number=\"{number}\" amount=\"{amount}\" />";
XmlSerializer serializer = new XmlSerializer(typeof(Delivery));
var rdr = new StringReader(xmlCommand);
Delivery delivery = (Delivery)serializer.Deserialize(rdr);
```
Delivery object:
```
using System.Xml.Serialization;
namespace SOMWClient.Events
{
public class Delivery
{
[XmlAttribute(AttributeName = "number")]
public int Number { get; set; }
[XmlAttribute(AttributeName = "amount")]
public string Amount { get; set; }
public Delivery()
{
}
}
}
```
How can I avoid the xmlns error when deserializing? | 2016/12/30 | [
"https://Stackoverflow.com/questions/41402454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/310107/"
] | Summary:
1. check if tensorflow sees your GPU (optional)
2. check if your videocard can work with tensorflow (optional)
3. [find versions of CUDA Toolkit and cuDNN SDK, compatible with your tf version](https://www.tensorflow.org/install/source#linux)
4. [install CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit-archive)
5. [install cuDNN SDK](https://developer.nvidia.com/rdp/cudnn-archive)
6. pip uninstall tensorflow; pip install tensorflow-gpu
7. check if tensorflow sees your GPU
`*` source - <https://www.tensorflow.org/install/gpu>
Detailed instruction:
1. check if tensorflow sees your GPU (optional)
```
from tensorflow.python.client import device_lib
def get_available_devices():
local_device_protos = device_lib.list_local_devices()
return [x.name for x in local_device_protos]
print(get_available_devices())
# my output was => ['/device:CPU:0']
# good output must be => ['/device:CPU:0', '/device:GPU:0']
```
2. check if your card can work with tensorflow (optional)
* my PC: GeForce GTX 1060 notebook (driver version - 419.35), windows 10, jupyter notebook
* tensorflow needs Compute Capability 3.5 or higher. (<https://www.tensorflow.org/install/gpu#hardware_requirements>)
* <https://developer.nvidia.com/cuda-gpus>
* select "CUDA-Enabled GeForce Products"
* result - "GeForce GTX 1060 Compute Capability = 6.1"
* my card can work with tf!
3. find versions of CUDA Toolkit and cuDNN SDK, that you need
a) find your tf version
```
import sys
print (sys.version)
# 3.6.4 |Anaconda custom (64-bit)| (default, Jan 16 2018, 10:22:32) [MSC v.1900 64 bit (AMD64)]
import tensorflow as tf
print(tf.__version__)
# my output was => 1.13.1
```
b) find right versions of CUDA Toolkit and cuDNN SDK for your tf version
```
https://www.tensorflow.org/install/source#linux
* it is written for linux, but worked in my case
see, that tensorflow_gpu-1.13.1 needs: CUDA Toolkit v10.0, cuDNN SDK v7.4
```
4. install CUDA Toolkit
a) install CUDA Toolkit 10.0
```
https://developer.nvidia.com/cuda-toolkit-archive
select: CUDA Toolkit 10.0 and download base installer (2 GB)
installation settings: select only CUDA
(my installation path was: D:\Programs\x64\Nvidia\Cuda_v_10_0\Development)
```
b) add environment variables:
```
system variables / path must have:
D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\bin
D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\libnvvp
D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\extras\CUPTI\libx64
D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\include
```
5. install cuDNN SDK
a) download cuDNN SDK v7.4
```
https://developer.nvidia.com/rdp/cudnn-archive (needs registration, but it is simple)
select "Download cuDNN v7.4.2 (Dec 14, 2018), for CUDA 10.0"
```
b) add path to 'bin' folder into "environment variables / system variables / path":
```
D:\Programs\x64\Nvidia\cudnn_for_cuda_10_0\bin
```
6. pip uninstall tensorflow
pip install tensorflow-gpu
7. check if tensorflow sees your GPU
```
- restart your PC
- print(get_available_devices())
- # now this code should return => ['/device:CPU:0', '/device:GPU:0']
``` | In my case, I had a working tensorflow-gpu version 1.14 but suddenly it stopped working. I fixed the problem using:
```
pip uninstall tensorflow-gpu==1.14
pip install tensorflow-gpu==1.14
``` |
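Step 3 of the detailed instructions above (matching a TensorFlow version to its tested CUDA/cuDNN pair) can be captured as a tiny lookup helper. The table below records only the combinations that appear in this thread (TF 1.13.1 from the build table, TF 2.6.0 from the conda install log); anything else should be checked against TensorFlow's tested-build page:

```python
# Only the combinations mentioned in the answers above; this is not a
# complete compatibility table.
TESTED_BUILDS = {
    "1.13.1": {"cuda": "10.0", "cudnn": "7.4"},
    "2.6.0": {"cuda": "11.3", "cudnn": "8.2"},
}

def required_toolkits(tf_version: str) -> dict:
    """Return the CUDA/cuDNN pair recorded for a TensorFlow version."""
    if tf_version not in TESTED_BUILDS:
        raise ValueError("no tested CUDA/cuDNN combination recorded for " + tf_version)
    return TESTED_BUILDS[tf_version]

print(required_toolkits("1.13.1"))  # {'cuda': '10.0', 'cudnn': '7.4'}
```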
41,402,454 | I am trying to deserialize a string to an object. It is XML-node-like syntax, but it is not valid XML (as there is no root node or namespace). This is what I have so far; I am getting this error:
>
> `<delivery xmlns=''>. was not expected`
>
>
>
Deserialize code:
```
var number = 2;
var amount = 3;
var xmlCommand = $"<delivery number=\"{number}\" amount=\"{amount}\" />";
XmlSerializer serializer = new XmlSerializer(typeof(Delivery));
var rdr = new StringReader(xmlCommand);
Delivery delivery = (Delivery)serializer.Deserialize(rdr);
```
Delivery object:
```
using System.Xml.Serialization;
namespace SOMWClient.Events
{
public class Delivery
{
[XmlAttribute(AttributeName = "number")]
public int Number { get; set; }
[XmlAttribute(AttributeName = "amount")]
public string Amount { get; set; }
public Delivery()
{
}
}
}
```
How can I avoid the xmlns error when deserializing? | 2016/12/30 | [
"https://Stackoverflow.com/questions/41402454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/310107/"
] | I came across this same issue in jupyter notebooks. This could be an easy fix.
```
$ pip uninstall tensorflow
$ pip install tensorflow-gpu
```
You can check if it worked with:
```
tf.test.gpu_device_name()
```
### Update 2020
It seems like tensorflow 2.0+ comes with GPU capabilities, therefore
`pip install tensorflow` should be enough | I had a problem because I didn't specify the version of **Tensorflow** so my version was **2.11**. After many hours I found that my problem is described in [install guide](https://www.tensorflow.org/install/pip#windows-native):
>
> Caution: **TensorFlow 2.10** was the last TensorFlow release that **supported GPU on native-Windows**. Starting with **TensorFlow 2.11**, you will need to install TensorFlow in WSL2, or install tensorflow-cpu and, optionally, try the TensorFlow-DirectML-Plugin
>
>
>
Before that, I read most of the answers to this and similar questions. I followed [@AndrewPt answer](https://stackoverflow.com/a/55891075/13448436). I already had installed **CUDA** but updated the version just in case, installed **cudNN**, and restarted the computer.
The easiest solution for me was to downgrade to 2.10 (you can try different options mentioned in the install guide). I first uninstalled all of these packages (probably it's not necessary, but I didn't want to see how pip messed up versions at 2 am):
```
pip uninstall keras
pip uninstall tensorflow-io-gcs-filesystem
pip uninstall tensorflow-estimator
pip uninstall tensorflow
pip uninstall Keras-Preprocessing
pip uninstall tensorflow-intel
```
because I wanted only packages required for the old version, and I didn't do it for [all required packages for 2.11 version](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/pip_package/setup.py). After that I installed **tensorflow 2.10**:
```
pip install "tensorflow<2.11"
```
and it worked.
I used this code to check if GPU is visible:
```py
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
``` |
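The 2.10 cutoff quoted above can be encoded as a tiny guard. This is just a sketch: it assumes version strings of the plain `major.minor[.patch]` form and only inspects the string, so it runs without TensorFlow installed:

```python
def supports_native_windows_gpu(tf_version):
    """Per the caution quoted above, TensorFlow releases after 2.10
    no longer support GPU on native Windows."""
    major, minor = (int(part) for part in tf_version.split(".")[:2])
    return (major, minor) <= (2, 10)

print(supports_native_windows_gpu("2.10.1"))  # True
print(supports_native_windows_gpu("2.11.0"))  # False
```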
41,402,454 | I am trying to deserialize a string to an object. It has XML-node-like syntax, but it is not XML (as there is no root node or namespace). This is what I have so far, and I am getting this error:
>
> `<delivery xmlns=''>. was not expected`
>
>
>
Deserialize code:
```
var number = 2;
var amount = 3;
var xmlCommand = $"<delivery number=\"{number}\" amount=\"{amount}\" />";
XmlSerializer serializer = new XmlSerializer(typeof(Delivery));
var rdr = new StringReader(xmlCommand);
Delivery delivery = (Delivery)serializer.Deserialize(rdr);
```
Delivery object:
```
using System.Xml.Serialization;
namespace SOMWClient.Events
{
public class Delivery
{
[XmlAttribute(AttributeName = "number")]
public int Number { get; set; }
[XmlAttribute(AttributeName = "amount")]
public string Amount { get; set; }
public Delivery()
{
}
}
}
```
How can I avoid the xmlns error when deserializing? | 2016/12/30 | [
"https://Stackoverflow.com/questions/41402454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/310107/"
] | When I look up your GPU, I see that it only supports CUDA Compute Capability 2.1. (Can be checked through <https://developer.nvidia.com/cuda-gpus>) Unfortunately, TensorFlow needs a GPU with minimum CUDA Compute Capability 3.0.
<https://www.tensorflow.org/get_started/os_setup#optional_install_cuda_gpus_on_linux>
You might see some logs from TensorFlow checking your GPU, but ultimately the library will avoid using an unsupported GPU. | I experienced the same problem on my Windows OS. I followed tensorflow's instructions on installing CUDA, cudnn, etc., and tried the suggestions in the answers above - with no success.
What solved my issue was to update my GPU drivers. You can update them via:
1. Pressing windows-button + r
2. Entering `devmgmt.msc`
3. Right-Clicking on "Display adapters" and clicking on the "Properties" option
4. Going to the "Driver" tab and selecting "Updating Driver".
5. Finally, click on "Search automatically for updated driver software"
6. Restart your machine and run the following check again:
```
from tensorflow.python.client import device_lib
local_device_protos = device_lib.list_local_devices()
[x.name for x in local_device_protos]
```
```
Sample output:
2022-01-17 13:41:10.557751: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce 940MX major: 5 minor: 0 memoryClockRate(GHz): 1.189
pciBusID: 0000:01:00.0
2022-01-17 13:41:10.558125: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2022-01-17 13:41:10.562095: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2022-01-17 13:45:11.392814: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-01-17 13:45:11.393617: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2022-01-17 13:45:11.393739: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2022-01-17 13:45:11.401271: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/device:GPU:0 with 1391 MB memory) -> physical GPU (device: 0, name: GeForce 940MX, pci bus id: 0000:01:00.0, compute capability: 5.0)
>>> [x.name for x in local_device_protos]
['/device:CPU:0', '/device:GPU:0']
``` |
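The device list printed above can also be checked programmatically. The helper below is a small sketch that only inspects the device-name strings (as returned by `device_lib.list_local_devices()`), so it can be tried without TensorFlow or a GPU present:

```python
def gpu_visible(device_names):
    """True if any entry looks like a GPU device name, as in the
    ['/device:CPU:0', '/device:GPU:0'] output shown above."""
    return any(name.startswith("/device:GPU") for name in device_names)

print(gpu_visible(["/device:CPU:0", "/device:GPU:0"]))  # True
print(gpu_visible(["/device:CPU:0"]))                   # False
```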
41,402,454 | I am trying to deserialize a string to an object. It has XML-node-like syntax, but it is not XML (as there is no root node or namespace). This is what I have so far, and I am getting this error:
>
> `<delivery xmlns=''>. was not expected`
>
>
>
Deserialize code:
```
var number = 2;
var amount = 3;
var xmlCommand = $"<delivery number=\"{number}\" amount=\"{amount}\" />";
XmlSerializer serializer = new XmlSerializer(typeof(Delivery));
var rdr = new StringReader(xmlCommand);
Delivery delivery = (Delivery)serializer.Deserialize(rdr);
```
Delivery object:
```
using System.Xml.Serialization;
namespace SOMWClient.Events
{
public class Delivery
{
[XmlAttribute(AttributeName = "number")]
public int Number { get; set; }
[XmlAttribute(AttributeName = "amount")]
public string Amount { get; set; }
public Delivery()
{
}
}
}
```
How can I avoid the xmlns error when deserializing? | 2016/12/30 | [
"https://Stackoverflow.com/questions/41402454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/310107/"
] | When I look up your GPU, I see that it only supports CUDA Compute Capability 2.1. (Can be checked through <https://developer.nvidia.com/cuda-gpus>) Unfortunately, TensorFlow needs a GPU with minimum CUDA Compute Capability 3.0.
<https://www.tensorflow.org/get_started/os_setup#optional_install_cuda_gpus_on_linux>
You might see some logs from TensorFlow checking your GPU, but ultimately the library will avoid using an unsupported GPU. | I have had an issue where I needed the latest TensorFlow (2.8.0 at the time of writing) with GPU support running in a conda environment. The problem was that it was not available via conda. What I did was
```
conda install cudatoolkit==11.2
pip install tensorflow-gpu==2.8.0
```
Although I've checked that the CUDA toolkit version was compatible with the tensorflow version, it was still returning an error, where `libcudart.so.11.0` was not found. As a result, GPUs were not visible. The remedy was to set the [environment variable](https://forums.developer.nvidia.com/t/path-ld-library-path/48080) `LD_LIBRARY_PATH` to point to your `anaconda3/envs/<your_tensorflow_environment>/lib` with this command
```
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/<user>/anaconda3/envs/<your_tensorflow_environment>/lib
```
Unless you make it permanent, you will need to create this variable every time you start a terminal prior to a session (jupyter notebook). It can be conveniently automated by following this [procedure](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#macos-and-linux) from conda's official website. |
41,402,454 | I am trying to deserialize a string to an object. It has XML-node-like syntax, but it is not XML (as there is no root node or namespace). This is what I have so far, and I am getting this error:
>
> `<delivery xmlns=''>. was not expected`
>
>
>
Deserialize code:
```
var number = 2;
var amount = 3;
var xmlCommand = $"<delivery number=\"{number}\" amount=\"{amount}\" />";
XmlSerializer serializer = new XmlSerializer(typeof(Delivery));
var rdr = new StringReader(xmlCommand);
Delivery delivery = (Delivery)serializer.Deserialize(rdr);
```
Delivery object:
```
using System.Xml.Serialization;
namespace SOMWClient.Events
{
public class Delivery
{
[XmlAttribute(AttributeName = "number")]
public int Number { get; set; }
[XmlAttribute(AttributeName = "amount")]
public string Amount { get; set; }
public Delivery()
{
}
}
}
```
How can I avoid the xmlns error when deserializing? | 2016/12/30 | [
"https://Stackoverflow.com/questions/41402454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/310107/"
] | So as of 2022-04, the `tensorflow` package contains both CPU and GPU builds. To install a GPU build, search to see what's available:
```
λ conda search tensorflow
Loading channels: done
# Name Version Build Channel
tensorflow 0.12.1 py35_1 conda-forge
tensorflow 0.12.1 py35_2 conda-forge
tensorflow 1.0.0 py35_0 conda-forge
…
tensorflow 2.5.0 mkl_py39h1fa1df6_0 pkgs/main
tensorflow 2.6.0 eigen_py37h37bbdb1_0 pkgs/main
tensorflow 2.6.0 eigen_py38h63d3545_0 pkgs/main
tensorflow 2.6.0 eigen_py39h855417c_0 pkgs/main
tensorflow 2.6.0 gpu_py37h3e8f0e3_0 pkgs/main
tensorflow 2.6.0 gpu_py38hc0e8100_0 pkgs/main
tensorflow 2.6.0 gpu_py39he88c5ba_0 pkgs/main
tensorflow 2.6.0 mkl_py37h9623b36_0 pkgs/main
tensorflow 2.6.0 mkl_py38hdc16138_0 pkgs/main
tensorflow 2.6.0 mkl_py39h31650da_0 pkgs/main
```
You can see that there are builds of TF 2.6.0 that support Python 3.7, 3.8 and 3.9, and that are built for [MKL](https://en.wikipedia.org/wiki/Math_Kernel_Library) (Intel CPU), [Eigen](https://eigen.tuxfamily.org/), or GPU.
To narrow it down, you can use wildcards in the search. This will find any Tensorflow 2.x version that is built for GPU, for instance:
```
λ conda search tensorflow=2*=gpu*
Loading channels: done
# Name Version Build Channel
tensorflow 2.0.0 gpu_py36hfdd5754_0 pkgs/main
tensorflow 2.0.0 gpu_py37h57d29ca_0 pkgs/main
tensorflow 2.1.0 gpu_py36h3346743_0 pkgs/main
tensorflow 2.1.0 gpu_py37h7db9008_0 pkgs/main
tensorflow 2.5.0 gpu_py37h23de114_0 pkgs/main
tensorflow 2.5.0 gpu_py38h8e8c102_0 pkgs/main
tensorflow 2.5.0 gpu_py39h7dc34a2_0 pkgs/main
tensorflow 2.6.0 gpu_py37h3e8f0e3_0 pkgs/main
tensorflow 2.6.0 gpu_py38hc0e8100_0 pkgs/main
tensorflow 2.6.0 gpu_py39he88c5ba_0 pkgs/main
```
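The build strings in these listings encode the target (gpu/mkl/eigen) and the Python version. A small parser sketch — the three-field `target_pyXY<hash>_N` layout is an assumption based on the rows above, not an official conda format guarantee:

```python
import re

def parse_conda_build(build):
    """e.g. 'gpu_py39he88c5ba_0' -> ('gpu', '3.9')"""
    target, py_field, _build_num = build.split("_")
    match = re.match(r"py(\d)(\d+)", py_field)
    return target, f"{match.group(1)}.{match.group(2)}"

print(parse_conda_build("gpu_py39he88c5ba_0"))  # ('gpu', '3.9')
print(parse_conda_build("mkl_py37h9623b36_0"))  # ('mkl', '3.7')
```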
To install a specific version in an otherwise empty environment, you can use a command like:
```
λ conda activate tf
(tf) λ conda install tensorflow=2.6.0=gpu_py39he88c5ba_0
…
The following NEW packages will be INSTALLED:
_tflow_select pkgs/main/win-64::_tflow_select-2.1.0-gpu
…
cudatoolkit pkgs/main/win-64::cudatoolkit-11.3.1-h59b6b97_2
cudnn pkgs/main/win-64::cudnn-8.2.1-cuda11.3_0
…
tensorflow pkgs/main/win-64::tensorflow-2.6.0-gpu_py39he88c5ba_0
tensorflow-base pkgs/main/win-64::tensorflow-base-2.6.0-gpu_py39hb3da07e_0
…
```
As you can see, if you install a GPU build, it will automatically also install compatible `cudatoolkit` and `cudnn` packages. You don't need to manually check versions for compatibility, or manually download several gigabytes from Nvidia's website, or register as a developer, as it says in other answers or on [the official website](https://www.tensorflow.org/install/gpu).
After installation, confirm that it worked and it sees the GPU by running:
```
λ python
Python 3.9.12 (main, Apr 4 2022, 05:22:27) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.__version__
'2.6.0'
>>> tf.config.list_physical_devices()
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
```
Getting conda to install a GPU build *and* other packages you want to use is another story, however: at least for me, there are a lot of package incompatibilities. I think the best you can do is specify the installation criteria using wildcards and cross your fingers.
This tries to install *any* TF 2.x version that's built for GPU and that has dependencies compatible with Spyder and matplotlib's dependencies, for instance:
```
λ conda install tensorflow=2*=gpu* spyder matplotlib
```
For me, this ended up installing a two year old GPU version of tensorflow:
```
matplotlib pkgs/main/win-64::matplotlib-3.5.1-py37haa95532_1
spyder pkgs/main/win-64::spyder-5.1.5-py37haa95532_1
tensorflow pkgs/main/win-64::tensorflow-2.1.0-gpu_py37h7db9008_0
```
I had previously been using the `tensorflow-gpu` package, but that doesn't work anymore. conda typically grinds forever trying to find compatible packages to install, and even when it's installed, it doesn't actually install a gpu build of tensorflow *or* the CUDA dependencies:
```
λ conda list
…
cookiecutter 1.7.2 pyhd3eb1b0_0
cryptography 3.4.8 py38h71e12ea_0
cycler 0.11.0 pyhd3eb1b0_0
dataclasses 0.8 pyh6d0b6a4_7
…
tensorflow 2.3.0 mkl_py38h8557ec7_0
tensorflow-base 2.3.0 eigen_py38h75a453f_0
tensorflow-estimator 2.6.0 pyh7b7c402_0
tensorflow-gpu 2.3.0 he13fc11_0
``` | I have had an issue where I needed the latest TensorFlow (2.8.0 at the time of writing) with GPU support running in a conda environment. The problem was that it was not available via conda. What I did was
```
conda install cudatoolkit==11.2
pip install tensorflow-gpu==2.8.0
```
Although I've checked that the CUDA toolkit version was compatible with the tensorflow version, it was still returning an error, where `libcudart.so.11.0` was not found. As a result, GPUs were not visible. The remedy was to set the [environment variable](https://forums.developer.nvidia.com/t/path-ld-library-path/48080) `LD_LIBRARY_PATH` to point to your `anaconda3/envs/<your_tensorflow_environment>/lib` with this command
```
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/<user>/anaconda3/envs/<your_tensorflow_environment>/lib
```
Unless you make it permanent, you will need to create this variable every time you start a terminal prior to a session (jupyter notebook). It can be conveniently automated by following this [procedure](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#macos-and-linux) from conda's official website. |
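Rather than retyping the export line each time, the new value can be built with a small helper. This is a sketch (the env path shown is a hypothetical placeholder) that only does the string manipulation, avoiding duplicate entries:

```python
def prepend_ld_library_path(lib_dir, current=""):
    """Return a new LD_LIBRARY_PATH value with lib_dir first,
    dropping any existing duplicate of lib_dir."""
    parts = [p for p in current.split(":") if p and p != lib_dir]
    return ":".join([lib_dir] + parts)

# hypothetical conda env lib directory:
lib = "/home/user/anaconda3/envs/tf/lib"
print(prepend_ld_library_path(lib, "/usr/lib:" + lib))
# '/home/user/anaconda3/envs/tf/lib:/usr/lib'
```

Note that `LD_LIBRARY_PATH` must be set before the Python process starts for the dynamic loader to see it, which is why the export belongs in the shell (or conda activation script), not in the notebook itself.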
41,402,454 | I am trying to deserialize a string to an object. It has XML-node-like syntax, but it is not XML (as there is no root node or namespace). This is what I have so far, and I am getting this error:
>
> `<delivery xmlns=''>. was not expected`
>
>
>
Deserialize code:
```
var number = 2;
var amount = 3;
var xmlCommand = $"<delivery number=\"{number}\" amount=\"{amount}\" />";
XmlSerializer serializer = new XmlSerializer(typeof(Delivery));
var rdr = new StringReader(xmlCommand);
Delivery delivery = (Delivery)serializer.Deserialize(rdr);
```
Delivery object:
```
using System.Xml.Serialization;
namespace SOMWClient.Events
{
public class Delivery
{
[XmlAttribute(AttributeName = "number")]
public int Number { get; set; }
[XmlAttribute(AttributeName = "amount")]
public string Amount { get; set; }
public Delivery()
{
}
}
}
```
How can I avoid the xmlns error when deserializing? | 2016/12/30 | [
"https://Stackoverflow.com/questions/41402454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/310107/"
] | Summary:
1. check if tensorflow sees your GPU (optional)
2. check if your videocard can work with tensorflow (optional)
3. [find versions of CUDA Toolkit and cuDNN SDK, compatible with your tf version](https://www.tensorflow.org/install/source#linux)
4. [install CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit-archive)
5. [install cuDNN SDK](https://developer.nvidia.com/rdp/cudnn-archive)
6. pip uninstall tensorflow; pip install tensorflow-gpu
7. check if tensorflow sees your GPU
`*` source - <https://www.tensorflow.org/install/gpu>
Detailed instruction:
1. check if tensorflow sees your GPU (optional)
```
from tensorflow.python.client import device_lib
def get_available_devices():
local_device_protos = device_lib.list_local_devices()
return [x.name for x in local_device_protos]
print(get_available_devices())
# my output was => ['/device:CPU:0']
# good output must be => ['/device:CPU:0', '/device:GPU:0']
```
2. check if your card can work with tensorflow (optional)
* my PC: GeForce GTX 1060 notebook (driver version - 419.35), windows 10, jupyter notebook
* tensorflow needs Compute Capability 3.5 or higher. (<https://www.tensorflow.org/install/gpu#hardware_requirements>)
* <https://developer.nvidia.com/cuda-gpus>
* select "CUDA-Enabled GeForce Products"
* result - "GeForce GTX 1060 Compute Capability = 6.1"
* my card can work with tf!
3. find versions of CUDA Toolkit and cuDNN SDK, that you need
a) find your tf version
```
import sys
print (sys.version)
# 3.6.4 |Anaconda custom (64-bit)| (default, Jan 16 2018, 10:22:32) [MSC v.1900 64 bit (AMD64)]
import tensorflow as tf
print(tf.__version__)
# my output was => 1.13.1
```
b) find right versions of CUDA Toolkit and cuDNN SDK for your tf version
```
https://www.tensorflow.org/install/source#linux
* it is written for linux, but worked in my case
see, that tensorflow_gpu-1.13.1 needs: CUDA Toolkit v10.0, cuDNN SDK v7.4
```
4. install CUDA Toolkit
a) install CUDA Toolkit 10.0
```
https://developer.nvidia.com/cuda-toolkit-archive
select: CUDA Toolkit 10.0 and download base installer (2 GB)
installation settings: select only CUDA
(my installation path was: D:\Programs\x64\Nvidia\Cuda_v_10_0\Development)
```
b) add environment variables:
```
system variables / path must have:
D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\bin
D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\libnvvp
D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\extras\CUPTI\libx64
D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\include
```
5. install cuDNN SDK
a) download cuDNN SDK v7.4
```
https://developer.nvidia.com/rdp/cudnn-archive (needs registration, but it is simple)
select "Download cuDNN v7.4.2 (Dec 14, 2018), for CUDA 10.0"
```
b) add path to 'bin' folder into "environment variables / system variables / path":
```
D:\Programs\x64\Nvidia\cudnn_for_cuda_10_0\bin
```
6. pip uninstall tensorflow
pip install tensorflow-gpu
7. check if tensorflow sees your GPU
```
- restart your PC
- print(get_available_devices())
- # now this code should return => ['/device:CPU:0', '/device:GPU:0']
``` | I experienced the same problem on my Windows OS. I followed tensorflow's instructions on installing CUDA, cudnn, etc., and tried the suggestions in the answers above - with no success.
What solved my issue was to update my GPU drivers. You can update them via:
1. Pressing windows-button + r
2. Entering `devmgmt.msc`
3. Right-Clicking on "Display adapters" and clicking on the "Properties" option
4. Going to the "Driver" tab and selecting "Updating Driver".
5. Finally, click on "Search automatically for updated driver software"
6. Restart your machine and run the following check again:
```
from tensorflow.python.client import device_lib
local_device_protos = device_lib.list_local_devices()
[x.name for x in local_device_protos]
```
```
Sample output:
2022-01-17 13:41:10.557751: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce 940MX major: 5 minor: 0 memoryClockRate(GHz): 1.189
pciBusID: 0000:01:00.0
2022-01-17 13:41:10.558125: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2022-01-17 13:41:10.562095: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2022-01-17 13:45:11.392814: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-01-17 13:45:11.393617: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2022-01-17 13:45:11.393739: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2022-01-17 13:45:11.401271: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/device:GPU:0 with 1391 MB memory) -> physical GPU (device: 0, name: GeForce 940MX, pci bus id: 0000:01:00.0, compute capability: 5.0)
>>> [x.name for x in local_device_protos]
['/device:CPU:0', '/device:GPU:0']
``` |
41,402,454 | I am trying to deserialize a string to an object. It has XML-node-like syntax, but it is not XML (as there is no root node or namespace). This is what I have so far, and I am getting this error:
>
> `<delivery xmlns=''>. was not expected`
>
>
>
Deserialize code:
```
var number = 2;
var amount = 3;
var xmlCommand = $"<delivery number=\"{number}\" amount=\"{amount}\" />";
XmlSerializer serializer = new XmlSerializer(typeof(Delivery));
var rdr = new StringReader(xmlCommand);
Delivery delivery = (Delivery)serializer.Deserialize(rdr);
```
Delivery object:
```
using System.Xml.Serialization;
namespace SOMWClient.Events
{
public class Delivery
{
[XmlAttribute(AttributeName = "number")]
public int Number { get; set; }
[XmlAttribute(AttributeName = "amount")]
public string Amount { get; set; }
public Delivery()
{
}
}
}
```
How can I avoid the xmlns error when deserializing? | 2016/12/30 | [
"https://Stackoverflow.com/questions/41402454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/310107/"
] | If you are using conda, you might have installed the CPU version of tensorflow. Check the package list (`conda list`) of the environment to see if this is the case. If so, remove the package by using `conda remove tensorflow` and install keras-gpu instead (`conda install -c anaconda keras-gpu`). This will install everything you need to run your machine learning code on GPU. Cheers!
P.S. You should check first if you have installed the drivers correctly using `nvidia-smi`. By default, this is not in your PATH so you might as well need to add the folder to your path. The .exe file can be found at `C:\Program Files\NVIDIA Corporation\NVSMI` | So as of 2022-04, the `tensorflow` package contains both CPU and GPU builds. To install a GPU build, search to see what's available:
```
λ conda search tensorflow
Loading channels: done
# Name Version Build Channel
tensorflow 0.12.1 py35_1 conda-forge
tensorflow 0.12.1 py35_2 conda-forge
tensorflow 1.0.0 py35_0 conda-forge
…
tensorflow 2.5.0 mkl_py39h1fa1df6_0 pkgs/main
tensorflow 2.6.0 eigen_py37h37bbdb1_0 pkgs/main
tensorflow 2.6.0 eigen_py38h63d3545_0 pkgs/main
tensorflow 2.6.0 eigen_py39h855417c_0 pkgs/main
tensorflow 2.6.0 gpu_py37h3e8f0e3_0 pkgs/main
tensorflow 2.6.0 gpu_py38hc0e8100_0 pkgs/main
tensorflow 2.6.0 gpu_py39he88c5ba_0 pkgs/main
tensorflow 2.6.0 mkl_py37h9623b36_0 pkgs/main
tensorflow 2.6.0 mkl_py38hdc16138_0 pkgs/main
tensorflow 2.6.0 mkl_py39h31650da_0 pkgs/main
```
You can see that there are builds of TF 2.6.0 that support Python 3.7, 3.8 and 3.9, and that are built for [MKL](https://en.wikipedia.org/wiki/Math_Kernel_Library) (Intel CPU), [Eigen](https://eigen.tuxfamily.org/), or GPU.
To narrow it down, you can use wildcards in the search. This will find any Tensorflow 2.x version that is built for GPU, for instance:
```
λ conda search tensorflow=2*=gpu*
Loading channels: done
# Name Version Build Channel
tensorflow 2.0.0 gpu_py36hfdd5754_0 pkgs/main
tensorflow 2.0.0 gpu_py37h57d29ca_0 pkgs/main
tensorflow 2.1.0 gpu_py36h3346743_0 pkgs/main
tensorflow 2.1.0 gpu_py37h7db9008_0 pkgs/main
tensorflow 2.5.0 gpu_py37h23de114_0 pkgs/main
tensorflow 2.5.0 gpu_py38h8e8c102_0 pkgs/main
tensorflow 2.5.0 gpu_py39h7dc34a2_0 pkgs/main
tensorflow 2.6.0 gpu_py37h3e8f0e3_0 pkgs/main
tensorflow 2.6.0 gpu_py38hc0e8100_0 pkgs/main
tensorflow 2.6.0 gpu_py39he88c5ba_0 pkgs/main
```
To install a specific version in an otherwise empty environment, you can use a command like:
```
λ conda activate tf
(tf) λ conda install tensorflow=2.6.0=gpu_py39he88c5ba_0
…
The following NEW packages will be INSTALLED:
_tflow_select pkgs/main/win-64::_tflow_select-2.1.0-gpu
…
cudatoolkit pkgs/main/win-64::cudatoolkit-11.3.1-h59b6b97_2
cudnn pkgs/main/win-64::cudnn-8.2.1-cuda11.3_0
…
tensorflow pkgs/main/win-64::tensorflow-2.6.0-gpu_py39he88c5ba_0
tensorflow-base pkgs/main/win-64::tensorflow-base-2.6.0-gpu_py39hb3da07e_0
…
```
As you can see, if you install a GPU build, it will automatically also install compatible `cudatoolkit` and `cudnn` packages. You don't need to manually check versions for compatibility, or manually download several gigabytes from Nvidia's website, or register as a developer, as it says in other answers or on [the official website](https://www.tensorflow.org/install/gpu).
After installation, confirm that it worked and it sees the GPU by running:
```
λ python
Python 3.9.12 (main, Apr 4 2022, 05:22:27) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.__version__
'2.6.0'
>>> tf.config.list_physical_devices()
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
```
Getting conda to install a GPU build *and* other packages you want to use is another story, however: at least for me, there are a lot of package incompatibilities. I think the best you can do is specify the installation criteria using wildcards and cross your fingers.
This tries to install *any* TF 2.x version that's built for GPU and that has dependencies compatible with Spyder and matplotlib's dependencies, for instance:
```
λ conda install tensorflow=2*=gpu* spyder matplotlib
```
For me, this ended up installing a two year old GPU version of tensorflow:
```
matplotlib pkgs/main/win-64::matplotlib-3.5.1-py37haa95532_1
spyder pkgs/main/win-64::spyder-5.1.5-py37haa95532_1
tensorflow pkgs/main/win-64::tensorflow-2.1.0-gpu_py37h7db9008_0
```
I had previously been using the `tensorflow-gpu` package, but that doesn't work anymore. conda typically grinds forever trying to find compatible packages to install, and even when it's installed, it doesn't actually install a gpu build of tensorflow *or* the CUDA dependencies:
```
λ conda list
…
cookiecutter 1.7.2 pyhd3eb1b0_0
cryptography 3.4.8 py38h71e12ea_0
cycler 0.11.0 pyhd3eb1b0_0
dataclasses 0.8 pyh6d0b6a4_7
…
tensorflow 2.3.0 mkl_py38h8557ec7_0
tensorflow-base 2.3.0 eigen_py38h75a453f_0
tensorflow-estimator 2.6.0 pyh7b7c402_0
tensorflow-gpu 2.3.0 he13fc11_0
``` |
41,402,454 | I am trying to deserialize a string to an object. It has XML-node-like syntax, but it is not XML (as there is no root node or namespace). This is what I have so far, and I am getting this error:
>
> `<delivery xmlns=''>. was not expected`
>
>
>
Deserialize code:
```
var number = 2;
var amount = 3;
var xmlCommand = $"<delivery number=\"{number}\" amount=\"{amount}\" />";
XmlSerializer serializer = new XmlSerializer(typeof(Delivery));
var rdr = new StringReader(xmlCommand);
Delivery delivery = (Delivery)serializer.Deserialize(rdr);
```
Delivery object:
```
using System.Xml.Serialization;
namespace SOMWClient.Events
{
public class Delivery
{
[XmlAttribute(AttributeName = "number")]
public int Number { get; set; }
[XmlAttribute(AttributeName = "amount")]
public string Amount { get; set; }
public Delivery()
{
}
}
}
```
How can I avoid the xmlns error when deserializing? | 2016/12/30 | [
"https://Stackoverflow.com/questions/41402454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/310107/"
] | The following worked for me on an HP laptop. I have a CUDA Compute Capability
3.0 compatible Nvidia card, running Windows 7.
```
pip3.6.exe uninstall tensorflow-gpu
pip3.6.exe uninstall tensorflow-gpu
pip3.6.exe install tensorflow-gpu
``` | I experienced the same problem on my Windows OS. I followed tensorflow's instructions on installing CUDA, cudnn, etc., and tried the suggestions in the answers above - with no success.
What solved my issue was to update my GPU drivers. You can update them via:
1. Pressing windows-button + r
2. Entering `devmgmt.msc`
3. Right-Clicking on "Display adapters" and clicking on the "Properties" option
4. Going to the "Driver" tab and selecting "Updating Driver".
5. Finally, click on "Search automatically for updated driver software"
6. Restart your machine and run the following check again:
```
from tensorflow.python.client import device_lib
local_device_protos = device_lib.list_local_devices()
[x.name for x in local_device_protos]
```
```
Sample output:
2022-01-17 13:41:10.557751: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce 940MX major: 5 minor: 0 memoryClockRate(GHz): 1.189
pciBusID: 0000:01:00.0
2022-01-17 13:41:10.558125: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2022-01-17 13:41:10.562095: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2022-01-17 13:45:11.392814: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-01-17 13:45:11.393617: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2022-01-17 13:45:11.393739: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2022-01-17 13:45:11.401271: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/device:GPU:0 with 1391 MB memory) -> physical GPU (device: 0, name: GeForce 940MX, pci bus id: 0000:01:00.0, compute capability: 5.0)
>>> [x.name for x in local_device_protos]
['/device:CPU:0', '/device:GPU:0']
``` |
353,345 | This is my scenario.
I have an OAuth2 app in which I am getting Cases for the connected user. In the initial setup, after getting the initial Cases from a custom query, I would like to create a trigger on Salesforce for when a new Case is created, and in this trigger call my API with the Case information that is relevant to my app. It is important to do this in real time (this is why I am thinking of a trigger) rather than polling, like a cronjob.
How can I accomplish this?
I have been taking a look at the metadata api (ApexTrigger), [but it seems that I need a callout](https://salesforce.stackexchange.com/questions/325843/how-do-i-use-the-rest-api-to-update-an-apex-trigger/325850?noredirect=1#comment515267_325850)? I don't even know what this is.
Can anyone point me in the right direction, please? Not sure if this is something that I can accomplish; the documentation is huge.
I am using Go, but this is just FYI, I don't think this is quite relevant.
Any help would be much appreciated. | 2021/08/04 | [
"https://salesforce.stackexchange.com/questions/353345",
"https://salesforce.stackexchange.com",
"https://salesforce.stackexchange.com/users/102288/"
] | Dynamically creating a Trigger is the most dangerous, and among the worst, ways to accomplish data synchronization. You should investigate features like
* [Change Data Capture](https://developer.salesforce.com/docs/atlas.en-us.change_data_capture.meta/change_data_capture/cdc_intro.htm#!)
* The [sObject Get Updated](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_getupdated.htm#!) endpoint
While creating and deploying a Trigger via the Metadata API is possible, data sync triggers are not easy to write in a way that is resilient to volume. As an external data consumer, it's wiser to use an API-driven or event-driven approach. | You just need to [create a package](https://help.salesforce.com/articleView?id=sf.creating_packages.htm&type=5), and have an administrator [install the package](https://help.salesforce.com/articleView?id=sf.distribution_installing_packages.htm&type=5). It's not typical to create metadata directly in an org, unless you have a specific use case (like [DLRS](https://github.com/afawcett/declarative-lookup-rollup-summaries) dynamic trigger creation). You could have your app redirect to the install URL, and the administrator would complete the setup by using the installation wizard. As a bonus, you don't need to validate code coverage or trigger a Run All Tests (required for deploying metadata to production orgs). It is strongly not recommended that you deploy metadata directly to production anyways. You can also use SFDX if you have a valid API token, which you can install on your server rather easily if you can support NodeJS. |
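For the event-driven alternative in the first answer, the Get Updated resource is just a REST URL. A sketch of building it follows — the API version string and the instance host are placeholders, and authentication (a bearer token on the request) is omitted:

```python
from urllib.parse import urlencode

def get_updated_url(instance_url, sobject, start_iso, end_iso,
                    api_version="v52.0"):
    """Build the sObject Get Updated endpoint URL; start/end are
    ISO-8601 timestamps per the REST API docs."""
    query = urlencode({"start": start_iso, "end": end_iso})
    return (f"{instance_url}/services/data/{api_version}"
            f"/sobjects/{sobject}/updated/?{query}")

url = get_updated_url("https://example.my.salesforce.com", "Case",
                      "2021-08-01T00:00:00+00:00",
                      "2021-08-04T00:00:00+00:00")
print(url)
```

Polling this endpoint on a short interval is still not true push; for real-time delivery, subscribing to Change Data Capture events is the better fit.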
38,974,744 | I want to do some other things when the user Ctrl+clicks, but it seems that I cannot detect whether the user pressed Ctrl when clicking.
I have copied the event object info below.
```
bubbles : false
cancelBubble : false
cancelable : false
currentTarget : react
defaultPrevented : false
eventPhase : 2
isTrusted : false
path : Array[1]
returnValue : true
srcElement : react
target : react
timeStamp : 5690056.695
type : "react-click"
```
I can see the ctrlKey attribute in the arguments[0]-Proxy Object, but this object is inaccessible ('Uncaught illegal access'):
```
[[Target]]
:
SyntheticMouseEvent
_dispatchInstances:ReactDOMComponent
_dispatchListeners:(e)
_targetInst:ReactDOMComponent
altKey:false
bubbles:true
button:0
buttons:0
cancelable:true
clientX:275
clientY:315
ctrlKey:false
``` | 2016/08/16 | [
"https://Stackoverflow.com/questions/38974744",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2110800/"
] | Your click-handling function would look like this:
```
clickHandler: function (event, value) {
event.stopPropagation();
// In that case, event.ctrlKey does the trick.
if (event.ctrlKey) {
console.debug("Ctrl+click has just happened!");
}
}
In the `click` event of this element you can check whether, at the moment of the click, a key (with the `keycode` of `ctrl` in this case) is pressed down |
38,974,744 | I want to do some other things when the user Ctrl+clicks, but it seems that I cannot detect whether the user pressed Ctrl when clicking.
I have copied the event object info below.
```
bubbles : false
cancelBubble : false
cancelable : false
currentTarget : react
defaultPrevented : false
eventPhase : 2
isTrusted : false
path : Array[1]
returnValue : true
srcElement : react
target : react
timeStamp : 5690056.695
type : "react-click"
```
I can see the ctrlKey attribute in the arguments[0]-Proxy Object, but this object is inaccessible ('Uncaught illegal access'):
```
[[Target]]
:
SyntheticMouseEvent
_dispatchInstances:ReactDOMComponent
_dispatchListeners:(e)
_targetInst:ReactDOMComponent
altKey:false
bubbles:true
button:0
buttons:0
cancelable:true
clientX:275
clientY:315
ctrlKey:false
``` | 2016/08/16 | [
"https://Stackoverflow.com/questions/38974744",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2110800/"
] | You can use the code below in your render() method:
```
document.addEventListener('click', (e) => {
if (e.ctrlKey) {
console.log('With ctrl, do something...');
}
});
``` | In the `click` event of this element you can check if at the moment of the click, a button (with `keycode` of `ctrl` in this case) is pressed down |
38,974,744 | I want to do some other things when the user Ctrl+clicks, but it seems that I cannot detect whether the user pressed Ctrl when clicking.
I have copied the event object info below.
```
bubbles : false
cancelBubble : false
cancelable : false
currentTarget : react
defaultPrevented : false
eventPhase : 2
isTrusted : false
path : Array[1]
returnValue : true
srcElement : react
target : react
timeStamp : 5690056.695
type : "react-click"
```
I can see the ctrlKey attribute in the arguments[0]-Proxy Object, but this object is inaccessible ('Uncaught illegal access'):
```
[[Target]]
:
SyntheticMouseEvent
_dispatchInstances:ReactDOMComponent
_dispatchListeners:(e)
_targetInst:ReactDOMComponent
altKey:false
bubbles:true
button:0
buttons:0
cancelable:true
clientX:275
clientY:315
ctrlKey:false
``` | 2016/08/16 | [
"https://Stackoverflow.com/questions/38974744",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2110800/"
] | Your click-handling function would look like this:
```
clickHandler: function (event, value) {
event.stopPropagation();
// In that case, event.ctrlKey does the trick.
if (event.ctrlKey) {
console.debug("Ctrl+click has just happened!");
}
}
``` | you can use this code below in your render() method
```
document.addEventListener('click', (e) => {
if (e.ctrlKey) {
console.log('With ctrl, do something...');
}
});
``` |
42,748 | Coming up soon is the annual [World Low Cost Airlines Congress](https://www.terrapinn.com/conference/aviation-festival/world-low-cost-airlines.stm),
>
> drawing in low cost carriers from around the world year on year. Sessions will discuss business models, pricing strategies, revenue streams and more.
>
>
>
I have observed some meetings that bring competitors together to discuss business, and found they usually include an antitrust warning at their beginning reminding people to *not* discuss these kinds of topics, **especially** anything around pricing or market division (which for airlines might be around who's going for what routes), including at sidebars, networking breaks, and social events.
The upcoming meeting is in London, but includes a variety of companies from around the world (taking the event website at face value).
Are there any laws (especially those related to antitrust) which prevent international competitors from coming together to discuss pricing strategies etc.? | 2019/07/08 | [
"https://law.stackexchange.com/questions/42748",
"https://law.stackexchange.com",
"https://law.stackexchange.com/users/427/"
] | I am going to assume you are in England and Wales (because you use terms like *outline planning permission* and *major matters reserved*). If this is not correct, my answer may not apply.
There is absolutely nothing to stop a developer submitting a planning application to knock your house down, and build a block of flats on the site. Obviously they can't actually *demolish* the house without your permission - but they ask the local authority "what do you think of this idea?" Needing your land is not a breach of any planning rules (and, as you say, the builder will be planning to persuade you to agree).
The covenant looks promising, but if you go ahead and allow the road to be built, who could enforce the covenant? If the person (natural or legal) who has the right to enforce no longer exists, and hasn't passed the right on to some successor then the covenant is worthless. Similarly if the person who has the right to enforce is now owned or can be bought off by the builder, the covenant is worthless.
You can object to the planning application in the normal way, but do make sure that you object on proper planning grounds (loss of amenity, overlooked, not according to the local plan, over-developed, etc).
You can also write to the developer directly (not the planning department), saying that they shouldn't waste any further time on this project, as you will not be cooperating. (This only works if there is no *other* property they could use as an access. If the project is large enough, they may even be able to buy another house, demolish it, and build the road through there.) | (Assuming this is the UK) planning permission is not concerned with access rights or other restrictions, which are a civil matter between the landowners concerned.
When it comes to **detailed** planning permission, the highways authority would look at road widths, turning areas, splays, etc., but access rights over the land are still not a matter within the planning authority's purview. |
63,423,534 | I am trying to get the email of the user to display it on the account page. This is what I did:
```
getuseremail() async {
FirebaseAuth _auth = FirebaseAuth.instance;
String email;
final user = _auth.currentUser();
setState() {() {
email = user.email;
}}
}
```
and then I display it, but it gives me an error that it is null. | 2020/08/15 | [
"https://Stackoverflow.com/questions/63423534",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13981087/"
] | If the user is currently logged in then you have to do the following:
```
getuseremail() async {
FirebaseAuth _auth = FirebaseAuth.instance;
String email;
final user = await _auth.currentUser();
setState(() {
email = user.email;
});
}
```
Use `await` since `currentUser()` returns a `Future<FirebaseUser>`, but make sure the user is currently logged in or you won't be able to get the email. | Can you try this?
```
final user = _auth.currentUser;
```
or use:
```
final user = _auth.getCurrentUser();
``` |
63,423,534 | I am trying to get the email of the user to display it on the account page. This is what I did:
```
getuseremail() async {
FirebaseAuth _auth = FirebaseAuth.instance;
String email;
final user = _auth.currentUser();
setState() {() {
email = user.email;
}}
}
```
and then I display it, but it gives me an error that it is null. | 2020/08/15 | [
"https://Stackoverflow.com/questions/63423534",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13981087/"
] | You can try this code:
```
String _email;
void getUserEmail() {
FirebaseAuth.instance.currentUser().then((user) {
setState(() {
_email = user.email;
});
});
}
```
1. *currentUser* returns a *Future*
2. *\_email* needs to be defined as a field on the *State* so that *setState* can update it | Can you try this?
```
final user = _auth.currentUser;
```
or use:
```
final user = _auth.getCurrentUser();
``` |
63,423,534 | I am trying to get the email of the user to display it on the account page. This is what I did:
```
getuseremail() async {
FirebaseAuth _auth = FirebaseAuth.instance;
String email;
final user = _auth.currentUser();
setState() {() {
email = user.email;
}}
}
```
and then I display it, but it gives me an error that it is null. | 2020/08/15 | [
"https://Stackoverflow.com/questions/63423534",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13981087/"
] | You can try this code:
```
String _email;
void getUserEmail() {
FirebaseAuth.instance.currentUser().then((user) {
setState(() {
_email = user.email;
});
});
}
```
1. *currentUser* returns a *Future*
2. *\_email* needs to be defined as a field on the *State* so that *setState* can update it | If the user is currently logged in then you have to do the following:
```
getuseremail() async {
FirebaseAuth _auth = FirebaseAuth.instance;
String email;
final user = await _auth.currentUser();
setState() {() {
email = user.email;
}}
}
```
Use `await` since `currentUser()` returns a `Future<FirebaseUser>`, but make sure the user is currently logged in or you won't be able to get the email. |
20,855,815 | I have a SeekBar with a custom drawable for the Thumb, and I would like to be able to show/hide it based on another control I have.
I have tried loading the drawable from resources, and then using SeekBar.setThumb() with the drawable, or null.
That hides it (the set to null), but I can never get it back. | 2013/12/31 | [
"https://Stackoverflow.com/questions/20855815",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/573149/"
] | The best way to do this is to set the drawable for the thumb from XML (as I was doing all along) and then, when you want to hide/show the thumb drawable, just manipulate its alpha value:
```
// Hide the thumb drawable if the SeekBar is disabled
if (enabled) {
seekBar.getThumb().mutate().setAlpha(255);
} else {
seekBar.getThumb().mutate().setAlpha(0);
}
```
Edit:
If the thumb appears white after setting alpha to zero, try adding
```
<SeekBar
....
android:splitTrack="false"
/>
``` | Hide your thumb in a SeekBar via xml
```
android:thumbTint="@color/transparent"
```
For example:
```
<SeekBar
android:id="@+id/seekBar"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:clickable="false"
android:thumb="@color/gray"
android:thumbTint="@android:color/transparent" />
``` |
20,855,815 | I have a SeekBar with a custom drawable for the Thumb, and I would like to be able to show/hide it based on another control I have.
I have tried loading the drawable from resources, and then using SeekBar.setThumb() with the drawable, or null.
That hides it (the set to null), but I can never get it back. | 2013/12/31 | [
"https://Stackoverflow.com/questions/20855815",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/573149/"
] | The best way to do this is to set the drawable for the thumb from XML (as I was doing all along) and then, when you want to hide/show the thumb drawable, just manipulate its alpha value:
```
// Hide the thumb drawable if the SeekBar is disabled
if (enabled) {
seekBar.getThumb().mutate().setAlpha(255);
} else {
seekBar.getThumb().mutate().setAlpha(0);
}
```
Edit:
If the thumb appears white after setting alpha to zero, try adding
```
<SeekBar
....
android:splitTrack="false"
/>
``` | You can hide the seekbar thumb by setting an arbitrarily large thumb offset value, which will move the thumb out of view. For example,
```
mySeekBar.setThumbOffset(10000); // moves the thumb out of view (to left)
```
To show the thumb again, set the offset back to zero or another conventional value:
```
mySeekBar.setThumbOffset(0); // moves the thumb back to view
```
This approach works in all API levels. |
20,855,815 | I have a SeekBar with a custom drawable for the Thumb, and I would like to be able to show/hide it based on another control I have.
I have tried loading the drawable from resources, and then using SeekBar.setThumb() with the drawable, or null.
That hides it (the set to null), but I can never get it back. | 2013/12/31 | [
"https://Stackoverflow.com/questions/20855815",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/573149/"
] | The best way to do this is to set the drawable for the thumb from XML (as I was doing all along) and then, when you want to hide/show the thumb drawable, just manipulate its alpha value:
```
// Hide the thumb drawable if the SeekBar is disabled
if (enabled) {
seekBar.getThumb().mutate().setAlpha(255);
} else {
seekBar.getThumb().mutate().setAlpha(0);
}
```
Edit:
If the thumb appears white after setting alpha to zero, try adding
```
<SeekBar
....
android:splitTrack="false"
/>
``` | **The easiest and simplest way to do it is
just to add this line to your SeekBar view in its XML**
```
android:thumb="@android:color/transparent"
``` |
20,855,815 | I have a SeekBar with a custom drawable for the Thumb, and I would like to be able to show/hide it based on another control I have.
I have tried loading the drawable from resources, and then using SeekBar.setThumb() with the drawable, or null.
That hides it (the set to null), but I can never get it back. | 2013/12/31 | [
"https://Stackoverflow.com/questions/20855815",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/573149/"
] | The best way to do this is to set the drawable for the thumb from XML (as I was doing all along) and then, when you want to hide/show the thumb drawable, just manipulate its alpha value:
```
// Hide the thumb drawable if the SeekBar is disabled
if (enabled) {
seekBar.getThumb().mutate().setAlpha(255);
} else {
seekBar.getThumb().mutate().setAlpha(0);
}
```
Edit:
If the thumb appears white after setting alpha to zero, try adding
```
<SeekBar
....
android:splitTrack="false"
/>
``` | Hide your thumb in a SeekBar via xml
```
android:thumb="@null"
``` |
20,855,815 | I have a SeekBar with a custom drawable for the Thumb, and I would like to be able to show/hide it based on another control I have.
I have tried loading the drawable from resources, and then using SeekBar.setThumb() with the drawable, or null.
That hides it (the set to null), but I can never get it back. | 2013/12/31 | [
"https://Stackoverflow.com/questions/20855815",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/573149/"
] | Hide your thumb in a SeekBar via xml
```
android:thumbTint="@color/transparent"
```
For example:
```
<SeekBar
android:id="@+id/seekBar"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:clickable="false"
android:thumb="@color/gray"
android:thumbTint="@android:color/transparent" />
``` | You can hide the seekbar thumb by setting an arbitrarily large thumb offset value, which will move the thumb out of view. For example,
```
mySeekBar.setThumbOffset(10000); // moves the thumb out of view (to left)
```
To show the thumb again, set the offset back to zero or another conventional value:
```
mySeekBar.setThumbOffset(0); // moves the thumb back to view
```
This approach works in all API levels. |
20,855,815 | I have a SeekBar with a custom drawable for the Thumb, and I would like to be able to show/hide it based on another control I have.
I have tried loading the drawable from resources, and then using SeekBar.setThumb() with the drawable, or null.
That hides it (the set to null), but I can never get it back. | 2013/12/31 | [
"https://Stackoverflow.com/questions/20855815",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/573149/"
] | Hide your thumb in a SeekBar via xml
```
android:thumbTint="@color/transparent"
```
For example:
```
<SeekBar
android:id="@+id/seekBar"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:clickable="false"
android:thumb="@color/gray"
android:thumbTint="@android:color/transparent" />
``` | Hide your thumb in a SeekBar via xml
```
android:thumb="@null"
``` |
20,855,815 | I have a SeekBar with a custom drawable for the Thumb, and I would like to be able to show/hide it based on another control I have.
I have tried loading the drawable from resources, and then using SeekBar.setThumb() with the drawable, or null.
That hides it (the set to null), but I can never get it back. | 2013/12/31 | [
"https://Stackoverflow.com/questions/20855815",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/573149/"
] | **The easiest and simplest way to do it is
just to add this line to your SeekBar view in its XML**
```
android:thumb="@android:color/transparent"
``` | You can hide the seekbar thumb by setting an arbitrarily large thumb offset value, which will move the thumb out of view. For example,
```
mySeekBar.setThumbOffset(10000); // moves the thumb out of view (to left)
```
To show the thumb again, set the offset back to zero or another conventional value:
```
mySeekBar.setThumbOffset(0); // moves the thumb back to view
```
This approach works in all API levels. |
20,855,815 | I have a SeekBar with a custom drawable for the Thumb, and I would like to be able to show/hide it based on another control I have.
I have tried loading the drawable from resources, and then using SeekBar.setThumb() with the drawable, or null.
That hides it (the set to null), but I can never get it back. | 2013/12/31 | [
"https://Stackoverflow.com/questions/20855815",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/573149/"
] | You can hide the seekbar thumb by setting an arbitrarily large thumb offset value, which will move the thumb out of view. For example,
```
mySeekBar.setThumbOffset(10000); // moves the thumb out of view (to left)
```
To show the thumb again, set the offset back to zero or another conventional value:
```
mySeekBar.setThumbOffset(0); // moves the thumb back to view
```
This approach works in all API levels. | Hide your thumb in a SeekBar via xml
```
android:thumb="@null"
``` |
20,855,815 | I have a SeekBar with a custom drawable for the Thumb, and I would like to be able to show/hide it based on another control I have.
I have tried loading the drawable from resources, and then using SeekBar.setThumb() with the drawable, or null.
That hides it (the set to null), but I can never get it back. | 2013/12/31 | [
"https://Stackoverflow.com/questions/20855815",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/573149/"
] | **The easiest and simplest way to do it is
just to add this line to your SeekBar view in its XML**
```
android:thumb="@android:color/transparent"
``` | Hide your thumb in a SeekBar via xml
```
android:thumb="@null"
``` |
21,640,920 | ```
public static void main(String[] args) {
Scanner data = new Scanner(System.in);
double low = data.nextDouble();
Attacker type = new Attacker();
type.setLow(low);
Defender fight = new Defender();
fight.result();
}
```
Defender class
```
private int ATKvalue;
public void result() {
Attacker xxx = new Attacker();
ATKvalue = xxx.ATKtype();
}
```
Attacker class
```
public Attacker() {
double low = 0
}
public void setLow(double lowpercent) {
low = lowpercent;
}
public int ATKtype() {
System.out.println(low);
}
```
I simplified my code, but it is the same idea. When I run it, low is equal to 0 instead of the user input. How can I make it equal to the user input?
My code:
```
import java.util.Random;
public class Attacker {
public double low, med, genPercent;
private int lowtype, medtype, hightype;
public Attacker() {
low = 0;
lowtype = 0;
medtype = 1;
hightype = 2;
}
public void setLow(double low) {
this.low = low;
}
public double getLow(){
return(low);
}
public void setMed(double med) {
this.med = med;
}
public int ATKtype() {
System.out.println(low);
Random generator = new Random();
genPercent = generator.nextDouble() * 100.0;
System.out.println(genPercent);
System.out.println(low);
if ( genPercent <= low ) {
System.out.println("low");
return (lowtype);
}
else if ( genPercent <= med + low ) {
System.out.println("med");
return (medtype);
}
else {
System.out.println("high");
return (hightype);
}
}
}
```
---
```
import java.util.Random;
public class Defender {
private int lowtype, medtype, hightype, DEFvalue, ATKvalue;
private double genPercent;
public Defender() {
lowtype = 0;
medtype = 1;
hightype = 2;
}
public int getDEFtype() {
Random generator = new Random();
genPercent = generator.nextDouble() ;
if ( genPercent <= 1d/3d ) {
return (lowtype);
}
else if ( genPercent <= 2d/3d ) {
return (medtype);
}
else {
return (hightype);
}
}
public void result() {
Manager ATK = new Manager();
Defender DEF = new Defender();
DEFvalue = DEF.getDEFtype();
ATKvalue = ATK.getATKtype();
System.out.println(DEFvalue);
System.out.println(ATKvalue);
if ( ATKvalue == DEFvalue ) {
System.out.println("block");
}
else {
System.out.println("hit");
}
}
}
```
---
```
import java.util.Scanner;
public class Manager {
public int getATKtype() {
Attacker genType = new Attacker();
int attack = genType.ATKtype();
return (attack);
}
public static void main(String[] args) {
System.out.print("Number of attack rounds: " );
Scanner data = new Scanner(System.in);
int round = data.nextInt();
System.out.println("Enter percentages for the number of attacks that will be "
+ "directed: low, medium, high. The total of the three percentages "
+ "must sum to 100%");
System.out.print("Percentage of attacks that will be aimed low: ");
double low = data.nextDouble();
System.out.print("Percentage of attacks that will be aimed at medium height: ");
double med = data.nextDouble();
System.out.print("Percentage of attacks that will be aimed high: ");
double high = data.nextDouble();
if ( low + med + high != 100 ){
System.out.println("The sum is not 100%. Equal probablilty will be used.");
low = med = high = 100d/3d ;
}
Attacker type = new Attacker();
type.setLow(low);
type.setMed(med);
System.out.print(type.getLow());
for ( int i = 0 ; i < round ; i++) {
Defender fight = new Defender(Attacker type);
fight.result();
}
}
}
``` | 2014/02/08 | [
"https://Stackoverflow.com/questions/21640920",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3230613/"
] | [Old answer.](http://pastie.org/8710687)
---
You are recreating instances of your `Attacker` class and the problem with that is the `low` value is associated with only that specific instance. So when you create a new one after the fact in `Defender`, you lose the value. I've also separated `Attacker` and `Defender` completely and created a `Battle` class that will take instances of `Attacker` and `Defender` and decide which one wins. That class is a face off of instances. This improves readability, logic, etc. It's a better design. Yours is sort of a mess with not much structure (no offense - just learning):
**Main class:**
```
public static void main(String args[])
{
Scanner input = new Scanner(System.in);
System.out.println("Enter attack low: ");
double low = input.nextDouble();
Attacker attacker = new Attacker();
attacker.setLow(low);
Defender defender = new Defender();
Battle battle = new Battle(attacker, defender);
battle.result();
}
```
---
**Attacker class:**
```
import java.util.Random;
public class Attacker {
public double low = 0;
public void setLow(double low) {
this.low = low;
}
public double getATKtype() {
double genPercent = new Random().nextDouble() * 100.0;
System.out.println(low);
if ( genPercent <= low ) {
System.out.println("Attack: low");
return 0; // low type
}
else if ( genPercent <= 1 + low ) { // genPercent <= medium + low
System.out.println("Attack: medium");
return 1; // medium type
}
else {
System.out.println("Attack: high");
return 2; // high type
}
}
}
```
---
**Defender class:**
```
import java.util.Random;
public class Defender {
public double getDEFtype() {
double genPercent = new Random().nextDouble();
if ( genPercent <= 1d/3d ) {
System.out.println("Defense: low");
return 0; // low type
}
else if ( genPercent <= 2d/3d ) {
System.out.println("Defense: medium");
return 1; // medium type
}
else {
System.out.println("Defense: high");
return 2; // high type
}
}
}
```
---
And finally, **Battle class:**
```
public class Battle {
private double attackValue = 0;
private double defenseValue = 0;
public Battle (Attacker attacker, Defender defender) {
attackValue = attacker.getATKtype();
defenseValue = defender.getDEFtype();
}
public void result() {
if (attackValue == defenseValue) {
System.out.println("Block");
} else if (attackValue > defenseValue) {
System.out.println("Hit");
} else { // attack is lower than defense
// do what you need for that
}
}
}
``` | To pass a value into another class, you will need to pass it as a parameter of either the other class's constructor or a setter method.
So if you give Attacker a `setLow(int low)` method, you can pass in the information.
```
public void setLow(int low) {
this.low = low;
}
```
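The constructor route mentioned above could look like this (a sketch with hypothetical class names; the key point is that the second class keeps a reference to the same instance instead of constructing a fresh one, which is the bug in the question):

```java
class Attacker2 {
    private final double low;
    Attacker2(double low) { this.low = low; } // value supplied at construction
    double getLow() { return low; }
}

class Defender2 {
    private final Attacker2 attacker; // the same instance, not a fresh one
    Defender2(Attacker2 attacker) { this.attacker = attacker; }
    void result() { System.out.println(attacker.getLow()); }
}

public class PassingDemo {
    public static void main(String[] args) {
        Attacker2 a = new Attacker2(33.3);
        new Defender2(a).result(); // prints 33.3, not 0
    }
}
```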
And then call the method when needed. |
48,694,682 | Using Bash commands, I would like to substitute field 3 of each line of a text file with the result of a command which takes the original field 3 as an argument. Fields are `/`-delimited.
Input file:
```
./REMOTE_PARENT_DIR/0x134000564f:0x4c:0x0/test_runs/testgsi_O1
./REMOTE_PARENT_DIR/0x134000564f:0x4c:0x0/test_runs/testgsi_O2
...
```
Desired output file (don't print field 1 and 2, field 3 will be result of Unix command, print remaining fields):
```
/scratch/000011/rin/test_runs/testgsi_O1
/scratch/000011/rin/test_runs/testgsi_O2
...
```
Command to translate field 3 into normal path components:
```
hostx#lfs fid2path /scratch [0x134000564f:0x4c:0x0]
/scratch/000011/rin
```
Maybe use `awk` to grab the relevant field then `sed` with command substitution then spit out the new line?
This prints out the bit I need but not sure how to substitute into the lines of the file:
```
awk -F "/" '{ system("/bin/lfs fid2path /scratch " $3) }' outfile.70.sample.tmp
``` | 2018/02/08 | [
"https://Stackoverflow.com/questions/48694682",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9335280/"
] | The following `awk` one-liner would help to achieve your goals:
```
awk 'BEGIN { FS="/"; OFS="/" } { cmd = "/bin/lfs fid2path /scratch "$3; cmd | getline path; close(cmd); for (i = 4; i <= NF; i++) { path = path""OFS""$i};print path }' your_input_file.txt
```
* Here we assign the field separator `FS` and output field separator `OFS` built-in variables to a slash `/` in the `BEGIN` rule before any input is read and processed.
* Then we create a command `cmd` variable based on your desired shell call with the third field `$3` as argument.
* We execute the `cmd` shell command and pipe its output into the built-in command `getline`, storing it in the variable `path`.
* The close() function is called to close `cmd` after it produced its output and to ensure the command runs for each record.
* Then using a `for` loop we concatenate the values starting with the 4th field until the end of the line to `path` variable separated with `OFS`.
* ***Finally*** we print out the desired, changed path. Since I don't have the `/bin/lfs` command installed I just tested it with `cmd = "echo "$3" | cut -d':' -f2"` to see the results and it looks fine.
***For example `paths.txt`:***
```
./REMOTE_PARENT_DIR/0x134000564f:0x4c:0x0/test_runs/testgsi_O1
./REMOTE_PARENT_DIR/0x134000564f:0x4c:0x0/test_runs/testgsi_O2
```
***Example call:***
```
awk 'BEGIN { FS="/"; OFS="/" } { cmd = "echo "$3" | cut -d':' -f2"; cmd | getline path; close(cmd); for (i = 4; i <= NF; i++) { path = path""OFS""$i};print path }' paths.txt
```
***Produces the result:***
```
0x4c/test_runs/testgsi_O1
0x4c/test_runs/testgsi_O2
```
Where a specific part is extracted from the third awk field `$3` using a shell command `cut -d':' -f2`. That is the `2nd` field from the colon (`:`) separated string: `0x134000564f:0x4c:0x0`. | ```
#!/bin/bash
OUTFILE03=tmpfile
while IFS=/ read first second fid remainder
do
REAL=`/bin/lfs fid2path /scratch $fid`
echo "$REAL/$remainder"
done <"input.70" >$OUTFILE03
``` |
48,694,682 | Using Bash commands, I would like to substitute field 3 of each line of a text file with the result of a command which takes the original field 3 as an argument. Fields are `/`-delimited.
Input file:
```
./REMOTE_PARENT_DIR/0x134000564f:0x4c:0x0/test_runs/testgsi_O1
./REMOTE_PARENT_DIR/0x134000564f:0x4c:0x0/test_runs/testgsi_O2
...
```
Desired output file (don't print field 1 and 2, field 3 will be result of Unix command, print remaining fields):
```
/scratch/000011/rin/test_runs/testgsi_O1
/scratch/000011/rin/test_runs/testgsi_O2
...
```
Command to translate field 3 into normal path components:
```
hostx#lfs fid2path /scratch [0x134000564f:0x4c:0x0]
/scratch/000011/rin
```
Maybe use `awk` to grab the relevant field then `sed` with command substitution then spit out the new line?
This prints out the bit I need but not sure how to substitute into the lines of the file:
```
awk -F "/" '{ system("/bin/lfs fid2path /scratch " $3) }' outfile.70.sample.tmp
``` | 2018/02/08 | [
"https://Stackoverflow.com/questions/48694682",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9335280/"
] | >
> I hope I understood your problem correctly; otherwise, tell me and I will
> delete this answer.
>
>
>
If you do not mind using **Perl**, then you can do that very easily and straightforwardly
Consider the following **one-liner**
```
perl -F'/' -ne '$F[2]="add-some-text/"; print @F[2..$#F]' file
```
It reads the file line by line and substitutes field 2 with `add-some-text/`, which gives this output:
```
add-some-text/test_runstestgsi_O1
add-some-text/test_runstestgsi_O2
```
Now if you want to use a command instead of simple text, use the backtick operator in Perl:
```
perl -F'/' -ne '$F[2]=`date "+%H:%M:%S"`; print @F[2..$#F]' file
```
or `qx()` which is more readable:
```
perl -F'/' -ne '$F[2]=qx(date "+%H:%M:%S"); print @F[2..$#F]' file
```
Also if you want to **pass an argument** you can do it as well:
```
perl -F'/' -ne '$F[2]=qx(echo -n $F[2]/); print @F[2..$#F]' file
```
and finally, for in-place substitution just use `-i.bak` before `-F`. It will create a backup file like `file.bak` and modify your original one. | ```
#!/bin/bash
OUTFILE03=tmpfile
while IFS=/ read first second fid remainder
do
REAL=`/bin/lfs fid2path /scratch $fid`
echo "$REAL/$remainder"
done <"input.70" >$OUTFILE03
``` |
31,676,682 | I was working with `char[]` and `Collection` with the below code :-
```
char[] ch = { 'a', 'b', 'c' };
List<char[]> chList = new ArrayList<char[]>();
chList.add(new char[]{'d','e','f'});
chList.add(new char[]{'g','h','i'});
String chSt = String.valueOf(ch);
String chListSt = chList.toString();
System.out.println(chSt); // outputs abc
System.out.println(chListSt); // outputs [[C@8288f50b, [C@b6d2b94b] instead of [def, ghi]
```
Now what I observed above is :-
```
String chSt = String.valueOf(ch);
```
I know the above code behaviour is correct for `char[]` in `String.valueOf()`, so for the above code `abc` is printed. Now consider the next line.
```
String chListSt = chList.toString();
```
Also for the above code I know the `toString()` for `List` is defined in `AbstractList` and in the code of this overridden `toString()` I found `buffer.append(next);` which calls the `String.valueOf()` method on the `char[]` which corresponds to `next` here.
So it should also print like `[def, ghi]`, as in the direct case with `char[]` in the line `String chSt = (String.valueOf(ch));`
Why is there this difference in behaviour between the two cases, when the same `String.valueOf()` method is called on the `char[]`? | 2015/07/28 | [
"https://Stackoverflow.com/questions/31676682",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3418420/"
] | You're seeing the difference of calling different overloads of `String.valueOf`. Here's a simpler example:
```
public class Test {
public static void main(String[] args) {
char[] chars = { 'a', 'b', 'c' };
System.out.println(String.valueOf(chars)); // abc
Object object = chars;
System.out.println(String.valueOf(object)); // [C@...
}
}
```
The call you've found in `StringBuffer` or `StringBuilder` is just going to call `String.valueOf(Object)` - which then calls `toString()` on the array (after checking the reference isn't null). Arrays don't override `toString` in Java, hence the output you're getting. | The code `String chListSt = chList.toString();` simply calls the `toString()` implementation of `List`, which in turn calls `toString()` on each element.
Your list has array objects as its elements, so you get the default array `toString()` representation on the console, which prints a type tag and hash code for each `char[]`.
>
> Instead of `char[]`, try storing `String`s in the list. It will have the same performance and the results will be readable.
>
>
> |
31,676,682 | I was working with `char[]` and `Collection` with the below code :-
```
char[] ch = { 'a', 'b', 'c' };
List<char[]> chList = new ArrayList<char[]>();
chList.add(new char[]{'d','e','f'});
chList.add(new char[]{'g','h','i'});
String chSt = String.valueOf(ch);
String chListSt = chList.toString();
System.out.println(chSt); // outputs abc
System.out.println(chListSt); // outputs [[C@8288f50b, [C@b6d2b94b] instead of [def, ghi]
```
Now what I observed above is :-
```
String chSt = String.valueOf(ch);
```
I know the above code behaviour is correct for `char[]` in `String.valueOf()`, so for the above code `abc` is printed. Now consider the next line.
```
String chListSt = chList.toString();
```
Also for the above code I know the `toString()` for `List` is defined in `AbstractList` and in the code of this overriden `toString()` I found `buffer.append(next);` which calls `String.valueOf()` method on the `char[]` which corresponds to `next` here.
So it should also print like `[def, ghi]`, as in the direct case with `char[]` in the line `String chSt = (String.valueOf(ch));`
Why is there this difference in behaviour between the two cases, when the same `String.valueOf()` method is called on the `char[]`? | 2015/07/28 | [
"https://Stackoverflow.com/questions/31676682",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3418420/"
] | You're seeing the difference of calling different overloads of `String.valueOf`. Here's a simpler example:
```
public class Test {
public static void main(String[] args) {
char[] chars = { 'a', 'b', 'c' };
System.out.println(String.valueOf(chars)); // abc
Object object = chars;
System.out.println(String.valueOf(object)); // [C@...
}
}
```
The call you've found in `StringBuffer` or `StringBuilder` is just going to call `String.valueOf(Object)` - which then calls `toString()` on the array (after checking the reference isn't null). Arrays don't override `toString` in Java, hence the output you're getting. | ```
char[] ch = { 'a', 'b', 'c' };
```
The above one is a literal, and hence the `valueOf()` method gives its value.
```
chList.add(new char[]{'d','e','f'});
chList.add(new char[]{'g','h','i'});
```
These lists have objects stored in them; as a result you are getting their hash codes... |
111,389 | The `fn odd_ones(x: u32) -> bool` function should return `true` when `x` contains an odd number of 1s.
**Assumption:**
1. `x` is 32 bit unsigned.
**Restriction:**
1. The code should contain a total of at most 12 arithmetic, bitwise and logical operations.
**Forbidden:**
1. Conditionals, loops, function calls and macros.
2. Division, modulus and multiplication.
3. Relative comparison operators (`<`, `>`, `<=` and `>=`).
**Allowed operations:**
1. All bit level and logic operations.
2. Left and right shifts, but only with shift amounts between 0 and w - 1
3. Addition and subtraction.
4. Equality (`==`) and inequality (`!=`) tests.
5. Casting between `u32` and `i32`.
---
**My code:** ([Rust playground](http://is.gd/OaFj7o))
```
fn odd_ones(x: u32) -> bool {
let mid = (x >> 16) ^ x;
let mid2 = (mid >> 8) ^ mid;
let mid3 = (mid2 >> 4) ^ mid2;
let mid4 = (mid3 >> 2) ^ mid3;
(((mid4 >> 1) ^ mid4) & 1) == 1
}
fn odd_ones_test(x: u32) -> bool {
let sum = (0..32).map(|y| x >> y )
.fold(0, |sum ,y| sum + (y & 1));
sum % 2 != 0
}
fn test (from: u32, to: u32) -> bool {
(from..to).all(|x| odd_ones_test(x) == odd_ones(x))
}
fn main() {
println!("{}", test(!0 - 45345, !0));
}
```
Any better way of doing this? | 2015/11/21 | [
"https://codereview.stackexchange.com/questions/111389",
"https://codereview.stackexchange.com",
"https://codereview.stackexchange.com/users/77015/"
] | Leveraging code that other people wrote is always a great idea. In this case, there is [`count_ones`](http://doc.rust-lang.org/std/primitive.u32.html#method.count_ones):
```
fn odd_ones(x: u32) -> bool {
x.count_ones() % 2 == 1
}
```
This method [is a shim](https://github.com/rust-lang/rust/blob/1.4.0/src/libcore/num/mod.rs#L1185-L1194) for the LLVM intrinsic `@llvm.ctpop.i32`, which seems likely to become the very optimized SSE4.2 `popcnt` instruction when available.
Beyond that, you have some poor indentation habits. Rust uses 4-space indents, not the 3 (?!?) spaces you have in your `odd_ones` function.
There should not be a space between a function name and the arguments, whether in the function definition or the function call. Your `test` method should be fixed.
Be consistent with your spacing on commas. No space before, one space after. `|sum ,y|` is just wrong.
You should give names to magic constants. A rogue `32` laying around doesn't mean anything. Give it a name like `BITS_IN_U32`. There's an [unstable constant](http://doc.rust-lang.org/std/u32/constant.BITS.html) that you might be able to use someday.
When mapping and folding over an iterator, you might as well put all the mapping into the `map` call. There's no reason to do the bitwise-and in the `fold`.
I have *no idea* what `!0 - 45345` is supposed to mean or why those particular values are useful. That's bad news when you try to understand this code in the future.
```
const BITS_IN_U32: usize = 32;
fn odd_ones(x: u32) -> bool {
x.count_ones() % 2 == 1
}
fn odd_ones_test(x: u32) -> bool {
let sum =
(0..BITS_IN_U32)
.map(|y| (x >> y) & 1)
.fold(0, |sum, y| sum + y);
sum % 2 != 0
}
fn test(from: u32, to: u32) -> bool {
(from..to).all(|x| odd_ones_test(x) == odd_ones(x))
}
fn main() {
println!("{}", test(!0 - 45345, !0));
}
``` | Once I had to count the bits that were set in an array of one million integers. After several naive (very slow) starts I came across some bit-twiddling hacks that sped things up dramatically. This is a C# function that shows two methods; the first is commented out.
```
private bool isOddOnes(UInt32 numToCheck)
{
//first method
//Dim bitsSetCt As UInt32 = 0
//numToCheck = numToCheck - ((numToCheck >> 1) And &H55555555UI)
//numToCheck = (numToCheck And &H33333333UI) + ((numToCheck >> 2) And &H33333333UI)
//bitsSetCt = ((numToCheck + (numToCheck >> 4) And &HF0F0F0FUI) * &H1010101UI) >> 24
//second method
UInt32 bitsSetCt = default(UInt32);
bitsSetCt = numToCheck - ((numToCheck >> 1) & 0x55555555u);
bitsSetCt = ((bitsSetCt >> 2) & 0x33333333u) + (bitsSetCt & 0x33333333u);
bitsSetCt = ((bitsSetCt >> 4) + bitsSetCt) & 0xf0f0f0fu;
bitsSetCt = ((bitsSetCt >> 8) + bitsSetCt) & 0xff00ffu;
bitsSetCt = ((bitsSetCt >> 16) + bitsSetCt) & 0xffffu;
return (bitsSetCt & 1u) == 1u;
}
``` |
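The accepted answer leans on a library popcount rather than hand-rolled bit tricks; the same approach can be sketched in Python (here `bin(x).count("1")` stands in for Rust's `count_ones`; on Python 3.10+, `x.bit_count()` is the direct equivalent):

```python
def odd_ones(x: int) -> bool:
    # Same idea as Rust's x.count_ones() % 2 == 1: count the set bits,
    # then test the parity of the count.
    return bin(x).count("1") % 2 == 1

print(odd_ones(0b1011))  # True (three ones)
print(odd_ones(0b1001))  # False (two ones)
```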
111,389 | The `fn odd_ones(x: u32) -> bool` function should return `true` when `x` contains an odd number of 1s.
**Assumption:**
1. `x` is 32 bit unsigned.
**Restriction:**
1. The code should contain a total of at most 12 arithmetic, bitwise and logical operations.
**Forbidden:**
1. Conditionals, loops, function calls and macros.
2. Division, modulus and multiplication.
3. Relative comparison operators (`<`, `>`, `<=` and `>=`).
**Allowed operations:**
1. All bit level and logic operations.
2. Left and right shifts, but only with shift amounts between 0 and w - 1
3. Addition and subtraction.
4. Equality (`==`) and inequality (`!=`) tests.
5. Casting between `u32` and `i32`.
---
**My code:** ([Rust playground](http://is.gd/OaFj7o))
```
fn odd_ones(x: u32) -> bool {
let mid = (x >> 16) ^ x;
let mid2 = (mid >> 8) ^ mid;
let mid3 = (mid2 >> 4) ^ mid2;
let mid4 = (mid3 >> 2) ^ mid3;
(((mid4 >> 1) ^ mid4) & 1) == 1
}
fn odd_ones_test(x: u32) -> bool {
let sum = (0..32).map(|y| x >> y )
.fold(0, |sum ,y| sum + (y & 1));
sum % 2 != 0
}
fn test (from: u32, to: u32) -> bool {
(from..to).all(|x| odd_ones_test(x) == odd_ones(x))
}
fn main() {
println!("{}", test(!0 - 45345, !0));
}
```
Any better way of doing this? | 2015/11/21 | [
"https://codereview.stackexchange.com/questions/111389",
"https://codereview.stackexchange.com",
"https://codereview.stackexchange.com/users/77015/"
] | Leveraging code that other people wrote is always a great idea. In this case, there is [`count_ones`](http://doc.rust-lang.org/std/primitive.u32.html#method.count_ones):
```
fn odd_ones(x: u32) -> bool {
x.count_ones() % 2 == 1
}
```
This method [is a shim](https://github.com/rust-lang/rust/blob/1.4.0/src/libcore/num/mod.rs#L1185-L1194) for the LLVM intrinsic `@llvm.ctpop.i32`, which seems likely to become the very optimized SSE4.2 `popcnt` instruction when available.
Beyond that, you have some poor indentation habits. Rust uses 4-space indents, not the 3 (?!?) spaces you have in your `odd_ones` function.
There should not be a space between a function name and the arguments, whether in the function definition or the function call. Your `test` method should be fixed.
Be consistent with your spacing on commas. No space before, one space after. `|sum ,y|` is just wrong.
You should give names to magic constants. A rogue `32` laying around doesn't mean anything. Give it a name like `BITS_IN_U32`. There's an [unstable constant](http://doc.rust-lang.org/std/u32/constant.BITS.html) that you might be able to use someday.
When mapping and folding over an iterator, you might as well put all the mapping into the `map` call. There's no reason to do the bitwise-and in the `fold`.
I have *no idea* what `!0 - 45345` is supposed to mean or why those particular values are useful. That's bad news when you try to understand this code in the future.
```
const BITS_IN_U32: usize = 32;
fn odd_ones(x: u32) -> bool {
x.count_ones() % 2 == 1
}
fn odd_ones_test(x: u32) -> bool {
let sum =
(0..BITS_IN_U32)
.map(|y| (x >> y) & 1)
.fold(0, |sum, y| sum + y);
sum % 2 != 0
}
fn test(from: u32, to: u32) -> bool {
(from..to).all(|x| odd_ones_test(x) == odd_ones(x))
}
fn main() {
println!("{}", test(!0 - 45345, !0));
}
``` | Since I already know a better way of doing this thanks to [this link](http://graphics.stanford.edu/~seander/bithacks.html#ParityNaive) given by [@glampert](https://codereview.stackexchange.com/users/39810/glampert) in the comments and none of the answers so far followed the rules, I will answer my own question to wrap things up:
```
fn odd_ones(x: u32) -> bool {
let mut m = x ^ (x >> 16);
m ^= m >> 8;
m ^= m >> 4;
m &= 0xf;
((0x6996 >> m) & 1) == 1
}
```
Using the `0x6996` bit pattern saves 2 operations. Also, the use of `^=` makes the code more compact. |
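The `0x6996` nibble-parity trick in the self-answer above is easy to cross-check by brute force; here is a sketch in Python (masking to 32 bits stands in for Rust's `u32`):

```python
def odd_ones(x: int) -> bool:
    # Fold the 32-bit word down to a 4-bit value with the same parity,
    # then look that value up in 0x6996, the 16-bit truth table of
    # 4-bit parity (0b0110_1001_1001_0110).
    m = (x ^ (x >> 16)) & 0xFFFFFFFF
    m ^= m >> 8
    m ^= m >> 4
    m &= 0xF
    return ((0x6996 >> m) & 1) == 1

# Brute-force check against a naive popcount over a small range.
assert all(odd_ones(x) == (bin(x).count("1") % 2 == 1) for x in range(1 << 16))
```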
111,389 | The `fn odd_ones(x: u32) -> bool` function should return `true` when `x` contains an odd number of 1s.
**Assumption:**
1. `x` is 32 bit unsigned.
**Restriction:**
1. The code should contain a total of at most 12 arithmetic, bitwise and logical operations.
**Forbidden:**
1. Conditionals, loops, function calls and macros.
2. Division, modulus and multiplication.
3. Relative comparison operators (`<`, `>`, `<=` and `>=`).
**Allowed operations:**
1. All bit level and logic operations.
2. Left and right shifts, but only with shift amounts between 0 and w - 1
3. Addition and subtraction.
4. Equality (`==`) and inequality (`!=`) tests.
5. Casting between `u32` and `i32`.
---
**My code:** ([Rust playground](http://is.gd/OaFj7o))
```
fn odd_ones(x: u32) -> bool {
let mid = (x >> 16) ^ x;
let mid2 = (mid >> 8) ^ mid;
let mid3 = (mid2 >> 4) ^ mid2;
let mid4 = (mid3 >> 2) ^ mid3;
(((mid4 >> 1) ^ mid4) & 1) == 1
}
fn odd_ones_test(x: u32) -> bool {
let sum = (0..32).map(|y| x >> y )
.fold(0, |sum ,y| sum + (y & 1));
sum % 2 != 0
}
fn test (from: u32, to: u32) -> bool {
(from..to).all(|x| odd_ones_test(x) == odd_ones(x))
}
fn main() {
println!("{}", test(!0 - 45345, !0));
}
```
Any better way of doing this? | 2015/11/21 | [
"https://codereview.stackexchange.com/questions/111389",
"https://codereview.stackexchange.com",
"https://codereview.stackexchange.com/users/77015/"
] | Once I had to count the bits that were set in an array of one million integers. After several naive(very slow) starts I came across some bit twiddling hacks that sped things up dramatically. This is a C# function that contains two methods.
```
private bool isOddOnes(UInt32 numToCheck)
{
//first method
//Dim bitsSetCt As UInt32 = 0
//numToCheck = numToCheck - ((numToCheck >> 1) And &H55555555UI)
//numToCheck = (numToCheck And &H33333333UI) + ((numToCheck >> 2) And &H33333333UI)
//bitsSetCt = ((numToCheck + (numToCheck >> 4) And &HF0F0F0FUI) * &H1010101UI) >> 24
//second method
UInt32 bitsSetCt = default(UInt32);
bitsSetCt = numToCheck - ((numToCheck >> 1) & 0x55555555u);
bitsSetCt = ((bitsSetCt >> 2) & 0x33333333u) + (bitsSetCt & 0x33333333u);
bitsSetCt = ((bitsSetCt >> 4) + bitsSetCt) & 0xf0f0f0fu;
bitsSetCt = ((bitsSetCt >> 8) + bitsSetCt) & 0xff00ffu;
bitsSetCt = ((bitsSetCt >> 16) + bitsSetCt) & 0xffffu;
return (bitsSetCt & 1u) == 1u;
}
``` | Since I already know a better way of doing this thanks to [this link](http://graphics.stanford.edu/~seander/bithacks.html#ParityNaive) given by [@glampert](https://codereview.stackexchange.com/users/39810/glampert) in the comments and none of the answers so far followed the rules, I will answer my own question to wrap things up:
```
fn odd_ones(x: u32) -> bool {
let mut m = x ^ (x >> 16);
m ^= m >> 8;
m ^= m >> 4;
m &= 0xf;
((0x6996 >> m) & 1) == 1
}
```
Using the `0x6996` bit pattern saves 2 operations. Also, the use of `^=` makes the code more compact. |
22,221,270 | How do I create a named function expressions in CoffeeScript like the examples below?
```
var a = function b (param1) {}
```
or
```
return function link (scope) {}
``` | 2014/03/06 | [
"https://Stackoverflow.com/questions/22221270",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/228049/"
] | I may be a bit late to the party, but I just realised that you actually create named functions when using the `class` keyword.
Example:
```
class myFunction
# The function's actual code is wrapped in the constructor method
constructor: ->
console.log 'something'
console.log myFunction # -> function myFunction() { ... }
myFunction() # -> 'something'
``` | Coffeescript doesn't support the latter (named functions), but the former can be achieved with
```
a = (param1) ->
console.log param1
``` |
49,121,826 | I am working with data in R and have a string related question.
If I have a vector (say books),
```
books <- c('123 Book1 331','51 Book2','Book3 69','Book4')
```
I want to split strings that start with numbers and keep the rest, else leave it as it is.
I would like to extract info in a way as shown below:
```
[1] "Book1 331" "Book2" "Book3 69" "Book4"
```
What package do i have to use in R? And what function? | 2018/03/06 | [
"https://Stackoverflow.com/questions/49121826",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6872330/"
] | Select the text to be commented then press Ctrl+K and Ctrl+C. | Did you try to install React or Babel plugin? |
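The transformation the question asks for is stripping an optional leading run of digits plus whitespace; a regular expression handles it directly. Here is a sketch in Python; in R, something like `sub("^[0-9]+ +", "", books)` should apply the same pattern:

```python
import re

books = ["123 Book1 331", "51 Book2", "Book3 69", "Book4"]

# Strip a leading run of digits followed by whitespace, if present;
# strings that don't start with digits pass through unchanged.
cleaned = [re.sub(r"^\d+\s+", "", b) for b in books]
print(cleaned)  # ['Book1 331', 'Book2', 'Book3 69', 'Book4']
```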
49,121,826 | I am working with data in R and have a string related question.
If I have a vector (say books),
```
books <- c('123 Book1 331','51 Book2','Book3 69','Book4')
```
I want to split strings that start with numbers and keep the rest, else leave it as it is.
I would like to extract info in a way as shown below:
```
[1] "Book1 331" "Book2" "Book3 69" "Book4"
```
What package do i have to use in R? And what function? | 2018/03/06 | [
"https://Stackoverflow.com/questions/49121826",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6872330/"
] | Select the text to be commented then press Ctrl+K and Ctrl+C. | This might sound dumb but, have you tried re-installing VSCode? Commenting out JSX in a .js file works fine for me without any plugins. (I use windows) |
49,121,826 | I am working with data in R and have a string related question.
If I have a vector (say books),
```
books <- c('123 Book1 331','51 Book2','Book3 69','Book4')
```
I want to split strings that start with numbers and keep the rest, else leave it as it is.
I would like to extract info in a way as shown below:
```
[1] "Book1 331" "Book2" "Book3 69" "Book4"
```
What package do i have to use in R? And what function? | 2018/03/06 | [
"https://Stackoverflow.com/questions/49121826",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6872330/"
] | Select the text to be commented then press Ctrl+K and Ctrl+C. | I was having this issue and for me the problem was the extension **Babel ES6/ES7**.
I've uninstalled that extension.
If that doesn't help you could try changing the Language Mode to **JavaScript React** |
44,015,241 | I'm new at prolog and it is messing up my head. Could you guys give me a simple example like.. the days of the week! Let's say I have a
```
day(mon, tue, wed, thu, fri).
```
and I wanna know in which day of the week I'm on (assuming on start it'll always be set as "monday", and I don't even know how to do that also but I think I can figure it out), and I want to set a variable to "tomorrow", (like.. if today is monday I wanna know tomorrow is tuesday)
I know it sounds stupid but I'm used with c and java and this is so hard for me...
Thank you! | 2017/05/17 | [
"https://Stackoverflow.com/questions/44015241",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1985937/"
] | If you don't mind using libraries, you can also do something like this:
```
:- use_module(library(clpfd)).
ord_weekday(0, mon).
ord_weekday(1, tue).
ord_weekday(2, wed).
ord_weekday(3, thu).
ord_weekday(4, fri).
ord_weekday(5, sat).
ord_weekday(6, sun).
day_next(D, N) :-
(X+1) mod 7 #= Y,
ord_weekday(X, D),
ord_weekday(Y, N).
```
First we mapped 0 to Monday, 1 to Tuesday, and so on; then, we mapped 0 to 1, 1 to 2, ..., 6 to 0.
Now you can query this like that:
```
?- day_next(mon, X).
X = tue.
?- day_next(X, mon).
X = sun.
```
Importantly, you can leave both arguments free variables and enumerate all possible combinations:
```
?- day_next(D, N).
D = mon, N = tue ;
D = tue, N = wed ;
D = wed, N = thu ;
D = thu, N = fri ;
D = fri, N = sat ;
D = sat, N = sun ;
D = sun, N = mon.
```
This gives you the exact same results as [this solution](https://stackoverflow.com/a/44015672/1812457). I would prefer the other solution for this particular problem (next day of the week), but there might be something else to be learned from the example here. | Here comes the canonical answer—based on [`append/3`](https://www.complang.tuwien.ac.at/ulrich/iso-prolog/prologue#append):
```
today_tomorrow(T0, T1) :-
Days = [mon,tue,wed,thu,fri,sat,sun,mon],
append(_, [T0,T1|_], Days).
```
Let's ask the *most general query*!
```
?- today_tomorrow(X, Y).
X = mon, Y = tue
; X = tue, Y = wed
; X = wed, Y = thu
; X = thu, Y = fri
; X = fri, Y = sat
; X = sat, Y = sun
; X = sun, Y = mon
; false. % no more solutions
``` |
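For comparison with the Prolog relations above, the same today/tomorrow mapping can be sketched imperatively in Python using modular indexing (the day names follow the `day/1` facts):

```python
DAYS = ["mon", "tue", "wed", "thu", "fri", "sat", "sun"]

def tomorrow(day: str) -> str:
    # Today's index plus one, wrapped around the week with mod 7.
    return DAYS[(DAYS.index(day) + 1) % 7]

def yesterday(day: str) -> str:
    # The inverse relation: step back one; mod 7 keeps it in range.
    return DAYS[(DAYS.index(day) - 1) % 7]

print(tomorrow("mon"), yesterday("mon"))  # tue sun
```

Unlike the Prolog predicates, this runs in one direction only; the relational versions can also be queried with the second argument bound, or with both arguments free.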
44,015,241 | I'm new at prolog and it is messing up my head. Could you guys give me a simple example like.. the days of the week! Let's say I have a
```
day(mon, tue, wed, thu, fri).
```
and I wanna know in which day of the week I'm on (assuming on start it'll always be set as "monday", and I don't even know how to do that also but I think I can figure it out), and I want to set a variable to "tomorrow", (like.. if today is monday I wanna know tomorrow is tuesday)
I know it sounds stupid but I'm used with c and java and this is so hard for me...
Thank you! | 2017/05/17 | [
"https://Stackoverflow.com/questions/44015241",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1985937/"
] | If you don't mind using libraries, you can also do something like this:
```
:- use_module(library(clpfd)).
ord_weekday(0, mon).
ord_weekday(1, tue).
ord_weekday(2, wed).
ord_weekday(3, thu).
ord_weekday(4, fri).
ord_weekday(5, sat).
ord_weekday(6, sun).
day_next(D, N) :-
(X+1) mod 7 #= Y,
ord_weekday(X, D),
ord_weekday(Y, N).
```
First we mapped 0 to Monday, 1 to Tuesday, and so on; then, we mapped 0 to 1, 1 to 2, ..., 6 to 0.
Now you can query this like that:
```
?- day_next(mon, X).
X = tue.
?- day_next(X, mon).
X = sun.
```
Importantly, you can leave both arguments free variables and enumerate all possible combinations:
```
?- day_next(D, N).
D = mon, N = tue ;
D = tue, N = wed ;
D = wed, N = thu ;
D = thu, N = fri ;
D = fri, N = sat ;
D = sat, N = sun ;
D = sun, N = mon.
```
This gives you the exact same results as [this solution](https://stackoverflow.com/a/44015672/1812457). I would prefer the other solution for this particular problem (next day of the week), but there might be something else to be learned from the example here. | ### program
```
([user]) .
%%%% database-style prolog program .
%%% for day , tomorrow , yesterday .
day( day( sk( 10'7 ) , pk( 36'sun ) , nm( 'sunday' ) , next( 36'mon ) ) ) .
day( day( sk( 10'6 ) , pk( 36'sat ) , nm( 'saturday' ) , next( 36'sun ) ) ) .
day( day( sk( 10'5 ) , pk( 36'fri ) , nm( 'friday' ) , next( 36'sat ) ) ) .
day( day( sk( 10'4 ) , pk( 36'thu ) , nm( 'thursday' ) , next( 36'fri ) ) ) .
day( day( sk( 10'3 ) , pk( 36'wed ) , nm( 'wednesday' ) , next( 36'thu ) ) ) .
day( day( sk( 10'2 ) , pk( 36'tue ) , nm( 'tuesday' ) , next( 36'wed ) ) ) .
day( day( sk( 10'1 ) , pk( 36'mon ) , nm( 'monday' ) , next( 36'tue ) ) ) .
tomorrow( tomorrow( day( _tomorrow_ ) ) , today( day( _today_ ) ) )
:-
(
_today_ = ( day( sk( _ ) , pk( _ ) , nm( _ ) , next( _next_ ) ) ) ,
_tomorrow_ = ( day( sk( _ ) , pk( _next_ ) , nm( _ ) , next( _ ) ) ) ,
day( _today_ ) ,
day( _tomorrow_ )
) .
yesterday( yesterday( day( _yesterday_ ) ) , today( day( _today_ ) ) )
:-
(
tomorrow( tomorrow( day( _today_ ) ) , today( day( _yesterday_ ) ) )
) .
%%% legend
%% pk --- primary key
%% sk --- secondary (sort) key
%% nm --- name
%% 10'2 --- a number whereby each digit has 10 possibilities ( [0-9] )
%% 16'ff --- a number whereby each digit has 16 possibilities ( [0-9,a-f] )
%% 36'mon --- a number whereby each digit has 36 possibilities ( [0-9,a-z] )
```
### example usage
```
%%%% example usage
%% query for all day
?- day( DAY ) .
%@ DAY = day(sk(7),pk(37391),nm(sunday),next(29399)) ? ;
%@ DAY = day(sk(6),pk(36677),nm(saturday),next(37391)) ? ;
%@ DAY = day(sk(5),pk(20430),nm(friday),next(36677)) ? ;
%@ DAY = day(sk(4),pk(38226),nm(thursday),next(20430)) ? ;
%@ DAY = day(sk(3),pk(41989),nm(wednesday),next(38226)) ? ;
%@ DAY = day(sk(2),pk(38678),nm(tuesday),next(41989)) ? ;
%@ DAY = day(sk(1),pk(29399),nm(monday),next(38678))
%% query for all day , sorted
?- setof( _day_ , day( _day_ ) , VECTOR ) .
%@ VECTOR =
%@ [
%@ day(sk(1),pk(29399),nm(monday),next(38678)) ,
%@ day(sk(2),pk(38678),nm(tuesday),next(41989)) ,
%@ day(sk(3),pk(41989),nm(wednesday),next(38226)) ,
%@ day(sk(4),pk(38226),nm(thursday),next(20430)) ,
%@ day(sk(5),pk(20430),nm(friday),next(36677)) ,
%@ day(sk(6),pk(36677),nm(saturday),next(37391)) ,
%@ day(sk(7),pk(37391),nm(sunday),next(29399))
%@ ]
%% query for all day , sorted
?- use_module( library( lists ) ) . % for ``member`` .
?- setof( _day_ , day( _day_ ) , _vector_ ) , member( DAY , _vector_ ) .
%@ DAY = day(sk(1),pk(29399),nm(monday),next(38678)) ? ;
%@ DAY = day(sk(2),pk(38678),nm(tuesday),next(41989)) ? ;
%@ DAY = day(sk(3),pk(41989),nm(wednesday),next(38226)) ? ;
%@ DAY = day(sk(4),pk(38226),nm(thursday),next(20430)) ? ;
%@ DAY = day(sk(5),pk(20430),nm(friday),next(36677)) ? ;
%@ DAY = day(sk(6),pk(36677),nm(saturday),next(37391)) ? ;
%@ DAY = day(sk(7),pk(37391),nm(sunday),next(29399)) ? ;
%% query for all yesterday
?-
_query_ =
(
yesterday( yesterday( day( _yesterday_ ) ) , today( day( _today_ ) ) )
)
,
setof( [ _yesterday_ , _today_ ] , _query_ , VECTOR )
.
%@ VECTOR =
%@ [
%@ [ day(sk(1),pk(29399),nm(monday),next(38678)) , day(sk(2),pk(38678),nm(tuesday),next(41989)) ] ,
%@ [ day(sk(2),pk(38678),nm(tuesday),next(41989)) , day(sk(3),pk(41989),nm(wednesday),next(38226))] ,
%@ [ day(sk(3),pk(41989),nm(wednesday),next(38226)) , day(sk(4),pk(38226),nm(thursday),next(20430)) ] ,
%@ [ day(sk(4),pk(38226),nm(thursday),next(20430)) , day(sk(5),pk(20430),nm(friday),next(36677)) ] ,
%@ [ day(sk(5),pk(20430),nm(friday),next(36677)) , day(sk(6),pk(36677),nm(saturday),next(37391)) ] ,
%@ [ day(sk(6),pk(36677),nm(saturday),next(37391)) , day(sk(7),pk(37391),nm(sunday),next(29399)) ] ,
%@ [ day(sk(7),pk(37391),nm(sunday),next(29399)) , day(sk(1),pk(29399),nm(monday),next(38678)) ]
%@ ]
%% query for all tomorrow .
% format results as a json-style map .
?-
_query_ =
(
tomorrow( tomorrow( day( _tomorrow_ ) ) , today( day( _today_ ) ) )
,
_tomorrow_ = day( _ , _ , nm( _nm_tomorrow_ ) , _ )
,
_today_ = day( _ , _ , nm( _nm_today_ ) , _ )
)
,
_each_ =
(
{ tommorrow: _nm_tomorrow_ , today: _nm_today_ }
)
,
setof( _each_ , _query_ , VECTOR )
.
%@ VECTOR = [{tommorrow:friday,today:thursday}] ? ;
%@ VECTOR = [{tommorrow:monday,today:sunday}] ? ;
%@ VECTOR = [{tommorrow:saturday,today:friday}] ? ;
%@ VECTOR = [{tommorrow:sunday,today:saturday}] ? ;
%@ VECTOR = [{tommorrow:thursday,today:wednesday}] ? ;
%@ VECTOR = [{tommorrow:tuesday,today:monday}] ? ;
%@ VECTOR = [{tommorrow:wednesday,today:tuesday}]
%% query for the day named monday
?-
_nm_ = 'monday'
,
_day_ = day( sk( _sk_ ) , pk( _pk_ ) , nm( _nm_ ) , _etc_ )
,
setof( _day_ , day( _day_ ) , VECTOR )
.
%@ VECTOR =
%@ [
%@ day(sk(1),pk(29399),nm(monday),next(38678))
%@ ]
%% query for the today of the tomorrow named monday
?-
_today_ = day( _ , _ , nm( _nm_today_ ) , _ )
,
_tomorrow_ = day( _ , _ , nm( _nm_tomorrow_ ) , _ )
,
_nm_tomorrow_ = 'monday'
,
_query_ = tomorrow( tomorrow( day( _tomorrow_ ) ) , today( day( _today_ ) ) )
,
_each_ = { tomorrow: _nm_tomorrow_ , today: _nm_today_ }
,
setof( _each_ , _query_ , VECTOR )
.
%@ VECTOR =
%@ [
%@ { tomorrow: monday , today: sunday }
%@ ]
``` |