| qid (int64) | question (string) | date (string) | metadata (list) | response_j (string) | response_k (string) |
|---|---|---|---|---|---|
26,734,569 | How do I add 2 background colours to a container div? I've seen some solutions that work, but only with 50% height for each colour. However, I need one colour to have a set height (see image).
My current solution is for background 1 to be a 1x260px background image, with background 2 being a background colour. However, this leaves you, as you open the page, with a flash of background colour 2 until background 1 has finished loading; I would like to avoid this flash. Here is the structure of the page:

Thanks for the help in advance!
**UPDATE:**
I couldn't get any of the solutions working properly in my context, but eventually solved it myself (I realise now my brief might've been slightly incomplete).
Here's my [JSFiddle](http://jsfiddle.net/zj8rrsow/2/)
Here's the Code:
```css
html, body {
margin:0;
padding:0;
}
.other-content {
background-color:lightblue;
width:100%;
height:20px;
}
.page-content {
width:100%;
background-color:lightgray;
}
.container {
width:600px;
height:700px; /* This height is flexible and can change to whatever value you want */
background-color:gray;
margin-top:-50px;
margin-left:auto;
margin-right:auto;
}
.white-bg {
background-color:dodgerblue;
height:50px;
width:100%;
}
```
```html
<div class="other-content"></div>
<div class="page-content">
<div class="white-bg"></div>
<div class="container"></div>
</div>
<div class="other-content"></div>
``` | 2014/11/04 | [
"https://Stackoverflow.com/questions/26734569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1500992/"
] | You can add a pseudo-element like `::after` on your page.
```css
* { margin: 0; padding: 0; }
body {
background: #00AEEF;
}
body:after {
background: #0F75BC;
content: "";
display: block;
height: 50px;
left: 0;
position: absolute; /* needed for the left/top offsets to take effect */
top: 0;
width: 100%;
}
``` | **Use this:**
CSS:
```css
.nice{
width:500px;
height:500px;
margin:auto;
position:relative;
background:red;
color:white;
z-index:1;
}
.nice:after{
content:"";
position:absolute;
left:0;
right:0;
top:0;
bottom:70%;
background:green;
z-index:-1;
}
```
```html
<div class="nice">helo helo helo </div>
``` |
26,734,569 | How do I add 2 background colours to a container div? I've seen some solutions that work, but only with 50% height for each colour. However, I need one colour to have a set height (see image).
My current solution is for background 1 to be a 1x260px background image, with background 2 being a background colour. However, this leaves you, as you open the page, with a flash of background colour 2 until background 1 has finished loading; I would like to avoid this flash. Here is the structure of the page:

Thanks for the help in advance!
**UPDATE:**
I couldn't get any of the solutions working properly in my context, but eventually solved it myself (I realise now my brief might've been slightly incomplete).
Here's my [JSFiddle](http://jsfiddle.net/zj8rrsow/2/)
Here's the Code:
```css
html, body {
margin:0;
padding:0;
}
.other-content {
background-color:lightblue;
width:100%;
height:20px;
}
.page-content {
width:100%;
background-color:lightgray;
}
.container {
width:600px;
height:700px; /* This height is flexible and can change to whatever value you want */
background-color:gray;
margin-top:-50px;
margin-left:auto;
margin-right:auto;
}
.white-bg {
background-color:dodgerblue;
height:50px;
width:100%;
}
```
```html
<div class="other-content"></div>
<div class="page-content">
<div class="white-bg"></div>
<div class="container"></div>
</div>
<div class="other-content"></div>
``` | 2014/11/04 | [
"https://Stackoverflow.com/questions/26734569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1500992/"
] | You could use CSS color gradients. Color stops can be specified in pixels for the first band, and then 100% for the rest. This way you won't have to calculate auto-height depending on the container.
Here is an example based on your use case. I have taken 60px as the height of the first band to make it fit neatly in the snippet below; you would make it 260px as required. (Click full-page for a better view.)
```css
html, body {
height: 100%;
}
.container {
width: 70%;
margin: auto;
height: 100%;
background-image:
linear-gradient(
to bottom,
#0f75bc, /* Start with color of top band */
#0f75bc 60px, /* Top-band color stops at 60px */
#00aeef 60px, /* Bottom-band color starts at 60px */
#00aeef 100% /* Bottom-band color continues to 100%, i.e. the remaining height */
);
}
```
```html
<div class="container"></div>
``` | **Use this:**
CSS:
```css
.nice{
width:500px;
height:500px;
margin:auto;
position:relative;
background:red;
color:white;
z-index:1;
}
.nice:after{
content:"";
position:absolute;
left:0;
right:0;
top:0;
bottom:70%;
background:green;
z-index:-1;
}
```
```html
<div class="nice">helo helo helo </div>
``` |
26,734,569 | How do I add 2 background colours to a container div? I've seen some solutions that work, but only with 50% height for each colour. However, I need one colour to have a set height (see image).
My current solution is for background 1 to be a 1x260px background image, with background 2 being a background colour. However, this leaves you, as you open the page, with a flash of background colour 2 until background 1 has finished loading; I would like to avoid this flash. Here is the structure of the page:

Thanks for the help in advance!
**UPDATE:**
I couldn't get any of the solutions working properly in my context, but eventually solved it myself (I realise now my brief might've been slightly incomplete).
Here's my [JSFiddle](http://jsfiddle.net/zj8rrsow/2/)
Here's the Code:
```css
html, body {
margin:0;
padding:0;
}
.other-content {
background-color:lightblue;
width:100%;
height:20px;
}
.page-content {
width:100%;
background-color:lightgray;
}
.container {
width:600px;
height:700px; /* This height is flexible and can change to whatever value you want */
background-color:gray;
margin-top:-50px;
margin-left:auto;
margin-right:auto;
}
.white-bg {
background-color:dodgerblue;
height:50px;
width:100%;
}
```
```html
<div class="other-content"></div>
<div class="page-content">
<div class="white-bg"></div>
<div class="container"></div>
</div>
<div class="other-content"></div>
``` | 2014/11/04 | [
"https://Stackoverflow.com/questions/26734569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1500992/"
] | You can add a pseudo-element like `::after` on your page.
```css
* { margin: 0; padding: 0; }
body {
background: #00AEEF;
}
body:after {
background: #0F75BC;
content: "";
display: block;
height: 50px;
left: 0;
position: absolute; /* needed for the left/top offsets to take effect */
top: 0;
width: 100%;
}
``` | Here, I have done it using only CSS and background colors.
Working [JsFiddle](http://jsfiddle.net/LL9voo6m/)
HTML:
```
<div class="part-b">
<div class="background"></div>
<div class="container">
<div class="row">
Content
</div>
</div>
</div>
```
CSS:
```
.container {
width: 960px !important;
position: relative;
}
.part-b {
background: yellow;
overflow: hidden;
position: relative;
}
.part-b .background {
width: 100%;
height: 100px;
background-color: green;
}
.row {
height: 50px;
}
``` |
26,734,569 | How do I add 2 background colours to a container div? I've seen some solutions that work, but only with 50% height for each colour. However, I need one colour to have a set height (see image).
My current solution is for background 1 to be a 1x260px background image, with background 2 being a background colour. However, this leaves you, as you open the page, with a flash of background colour 2 until background 1 has finished loading; I would like to avoid this flash. Here is the structure of the page:

Thanks for the help in advance!
**UPDATE:**
I couldn't get any of the solutions working properly in my context, but eventually solved it myself (I realise now my brief might've been slightly incomplete).
Here's my [JSFiddle](http://jsfiddle.net/zj8rrsow/2/)
Here's the Code:
```css
html, body {
margin:0;
padding:0;
}
.other-content {
background-color:lightblue;
width:100%;
height:20px;
}
.page-content {
width:100%;
background-color:lightgray;
}
.container {
width:600px;
height:700px; /* This height is flexible and can change to whatever value you want */
background-color:gray;
margin-top:-50px;
margin-left:auto;
margin-right:auto;
}
.white-bg {
background-color:dodgerblue;
height:50px;
width:100%;
}
```
```html
<div class="other-content"></div>
<div class="page-content">
<div class="white-bg"></div>
<div class="container"></div>
</div>
<div class="other-content"></div>
``` | 2014/11/04 | [
"https://Stackoverflow.com/questions/26734569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1500992/"
] | You could use CSS color gradients. Color stops can be specified in pixels for the first band, and then 100% for the rest. This way you won't have to calculate auto-height depending on the container.
Here is an example based on your use case. I have taken 60px as the height of the first band to make it fit neatly in the snippet below; you would make it 260px as required. (Click full-page for a better view.)
```css
html, body {
height: 100%;
}
.container {
width: 70%;
margin: auto;
height: 100%;
background-image:
linear-gradient(
to bottom,
#0f75bc, /* Start with color of top band */
#0f75bc 60px, /* Top-band color stops at 60px */
#00aeef 60px, /* Bottom-band color starts at 60px */
#00aeef 100% /* Bottom-band color continues to 100%, i.e. the remaining height */
);
}
```
```html
<div class="container"></div>
``` | Here, I have done it using only CSS and background colors.
Working [JsFiddle](http://jsfiddle.net/LL9voo6m/)
HTML:
```
<div class="part-b">
<div class="background"></div>
<div class="container">
<div class="row">
Content
</div>
</div>
</div>
```
CSS:
```
.container {
width: 960px !important;
position: relative;
}
.part-b {
background: yellow;
overflow: hidden;
position: relative;
}
.part-b .background {
width: 100%;
height: 100px;
background-color: green;
}
.row {
height: 50px;
}
``` |
46,334,316 | Scenario: there is a master Lambda that splits work and hands it off to multiple other Lambdas (workers). The first Lambda iterates and invokes the other Lambdas asynchronously.
If the number of Lambdas being spawned exceeds 1,000, will it fail?
Should there be an SNS topic between the two Lambdas, so that SNS will retry?
Or is a more complicated approach needed: putting the messages into a queue and then notifying 'X' worker Lambdas to start polling the queue?
Is there a better way? | 2017/09/21 | [
"https://Stackoverflow.com/questions/46334316",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8604662/"
] | Just loop through the tuple.
```
# tup stores your tuple
string = ''
for s in tup:
    string += str(s)  # str() handles non-string elements such as the int 4
```
Here you are going through the tuple, converting each element to a string, and appending it to a new string. | You can use `map` together with `join`.
```
tup = ('h','e',4)
map_str = map(str, tup)
print(''.join(map_str))
```
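In Python 3 `map` returns a lazy iterator (in Python 2 it returned a list); `join` consumes either. A generator-expression one-liner (a sketch) gives the same result without the intermediate name:

```python
tup = ('h', 'e', 4)
# Convert each element to str, then concatenate with an empty separator
result = ''.join(str(x) for x in tup)
print(result)  # he4
```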
`map` takes two arguments: the first is the function to apply to each element, and the second is the iterable. |
46,334,316 | Scenario: there is a master Lambda that splits work and hands it off to multiple other Lambdas (workers). The first Lambda iterates and invokes the other Lambdas asynchronously.
If the number of Lambdas being spawned exceeds 1,000, will it fail?
Should there be an SNS topic between the two Lambdas, so that SNS will retry?
Or is a more complicated approach needed: putting the messages into a queue and then notifying 'X' worker Lambdas to start polling the queue?
Is there a better way? | 2017/09/21 | [
"https://Stackoverflow.com/questions/46334316",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8604662/"
] | The given `filter` is useless for this but the given `accumulate` can be used easily:
```
>>> t = ('h','e',4)
>>> accumulate(lambda x, s: str(x) + s, '', t)
'he4'
``` | Just loop through the tuple.
```
# tup stores your tuple
string = ''
for s in tup:
    string += str(s)  # str() handles non-string elements such as the int 4
```
Here you are going through the tuple, converting each element to a string, and appending it to a new string. |
46,334,316 | Scenario: there is a master Lambda that splits work and hands it off to multiple other Lambdas (workers). The first Lambda iterates and invokes the other Lambdas asynchronously.
If the number of Lambdas being spawned exceeds 1,000, will it fail?
Should there be an SNS topic between the two Lambdas, so that SNS will retry?
Or is a more complicated approach needed: putting the messages into a queue and then notifying 'X' worker Lambdas to start polling the queue?
Is there a better way? | 2017/09/21 | [
"https://Stackoverflow.com/questions/46334316",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8604662/"
] | 1) Use the `reduce` function:
```
>>> t = ('h','e',4)
>>> reduce(lambda x,y: str(x)+str(y), t, '')
'he4'
```
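Note that `reduce` is a builtin only in Python 2; in Python 3 it has to be imported from `functools`. A sketch of the same call as a script:

```python
from functools import reduce  # builtin in Python 2, moved to functools in Python 3

t = ('h', 'e', 4)
# Left fold: coerce both sides to str before concatenating
result = reduce(lambda x, y: str(x) + str(y), t, '')
print(result)  # he4
```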
2) Use foolish recursion:
```
>>> def str_by_recursion(t,s=''):
if not t: return ''
return str(t[0]) + str_by_recursion(t[1:])
>>> str_by_recursion(t)
'he4'
``` | You can use `map` together with `join`.
```
tup = ('h','e',4)
map_str = map(str, tup)
print(''.join(map_str))
```
`map` takes two arguments: the first is the function to apply to each element, and the second is the iterable. |
46,334,316 | Scenario: there is a master Lambda that splits work and hands it off to multiple other Lambdas (workers). The first Lambda iterates and invokes the other Lambdas asynchronously.
If the number of Lambdas being spawned exceeds 1,000, will it fail?
Should there be an SNS topic between the two Lambdas, so that SNS will retry?
Or is a more complicated approach needed: putting the messages into a queue and then notifying 'X' worker Lambdas to start polling the queue?
Is there a better way? | 2017/09/21 | [
"https://Stackoverflow.com/questions/46334316",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8604662/"
] | The given `filter` is useless for this but the given `accumulate` can be used easily:
```
>>> t = ('h','e',4)
>>> accumulate(lambda x, s: str(x) + s, '', t)
'he4'
``` | You can use `map` together with `join`.
```
tup = ('h','e',4)
map_str = map(str, tup)
print(''.join(map_str))
```
`map` takes two arguments: the first is the function to apply to each element, and the second is the iterable. |
46,334,316 | Scenario: there is a master Lambda that splits work and hands it off to multiple other Lambdas (workers). The first Lambda iterates and invokes the other Lambdas asynchronously.
If the number of Lambdas being spawned exceeds 1,000, will it fail?
Should there be an SNS topic between the two Lambdas, so that SNS will retry?
Or is a more complicated approach needed: putting the messages into a queue and then notifying 'X' worker Lambdas to start polling the queue?
Is there a better way? | 2017/09/21 | [
"https://Stackoverflow.com/questions/46334316",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8604662/"
] | The given `filter` is useless for this but the given `accumulate` can be used easily:
```
>>> t = ('h','e',4)
>>> accumulate(lambda x, s: str(x) + s, '', t)
'he4'
``` | 1) Use the `reduce` function:
```
>>> t = ('h','e',4)
>>> reduce(lambda x,y: str(x)+str(y), t, '')
'he4'
```
2) Use foolish recursion:
```
>>> def str_by_recursion(t,s=''):
if not t: return ''
return str(t[0]) + str_by_recursion(t[1:])
>>> str_by_recursion(t)
'he4'
``` |
2,417,197 | Currently I am using ViewData or TempData for object persistence in my ASP.NET MVC application.
However in a few cases where I am storing objects into ViewData through my base controller class, I am hitting the database on every request (when ViewData["whatever"] == null).
It would be good to persist these into something with a longer lifespan, namely session. Similarly in an order processing pipeline, I don't want things like Order to be saved to the database on creation. I would rather populate the object in memory and then when the order gets to a certain state, save it.
So it would seem that session is the best place for this? Or would you recommend that in the case of order, to retrieve the order from the database on each request, rather than using session?
Thoughts, suggestions appreciated.
Thanks
Ben | 2010/03/10 | [
"https://Stackoverflow.com/questions/2417197",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/231216/"
] | I believe this is what Session was designed for - to temporarily store session-specific data.
However, due to the increased complexity of using Session, even if negligible, in my own ASP.NET MVC project I decided to hit the database on every Order-creation step page (only the ID is passed between the steps). I am ready to optimize and start using Session as soon as I see that the extra database hit on every request is a performance bottleneck. | You can serialize what you wish to persist and place it in a hidden input field, like ViewState in WebForms.
Here's an article that should get you started: <http://weblogs.asp.net/shijuvarghese/archive/2010/03/06/persisting-model-state-in-asp-net-mvc-using-html-serialize.aspx> |
2,417,197 | Currently I am using ViewData or TempData for object persistence in my ASP.NET MVC application.
However in a few cases where I am storing objects into ViewData through my base controller class, I am hitting the database on every request (when ViewData["whatever"] == null).
It would be good to persist these into something with a longer lifespan, namely session. Similarly in an order processing pipeline, I don't want things like Order to be saved to the database on creation. I would rather populate the object in memory and then when the order gets to a certain state, save it.
So it would seem that session is the best place for this? Or would you recommend that in the case of order, to retrieve the order from the database on each request, rather than using session?
Thoughts, suggestions appreciated.
Thanks
Ben | 2010/03/10 | [
"https://Stackoverflow.com/questions/2417197",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/231216/"
] | Just thought I would share how I am using session in my application. I really like this implementation ([Suggestions for Accessing ASP.NET MVC Session[] Data in Controllers and Extension Methods?](https://stackoverflow.com/questions/2213052/suggestions-for-accessing-asp-net-mvc-session-data-in-controllers-and-extension/2213232#2213232)) of using session as it makes it easy to swap out session for another store or for testing purposes.
Looking at the implementation it reminded me of the ObjectStore I have used in other projects to serialize objects as binary or xml and store in a database or on the filesystem.
I therefore simplified my interface (previously T had to be a class) and came up with the following:
```
public interface IObjectStore {
void Delete(string key);
T Get<T>(string key);
void Store<T>(string key, T value);
IList<T> GetList<T>(string key);
}
```
And my session store implementation:
```
public class SessionStore : IObjectStore
{
public void Delete(string key) {
HttpContext.Current.Session.Remove(key);
}
public T Get<T>(string key) {
return (T)HttpContext.Current.Session[key];
}
public void Store<T>(string key, T value) {
HttpContext.Current.Session[key] = value;
}
public IList<T> GetList<T>(string key) {
throw new NotImplementedException();
}
}
```
I then take in an IObjectStore in my base controller's constructor and can then use it like so to expose properties to my other controllers:
```
public string CurrentCustomer {
get {
string currentCustomer =
sessionStore.Get<string>(SessionKeys.CustomerSessionKey);
if (currentCustomer == null) {
currentCustomer = Guid.NewGuid().ToString();
sessionStore.Store<string>(SessionKeys.CustomerSessionKey, currentCustomer);
}
return currentCustomer;
}
}
```
I am quite pleased with this approach. | You can serialize what you wish to persist and place it in a hidden input field, like ViewState in WebForms.
Here's an article that should get you started: <http://weblogs.asp.net/shijuvarghese/archive/2010/03/06/persisting-model-state-in-asp-net-mvc-using-html-serialize.aspx> |
2,417,197 | Currently I am using ViewData or TempData for object persistence in my ASP.NET MVC application.
However in a few cases where I am storing objects into ViewData through my base controller class, I am hitting the database on every request (when ViewData["whatever"] == null).
It would be good to persist these into something with a longer lifespan, namely session. Similarly in an order processing pipeline, I don't want things like Order to be saved to the database on creation. I would rather populate the object in memory and then when the order gets to a certain state, save it.
So it would seem that session is the best place for this? Or would you recommend that in the case of order, to retrieve the order from the database on each request, rather than using session?
Thoughts, suggestions appreciated.
Thanks
Ben | 2010/03/10 | [
"https://Stackoverflow.com/questions/2417197",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/231216/"
] | I believe this is what Session was designed for - to temporarily store session-specific data.
However, due to the increased complexity of using Session, even if negligible, in my own ASP.NET MVC project I decided to hit the database on every Order-creation step page (only the ID is passed between the steps). I am ready to optimize and start using Session as soon as I see that the extra database hit on every request is a performance bottleneck. | Just thought I would share how I am using session in my application. I really like this implementation ([Suggestions for Accessing ASP.NET MVC Session[] Data in Controllers and Extension Methods?](https://stackoverflow.com/questions/2213052/suggestions-for-accessing-asp-net-mvc-session-data-in-controllers-and-extension/2213232#2213232)) of using session as it makes it easy to swap out session for another store or for testing purposes.
Looking at the implementation it reminded me of the ObjectStore I have used in other projects to serialize objects as binary or xml and store in a database or on the filesystem.
I therefore simplified my interface (previously T had to be a class) and came up with the following:
```
public interface IObjectStore {
void Delete(string key);
T Get<T>(string key);
void Store<T>(string key, T value);
IList<T> GetList<T>(string key);
}
```
And my session store implementation:
```
public class SessionStore : IObjectStore
{
public void Delete(string key) {
HttpContext.Current.Session.Remove(key);
}
public T Get<T>(string key) {
return (T)HttpContext.Current.Session[key];
}
public void Store<T>(string key, T value) {
HttpContext.Current.Session[key] = value;
}
public IList<T> GetList<T>(string key) {
throw new NotImplementedException();
}
}
```
I then take in an IObjectStore in my base controller's constructor and can then use it like so to expose properties to my other controllers:
```
public string CurrentCustomer {
get {
string currentCustomer =
sessionStore.Get<string>(SessionKeys.CustomerSessionKey);
if (currentCustomer == null) {
currentCustomer = Guid.NewGuid().ToString();
sessionStore.Store<string>(SessionKeys.CustomerSessionKey, currentCustomer);
}
return currentCustomer;
}
}
```
I am quite pleased with this approach. |
118,290 | Basic Problem
-------------
I'm trying to produce a command sequence `\m` that
1. Is called from within math mode, with a single argument `list`
2. "Replaces" all instances of `,` and `;` in `list` with `&` and `\\` (resp.)
3. Creates a `pmatrix` with output of 1 & 2 as its content
For example, the code
```
\[
\m{a,b;c,d}\m{x;y}
\]
```
should expand to be equivalent to the following:
```
\[
\begin{pmatrix}a&b \\ c&d\end{pmatrix}\begin{pmatrix}x \\ y\end{pmatrix}
\]
```
After spending more time on this than I should have, I came up with the following partial solution to the problem using `xstring`:
```
\def\foo#1{
\StrSubstitute{#1}{,}{&}[\result]
\verbtocs{\bslashes}|\\|
\expandarg
\StrSubstitute{\result}{;}{ \bslashes }[\result]
}
\begin{document}
% Fairly complicated matrix input
\foo{a,b,c;d,e_{f,g},h;i,j_{1,e^n},k}
\[
\begin{pmatrix}
\result
\end{pmatrix}
\]
\end{document}
```
This is close to what I was hoping for, but I haven't come up with a way to use `\foo` as a helper function for `\m` to get exactly what I want. I was thinking that defining `\m` in the preamble by
```
\def\m#1{
\foo{#1}
\begin{pmatrix}
\result
\end{pmatrix}
}
```
would work, but then even calling something as simple as `\m{a}` in math mode will produce errors. It looks as though math mode attempts to parse the definition of `\m` instead of letting `\m` fully expand, but my understanding of how this works is too limited to know how to fix my code.
Is there a way to modify the definitions of `\m` and `\foo` to make them do what I want? Can I somehow escape math mode in the definition of `\m` to avoid these errors? | 2013/06/08 | [
"https://tex.stackexchange.com/questions/118290",
"https://tex.stackexchange.com",
"https://tex.stackexchange.com/users/14062/"
] | Here's an implementation with `expl3` functions:
```
\documentclass{article}
\usepackage{xparse,amsmath}
\ExplSyntaxOn
\NewDocumentCommand{\matlabmatrix}{m}
{
\asql_matlab_matrix:n { #1 }
}
\seq_new:N \l_asql_rows_seq
\seq_new:N \l_asql_one_row_seq
\tl_new:N \l_asql_matrix_tl
\cs_new_protected:Npn \asql_matlab_matrix:n #1
{
% clear the token list variable containing the final data
\tl_clear:N \l_asql_matrix_tl
% split the argument at the semicolon
\seq_set_split:Nnn \l_asql_rows_seq { ; } { #1 }
% build one row at a time
\seq_map_inline:Nn \l_asql_rows_seq
{
\__asql_build_row:n { ##1 }
}
% print the matrix
\begin{pmatrix}
\tl_use:N \l_asql_matrix_tl
\end{pmatrix}
}
% the inner function
\cs_new_protected:Npn \__asql_build_row:n #1
{
% split the input at commas
\seq_set_split:Nnn \l_asql_one_row_seq { , } { #1 }
% add the row to the token list variable
% items are separated by &
\tl_put_right:Nx \l_asql_matrix_tl
{ \seq_use:Nnnn \l_asql_one_row_seq { & } { & } { & } }
% add also the \\ row terminator
\tl_put_right:Nn \l_asql_matrix_tl { \\ }
}
\ExplSyntaxOff
\begin{document}
\begin{gather*}
\matlabmatrix{a,b;c,d}\matlabmatrix{x;y} \\
\matlabmatrix{a,b,c;d,e_{f,g},h;i,j_{1,e^n},k}
\end{gather*}
\end{document}
```
It's quite straightforward: we first split the argument at semicolons; then each item is split at commas and, row by row, the contents of the matrix are built. Finally, the content is inserted between `\begin{pmatrix}` and `\end{pmatrix}` for printing.
 | This code should do what you want:
```
\documentclass{article}
\usepackage{xstring,amsmath}
\newcommand*\mmm[1]{%
\begingroup\expandarg
\StrSubstitute{\noexpand#1},&[\result]%
\StrSubstitute\result{\noexpand;}{\noexpand\\}[\result]%
\begin{pmatrix}\result\end{pmatrix}\endgroup
}
\begin{document}
\[\mmm{a,b,c;d,e_{f,g},h;i,j_{1,e^n},k}\]
\[\mmm{a}\]
\end{document}
``` |
118,290 | Basic Problem
-------------
I'm trying to produce a command sequence `\m` that
1. Is called from within math mode, with a single argument `list`
2. "Replaces" all instances of `,` and `;` in `list` with `&` and `\\` (resp.)
3. Creates a `pmatrix` with output of 1 & 2 as its content
For example, the code
```
\[
\m{a,b;c,d}\m{x;y}
\]
```
should expand to be equivalent to the following:
```
\[
\begin{pmatrix}a&b \\ c&d\end{pmatrix}\begin{pmatrix}x \\ y\end{pmatrix}
\]
```
After spending more time on this than I should have, I came up with the following partial solution to the problem using `xstring`:
```
\def\foo#1{
\StrSubstitute{#1}{,}{&}[\result]
\verbtocs{\bslashes}|\\|
\expandarg
\StrSubstitute{\result}{;}{ \bslashes }[\result]
}
\begin{document}
% Fairly complicated matrix input
\foo{a,b,c;d,e_{f,g},h;i,j_{1,e^n},k}
\[
\begin{pmatrix}
\result
\end{pmatrix}
\]
\end{document}
```
This is close to what I was hoping for, but I haven't come up with a way to use `\foo` as a helper function for `\m` to get exactly what I want. I was thinking that defining `\m` in the preamble by
```
\def\m#1{
\foo{#1}
\begin{pmatrix}
\result
\end{pmatrix}
}
```
would work, but then even calling something as simple as `\m{a}` in math mode will produce errors. It looks as though math mode attempts to parse the definition of `\m` instead of letting `\m` fully expand, but my understanding of how this works is too limited to know how to fix my code.
Is there a way to modify the definitions of `\m` and `\foo` to make them do what I want? Can I somehow escape math mode in the definition of `\m` to avoid these errors? | 2013/06/08 | [
"https://tex.stackexchange.com/questions/118290",
"https://tex.stackexchange.com",
"https://tex.stackexchange.com/users/14062/"
] | Here is a method the old-fashioned way, using macros with delimited parameters.
```
\documentclass{article}
\usepackage{amsmath} % for pmatrix environment
\newtoks\asqltoks
\makeatletter
\def\gobtilundef #1\undef {}
\def\matlabmatrix #1{\asqltoks{\begin{pmatrix}}\@asqlA #1;\undef;}
% \def\@asqlA #1;{\gobtilundef #1\@asqlE\undef\@asqlR #1,\undef,}
% update: simplified to ->
\def\@asqlA #1;{\@asqlR #1,\undef,}
\def\@asqlB #1;{\gobtilundef #1\@asqlE\undef
\asqltoks\expandafter{\the\asqltoks \\}\@asqlR #1,\undef,}
\def\@asqlE\undef #1\undef,\undef,{%
\asqltoks\expandafter{\the\asqltoks\end{pmatrix}}\the\asqltoks }
\def\@asqlR #1,{\asqltoks\expandafter{\the\asqltoks #1}\@asqlS }
\def\@asqlS #1,{\gobtilundef #1\@asqlZ\undef
\asqltoks\expandafter{\the\asqltoks &#1}\@asqlS }
\def\@asqlZ #1\@asqlS {\@asqlB }
\makeatother
\begin{document}
$\matlabmatrix {m}$
$\matlabmatrix {m,n}$
$\matlabmatrix {m,n;p,q}$
$\matlabmatrix {a,b,c;d,e_{f,g},h;i,j_{1,e^n},k}$
\end{document}
```
 | This code should do what you want:
```
\documentclass{article}
\usepackage{xstring,amsmath}
\newcommand*\mmm[1]{%
\begingroup\expandarg
\StrSubstitute{\noexpand#1},&[\result]%
\StrSubstitute\result{\noexpand;}{\noexpand\\}[\result]%
\begin{pmatrix}\result\end{pmatrix}\endgroup
}
\begin{document}
\[\mmm{a,b,c;d,e_{f,g},h;i,j_{1,e^n},k}\]
\[\mmm{a}\]
\end{document}
``` |
118,290 | Basic Problem
-------------
I'm trying to produce a command sequence `\m` that
1. Is called from within math mode, with a single argument `list`
2. "Replaces" all instances of `,` and `;` in `list` with `&` and `\\` (resp.)
3. Creates a `pmatrix` with output of 1 & 2 as its content
For example, the code
```
\[
\m{a,b;c,d}\m{x;y}
\]
```
should expand to be equivalent to the following:
```
\[
\begin{pmatrix}a&b \\ c&d\end{pmatrix}\begin{pmatrix}x \\ y\end{pmatrix}
\]
```
After spending more time on this than I should have, I came up with the following partial solution to the problem using `xstring`:
```
\def\foo#1{
\StrSubstitute{#1}{,}{&}[\result]
\verbtocs{\bslashes}|\\|
\expandarg
\StrSubstitute{\result}{;}{ \bslashes }[\result]
}
\begin{document}
% Fairly complicated matrix input
\foo{a,b,c;d,e_{f,g},h;i,j_{1,e^n},k}
\[
\begin{pmatrix}
\result
\end{pmatrix}
\]
\end{document}
```
This is close to what I was hoping for, but I haven't come up with a way to use `\foo` as a helper function for `\m` to get exactly what I want. I was thinking that defining `\m` in the preamble by
```
\def\m#1{
\foo{#1}
\begin{pmatrix}
\result
\end{pmatrix}
}
```
would work, but then even calling something as simple as `\m{a}` in math mode will produce errors. It looks as though math mode attempts to parse the definition of `\m` instead of letting `\m` fully expand, but my understanding of how this works is too limited to know how to fix my code.
Is there a way to modify the definitions of `\m` and `\foo` to make them do what I want? Can I somehow escape math mode in the definition of `\m` to avoid these errors? | 2013/06/08 | [
"https://tex.stackexchange.com/questions/118290",
"https://tex.stackexchange.com",
"https://tex.stackexchange.com/users/14062/"
] | Here's an implementation with `expl3` functions:
```
\documentclass{article}
\usepackage{xparse,amsmath}
\ExplSyntaxOn
\NewDocumentCommand{\matlabmatrix}{m}
{
\asql_matlab_matrix:n { #1 }
}
\seq_new:N \l_asql_rows_seq
\seq_new:N \l_asql_one_row_seq
\tl_new:N \l_asql_matrix_tl
\cs_new_protected:Npn \asql_matlab_matrix:n #1
{
% clear the token list variable containing the final data
\tl_clear:N \l_asql_matrix_tl
% split the argument at the semicolon
\seq_set_split:Nnn \l_asql_rows_seq { ; } { #1 }
% build one row at a time
\seq_map_inline:Nn \l_asql_rows_seq
{
\__asql_build_row:n { ##1 }
}
% print the matrix
\begin{pmatrix}
\tl_use:N \l_asql_matrix_tl
\end{pmatrix}
}
% the inner function
\cs_new_protected:Npn \__asql_build_row:n #1
{
% split the input at commas
\seq_set_split:Nnn \l_asql_one_row_seq { , } { #1 }
% add the row to the token list variable
% items are separated by &
\tl_put_right:Nx \l_asql_matrix_tl
{ \seq_use:Nnnn \l_asql_one_row_seq { & } { & } { & } }
% add also the \\ row terminator
\tl_put_right:Nn \l_asql_matrix_tl { \\ }
}
\ExplSyntaxOff
\begin{document}
\begin{gather*}
\matlabmatrix{a,b;c,d}\matlabmatrix{x;y} \\
\matlabmatrix{a,b,c;d,e_{f,g},h;i,j_{1,e^n},k}
\end{gather*}
\end{document}
```
It's quite straightforward: we first split the argument at semicolons; then each item is split at commas and, row by row, the contents of the matrix are built. Finally the content is inserted between `\begin{pmatrix}` and `\end{pmatrix}` for printing.
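As a quick sanity check, the token list built for `a,b;c,d` is `a&b\\c&d\\`, so a call such as `\matlabmatrix{a,b;c,d}` should typeset identically to the hand-written matrix:

```
% The two matrices below should come out identical; the trailing \\
% mirrors what the macro emits, and pmatrix tolerates it without
% adding an empty row.
\[ \matlabmatrix{a,b;c,d} \qquad \begin{pmatrix} a & b \\ c & d \\ \end{pmatrix} \]
```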
 | Well, if you’re already using [`xstring`](http://www.ctan.org/pkg/xstring), you could just try [`xparse`](http://www.ctan.org/pkg/xparse).
Apparently, `pmatrix` survives an additional `\\` after the last line without adding an extra line (unlike the usual math environments). The additional `&`, though, has to be removed; this is done by `\@gobblesecondoftwo`.
I have also included a solution that uses TeX’s delimited parameters to split the argument at `;`s and `,`s.
Code
----
```
\documentclass{article}
\usepackage{amsmath}
% xparse solution
\usepackage{xparse}
\makeatletter
\def\@gobblethirdofthree#1#2#3{#1#2}
\def\@gobblesecondoftwo#1#2{#1}
\NewDocumentCommand\foo{>{\SplitList;}m}{
\begin{pmatrix} \ProcessList{#1}\@foo \end{pmatrix}
}
\NewDocumentCommand\@foo{>{\SplitList,}m}{
\expandafter\expandafter\expandafter\@gobblesecondoftwo
\ProcessList{#1}\@@foo \\
}
\def\@@foo#1{}
% plain delimited parameters
\newcommand*{\fooo}[1]{%
\begin{pmatrix}\foo@split@semi#1;\foo@@@split@semi\foo@@split@semi\end{pmatrix}
}
\def\foo@split@semi#1;#2\foo@@split@semi{%
\foo@split@comma#1,\foo@@@split@comma\foo@@split@comma
\ifx\foo@@@split@semi#2\expandafter\@gobble\else
\expandafter\@firstofone
\fi{\\\foo@split@semi#2\foo@@split@semi}%
}
\def\foo@split@comma#1,#2\foo@@split@comma{%
#1
\ifx\foo@@@split@comma#2\expandafter\@gobble\else
\expandafter\@firstofone
\fi{&\foo@split@comma#2\foo@@split@comma}%
}
\makeatother
\begin{document}
$\foo{a,b,c;d,e_{f,g},h;i,j_{1,e^n},k}$
$\foo{m}$
$\fooo{a,b,c;d,e_{f,g},h;i,j_{1,e^n},k}$
$\fooo{m}$
\end{document}
```
Output
------
 |
118,290 | Basic Problem
-------------
I'm trying to produce a command sequence `\m` that
1. Is called from within math mode, with a single argument `list`
2. "Replaces" all instances `,` and `;` in `list` with `&` and `\\` (resp.)
3. Creates a `pmatrix` with output of 1 & 2 as its content
For example, the code
```
\[
\m{a,b;c,d}\m{x;y}
\]
```
should expand to be equivalent to the following:
```
\[
\begin{pmatrix}a&b \\ c&d\end{pmatrix}\begin{pmatrix}x \\ y\end{pmatrix}
\]
```
After spending more time on this than I should have, I came up with the following partial solution to the problem using `xstring`:
```
\def\foo#1{
\StrSubstitute{#1}{,}{&}[\result]
\verbtocs{\bslashes}|\\|
\expandarg
\StrSubstitute{\result}{;}{ \bslashes }[\result]
}
\begin{document}
% Fairly complicated matrix input
\foo{a,b,c;d,e_{f,g},h;i,j_{1,e^n},k}
\[
\begin{pmatrix}
\result
\end{pmatrix}
\]
\end{document}
```
This is close to what I was hoping for, but I haven't come up with a way to use `\foo` as a helper function for `\m` to get exactly what I want. I was thinking that defining `\m` in the preamble by
```
\def\m#1{
\foo{#1}
\begin{pmatrix}
\result
\end{pmatrix}
}
```
would work, but then even calling something as simple as `\m{a}` in math mode will produce errors. It looks as though math mode attempts to parse the definition of `\m` instead of letting `\m` fully expand, but my understanding of how this works is too limited to know how to fix my code.
Is there a way to modify the definitions of `\m` and `\foo` to make them do what I want? Can I somehow escape math mode in the definition of `\m` to avoid these errors? | 2013/06/08 | [
"https://tex.stackexchange.com/questions/118290",
"https://tex.stackexchange.com",
"https://tex.stackexchange.com/users/14062/"
] | Here's an implementation with `expl3` functions:
```
\documentclass{article}
\usepackage{xparse,amsmath}
\ExplSyntaxOn
\NewDocumentCommand{\matlabmatrix}{m}
{
\asql_matlab_matrix:n { #1 }
}
\seq_new:N \l_asql_rows_seq
\seq_new:N \l_asql_one_row_seq
\tl_new:N \l_asql_matrix_tl
\cs_new_protected:Npn \asql_matlab_matrix:n #1
{
% clear the token list variable containing the final data
\tl_clear:N \l_asql_matrix_tl
% split the argument at the semicolon
\seq_set_split:Nnn \l_asql_rows_seq { ; } { #1 }
% build one row at a time
\seq_map_inline:Nn \l_asql_rows_seq
{
\__asql_build_row:n { ##1 }
}
% print the matrix
\begin{pmatrix}
\tl_use:N \l_asql_matrix_tl
\end{pmatrix}
}
% the inner function
\cs_new_protected:Npn \__asql_build_row:n #1
{
% split the input at commas
\seq_set_split:Nnn \l_asql_one_row_seq { , } { #1 }
% add the row to the token list variable
% items are separated by &
\tl_put_right:Nx \l_asql_matrix_tl
{ \seq_use:Nnnn \l_asql_one_row_seq { & } { & } { & } }
% add also the \\ row terminator
\tl_put_right:Nn \l_asql_matrix_tl { \\ }
}
\ExplSyntaxOff
\begin{document}
\begin{gather*}
\matlabmatrix{a,b;c,d}\matlabmatrix{x;y} \\
\matlabmatrix{a,b,c;d,e_{f,g},h;i,j_{1,e^n},k}
\end{gather*}
\end{document}
```
It's quite straightforward: we first split the argument at semicolons; then each item is split at commas and, row by row, the contents of the matrix are built. Finally the content is inserted between `\begin{pmatrix}` and `\end{pmatrix}` for printing.
 | You don't have to do string replacement you can just define `,` and `;` to do the right thing in `pmatrix`

```
\documentclass{article}
\usepackage{amsmath}
\def\m#1{{%
\mathcode`\,"8000
\mathcode`\;"8000
\begingroup\lccode`\~`\,%
\lowercase{\endgroup\def~}{&}%
\begingroup\lccode`\~`\;%
\lowercase{\endgroup\def~}{\\}%
\begin{pmatrix}#1\end{pmatrix}}}
\begin{document}
\[
\m{a,b;c,d}\m{x;y}
\]
\end{document}
``` |
118,290 | Basic Problem
-------------
I'm trying to produce a command sequence `\m` that
1. Is called from within math mode, with a single argument `list`
2. "Replaces" all instances `,` and `;` in `list` with `&` and `\\` (resp.)
3. Creates a `pmatrix` with output of 1 & 2 as its content
For example, the code
```
\[
\m{a,b;c,d}\m{x;y}
\]
```
should expand to be equivalent to the following:
```
\[
\begin{pmatrix}a&b \\ c&d\end{pmatrix}\begin{pmatrix}x \\ y\end{pmatrix}
\]
```
After spending more time on this than I should have, I came up with the following partial solution to the problem using `xstring`:
```
\def\foo#1{
\StrSubstitute{#1}{,}{&}[\result]
\verbtocs{\bslashes}|\\|
\expandarg
\StrSubstitute{\result}{;}{ \bslashes }[\result]
}
\begin{document}
% Fairly complicated matrix input
\foo{a,b,c;d,e_{f,g},h;i,j_{1,e^n},k}
\[
\begin{pmatrix}
\result
\end{pmatrix}
\]
\end{document}
```
This is close to what I was hoping for, but I haven't come up with a way to use `\foo` as a helper function for `\m` to get exactly what I want. I was thinking that defining `\m` in the preamble by
```
\def\m#1{
\foo{#1}
\begin{pmatrix}
\result
\end{pmatrix}
}
```
would work, but then even calling something as simple as `\m{a}` in math mode will produce errors. It looks as though math mode attempts to parse the definition of `\m` instead of letting `\m` fully expand, but my understanding of how this works is too limited to know how to fix my code.
Is there a way to modify the definitions of `\m` and `\foo` to make them do what I want? Can I somehow escape math mode in the definition of `\m` to avoid these errors? | 2013/06/08 | [
"https://tex.stackexchange.com/questions/118290",
"https://tex.stackexchange.com",
"https://tex.stackexchange.com/users/14062/"
] | Here is a method the old-fashioned way, using macros with delimited parameters.
```
\documentclass{article}
\usepackage{amsmath} % for pmatrix environment
\newtoks\asqltoks
\makeatletter
\def\gobtilundef #1\undef {}
\def\matlabmatrix #1{\asqltoks{\begin{pmatrix}}\@asqlA #1;\undef;}
% \def\@asqlA #1;{\gobtilundef #1\@asqlE\undef\@asqlR #1,\undef,}
% update: simplified to ->
\def\@asqlA #1;{\@asqlR #1,\undef,}
\def\@asqlB #1;{\gobtilundef #1\@asqlE\undef
\asqltoks\expandafter{\the\asqltoks \\}\@asqlR #1,\undef,}
\def\@asqlE\undef #1\undef,\undef,{%
\asqltoks\expandafter{\the\asqltoks\end{pmatrix}}\the\asqltoks }
\def\@asqlR #1,{\asqltoks\expandafter{\the\asqltoks #1}\@asqlS }
\def\@asqlS #1,{\gobtilundef #1\@asqlZ\undef
\asqltoks\expandafter{\the\asqltoks }\@asqlS }
\def\@asqlZ #1\@asqlS {\@asqlB }
\makeatother
\begin{document}
$\matlabmatrix {m}$
$\matlabmatrix {m,n}$
$\matlabmatrix {m,n;p,q}$
$\matlabmatrix {a,b,c;d,e_{f,g},h;i,j_{1,e^n},k}$
\end{document}
```
 | Well, if you’re already using [`xstring`](http://www.ctan.org/pkg/xstring), you could just try [`xparse`](http://www.ctan.org/pkg/xparse).
Apparently, `pmatrix` survives an additional `\\` after the last line without adding an extra line (unlike the usual math environments). The additional `&` though has to be removed, this is done by `\@gobblesecondoftwo`.
I have also included a solution that does use TeX’s delimited parameter to split the argument at `;`s and `,`s.
Code
----
```
\documentclass{article}
\usepackage{amsmath}
% xparse solution
\usepackage{xparse}
\makeatletter
\def\@gobblethirdofthree#1#2#3{#1#2}
\def\@gobblesecondoftwo#1#2{#1}
\NewDocumentCommand\foo{>{\SplitList;}m}{
\begin{pmatrix} \ProcessList{#1}\@foo \end{pmatrix}
}
\NewDocumentCommand\@foo{>{\SplitList,}m}{
\expandafter\expandafter\expandafter\@gobblesecondoftwo
\ProcessList{#1}\@@foo \\
}
\def\@@foo#1{}
% plain delimited parameters
\newcommand*{\fooo}[1]{%
\begin{pmatrix}\foo@split@semi#1;\foo@@@split@semi\foo@@split@semi\end{pmatrix}
}
\def\foo@split@semi#1;#2\foo@@split@semi{%
\foo@split@comma#1,\foo@@@split@comma\foo@@split@comma
\ifx\foo@@@split@semi#2\expandafter\@gobble\else
\expandafter\@firstofone
\fi{\\\foo@split@semi#2\foo@@split@semi}%
}
\def\foo@split@comma#1,#2\foo@@split@comma{%
#1
\ifx\foo@@@split@comma#2\expandafter\@gobble\else
\expandafter\@firstofone
\fi{&\foo@split@comma#2\foo@@split@comma}%
}
\makeatother
\begin{document}
$\foo{a,b,c;d,e_{f,g},h;i,j_{1,e^n},k}$
$\foo{m}$
$\fooo{a,b,c;d,e_{f,g},h;i,j_{1,e^n},k}$
$\fooo{m}$
\end{document}
```
Output
------
 |
118,290 | Basic Problem
-------------
I'm trying to produce a command sequence `\m` that
1. Is called from within math mode, with a single argument `list`
2. "Replaces" all instances `,` and `;` in `list` with `&` and `\\` (resp.)
3. Creates a `pmatrix` with output of 1 & 2 as its content
For example, the code
```
\[
\m{a,b;c,d}\m{x;y}
\]
```
should expand to be equivalent to the following:
```
\[
\begin{pmatrix}a&b \\ c&d\end{pmatrix}\begin{pmatrix}x \\ y\end{pmatrix}
\]
```
After spending more time on this than I should have, I came up with the following partial solution to the problem using `xstring`:
```
\def\foo#1{
\StrSubstitute{#1}{,}{&}[\result]
\verbtocs{\bslashes}|\\|
\expandarg
\StrSubstitute{\result}{;}{ \bslashes }[\result]
}
\begin{document}
% Fairly complicated matrix input
\foo{a,b,c;d,e_{f,g},h;i,j_{1,e^n},k}
\[
\begin{pmatrix}
\result
\end{pmatrix}
\]
\end{document}
```
This is close to what I was hoping for, but I haven't come up with a way to use `\foo` as a helper function for `\m` to get exactly what I want. I was thinking that defining `\m` in the preamble by
```
\def\m#1{
\foo{#1}
\begin{pmatrix}
\result
\end{pmatrix}
}
```
would work, but then even calling something as simple as `\m{a}` in math mode will produce errors. It looks as though math mode attempts to parse the definition of `\m` instead of letting `\m` fully expand, but my understanding of how this works is too limited to know how to fix my code.
Is there a way to modify the definitions of `\m` and `\foo` to make them do what I want? Can I somehow escape math mode in the definition of `\m` to avoid these errors? | 2013/06/08 | [
"https://tex.stackexchange.com/questions/118290",
"https://tex.stackexchange.com",
"https://tex.stackexchange.com/users/14062/"
] | Here is a method the old-fashioned way, using macros with delimited parameters.
```
\documentclass{article}
\usepackage{amsmath} % for pmatrix environment
\newtoks\asqltoks
\makeatletter
\def\gobtilundef #1\undef {}
\def\matlabmatrix #1{\asqltoks{\begin{pmatrix}}\@asqlA #1;\undef;}
% \def\@asqlA #1;{\gobtilundef #1\@asqlE\undef\@asqlR #1,\undef,}
% update: simplified to ->
\def\@asqlA #1;{\@asqlR #1,\undef,}
\def\@asqlB #1;{\gobtilundef #1\@asqlE\undef
\asqltoks\expandafter{\the\asqltoks \\}\@asqlR #1,\undef,}
\def\@asqlE\undef #1\undef,\undef,{%
\asqltoks\expandafter{\the\asqltoks\end{pmatrix}}\the\asqltoks }
\def\@asqlR #1,{\asqltoks\expandafter{\the\asqltoks #1}\@asqlS }
\def\@asqlS #1,{\gobtilundef #1\@asqlZ\undef
\asqltoks\expandafter{\the\asqltoks }\@asqlS }
\def\@asqlZ #1\@asqlS {\@asqlB }
\makeatother
\begin{document}
$\matlabmatrix {m}$
$\matlabmatrix {m,n}$
$\matlabmatrix {m,n;p,q}$
$\matlabmatrix {a,b,c;d,e_{f,g},h;i,j_{1,e^n},k}$
\end{document}
```
 | You don't have to do string replacement you can just define `,` and `;` to do the right thing in `pmatrix`

```
\documentclass{article}
\usepackage{amsmath}
\def\m#1{{%
\mathcode`\,"8000
\mathcode`\;"8000
\begingroup\lccode`\~`\,%
\lowercase{\endgroup\def~}{&}%
\begingroup\lccode`\~`\;%
\lowercase{\endgroup\def~}{\\}%
\begin{pmatrix}#1\end{pmatrix}}}
\begin{document}
\[
\m{a,b;c,d}\m{x;y}
\]
\end{document}
``` |
52,959,822 | Hy I have made an AIR application that uses the flash.desktop.NativeProcess to start an c++ a\* pathsolver. The reason being - Flash needs too much time to solve 250 \* 250 open grid.
The AIR app is fine. It can start the c++ exe file
The exe fine works on its own
The problem is. They don't work in pair :(
When flash sends the arguments the c++ part just dies silently
```
char buf[ 256 ]; std::cin.getline( buf, 256 );
```
I just havent mannaged to find wat is going on. If i use arguments insted of standard imput i get some strange characters. Any idea ? | 2018/10/24 | [
"https://Stackoverflow.com/questions/52959822",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3845417/"
] | You can use model binding for the data in your label, just like this:
in XAML:
```
<Label Text="{Binding Name,StringFormat='You can win {0} dollars for sure'}"
HorizontalOptions="Center"
VerticalOptions="CenterAndExpand" />
```
in ContentPage, should bind context:
```
nativeDataTemple = new NativeDataTemple();
BindingContext = nativeDataTemple;
```
and the model (your custom **NativeDataTemple**) should contain the binding property, like this:
```
private string name = "520";
public string Name
{
set
{
if (name != value)
{
name = value;
OnPropertyChanged("Name");
}
}
get
{
return name;
}
}
```
and in your model, when the Name value changes in the background, implement **INotifyPropertyChanged** on the model, with the method
```
protected virtual void OnPropertyChanged(string propertyName)
{
PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
```
then wherever you want to change the data, just do this:
```
nativeDataTemple.Name = "550";
```
if you have problems, you can refer to the [official documentation](https://learn.microsoft.com/en-us/xamarin/xamarin-forms/xaml/xaml-basics/data-bindings-to-mvvm) | use spans within a Label
```
<Label>
<Label.FormattedText>
<FormattedString>
<Span Text="You can win " />
<Span Text="{Binding DollarAmount}" />
<Span Text=" dollars for sure." />
</FormattedString>
</Label.FormattedText>
</Label>
``` |
23,077,657 | `Thread.sleep()` to me seems like a really useless concept because the only way I see it working is in the following scenarios
```
public void someFunction()
{
//thread does something
Thread.sleep(50000);
//now other threads do something
}
```
The problem is that for me this is just asking for trouble. I mean the sleep can be too long, in which case at best you have a performance issue, and at worst other threads are waiting for the result, in which case the situation may become critical.
There are other ways, like using synchronized and wait, which seem much more efficient.
So my question is, are there any situations where using thread sleep is the best option? | 2014/04/15 | [
"https://Stackoverflow.com/questions/23077657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2385352/"
] | Both `Thread.sleep(long)` and `Object.wait(long)` block current thread. However `wait` may return earlier (spurious wakeup), see javadoc. So for `wait` we need to implement additional logic which guarantees that specified amount of time elapsed. So if you simply want to make a pause - use `Thread.sleep` | You use `Thread.sleep` every time you want to slow things down. In certain scenarios you are unable to synchronize, like in case of communication with external systems over the network, database, etc.
Example scenarios:
* ***Error recovery*** - When your system depends on some external entity that reports temporary errors. You communicate with external system you have no control of and it says it has temporary issues that are not related to your request. You do `Thread.sleep` and retry. If you did not `sleep` you would get error flood. This is quite common pattern in integration middleware.
* ***Timeouts*** - You wait for something to happen but no longer than 10 seconds. You just cannot accept longer waits and want to quit.
* ***Throttling*** - Someone tells you that you cannot do something more often than once every 10 seconds.
Please keep in mind that there are various ways to wait, not necessarily by calling `Thread.sleep`.
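The additional logic mentioned above — making a timed `wait` survive spurious wakeups and still cover the full requested duration — is typically a deadline loop. A minimal sketch (class and method names are illustrative, not from the answer):

```java
public class TimedWaitDemo {
    // Wait on a monitor for a full timeout, re-waiting after spurious
    // wakeups until the deadline has actually passed.
    static void timedWait(Object monitor, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        synchronized (monitor) {
            long remaining = deadline - System.currentTimeMillis();
            while (remaining > 0) {
                try {
                    monitor.wait(remaining);   // may return early (spurious wakeup)
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;                    // stop waiting if interrupted
                }
                remaining = deadline - System.currentTimeMillis();
            }
        }
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        timedWait(new Object(), 50);
        // The loop guarantees at least the requested duration elapsed.
        System.out.println(System.currentTimeMillis() - start >= 50); // true
    }
}
```

Compare that to `Thread.sleep(50)`, which needs none of this bookkeeping — which is exactly the answer's point.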
23,077,657 | `Thread.sleep()` to me seems like a really useless concept because the only way I see it working is in the following scenarios
```
public void someFunction()
{
//thread does something
Thread.sleep(50000);
//now other threads do something
}
```
The problem is that for me this is just asking for trouble. I mean the sleep can be too long, in which case at best you have a performance issue, and at worst other threads are waiting for the result, in which case the situation may become critical.
There are other ways, like using synchronized and wait, which seem much more efficient.
So my question is, are there any situations where using thread sleep is the best option? | 2014/04/15 | [
"https://Stackoverflow.com/questions/23077657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2385352/"
] | Although the event-driven model is often the best way to "wait" for an action to occur, there are times when you need to wait intentionally for a short amount of time and then take an action.
A common case is sampling/polling data (from files, from the network, etc.) at regular intervals. In this case, you just want to "refresh" your data between time intervals.
For example, if you have an application that makes requests to a web service via the network, you might want a thread that performs this task periodically, "sleeping" most of the time but performing the service request after some interval, repeating this behavior again and again. | If the requirement spec calls for a five-second wait, maybe somewhere deep down in several functions in some process-control thread code, maybe only under some conditions, a Sleep(5000) call is a good solution for the following reasons:
* It does not require simple in-line code to be rewritten as a complex
state-machine so as to be able to use an asynchronous timer.
* It involves no other timer or pool thread to be run to implement the timeout.
* It's a one-liner that does not require wait-objects to be constructed etc.
* Sleep() is available, in almost the same form, on all multitasking OS I have ever used.
Sleep() gets bad press because:
* It 'wastes a thread'. In many systems, eg. when the thread is going to be there anyway and will run for the lifetime of the app, who cares?
* It is often misused for inter-thread comms polling loops, adding CPU waste and latency. This is indeed indefensible.
* It often cannot be interrupted so as to allow a 'clean and quick'
shutdown of the thread. Again, in many systems, it does not matter if pool or app-lifetime threads get rudely stopped by a process termination, so why bother trying?
Example of reasonable usage:
```
void StartFeedstockDelivery(){
if (airbankPressure()<MIN_PRESSURE){
startCompressor();
sleep(10000); // wait for pressure to build up
openFeedValve();
};
}
``` |
23,077,657 | `Thread.sleep()` to me seems like a really useless concept because the only way I see it working is in the following scenarios
```
public void someFunction()
{
//thread does something
Thread.sleep(50000);
//now other threads do something
}
```
The problem is that for me this is just asking for trouble. I mean the sleep can be too long, in which case at best you have a performance issue, and at worst other threads are waiting for the result, in which case the situation may become critical.
There are other ways, like using synchronized and wait, which seem much more efficient.
So my question is, are there any situations where using thread sleep is the best option? | 2014/04/15 | [
"https://Stackoverflow.com/questions/23077657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2385352/"
] | It would be impossible to write java.util.Timer without a sleep() method, or at least it would require you to abuse the wait() method, and write a lot of extra code around it to protect against spurious wakeups. | You use `Thread.sleep` every time you want to slow things down. In certain scenarios you are unable to synchronize, like in case of communication with external systems over the network, database, etc.
Example scenarios:
* ***Error recovery*** - When your system depends on some external entity that reports temporary errors. You communicate with external system you have no control of and it says it has temporary issues that are not related to your request. You do `Thread.sleep` and retry. If you did not `sleep` you would get error flood. This is quite common pattern in integration middleware.
* ***Timeouts*** - You wait for something to happen but no longer than 10 seconds. You just cannot accept longer waits and want to quit.
* ***Throttling*** - Someone tells you that you cannot do something more often than once every 10 seconds.
Please keep in mind that there are various ways to wait, not necessarily by calling `Thread.sleep`.
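The error-recovery scenario above can be sketched as a retry loop; the flaky operation, attempt counts, and pause length here are made up for illustration:

```java
import java.util.function.Supplier;

public class RetryDemo {
    // Retry a flaky operation, sleeping between attempts so the failing
    // external system is not flooded with requests.
    static <T> T retry(Supplier<T> op, int maxAttempts, long pauseMillis) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.get();                    // success: return immediately
            } catch (RuntimeException e) {
                last = e;                           // remember the failure
                try {
                    Thread.sleep(pauseMillis);      // back off before retrying
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw last;
                }
            }
        }
        throw last;                                 // all attempts failed
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Fails twice, then succeeds -- simulates a temporary outage.
        String result = retry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("temporary error");
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts"); // ok after 3 attempts
    }
}
```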
23,077,657 | `Thread.sleep()` to me seems like a really useless concept because the only way I see it working is in the following scenarios
```
public void someFunction()
{
//thread does something
Thread.sleep(50000);
//now other threads do something
}
```
The problem is that for me this is just asking for trouble. I mean the sleep can be too long, in which case at best you have a performance issue, and at worst other threads are waiting for the result, in which case the situation may become critical.
There are other ways, like using synchronized and wait, which seem much more efficient.
So my question is, are there any situations where using thread sleep is the best option? | 2014/04/15 | [
"https://Stackoverflow.com/questions/23077657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2385352/"
] | It would be impossible to write java.util.Timer without a sleep() method, or at least it would require you to abuse the wait() method, and write a lot of extra code around it to protect against spurious wakeups. | If the requirement spec calls for a five-second wait, maybe somewhere deep down in several functions in some process-control thread code, maybe only under some conditions, a Sleep(5000) call is a good solution for the following reasons:
* It does not require simple in-line code to be rewritten as a complex
state-machine so as to be able to use an asynchronous timer.
* It involves no other timer or pool thread to be run to implement the timeout.
* It's a one-liner that does not require wait-objects to be constructed etc.
* Sleep() is available, in almost the same form, on all multitasking OS I have ever used.
Sleep() gets bad press because:
* It 'wastes a thread'. In many systems, eg. when the thread is going to be there anyway and will run for the lifetime of the app, who cares?
* It is often misused for inter-thread comms polling loops, adding CPU waste and latency. This is indeed indefensible.
* It often cannot be interrupted so as to allow a 'clean and quick'
shutdown of the thread. Again, in many systems, it does not matter if pool or app-lifetime threads get rudely stopped by a process termination, so why bother trying?
Example of reasonable usage:
```
void StartFeedstockDelivery(){
if (airbankPressure()<MIN_PRESSURE){
startCompressor();
sleep(10000); // wait for pressure to build up
openFeedValve();
};
}
``` |
23,077,657 | `Thread.sleep()` to me seems like a really useless concept because the only way I see it working is in the following scenarios
```
public void someFunction()
{
//thread does something
Thread.sleep(50000);
//now other threads do something
}
```
The problem is that for me this is just asking for trouble. I mean the sleep can be too long, in which case at best you have a performance issue, and at worst other threads are waiting for the result, in which case the situation may become critical.
There are other ways, like using synchronized and wait, which seem much more efficient.
So my question is, are there any situations where using thread sleep is the best option? | 2014/04/15 | [
"https://Stackoverflow.com/questions/23077657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2385352/"
] | Consider a service launching 2 different threads performing 2 different things connected to each other; one of the threads fails and an exception is caught (network problem, a remote host doesn't reply), and you want your service to be up and running in the shortest time possible. The best thing is to wait some time and then re-run your failing thread. You do not know when the remote host will be up; you have to test the connection. In this case the best solution is to wait for some time and then rerun your thread, not re-run the failing thread endlessly (CPU load). | You use `Thread.sleep` every time you want to slow things down. In certain scenarios you are unable to synchronize, like in case of communication with external systems over the network, database, etc.
Example scenarios:
* ***Error recovery*** - When your system depends on some external entity that reports temporary errors. You communicate with external system you have no control of and it says it has temporary issues that are not related to your request. You do `Thread.sleep` and retry. If you did not `sleep` you would get error flood. This is quite common pattern in integration middleware.
* ***Timeouts*** - You wait for something to happen but no longer than 10 seconds. You just cannot accept longer waits and want to quit.
* ***Throttling*** - Someone tells you that you cannot do something more often than once every 10 seconds.
Please keep in mind that there are various ways to wait, not necessarily by calling `Thread.sleep`.
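The throttling scenario above can be sketched as a small rate limiter that sleeps away the remainder of the minimum interval (class and method names are illustrative):

```java
public class ThrottleDemo {
    // Enforce a minimum interval between consecutive actions by sleeping
    // off whatever is left of the interval ("once every N ms").
    private long lastRun = 0;
    private final long minIntervalMillis;

    ThrottleDemo(long minIntervalMillis) {
        this.minIntervalMillis = minIntervalMillis;
    }

    synchronized void runThrottled(Runnable action) {
        long wait = lastRun + minIntervalMillis - System.currentTimeMillis();
        if (wait > 0) {
            try {
                Thread.sleep(wait);            // too soon: sleep off the difference
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        lastRun = System.currentTimeMillis();
        action.run();
    }

    public static void main(String[] args) {
        ThrottleDemo throttle = new ThrottleDemo(50);
        long start = System.currentTimeMillis();
        throttle.runThrottled(() -> {});
        throttle.runThrottled(() -> {});      // forced to wait out the interval
        System.out.println(System.currentTimeMillis() - start >= 40); // true
    }
}
```

Note that the sleep happens while holding the object's lock, which deliberately serializes callers through the throttle; in a high-concurrency setting that would be a bottleneck.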
23,077,657 | `Thread.sleep()` to me seems like a really useless concept because the only way I see it working is in the following scenarios
```
public void someFunction()
{
//thread does something
Thread.sleep(50000);
//now other threads do something
}
```
The problem is that for me this is just asking for trouble. I mean the sleep can be too long, in which case at best you have a performance issue, and at worst other threads are waiting for the result, in which case the situation may become critical.
There are other ways, like using synchronized and wait, which seem much more efficient.
So my question is, are there any situations where using thread sleep is the best option? | 2014/04/15 | [
"https://Stackoverflow.com/questions/23077657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2385352/"
] | It would be impossible to write java.util.Timer without a sleep() method, or at least it would require you to abuse the wait() method, and write a lot of extra code around it to protect against spurious wakeups. | Consider a service launching 2 different threads performing 2 different things connected to each other; one of the threads fails and an exception is caught (network problem, a remote host doesn't reply), and you want your service to be up and running in the shortest time possible. The best thing is to wait some time and then re-run your failing thread. You do not know when the remote host will be up; you have to test the connection. In this case the best solution is to wait for some time and then rerun your thread, not re-run the failing thread endlessly (CPU load). |
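The restart-after-failure pattern described above can be sketched like this; the failure simulation, pause length, and restart limit are invented for the example:

```java
public class SupervisorDemo {
    // Re-run a failing task after a pause instead of hammering a dead
    // remote host in a tight loop; give up after maxRestarts attempts.
    static int runUntilSuccess(Runnable task, long pauseMillis, int maxRestarts) {
        for (int restarts = 0; ; restarts++) {
            try {
                task.run();
                return restarts;               // task finished normally
            } catch (RuntimeException e) {
                if (restarts >= maxRestarts) throw e;
                try {
                    Thread.sleep(pauseMillis); // wait before restarting
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] failuresLeft = {2};              // fail twice, then succeed
        int restarts = runUntilSuccess(() -> {
            if (failuresLeft[0]-- > 0) throw new RuntimeException("host down");
        }, 10, 5);
        System.out.println("restarted " + restarts + " times"); // restarted 2 times
    }
}
```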
23,077,657 | `Thread.sleep()` to me seems like a really useless concept because the only way I see it working is in the following scenarios
```
public void someFunction()
{
//thread does something
Thread.sleep(50000);
//now other threads do something
}
```
The problem is that for me this is just asking for trouble. I mean the sleep can be too long, in which case at best you have a performance issue, and at worst other threads are waiting for the result, in which case the situation may become critical.
There are other ways, like using synchronized and wait, which seem much more efficient.
So my question is, are there any situations where using thread sleep is the best option? | 2014/04/15 | [
"https://Stackoverflow.com/questions/23077657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2385352/"
] | Although the event-driven model is often the best way to "wait" for an action to occur, there are times when you need to wait intentionally for a short amount of time and then take an action.
A common case is sampling/polling data (from files, from the network, etc.) at regular intervals. In this case, you just want to "refresh" your data between time intervals.
For example, if you have an application that makes requests to a web service via the network, you might want a thread that performs this task periodically, "sleeping" most of the time but performing the service request after some interval, repeating this behavior again and again. | You use `Thread.sleep` every time you want to slow things down. In certain scenarios you are unable to synchronize, like in case of communication with external systems over the network, database, etc.
Example scenarios:
* ***Error recovery*** - When your system depends on some external entity that reports temporary errors. You communicate with external system you have no control of and it says it has temporary issues that are not related to your request. You do `Thread.sleep` and retry. If you did not `sleep` you would get error flood. This is quite common pattern in integration middleware.
* ***Timeouts*** - You wait for something to happen but no longer than 10 seconds. You just cannot accept longer waits and want to quit.
* ***Throttling*** - Someone tells you you cannot do something more often than once every 10 seconds.
Please keep in mind that there are various ways to wait, not necessarily by calling `Thread.sleep`. |
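The error-recovery bullet above can be sketched as a small retry helper. This is illustrative only: the function name and backoff policy are invented for the example, and Python is used here for brevity even though the question is about Java — the same shape works in Java with `Thread.sleep(delayMillis)` inside the catch block.

```python
import time

def retry_with_backoff(func, attempts=3, base_delay=0.01):
    """Retry `func` on failure, sleeping between attempts so a
    temporarily broken external system is not flooded with retries."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except Exception:
            if attempt == attempts:
                raise  # out of attempts: propagate the last error
            time.sleep(base_delay * attempt)  # pause grows with each retry
```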
23,077,657 | `Thread.sleep()` to me seems like a really useless concept because the only way I see it working is in the following scenarios
```
public void someFunction()
{
//thread does something
Thread.sleep(50000);
//now other threads do something
}
```
The problem is that for me this is just asking for trouble. I mean this can either be too long, in which case at best you may have a performance issue, or it may be so long that other threads wait for the result, in which case the situation may become critical.
I mean there are other ways, like using synchronized and wait, which seem much more efficient.
So my question is, are there any situations where using thread sleep is the best option? | 2014/04/15 | [
"https://Stackoverflow.com/questions/23077657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2385352/"
] | Both `Thread.sleep(long)` and `Object.wait(long)` block current thread. However `wait` may return earlier (spurious wakeup), see javadoc. So for `wait` we need to implement additional logic which guarantees that specified amount of time elapsed. So if you simply want to make a pause - use `Thread.sleep` | If the requirement spec calls for a five-second wait, maybe somewhere deep down in several functions in some process-control thread code, maybe only under some conditions, a Sleep(5000) call is a good solution for the following reasons:
* It does not require simple in-line code to be rewritten as a complex
state-machine so as to be able to use an asynchronous timer.
* It involves no other timer or pool thread to be run to implement the timeout.
* It's a one-liner that does not require wait-objects to be constructed etc.
* Sleep() is available, in almost the same form, on all multitasking OS I have ever used.
Sleep() gets bad press because:
* It 'wastes a thread'. In many systems, eg. when the thread is going to be there anyway and will run for the lifetime of the app, who cares?
* It is often misused for inter-thread comms polling loops, adding CPU waste and latency. This is indeed indefensible.
* It often cannot be interrupted so as to allow a 'clean and quick'
shutdown of the thread. Again, in many systems, it does not matter if pool or app-lifetime threads get rudely stopped by a process termination, so why bother trying?
Example of reasonable usage:
```
void StartFeedstockDelivery() {
    if (airbankPressure() < MIN_PRESSURE) {
        startCompressor();
        sleep(10000); // wait for pressure to build up
        openFeedValve();
    }
}
``` |
23,077,657 | `Thread.sleep()` to me seems like a really useless concept because the only way I see it working is in the following scenarios
```
public void someFunction()
{
//thread does something
Thread.sleep(50000);
//now other threads do something
}
```
The problem is that for me this is just asking for trouble. I mean this can either be too long, in which case at best you may have a performance issue, or it may be so long that other threads wait for the result, in which case the situation may become critical.
I mean there are other ways, like using synchronized and wait, which seem much more efficient.
So my question is, are there any situations where using thread sleep is the best option? | 2014/04/15 | [
"https://Stackoverflow.com/questions/23077657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2385352/"
] | It would be impossible to write java.util.Timer without a sleep() method, or at least it would require you to abuse the wait() method, and write a lot of extra code around it to protect against spurious wakeups. | Although many times the event-driven model is the best way to "wait" for an action to occur, there are times when you need to wait intentionally for a short amount of time and then take an action.
A common case of this is sampling/polling data (from files, from the network, etc.) at regular intervals. In this case, you just want to "refresh" your data between time intervals.
For example, if you have an application that makes requests to a web service via the network, you might want to have a thread perform this task periodically, having a "sleeping" behavior most of the time, but performing the service request task after some time, repeating this behavior again and again. |
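The periodic sampling described above can be sketched like this (a minimal illustration, not from the original answer; Python is used for brevity and all names are invented):

```python
import threading

def poll(fetch, interval, stop_event, results):
    """Call `fetch` repeatedly, pausing `interval` seconds between
    samples, until `stop_event` is set -- the sampling/polling
    pattern described above."""
    while not stop_event.is_set():
        results.append(fetch())
        # Event.wait sleeps like time.sleep, but wakes early on stop.
        stop_event.wait(interval)
```

Using `Event.wait` instead of a bare `time.sleep` keeps the worker responsive to a shutdown request.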
23,077,657 | `Thread.sleep()` to me seems like a really useless concept because the only way I see it working is in the following scenarios
```
public void someFunction()
{
//thread does something
Thread.sleep(50000);
//now other threads do something
}
```
The problem is that for me this is just asking for trouble. I mean this can either be too long, in which case at best you may have a performance issue, or it may be so long that other threads wait for the result, in which case the situation may become critical.
I mean there are other ways, like using synchronized and wait, which seem much more efficient.
So my question is, are there any situations where using thread sleep is the best option? | 2014/04/15 | [
"https://Stackoverflow.com/questions/23077657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2385352/"
] | Consider a service launching 2 different threads performing 2 different things connected to each other, one of the threads fails an an exception is caught(network problem, a remote host doesn't reply), you want your service to be up and running in the shortest time possible. The best thing is to wait some time and then to re-run your failing thread. You do not know when the remote host will be up you have to test the connection. In this case the best solution is to wait for some time and then to rerun your thread and not to re-run endlessly the failing thread (CPU load). | If the requirement spec calls for a five-second wait, maybe somewhere deep down in several functions in some process-control thread code, maybe only under some conditions, a Sleep(5000) call is a good solution for the following reasons:
* It does not require simple in-line code to be rewritten as a complex
state-machine so as to be able to use an asynchronous timer.
* It involves no other timer or pool thread to be run to implement the timeout.
* It's a one-liner that does not require wait-objects to be constructed etc.
* Sleep() is available, in almost the same form, on all multitasking OS I have ever used.
Sleep() gets bad press because:
* It 'wastes a thread'. In many systems, eg. when the thread is going to be there anyway and will run for the lifetime of the app, who cares?
* It is often misused for inter-thread comms polling loops, adding CPU waste and latency. This is indeed indefensible.
* It often cannot be interrupted so as to allow a 'clean and quick'
shutdown of the thread. Again, in many systems, it does not matter if pool or app-lifetime threads get rudely stopped by a process termination, so why bother trying?
Example of reasonable usage:
```
void StartFeedstockDelivery() {
    if (airbankPressure() < MIN_PRESSURE) {
        startCompressor();
        sleep(10000); // wait for pressure to build up
        openFeedValve();
    }
}
``` |
37,567,638 | I have a HTML button:
```
<button id="reset" type="button">Reset</button>
```
I want to set the `onclick` behaviour - link to a page depending on the URL parameters for this button. On searching, I found that it is only possible through Javascript, through something like this:
```
<script type="text/javascript"charset="utf-8">
function GetURLParameter(sParam) {
var sPageURL = window.location.search.substring(1);
var sURLVariables = sPageURL.split('&');
for (var i = 0; i < sURLVariables.length; i++) {
var sParameterName = sURLVariables[i].split('=');
if (sParameterName[0] == sParam) {
return sParameterName[1];
}
}
}
document.getElementById('reset').onclick = function() { return "location.href=\'index.html?param=" + GetURLParameter('param') + "\'"; };
</script>
```
However, this doesn't seem to be working. My button doesn't do anything when clicked. What am I doing wrong?
P.S. I have seen some questions which work by creating the button dynamically using JS and then set its onclick behaviour. However, I am interested in knowing how one can modify the onclick behaviour of a button which has been created using HTML through JS. | 2016/06/01 | [
"https://Stackoverflow.com/questions/37567638",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1770037/"
] | I don't think speed is a concern here; the difference between them is so small it won't affect your code. | I think the id selector works faster.
HTML ID attributes are unique in every page and even older browsers can locate a single element very quickly. |
37,567,638 | I have a HTML button:
```
<button id="reset" type="button">Reset</button>
```
I want to set the `onclick` behaviour - link to a page depending on the URL parameters for this button. On searching, I found that it is only possible through Javascript, through something like this:
```
<script type="text/javascript"charset="utf-8">
function GetURLParameter(sParam) {
var sPageURL = window.location.search.substring(1);
var sURLVariables = sPageURL.split('&');
for (var i = 0; i < sURLVariables.length; i++) {
var sParameterName = sURLVariables[i].split('=');
if (sParameterName[0] == sParam) {
return sParameterName[1];
}
}
}
document.getElementById('reset').onclick = function() { return "location.href=\'index.html?param=" + GetURLParameter('param') + "\'"; };
</script>
```
However, this doesn't seem to be working. My button doesn't do anything when clicked. What am I doing wrong?
P.S. I have seen some questions which work by creating the button dynamically using JS and then set its onclick behaviour. However, I am interested in knowing how one can modify the onclick behaviour of a button which has been created using HTML through JS. | 2016/06/01 | [
"https://Stackoverflow.com/questions/37567638",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1770037/"
] | I don't think speed is a concern here; the difference between them is so small it won't affect your code. | An id is obviously faster than a class because there can be only one id; once it is found, no more searching is needed. However, speed is not the concern. If you need to do some work on multiple elements you use a class, but if you need to work on a particular element you use an id. |
37,567,638 | I have a HTML button:
```
<button id="reset" type="button">Reset</button>
```
I want to set the `onclick` behaviour - link to a page depending on the URL parameters for this button. On searching, I found that it is only possible through Javascript, through something like this:
```
<script type="text/javascript"charset="utf-8">
function GetURLParameter(sParam) {
var sPageURL = window.location.search.substring(1);
var sURLVariables = sPageURL.split('&');
for (var i = 0; i < sURLVariables.length; i++) {
var sParameterName = sURLVariables[i].split('=');
if (sParameterName[0] == sParam) {
return sParameterName[1];
}
}
}
document.getElementById('reset').onclick = function() { return "location.href=\'index.html?param=" + GetURLParameter('param') + "\'"; };
</script>
```
However, this doesn't seem to be working. My button doesn't do anything when clicked. What am I doing wrong?
P.S. I have seen some questions which work by creating the button dynamically using JS and then set its onclick behaviour. However, I am interested in knowing how one can modify the onclick behaviour of a button which has been created using HTML through JS. | 2016/06/01 | [
"https://Stackoverflow.com/questions/37567638",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1770037/"
] | I don't think speed is a concern here; the difference between them is so small it won't affect your code. | IDs in HTML are auto-assigned in JS.
Because there can be only one `id` on the page with that name, the **entire** element is assigned to a global JS variable.
For example: the element `<span id='spanTagOne'>Text</span>` will be available as the JS variable `spanTagOne`.
So you don't even need to get them, since they are already assigned. |
37,567,638 | I have a HTML button:
```
<button id="reset" type="button">Reset</button>
```
I want to set the `onclick` behaviour - link to a page depending on the URL parameters for this button. On searching, I found that it is only possible through Javascript, through something like this:
```
<script type="text/javascript"charset="utf-8">
function GetURLParameter(sParam) {
var sPageURL = window.location.search.substring(1);
var sURLVariables = sPageURL.split('&');
for (var i = 0; i < sURLVariables.length; i++) {
var sParameterName = sURLVariables[i].split('=');
if (sParameterName[0] == sParam) {
return sParameterName[1];
}
}
}
document.getElementById('reset').onclick = function() { return "location.href=\'index.html?param=" + GetURLParameter('param') + "\'"; };
</script>
```
However, this doesn't seem to be working. My button doesn't do anything when clicked. What am I doing wrong?
P.S. I have seen some questions which work by creating the button dynamically using JS and then set its onclick behaviour. However, I am interested in knowing how one can modify the onclick behaviour of a button which has been created using HTML through JS. | 2016/06/01 | [
"https://Stackoverflow.com/questions/37567638",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1770037/"
] | I don't think speed is a concern here; the difference between them is so small it won't affect your code. | Try this way to find it:
$('parent').find('.child') is a faster way to find an element. |
30,353,628 | I get the error `"Communications link failure"` at this line of code:
```
mySqlCon = DriverManager.getConnection("jdbc:mysql://**server ip address**:3306/db-name", "mu-user-name", "my-password");
```
I checked everything in [this post](https://stackoverflow.com/questions/6865538/solving-a-communications-link-failure-with-jdbc-and-mysql):
* I increased max-allowed-packet in my.cnf in etc/mysql: max\_allowed\_packet = 5073741824------ [mysqldump] max\_allowed\_packet = 1G
* The bind-address is: 127.0.0.1
* All timeout values are equal to a number
* Tomcat is not yet installed on server (new server)
* There is no skip-networking in my.cnf
* I can ping the server
* I am connected to the mysql database via ssh
When I change the query string to this:
```
mySqlCon = DriverManager.getConnection("jdbc:mysql://**server ip address**:22/127.0.0.1:3306/db-name", "mu-user-name", "my-password");
```
I get the error `Packet for query is too large (4739923 > 1048576). You can change this value on the server by setting the max_allowed_packet' variable.`
While I have changed the packet size on my.cnf and restarted the mysql service after that.
Any suggestions?
NOTE:
I can connect through ssh with this code, but this way doesn't seem rational! I can connect once in main and then I should pass the connection to all the classes.
```
public my-class-constructor() {
try {
go();
} catch (Exception e) {
e.printStackTrace();
}
mySqlCon = null;
String driver = "com.mysql.jdbc.Driver";
String url = "jdbc:mysql://" + rhost + ":" + lport + "/";
String db = "my-db-name";
String dbUser = "dbuser";
String dbPasswd = "pass";
try {
Class.forName(driver);
mySqlCon = DriverManager.getConnection(url + db, dbUser, dbPasswd);
} catch (Exception e) {
e.printStackTrace();
}
}
public static void go() {
String user = "ssh-user";
String password = "ssh-pass";
String host = "ips-address";
int port = 22;
try {
JSch jsch = new JSch();
Session session = jsch.getSession(user, host, port);
lport = 4321;
rhost = "localhost";
rport = 3306;
session.setPassword(password);
session.setConfig("StrictHostKeyChecking", "no");
System.out.println("Establishing Connection...");
session.connect();
int assinged_port = session.setPortForwardingL(lport, rhost, rport);
System.out.println("localhost:" + assinged_port + " -> " + rhost
+ ":" + rport);
} catch (Exception e) {
System.err.print(e);
e.printStackTrace();
}
}
``` | 2015/05/20 | [
"https://Stackoverflow.com/questions/30353628",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2959833/"
] | Take a look [here](https://dev.mysql.com/doc/refman/5.7/en/packet-too-large.html) .
Looks the following describes your case
>
> The largest possible packet that can be transmitted to or from a MySQL
> 5.7 server or client is 1GB.
>
>
> When a MySQL client or the mysqld server receives a packet bigger than
> max\_allowed\_packet bytes, it issues an ER\_NET\_PACKET\_TOO\_LARGE error
> and closes the connection. With some clients, you may also get a Lost
> connection to MySQL server during query error if the communication
> packet is too large.
>
>
> Both the client and the server have their own max\_allowed\_packet
> variable, so if you want to handle big packets, you must increase this
> variable both in the client and in the server.
>
>
>
So, it looks like you need to change the `max_allowed_packet` on the client as well:
```
mySqlCon = DriverManager.getConnection("jdbc:mysql://**server ip address**:3306/db-name?max_allowed_packet=5073741824", "mu-user-name", "my-password");
``` | I changed the binding address in my.cnf file in /etc/mysql to the ip address of the server, and it solved the problem. |
2,009,098 | Let P be an external point of a circle with center O and also the intersection of two lines r and s that are tangent to the circle. If PAB is a triangle such that AB is also tangent to the circle, find AÔB knowing that P = 40°.
I draw the problem:
[](https://i.stack.imgur.com/UCZ0G.png)
Then I tried to solve it, found some relations, but don't know how to proceed.
[](https://i.stack.imgur.com/g6NKM.png)
I highly suspect that PAB is isosceles, but couldn't prove it. | 2016/11/11 | [
"https://math.stackexchange.com/questions/2009098",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/78511/"
] | First of all, note that $\angle PAB + \angle PBA = 140^\circ$. That means that $\angle MAB + \angle NBA = 220^\circ$.
Then we see that $AO$ bisects $\angle MAB$, and $BO$ bisects $\angle NBA$, so $\angle OAB + \angle OBA = 110^\circ$.
Lastly, looking at the quadrilateral $AOBP$, we see that $x = 360^\circ - 40^\circ - 140^\circ - 110^\circ = 70^\circ$.
There is no reason to believe $\triangle PAB$ to be isosceles. In fact, from just the given information it might not be. If we move $A$ closer to $M$, we see that $AB$ touching the circle will force $B$ closer to $P$. It's just that you've happened to draw the figure symmetrically. | We know $\angle PON=70°$ and $\angle MON=140°$.
And we also know that $OB$ and $OA$ are bisectors of $\angle NOT$ and $\angle TOM$ respectively. Therefore $$BOT+TOA=70°$$
[](https://i.stack.imgur.com/19MdE.jpg) |
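Nothing in either answer uses the specific value $40°$, so the same bookkeeping yields a closed formula (a sketch, not part of the original answers):

```latex
% Angles of quadrilateral PAOB sum to 360 degrees:
%   P + (\angle PAB + \angle PBA) + (\angle OAB + \angle OBA) + \angle AOB = 360^\circ,
% with \angle PAB + \angle PBA = 180^\circ - P and
%      \angle OAB + \angle OBA = \tfrac{1}{2}\bigl(360^\circ - (180^\circ - P)\bigr) = 90^\circ + \tfrac{P}{2}, hence
\angle AOB = 360^\circ - P - (180^\circ - P) - \left(90^\circ + \tfrac{P}{2}\right) = 90^\circ - \frac{P}{2}
```

With $P = 40°$ this gives $\angle AOB = 70°$, agreeing with both answers.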
2,009,098 | Let P be an external point of a circle with center O and also the intersection of two lines r and s that are tangent to the circle. If PAB is a triangle such that AB is also tangent to the circle, find AÔB knowing that P = 40°.
I draw the problem:
[](https://i.stack.imgur.com/UCZ0G.png)
Then I tried to solve it, found some relations, but don't know how to proceed.
[](https://i.stack.imgur.com/g6NKM.png)
I highly suspect that PAB is isosceles, but couldn't prove it. | 2016/11/11 | [
"https://math.stackexchange.com/questions/2009098",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/78511/"
] | First of all, note that $\angle PAB + \angle PBA = 140^\circ$. That means that $\angle MAB + \angle NBA = 220^\circ$.
Then we see that $AO$ bisects $\angle MAB$, and $BO$ bisects $\angle NBA$, so $\angle OAB + \angle OBA = 110^\circ$.
Lastly, looking at the quadrilateral $AOBP$, we see that $x = 360^\circ - 40^\circ - 140^\circ - 110^\circ = 70^\circ$.
There is no reason to believe $\triangle PAB$ to be isosceles. In fact, from just the given information it might not be. If we move $A$ closer to $M$, we see that $AB$ touching the circle will force $B$ closer to $P$. It's just that you've happened to draw the figure symmetrically. | Two tangents meet at a point. Therefore MP = NP. Only if OTP is a straight line are triangles MPN and APB similar, hence both isosceles in this particular case - otherwise the triangles are not similar and only MPN is isosceles. |
40,111,882 | I was browsing the w3.org page about the `article` element and one of the exemples surprised me:
```
<article>
<header>
<h1>The Very First Rule of Life</h1>
<p><time pubdate datetime="2009-10-09T14:28-08:00"></time></p>
</header>
<p>If there's a microphone anywhere near you, assume it's hot and
sending whatever you're saying to the world. Seriously.</p>
<p>...</p>
<section>
<h1>Comments</h1>
<article>
<footer>
<p>Posted by: George Washington</p>
<p><time pubdate datetime="2009-10-10T19:10-08:00"></time></p>
</footer>
<p>Yeah! Especially when talking about your lobbyist friends!</p>
</article>
<article>
<footer>
<p>Posted by: George Hammond</p>
<p><time pubdate datetime="2009-10-10T19:15-08:00"></time></p>
</footer>
<p>Hey, you have the same first name as me.</p>
</article>
</section>
</article>
```
As you can see, the comment info (poster name and date) is in a `footer` element at the beginning of each comment.
According to [W3.org 4.3.8 The `footer` element](https://www.w3.org/TR/html5/sections.html#the-footer-element) it is a valid usage, but it seems quite strange to use it that way.
>
> A footer typically contains information about its section such as who wrote it, links to related documents, copyright data, and the like.
>
>
>
It is right, nothing says that it should sit under the actual article.
I would have used a `header` element for this usage, but in [4.3.7 The `header` element](https://www.w3.org/TR/html5/sections.html#the-header-element) it is specified that
>
> A header typically contains a group of introductory or navigational aids.
>
>
>
But they also say about the `footer` element:
>
> The primary purpose of these elements is merely to help the author
> write self-explanatory markup that is easy to maintain and style; they
> are not intended to impose specific structures on authors.
>
>
>
So why are they using the `footer` element in the example? Wouldn't a `header` element be more intuitive and semantic?
```
<section>
<h1>Comments</h1>
<article>
<header>
<p>Posted by: George Washington</p>
<p><time pubdate datetime="2009-10-10T19:10-08:00"></time></p>
</header>
<p>Yeah! Especially when talking about your lobbyist friends!</p>
</article>
<article>
<header>
<p>Posted by: George Hammond</p>
<p><time pubdate datetime="2009-10-10T19:15-08:00"></time></p>
</header>
<p>Hey, you have the same first name as me.</p>
</article>
</section>
```
Is there a particular reason for that? | 2016/10/18 | [
"https://Stackoverflow.com/questions/40111882",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6456842/"
] | According to what you have in your question and the astropy docs (<http://docs.astropy.org/en/stable/io/fits/>), it looks like you just need to do:
```
from astropy.io import fits
import pandas
with fits.open('datafile') as data:
df = pandas.DataFrame(data[0].data)
```
Edit:
I don't have much experience with astropy, but others have mentioned that you can read FITS files into a `Table` object, which has a `to_pandas()` method:
```
from astropy.table import Table
dat = Table.read('datafile', format='fits')
df = dat.to_pandas()
```
Might be worth investigating.
<http://docs.astropy.org/en/latest/table/pandas.html> | Note: the second option with Table is better for most cases, since the way FITS files store data is big-endian, which can cause problems when reading into a DataFrame object which is little-endian. See <https://github.com/astropy/astropy/issues/1156> |
44,014,722 | I built an npm module named `emeraldfw` and published it. My `package.json` file is
```
{
"name": "emeraldfw",
"version": "0.6.0",
"bin": "./emeraldfw.js",
"description": "Emerald Framework is a language-agnostig web development framework, designed to make developer's lives easier and fun while coding.",
"main": "emeraldfw.js",
"directories": {
"example": "examples",
"test": "test"
},
"scripts": {
"test": "mocha"
},
"repository": {
"type": "git",
"url": "git+https://github.com/EdDeAlmeidaJr/emeraldfw.git"
},
"keywords": [
"web",
"development",
"framework",
"language",
"agnostic",
"react"
],
"author": "Ed de Almeida",
"license": "MIT",
"bugs": {
"url": "https://github.com/EdDeAlmeidaJr/emeraldfw/issues"
},
"homepage": "https://github.com/EdDeAlmeidaJr/emeraldfw#readme",
"devDependencies": {
"jshint": "^2.9.4",
"mocha": "^3.3.0"
},
"dependencies": {
"jsonfile": "^3.0.0",
"react": "^15.5.4",
"vorpal": "^1.12.0"
}
}
```
As you may see, I declared a `"bin": "./emeraldfw.js"` binary, which corresponds to the application itself. The `package.json` documentation says this is going to create a link to the application executable in the Node.js bin/ directory. This worked fine, but when I install it globally (`npm install emeraldfw -g`) and then run it from the command line I receive an error message[](https://i.stack.imgur.com/XveVy.png)
All other node modules are working fine and my application is passing in all tests and when I run it directly inside the development directory (with `node emeraldfw.js`) it works really fine.
I'm not a node.js expert and after having fought this error for two days, here I am to ask for help.
Any ideas?
**EDIT:**
I checked the permissions for my node binary (emeraldfw.js) and it belongs to edvaldo:edvaldo, my user and group. And it is with executable permissions set. I should have no permission issues inside my own area with these settings, don't you think? | 2017/05/17 | [
"https://Stackoverflow.com/questions/44014722",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5021963/"
] | >
> I want to get html-text few seconds after opening url.
>
>
>
Well, the webpage HTML stays the same right after you "get" the url using Requests, so there's no need to wait a few seconds as the HTML will not change.
I assume the reason that you would like to wait is for the page to load all the relevant resources (e.g. CSS/JS) that modify the HTML?
If so, I wouldn't recommend using the Requests module, as you would have to manipulate and load all of the relevant resources yourself.
I suggest you to have a look at **[Selenium](http://selenium-python.readthedocs.io/) for Python**.
Selenium fully simulates a browser, hence you can wait and it will load all the resources for your webpage. | You want to change the last line to:
```
html = requests.get(url).text
``` |
44,014,722 | I built an npm module named `emeraldfw` and published it. My `package.json` file is
```
{
"name": "emeraldfw",
"version": "0.6.0",
"bin": "./emeraldfw.js",
"description": "Emerald Framework is a language-agnostig web development framework, designed to make developer's lives easier and fun while coding.",
"main": "emeraldfw.js",
"directories": {
"example": "examples",
"test": "test"
},
"scripts": {
"test": "mocha"
},
"repository": {
"type": "git",
"url": "git+https://github.com/EdDeAlmeidaJr/emeraldfw.git"
},
"keywords": [
"web",
"development",
"framework",
"language",
"agnostic",
"react"
],
"author": "Ed de Almeida",
"license": "MIT",
"bugs": {
"url": "https://github.com/EdDeAlmeidaJr/emeraldfw/issues"
},
"homepage": "https://github.com/EdDeAlmeidaJr/emeraldfw#readme",
"devDependencies": {
"jshint": "^2.9.4",
"mocha": "^3.3.0"
},
"dependencies": {
"jsonfile": "^3.0.0",
"react": "^15.5.4",
"vorpal": "^1.12.0"
}
}
```
As you may see, I declared a `"bin": "./emeraldfw.js"` binary, which corresponds to the application itself. The `package.json` documentation says this is going to create a link to the application executable in the Node.js bin/ directory. This worked fine, but when I install it globally (`npm install emeraldfw -g`) and then run it from the command line I receive an error message[](https://i.stack.imgur.com/XveVy.png)
All other node modules are working fine and my application is passing in all tests and when I run it directly inside the development directory (with `node emeraldfw.js`) it works really fine.
I'm not a node.js expert and after having fought this error for two days, here I am to ask for help.
Any ideas?
**EDIT:**
I checked the permissions for my node binary (emeraldfw.js) and it belongs to edvaldo:edvaldo, my user and group. And it is with executable permissions set. I should have no permission issues inside my own area with these settings, don't you think? | 2017/05/17 | [
"https://Stackoverflow.com/questions/44014722",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5021963/"
] | try using [`time.sleep(t)`](https://docs.python.org/2/library/time.html#time.sleep)
```
response = request.get(url)
time.sleep(5) # suspend execution for 5 secs
html = response.text
``` | You want to change the last line to:
```
html = requests.get(url).text
``` |
44,014,722 | I built an npm module named `emeraldfw` and published it. My `package.json` file is
```
{
"name": "emeraldfw",
"version": "0.6.0",
"bin": "./emeraldfw.js",
"description": "Emerald Framework is a language-agnostig web development framework, designed to make developer's lives easier and fun while coding.",
"main": "emeraldfw.js",
"directories": {
"example": "examples",
"test": "test"
},
"scripts": {
"test": "mocha"
},
"repository": {
"type": "git",
"url": "git+https://github.com/EdDeAlmeidaJr/emeraldfw.git"
},
"keywords": [
"web",
"development",
"framework",
"language",
"agnostic",
"react"
],
"author": "Ed de Almeida",
"license": "MIT",
"bugs": {
"url": "https://github.com/EdDeAlmeidaJr/emeraldfw/issues"
},
"homepage": "https://github.com/EdDeAlmeidaJr/emeraldfw#readme",
"devDependencies": {
"jshint": "^2.9.4",
"mocha": "^3.3.0"
},
"dependencies": {
"jsonfile": "^3.0.0",
"react": "^15.5.4",
"vorpal": "^1.12.0"
}
}
```
As you may see, I declared a `"bin": "./emeraldfw.js"` binary, which corresponds to the application itself. The `package.json` documentation says this is going to create a link to the application executable in the Node.js bin/ directory. This worked fine, but when I install it globally (`npm install emeraldfw -g`) and then run it from the command line I receive an error message[](https://i.stack.imgur.com/XveVy.png)
All other node modules are working fine and my application is passing in all tests and when I run it directly inside the development directory (with `node emeraldfw.js`) it works really fine.
I'm not a node.js expert and after having fought this error for two days, here I am to ask for help.
Any ideas?
**EDIT:**
I checked the permissions for my node binary (emeraldfw.js) and it belongs to edvaldo:edvaldo, my user and group. And it is with executable permissions set. I should have no permission issues inside my own area with these settings, don't you think? | 2017/05/17 | [
"https://Stackoverflow.com/questions/44014722",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5021963/"
] | You want to change the last line to:
```
html = requests.get(url).text
``` | basically you can give a sleep to the request as a parameter as bellow:
```
import requests
import time
url = "http://XXXXX…"
seconds = 5
html = requests.get(url, time.sleep(seconds)).text  # sleeps 5 seconds during argument evaluation, then fetches; time.sleep returns None
``` |
44,014,722 | I built an npm module named `emeraldfw` and published it. My `package.json` file is
```
{
"name": "emeraldfw",
"version": "0.6.0",
"bin": "./emeraldfw.js",
"description": "Emerald Framework is a language-agnostig web development framework, designed to make developer's lives easier and fun while coding.",
"main": "emeraldfw.js",
"directories": {
"example": "examples",
"test": "test"
},
"scripts": {
"test": "mocha"
},
"repository": {
"type": "git",
"url": "git+https://github.com/EdDeAlmeidaJr/emeraldfw.git"
},
"keywords": [
"web",
"development",
"framework",
"language",
"agnostic",
"react"
],
"author": "Ed de Almeida",
"license": "MIT",
"bugs": {
"url": "https://github.com/EdDeAlmeidaJr/emeraldfw/issues"
},
"homepage": "https://github.com/EdDeAlmeidaJr/emeraldfw#readme",
"devDependencies": {
"jshint": "^2.9.4",
"mocha": "^3.3.0"
},
"dependencies": {
"jsonfile": "^3.0.0",
"react": "^15.5.4",
"vorpal": "^1.12.0"
}
}
```
As you may see, I declared a `"bin": "./emeraldfw.js"` binary, which corresponds to the application itself. The `package.json` documentation says this is going to create a link to the application executable at node.js bin/ directory. This worked fine, but when I install it globally (`npm install emeraldfw -g`) and then run it from the command line I receive an error message[](https://i.stack.imgur.com/XveVy.png)
All other node modules are working fine and my application is passing in all tests and when I run it directly inside the development directory (with `node emeraldfw.js`) it works really fine.
I'm not a node.js expert and after having fought this error for two days, here I am to ask for help.
Any ideas?
**EDIT:**
I checked the permissions for my node binary (emeraldfw.js) and it belongs to edvaldo:edvaldo, my user and group. And it is with executable permissions set. I should have no permission issues inside my own area with these settings, don't you think? | 2017/05/17 | [
"https://Stackoverflow.com/questions/44014722",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5021963/"
] | >
> I want to get html-text few seconds after opening url.
>
>
>
Well, the webpage HTML stays the same right after you "get" the url using Requests, so there's no need to wait a few seconds as the HTML will not change.
I assume the reason that you would like to wait is for the page to load all the relevant resources (e.g. CSS/JS) that modifies the HTML?
If so, I wouldn't recommend using the Requests module, as you would have to load and manipulate all of the relevant resources yourself.
I suggest you have a look at **[Selenium](http://selenium-python.readthedocs.io/) for Python**.
Selenium fully simulates a browser, hence you can wait and it will load all the resources for your webpage. | try using [`time.sleep(t)`](https://docs.python.org/2/library/time.html#time.sleep)
```
response = requests.get(url)
time.sleep(5) # suspend execution for 5 secs
html = response.text
``` |
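A side note on the pattern of passing `time.sleep(seconds)` as an argument to `requests.get()` (seen in some of the answers above): Python evaluates argument expressions before the call, so the sleep does happen, but its return value `None` is what actually gets passed as the second (`params`) argument. A small stdlib-only sketch demonstrates this; `fake_get` is a hypothetical stand-in for `requests.get`, so no network access is needed:

```python
import time

def fake_get(url, params=None):
    # Stand-in for requests.get: just records what it received.
    return {"url": url, "params": params}

start = time.monotonic()
# The argument expression time.sleep(0.2) runs *before* fake_get is called,
# and its return value (None) is what fake_get receives as params.
result = fake_get("http://example.com", time.sleep(0.2))
elapsed = time.monotonic() - start

assert result["params"] is None   # the "parameter" is just None
assert elapsed >= 0.19            # the delay happened before the call
```

So the request is delayed either way; sleeping on its own line before the call expresses the same behavior without the misleading argument.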
44,014,722 | I built an npm module named `emeraldfw` and published it. My `package.json` file is
```
{
"name": "emeraldfw",
"version": "0.6.0",
"bin": "./emeraldfw.js",
"description": "Emerald Framework is a language-agnostig web development framework, designed to make developer's lives easier and fun while coding.",
"main": "emeraldfw.js",
"directories": {
"example": "examples",
"test": "test"
},
"scripts": {
"test": "mocha"
},
"repository": {
"type": "git",
"url": "git+https://github.com/EdDeAlmeidaJr/emeraldfw.git"
},
"keywords": [
"web",
"development",
"framework",
"language",
"agnostic",
"react"
],
"author": "Ed de Almeida",
"license": "MIT",
"bugs": {
"url": "https://github.com/EdDeAlmeidaJr/emeraldfw/issues"
},
"homepage": "https://github.com/EdDeAlmeidaJr/emeraldfw#readme",
"devDependencies": {
"jshint": "^2.9.4",
"mocha": "^3.3.0"
},
"dependencies": {
"jsonfile": "^3.0.0",
"react": "^15.5.4",
"vorpal": "^1.12.0"
}
}
```
As you may see, I declared a `"bin": "./emeraldfw.js"` binary, which corresponds to the application itself. The `package.json` documentation says this is going to create a link to the application executable at node.js bin/ directory. This worked fine, but when I install it globally (`npm install emeraldfw -g`) and then run it from the command line I receive an error message[](https://i.stack.imgur.com/XveVy.png)
All other node modules are working fine and my application is passing in all tests and when I run it directly inside the development directory (with `node emeraldfw.js`) it works really fine.
I'm not a node.js expert and after having fought this error for two days, here I am to ask for help.
Any ideas?
**EDIT:**
I checked the permissions for my node binary (emeraldfw.js) and it belongs to edvaldo:edvaldo, my user and group. And it is with executable permissions set. I should have no permission issues inside my own area with these settings, don't you think? | 2017/05/17 | [
"https://Stackoverflow.com/questions/44014722",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5021963/"
] | >
> I want to get html-text few seconds after opening url.
>
>
>
Well, the webpage HTML stays the same right after you "get" the url using Requests, so there's no need to wait a few seconds as the HTML will not change.
I assume the reason that you would like to wait is for the page to load all the relevant resources (e.g. CSS/JS) that modifies the HTML?
If so, I wouldn't recommend using the Requests module, as you would have to load and manipulate all of the relevant resources yourself.
I suggest you have a look at **[Selenium](http://selenium-python.readthedocs.io/) for Python**.
Selenium fully simulates a browser, hence you can wait and it will load all the resources for your webpage. | basically you can sleep just before making the request, as below (note that `time.sleep()` runs before `requests.get()` is called and returns `None`, so it cannot act as a real request parameter):
```
import requests
import time
url = "http://XXXXX…"
seconds = 5
time.sleep(seconds) #for example 5 seconds
html = requests.get(url).text
``` |
44,014,722 | I built an npm module named `emeraldfw` and published it. My `package.json` file is
```
{
"name": "emeraldfw",
"version": "0.6.0",
"bin": "./emeraldfw.js",
"description": "Emerald Framework is a language-agnostig web development framework, designed to make developer's lives easier and fun while coding.",
"main": "emeraldfw.js",
"directories": {
"example": "examples",
"test": "test"
},
"scripts": {
"test": "mocha"
},
"repository": {
"type": "git",
"url": "git+https://github.com/EdDeAlmeidaJr/emeraldfw.git"
},
"keywords": [
"web",
"development",
"framework",
"language",
"agnostic",
"react"
],
"author": "Ed de Almeida",
"license": "MIT",
"bugs": {
"url": "https://github.com/EdDeAlmeidaJr/emeraldfw/issues"
},
"homepage": "https://github.com/EdDeAlmeidaJr/emeraldfw#readme",
"devDependencies": {
"jshint": "^2.9.4",
"mocha": "^3.3.0"
},
"dependencies": {
"jsonfile": "^3.0.0",
"react": "^15.5.4",
"vorpal": "^1.12.0"
}
}
```
As you may see, I declared a `"bin": "./emeraldfw.js"` binary, which corresponds to the application itself. The `package.json` documentation says this is going to create a link to the application executable at node.js bin/ directory. This worked fine, but when I install it globally (`npm install emeraldfw -g`) and then run it from the command line I receive an error message[](https://i.stack.imgur.com/XveVy.png)
All other node modules are working fine and my application is passing in all tests and when I run it directly inside the development directory (with `node emeraldfw.js`) it works really fine.
I'm not a node.js expert and after having fought this error for two days, here I am to ask for help.
Any ideas?
**EDIT:**
I checked the permissions for my node binary (emeraldfw.js) and it belongs to edvaldo:edvaldo, my user and group. And it is with executable permissions set. I should have no permission issues inside my own area with these settings, don't you think? | 2017/05/17 | [
"https://Stackoverflow.com/questions/44014722",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5021963/"
] | >
> I want to get html-text few seconds after opening url.
>
>
>
Well, the webpage HTML stays the same right after you "get" the url using Requests, so there's no need to wait a few seconds as the HTML will not change.
I assume the reason that you would like to wait is for the page to load all the relevant resources (e.g. CSS/JS) that modifies the HTML?
If so, I wouldn't recommend using the Requests module, as you would have to load and manipulate all of the relevant resources yourself.
I suggest you have a look at **[Selenium](http://selenium-python.readthedocs.io/) for Python**.
Selenium fully simulates a browser, hence you can wait and it will load all the resources for your webpage. | I have found the library `requests-html` handy for this purpose, though mostly I use Selenium (as already proposed in Danny's answer).
```
from typing import cast
from requests_html import HTMLSession, HTMLResponse
session = HTMLSession()
req = cast(HTMLResponse, session.get("http://XXXXX"))
req.html.render(sleep=5, keep_page=True)
```
Now `req.html` is an HTML object. To get the raw text or the HTML as a string you can use:
```
text = req.text
```
or:
```
text = req.html.html
```
Then you can parse your `text` string, e.g. with Beautiful Soup. |
44,014,722 | I built an npm module named `emeraldfw` and published it. My `package.json` file is
```
{
"name": "emeraldfw",
"version": "0.6.0",
"bin": "./emeraldfw.js",
"description": "Emerald Framework is a language-agnostig web development framework, designed to make developer's lives easier and fun while coding.",
"main": "emeraldfw.js",
"directories": {
"example": "examples",
"test": "test"
},
"scripts": {
"test": "mocha"
},
"repository": {
"type": "git",
"url": "git+https://github.com/EdDeAlmeidaJr/emeraldfw.git"
},
"keywords": [
"web",
"development",
"framework",
"language",
"agnostic",
"react"
],
"author": "Ed de Almeida",
"license": "MIT",
"bugs": {
"url": "https://github.com/EdDeAlmeidaJr/emeraldfw/issues"
},
"homepage": "https://github.com/EdDeAlmeidaJr/emeraldfw#readme",
"devDependencies": {
"jshint": "^2.9.4",
"mocha": "^3.3.0"
},
"dependencies": {
"jsonfile": "^3.0.0",
"react": "^15.5.4",
"vorpal": "^1.12.0"
}
}
```
As you may see, I declared a `"bin": "./emeraldfw.js"` binary, which corresponds to the application itself. The `package.json` documentation says this is going to create a link to the application executable at node.js bin/ directory. This worked fine, but when I install it globally (`npm install emeraldfw -g`) and then run it from the command line I receive an error message[](https://i.stack.imgur.com/XveVy.png)
All other node modules are working fine and my application is passing in all tests and when I run it directly inside the development directory (with `node emeraldfw.js`) it works really fine.
I'm not a node.js expert and after having fought this error for two days, here I am to ask for help.
Any ideas?
**EDIT:**
I checked the permissions for my node binary (emeraldfw.js) and it belongs to edvaldo:edvaldo, my user and group. And it is with executable permissions set. I should have no permission issues inside my own area with these settings, don't you think? | 2017/05/17 | [
"https://Stackoverflow.com/questions/44014722",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5021963/"
] | try using [`time.sleep(t)`](https://docs.python.org/2/library/time.html#time.sleep)
```
response = requests.get(url)
time.sleep(5) # suspend execution for 5 secs
html = response.text
``` | basically you can sleep just before making the request, as below (note that `time.sleep()` runs before `requests.get()` is called and returns `None`, so it cannot act as a real request parameter):
```
import requests
import time
url = "http://XXXXX…"
seconds = 5
time.sleep(seconds) #for example 5 seconds
html = requests.get(url).text
``` |
44,014,722 | I built an npm module named `emeraldfw` and published it. My `package.json` file is
```
{
"name": "emeraldfw",
"version": "0.6.0",
"bin": "./emeraldfw.js",
"description": "Emerald Framework is a language-agnostig web development framework, designed to make developer's lives easier and fun while coding.",
"main": "emeraldfw.js",
"directories": {
"example": "examples",
"test": "test"
},
"scripts": {
"test": "mocha"
},
"repository": {
"type": "git",
"url": "git+https://github.com/EdDeAlmeidaJr/emeraldfw.git"
},
"keywords": [
"web",
"development",
"framework",
"language",
"agnostic",
"react"
],
"author": "Ed de Almeida",
"license": "MIT",
"bugs": {
"url": "https://github.com/EdDeAlmeidaJr/emeraldfw/issues"
},
"homepage": "https://github.com/EdDeAlmeidaJr/emeraldfw#readme",
"devDependencies": {
"jshint": "^2.9.4",
"mocha": "^3.3.0"
},
"dependencies": {
"jsonfile": "^3.0.0",
"react": "^15.5.4",
"vorpal": "^1.12.0"
}
}
```
As you may see, I declared a `"bin": "./emeraldfw.js"` binary, which corresponds to the application itself. The `package.json` documentation says this is going to create a link to the application executable at node.js bin/ directory. This worked fine, but when I install it globally (`npm install emeraldfw -g`) and then run it from the command line I receive an error message[](https://i.stack.imgur.com/XveVy.png)
All other node modules are working fine and my application is passing in all tests and when I run it directly inside the development directory (with `node emeraldfw.js`) it works really fine.
I'm not a node.js expert and after having fought this error for two days, here I am to ask for help.
Any ideas?
**EDIT:**
I checked the permissions for my node binary (emeraldfw.js) and it belongs to edvaldo:edvaldo, my user and group. And it is with executable permissions set. I should have no permission issues inside my own area with these settings, don't you think? | 2017/05/17 | [
"https://Stackoverflow.com/questions/44014722",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5021963/"
] | try using [`time.sleep(t)`](https://docs.python.org/2/library/time.html#time.sleep)
```
response = requests.get(url)
time.sleep(5) # suspend execution for 5 secs
html = response.text
``` | I have found the library `requests-html` handy for this purpose, though mostly I use Selenium (as already proposed in Danny's answer).
```
from typing import cast
from requests_html import HTMLSession, HTMLResponse
session = HTMLSession()
req = cast(HTMLResponse, session.get("http://XXXXX"))
req.html.render(sleep=5, keep_page=True)
```
Now `req.html` is an HTML object. To get the raw text or the HTML as a string you can use:
```
text = req.text
```
or:
```
text = req.html.html
```
Then you can parse your `text` string, e.g. with Beautiful Soup. |
44,014,722 | I built an npm module named `emeraldfw` and published it. My `package.json` file is
```
{
"name": "emeraldfw",
"version": "0.6.0",
"bin": "./emeraldfw.js",
"description": "Emerald Framework is a language-agnostig web development framework, designed to make developer's lives easier and fun while coding.",
"main": "emeraldfw.js",
"directories": {
"example": "examples",
"test": "test"
},
"scripts": {
"test": "mocha"
},
"repository": {
"type": "git",
"url": "git+https://github.com/EdDeAlmeidaJr/emeraldfw.git"
},
"keywords": [
"web",
"development",
"framework",
"language",
"agnostic",
"react"
],
"author": "Ed de Almeida",
"license": "MIT",
"bugs": {
"url": "https://github.com/EdDeAlmeidaJr/emeraldfw/issues"
},
"homepage": "https://github.com/EdDeAlmeidaJr/emeraldfw#readme",
"devDependencies": {
"jshint": "^2.9.4",
"mocha": "^3.3.0"
},
"dependencies": {
"jsonfile": "^3.0.0",
"react": "^15.5.4",
"vorpal": "^1.12.0"
}
}
```
As you may see, I declared a `"bin": "./emeraldfw.js"` binary, which corresponds to the application itself. The `package.json` documentation says this is going to create a link to the application executable at node.js bin/ directory. This worked fine, but when I install it globally (`npm install emeraldfw -g`) and then run it from the command line I receive an error message[](https://i.stack.imgur.com/XveVy.png)
All other node modules are working fine and my application is passing in all tests and when I run it directly inside the development directory (with `node emeraldfw.js`) it works really fine.
I'm not a node.js expert and after having fought this error for two days, here I am to ask for help.
Any ideas?
**EDIT:**
I checked the permissions for my node binary (emeraldfw.js) and it belongs to edvaldo:edvaldo, my user and group. And it is with executable permissions set. I should have no permission issues inside my own area with these settings, don't you think? | 2017/05/17 | [
"https://Stackoverflow.com/questions/44014722",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5021963/"
] | I have found the library `requests-html` handy for this purpose, though mostly I use Selenium (as already proposed in Danny's answer).
```
from typing import cast
from requests_html import HTMLSession, HTMLResponse
session = HTMLSession()
req = cast(HTMLResponse, session.get("http://XXXXX"))
req.html.render(sleep=5, keep_page=True)
```
Now `req.html` is an HTML object. To get the raw text or the HTML as a string you can use:
```
text = req.text
```
or:
```
text = req.html.html
```
Then you can parse your `text` string, e.g. with Beautiful Soup. | basically you can sleep just before making the request, as below (note that `time.sleep()` runs before `requests.get()` is called and returns `None`, so it cannot act as a real request parameter):
```
import requests
import time
url = "http://XXXXX…"
seconds = 5
time.sleep(seconds) #for example 5 seconds
html = requests.get(url).text
``` |
24,950,742 | I am trying to iterate over both parts of the **JSON** (Players and Buildings) so that I can get a new result in **jQuery**.
I have **JSON** with two parts: one holding information about Players and a second holding information about the Building related to each Player.
I want to parse it so that I can get each Player and its building name.
**My Actual JSON result**
```
{
"Players": [
{
"id": "35",
"building_id": "8",
"room_num": "101"
},
{
"id": "36",
"building_id": "9",
"room_num": "102"
},
{
"id": "37",
"building_id": "10",
"room_num": "103"
},
{
"id": "38",
"building_id": "11",
"room_num": "104"
}
],
"Buildings": [
{
"id": "8",
"name": "ABC"
},
{
"id": "9",
"name": "DEF"
},
{
"id": "10",
"name": "GHI"
},
{
"id": "11",
"name": "JKL"
}
]
}
```
**Need this**
```
{
"information": [
{
"player_id": "35",
"Buildings_name": "ABC"
},
{
"player_id": "36",
"Buildings_name": "DEF"
},
{
"player_id": "37",
"Buildings_name": "GHI"
},
{
"player_id": "38",
"Buildings_name": "JKL"
}
]
}
``` | 2014/07/25 | [
"https://Stackoverflow.com/questions/24950742",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1868277/"
] | Here you go. This goes over each player, checks whether there is a matching building, and maps the pair to the new structure. Players whose `building_id` has no match keep an undefined `building_name`, and buildings with no players are not included.
```
var x = {
"Players": [
{
"id": "35",
"building_id": "8",
"room_num": "101",
},
{
"id": "36",
"building_id": "9",
"room_num": "102",
},
{
"id": "37",
"building_id": "10",
"room_num": "103",
},
{
"id": "38",
"building_id": "11",
"room_num": "104",
}
],
"Buildings": [
{
"id": "8",
"name": "ABC"
},
{
"id": "9",
"name": "DEF"
},
{
"id": "10",
"name": "GHI"
},
{
"id": "11",
"name": "JKL"
}
]
};
var res = $.map(x.Players, function(item) {
    // look up the matching building once instead of running $.grep twice
    var match = $.grep(x.Buildings, function(i) {
        return i.id == item.building_id;
    });
    return {
        player_id: item.id,
        building_name: match.length ? match[0].name : undefined
    };
});
```
and if you want to filter out values that do not have a relationship, e.g. an INNER join:
```
var resInnerJoin = $.grep($.map(x.Players, function(item) {
    var match = $.grep(x.Buildings, function(i) {
        return i.id == item.building_id;
    });
    return {
        player_id: item.id,
        building_name: match.length ? match[0].name : undefined
    };
}), function(it) {
    return it.building_name != undefined;
});
``` | If you need it in PHP :
```
$json = '{...}';
// create a PHP array with your json data.
$array = json_decode($json, true);
// make an array with buildings informations and with building id as key
$buildings = array();
foreach( $array['Buildings'] as $b ) $buildings[$b['id']] = $b;
$informations = array();
for ( $i = 0 ; $i < count($array['Players']) ; $i++ )
{
$informations[$i] = array(
'player_id' => $array['Players'][$i]['id'],
'Buildings_name' => $buildings[$array['Players'][$i]['building_id']]['name']
);
}
$informations = json_encode($informations);
var_dump($informations);
``` |
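For comparison, the same id-based join that the jQuery and PHP answers build (index Buildings by id once, then map each Player to its building name) can be sketched in Python. This is purely illustrative, using the data from the question:

```python
data = {
    "Players": [
        {"id": "35", "building_id": "8", "room_num": "101"},
        {"id": "36", "building_id": "9", "room_num": "102"},
        {"id": "37", "building_id": "10", "room_num": "103"},
        {"id": "38", "building_id": "11", "room_num": "104"},
    ],
    "Buildings": [
        {"id": "8", "name": "ABC"},
        {"id": "9", "name": "DEF"},
        {"id": "10", "name": "GHI"},
        {"id": "11", "name": "JKL"},
    ],
}

# Index buildings by id once, then map each player to its building name.
buildings_by_id = {b["id"]: b["name"] for b in data["Buildings"]}
information = [
    {"player_id": p["id"], "Buildings_name": buildings_by_id.get(p["building_id"])}
    for p in data["Players"]
]
```

A missing `building_id` simply yields `None` here, mirroring the `undefined` in the jQuery version.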
24,343,943 | I am currently getting absolutely bally nowhere with a problem with Hibernate where I am given the message:
```
Your page request has caused a QueryException: could not resolve property: PERSON_ID of: library.model.Person [FROM library.model.Person p JOIN Book b ON p.PERSON_ID = b.PERSON_ID WHERE p.PERSON_ID = 2] error:
```
In the method below:
```
@Override
public Person getPersonAndBooks(Integer personId) {
    logger.info(PersonDAOImpl.class.getName() + ".listBooksForPerson() method called.");
    Session session = sessionFactory.openSession();
    try {
        Query query = session.createQuery("FROM Person p JOIN Book b ON p.PERSON_ID = b.PERSON_ID WHERE p.PERSON_ID = " + personId);
        List<Person> persons = query.setResultTransformer(Transformers.aliasToBean(Person.class)).list();
        List<Book> books = persons.get(0).getBooks();
        for (Book b : books) {
            System.out.println("Here " + b.toString());
        }
        return persons.get(0);
    }
    finally {
        session.close();
    }
}
```
But I see nothing wrong in the SQL and it works perfectly well in Apache Derby.
I've tried a number of things on StackOverflow and elsewhere but nothing resolves the issue.
There are two classes in a simple application:
```
@Entity
@Table(name = "PERSON")
public class Person implements Serializable {
// Attributes.
@Id
@Column(name="PERSON_ID", unique=true, nullable=false)
@GeneratedValue(strategy=GenerationType.AUTO)
private Integer personId;
@Column(name="NAME", nullable=false, length=50)
private String name;
@Column(name="ADDRESS", nullable=false, length=100)
private String address;
@Column(name="TELEPHONE", nullable=false, length=10)
private String telephone;
@Column(name="EMAIL", nullable=false, length=50)
private String email;
@OneToMany(cascade=CascadeType.ALL, fetch=FetchType.LAZY)
private List<Book> books;
```
And Book:
```
@Entity
@Table(name = "BOOK")
public class Book implements Serializable {
// Attributes.
@Id
@Column(name="BOOK_ID", unique=true, nullable=false)
@GeneratedValue(strategy=GenerationType.AUTO)
private Integer bookId;
@Column(name="AUTHOR", nullable=false, length=50)
private String author;
@Column(name="TITLE", nullable=false, length=50)
private String title;
@Column(name="DESCRIPTION", nullable=false, length=500)
private String description;
@Column(name="ONLOAN", nullable=false, length=5)
private String onLoan;
@ManyToOne
@JoinColumn(name="person_id")
private Person person;
```
Each maps to database tables:
```
CREATE TABLE PERSON (
PERSON_ID INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
NAME VARCHAR(50) NOT NULL,
ADDRESS VARCHAR(100) NOT NULL,
TELEPHONE VARCHAR(10) NOT NULL,
EMAIL VARCHAR(50) NOT NULL,
CONSTRAINT PRIMARY_KEY_PERSON PRIMARY KEY(PERSON_ID)
)
```
And Book is:
```
CREATE TABLE BOOK (
BOOK_ID INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
AUTHOR VARCHAR(50) NOT NULL,
TITLE VARCHAR(100) NOT NULL,
DESCRIPTION VARCHAR(500) NOT NULL,
ONLOAN VARCHAR(5) NOT NULL,
PERSON_ID INTEGER,
CONSTRAINT PRIMARY_KEY_BOOK PRIMARY KEY(BOOK_ID),
CONSTRAINT FOREIGN_KEY_BOOK FOREIGN KEY(PERSON_ID) REFERENCES PERSON(PERSON_ID)
)
```
Can someone please tell me where I am going wrong?
And, once the SQL finally works, whether I am using the right method to convert the output into a Person object, where a Person has an ArrayList of Book?
My method to get a books for a Person is:
```
// Calls books.jsp for a Person.
@RequestMapping(value = "/books", method = RequestMethod.GET)
public String listBooks(@RequestParam("personId") String personId,
Model model) {
logger.info(PersonController.class.getName() + ".listBooks() method called.");
Person person = personService.get(Integer.parseInt(personId));
List<Book> books = bookService.listBooksForPerson(Integer.parseInt(personId));
// Set view.
model.addAttribute("person", person);
model.addAttribute("books", books);
return "view/books";
}
```
Which does work.
Full stack trace follows:
>
> Your page request has caused a LazyInitializationException: failed to lazily initialize a collection of role: library.model.Person.books, could not initialize proxy - no Session error:
>
>
>
```
org.hibernate.collection.internal.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:575)
org.hibernate.collection.internal.AbstractPersistentCollection.withTemporarySessionIfNeeded(AbstractPersistentCollection.java:214)
org.hibernate.collection.internal.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:554)
org.hibernate.collection.internal.AbstractPersistentCollection.read(AbstractPersistentCollection.java:142)
org.hibernate.collection.internal.PersistentBag.iterator(PersistentBag.java:294)
library.controller.PersonController.getLogin(PersonController.java:104)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:483)
org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:215)
org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:132)
org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:749)
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:690)
org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:83)
org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:945)
org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:876)
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:961)
org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:852)
javax.servlet.http.HttpServlet.service(HttpServlet.java:621)
org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:837)
javax.servlet.http.HttpServlet.service(HttpServlet.java:722)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:304)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:240)
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:164)
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:498)
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:164)
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:100)
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:562)
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:394)
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:243)
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:188)
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:302)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:744)
``` | 2014/06/21 | [
"https://Stackoverflow.com/questions/24343943",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/953331/"
] | You wrote your HQL query as if it was a SQL query. It's not. HQL and JPQL are different languages.
HQL never uses table and column names. It always uses entity names and their persistent field names (i.e. personId and not PERSON\_ID) and their associations. Joins in HQL consist of navigating through associations between entities. HQL queries thus don't have ON clauses.
A correct HQL query would be
```
select p from Person p join p.books where p.id = :personId
```
Note that I use a named parameter in the query, which must be bound, instead of concatenation, which opens the door to SQL injection attacks (just as in SQL).
The above query would select the person identified by the given ID, unless it doesn't have any book. You don't need any result transformer to get the result of this query: it's a Person instance.
I strongly suggest you read [the Hibernate documentation](http://docs.jboss.org/hibernate/core/4.3/manual/en-US/html_single/#queryhql), which explains HQL queries.
That said, you don't need any query to implement a method to get a person by ID. All you need is
```
Person p = (Person) session.get(Person.class, personId);
// now you can display the person and its books.
``` | If I understand correctly, after you get the Person from the controller via `return (Person) session.get(Person.class, personId);`, this person instance does not have its books, because books are loaded lazily. When you call person.getBooks() it requires an open session to load the books, but in your DAO the session has already been closed in the finally block, which causes `LazyInitializationException: failed to lazily initialize a collection of role: library.model.Person.books, could not initialize proxy - no Session error:`
Try to load books EAGERLY.
Change your code
```
@OneToMany(cascade=CascadeType.ALL, fetch=FetchType.EAGER)
private List<Book> books;
``` |
24,343,943 | I am currently getting absolutely bally nowhere with a problem with Hibernate where I am given the message:
```
Your page request has caused a QueryException: could not resolve property: PERSON_ID of: library.model.Person [FROM library.model.Person p JOIN Book b ON p.PERSON_ID = b.PERSON_ID WHERE p.PERSON_ID = 2] error:
```
In the method below:
```
@Override
public Person getPersonAndBooks(Integer personId) {
    logger.info(PersonDAOImpl.class.getName() + ".listBooksForPerson() method called.");
    Session session = sessionFactory.openSession();
    try {
        Query query = session.createQuery("FROM Person p JOIN Book b ON p.PERSON_ID = b.PERSON_ID WHERE p.PERSON_ID = " + personId);
        List<Person> persons = query.setResultTransformer(Transformers.aliasToBean(Person.class)).list();
        List<Book> books = persons.get(0).getBooks();
        for (Book b : books) {
            System.out.println("Here " + b.toString());
        }
        return persons.get(0);
    }
    finally {
        session.close();
    }
}
```
But I see nothing wrong in the SQL and it works perfectly well in Apache Derby.
I've tried a number of things on StackOverflow and elsewhere but nothing resolves the issue.
There are two classes in a simple application:
```
@Entity
@Table(name = "PERSON")
public class Person implements Serializable {
// Attributes.
@Id
@Column(name="PERSON_ID", unique=true, nullable=false)
@GeneratedValue(strategy=GenerationType.AUTO)
private Integer personId;
@Column(name="NAME", nullable=false, length=50)
private String name;
@Column(name="ADDRESS", nullable=false, length=100)
private String address;
@Column(name="TELEPHONE", nullable=false, length=10)
private String telephone;
@Column(name="EMAIL", nullable=false, length=50)
private String email;
@OneToMany(cascade=CascadeType.ALL, fetch=FetchType.LAZY)
private List<Book> books;
```
And Book:
```
@Entity
@Table(name = "BOOK")
public class Book implements Serializable {
// Attributes.
@Id
@Column(name="BOOK_ID", unique=true, nullable=false)
@GeneratedValue(strategy=GenerationType.AUTO)
private Integer bookId;
@Column(name="AUTHOR", nullable=false, length=50)
private String author;
@Column(name="TITLE", nullable=false, length=50)
private String title;
@Column(name="DESCRIPTION", nullable=false, length=500)
private String description;
@Column(name="ONLOAN", nullable=false, length=5)
private String onLoan;
@ManyToOne
@JoinColumn(name="person_id")
private Person person;
```
Each maps to database tables:
```
CREATE TABLE PERSON (
PERSON_ID INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
NAME VARCHAR(50) NOT NULL,
ADDRESS VARCHAR(100) NOT NULL,
TELEPHONE VARCHAR(10) NOT NULL,
EMAIL VARCHAR(50) NOT NULL,
CONSTRAINT PRIMARY_KEY_PERSON PRIMARY KEY(PERSON_ID)
)
```
And Book is:
```
CREATE TABLE BOOK (
BOOK_ID INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
AUTHOR VARCHAR(50) NOT NULL,
TITLE VARCHAR(100) NOT NULL,
DESCRIPTION VARCHAR(500) NOT NULL,
ONLOAN VARCHAR(5) NOT NULL,
PERSON_ID INTEGER,
CONSTRAINT PRIMARY_KEY_BOOK PRIMARY KEY(BOOK_ID),
CONSTRAINT FOREIGN_KEY_BOOK FOREIGN KEY(PERSON_ID) REFERENCES PERSON(PERSON_ID)
)
```
Can someone please tell me where I am going wrong?
And if when the SQL finally works, if I am using the right method to convert the output into a Person object where a Person has an arraylist of Book?
My method to get a books for a Person is:
```
// Calls books.jsp for a Person.
@RequestMapping(value = "/books", method = RequestMethod.GET)
public String listBooks(@RequestParam("personId") String personId,
Model model) {
logger.info(PersonController.class.getName() + ".listBooks() method called.");
Person person = personService.get(Integer.parseInt(personId));
List<Book> books = bookService.listBooksForPerson(Integer.parseInt(personId));
// Set view.
model.addAttribute("person", person);
model.addAttribute("books", books);
return "view/books";
}
```
Which does work.
Full stack trace follows:
>
> Your page request has caused a LazyInitializationException: failed to lazily initialize a collection of role: library.model.Person.books, could not initialize proxy - no Session error:
>
>
>
```
org.hibernate.collection.internal.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:575)
org.hibernate.collection.internal.AbstractPersistentCollection.withTemporarySessionIfNeeded(AbstractPersistentCollection.java:214)
org.hibernate.collection.internal.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:554)
org.hibernate.collection.internal.AbstractPersistentCollection.read(AbstractPersistentCollection.java:142)
org.hibernate.collection.internal.PersistentBag.iterator(PersistentBag.java:294)
library.controller.PersonController.getLogin(PersonController.java:104)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:483)
org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:215)
org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:132)
org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:749)
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:690)
org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:83)
org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:945)
org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:876)
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:961)
org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:852)
javax.servlet.http.HttpServlet.service(HttpServlet.java:621)
org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:837)
javax.servlet.http.HttpServlet.service(HttpServlet.java:722)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:304)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:240)
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:164)
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:498)
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:164)
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:100)
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:562)
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:394)
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:243)
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:188)
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:302)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:744)
``` | 2014/06/21 | [
"https://Stackoverflow.com/questions/24343943",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/953331/"
] | You wrote your HQL query as if it were a SQL query. It's not: HQL (like JPQL) is a different language from SQL.
HQL never uses table and column names. It always uses entity names, their persistent field names (i.e. personId, not PERSON\_ID), and their associations. Joins in HQL consist of navigating through the associations between entities, so HQL queries don't have ON clauses.
A correct HQL query would be
```
select p from Person p join p.books where p.id = :personId
```
Note that I use a named parameter in the query, which must be bound, instead of concatenation, which opens the door to SQL injection attacks (just as in SQL).
The above query would select the person identified by the given ID, unless it doesn't have any books (the inner join excludes persons without them). You don't need any result transformer to get the result of this query: it's a Person instance.
I strongly suggest you read [the Hibernate documentation](http://docs.jboss.org/hibernate/core/4.3/manual/en-US/html_single/#queryhql), which explains HQL queries.
That said, you don't need any query to implement a method to get a person by ID. All you need is
```
Person p = (Person) session.get(Person.class, personId);
// now you can display the person and its books.
``` | In HQL there is no need to write an ON condition in joins: Hibernate generates it at runtime from the mapping. Also, only the POJO property names should be used in HQL:
FROM Person p JOIN p.books b WHERE p.personId = :personId |
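To make the named-parameter point concrete, here is a small, self-contained illustration (plain Java, no Hibernate required; the class and method names are invented for this sketch). Concatenating user input splices it straight into the query text, while a fixed template with a placeholder such as `:personId` leaves the value to be bound separately, e.g. via `query.setParameter("personId", value)` on a real Hibernate `Query`.

```java
public class QueryBuildingDemo {

    // Unsafe: the user's input becomes part of the query text itself.
    static String concatenated(String userInput) {
        return "select p from Person p where p.id = " + userInput;
    }

    // Safer: the query text is a fixed template; the value is bound
    // separately by the persistence provider, never spliced into the text.
    static String template() {
        return "select p from Person p where p.id = :personId";
    }

    public static void main(String[] args) {
        String malicious = "1 or 1=1";
        // The injected clause is now part of the query text:
        System.out.println(concatenated(malicious));
        // The template never changes, whatever the user types:
        System.out.println(template());
    }
}
```

The same reasoning applies to plain SQL and `PreparedStatement` placeholders: the fix is to keep user input out of the query string entirely.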
24,343,943 | I am currently getting absolutely bally nowhere with a problem with Hibernate where I am given the message:
```
Your page request has caused a QueryException: could not resolve property: PERSON_ID of: library.model.Person [FROM library.model.Person p JOIN Book b ON p.PERSON_ID = b.PERSON_ID WHERE p.PERSON_ID = 2] error:
```
In the method below:
```
@Override
public Person getPersonAndBooks(Integer personId) {
logger.info(PersonDAOImpl.class.getName() + ".listBooksForPerson() method called.");
Session session = sessionFactory.openSession();
try {
Query query = session.createQuery("FROM Person p JOIN Book b ON p.PERSON_ID = b.PERSON_ID WHERE p.PERSON_ID = " + personId);
List<Person> persons = query.setResultTransformer(Transformers.aliasToBean(Person.class)).list();
List<Book> books = persons.get(0).getBooks();
for (Book b : books) {
System.out.println("Here " + b.toString());
}
return persons.get(0);
}
finally {
session.close();
}
}
```
But I see nothing wrong in the SQL and it works perfectly well in Apache Derby.
I've tried a number of things on StackOverflow and elsewhere but nothing resolves the issue.
There are two classes in a simple application:
```
@Entity
@Table(name = "PERSON")
public class Person implements Serializable {
// Attributes.
@Id
@Column(name="PERSON_ID", unique=true, nullable=false)
@GeneratedValue(strategy=GenerationType.AUTO)
private Integer personId;
@Column(name="NAME", nullable=false, length=50)
private String name;
@Column(name="ADDRESS", nullable=false, length=100)
private String address;
@Column(name="TELEPHONE", nullable=false, length=10)
private String telephone;
@Column(name="EMAIL", nullable=false, length=50)
private String email;
@OneToMany(cascade=CascadeType.ALL, fetch=FetchType.LAZY)
private List<Book> books;
```
And Book:
```
@Entity
@Table(name = "BOOK")
public class Book implements Serializable {
// Attributes.
@Id
@Column(name="BOOK_ID", unique=true, nullable=false)
@GeneratedValue(strategy=GenerationType.AUTO)
private Integer bookId;
@Column(name="AUTHOR", nullable=false, length=50)
private String author;
@Column(name="TITLE", nullable=false, length=50)
private String title;
@Column(name="DESCRIPTION", nullable=false, length=500)
private String description;
@Column(name="ONLOAN", nullable=false, length=5)
private String onLoan;
@ManyToOne
@JoinColumn(name="person_id")
private Person person;
```
Each maps to database tables:
```
CREATE TABLE PERSON (
PERSON_ID INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
NAME VARCHAR(50) NOT NULL,
ADDRESS VARCHAR(100) NOT NULL,
TELEPHONE VARCHAR(10) NOT NULL,
EMAIL VARCHAR(50) NOT NULL,
CONSTRAINT PRIMARY_KEY_PERSON PRIMARY KEY(PERSON_ID)
)
```
And Book is:
```
CREATE TABLE BOOK (
BOOK_ID INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
AUTHOR VARCHAR(50) NOT NULL,
TITLE VARCHAR(100) NOT NULL,
DESCRIPTION VARCHAR(500) NOT NULL,
ONLOAN VARCHAR(5) NOT NULL,
PERSON_ID INTEGER,
CONSTRAINT PRIMARY_KEY_BOOK PRIMARY KEY(BOOK_ID),
CONSTRAINT FOREIGN_KEY_BOOK FOREIGN KEY(PERSON_ID) REFERENCES PERSON(PERSON_ID)
)
```
Can someone please tell me where I am going wrong?
And if when the SQL finally works, if I am using the right method to convert the output into a Person object where a Person has an arraylist of Book?
My method to get a books for a Person is:
```
// Calls books.jsp for a Person.
@RequestMapping(value = "/books", method = RequestMethod.GET)
public String listBooks(@RequestParam("personId") String personId,
Model model) {
logger.info(PersonController.class.getName() + ".listBooks() method called.");
Person person = personService.get(Integer.parseInt(personId));
List<Book> books = bookService.listBooksForPerson(Integer.parseInt(personId));
// Set view.
model.addAttribute("person", person);
model.addAttribute("books", books);
return "view/books";
}
```
Which does work.
Full stack trace follows:
>
> Your page request has caused a LazyInitializationException: failed to lazily initialize a collection of role: library.model.Person.books, could not initialize proxy - no Session error:
>
>
>
```
org.hibernate.collection.internal.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:575)
org.hibernate.collection.internal.AbstractPersistentCollection.withTemporarySessionIfNeeded(AbstractPersistentCollection.java:214)
org.hibernate.collection.internal.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:554)
org.hibernate.collection.internal.AbstractPersistentCollection.read(AbstractPersistentCollection.java:142)
org.hibernate.collection.internal.PersistentBag.iterator(PersistentBag.java:294)
library.controller.PersonController.getLogin(PersonController.java:104)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:483)
org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:215)
org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:132)
org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:749)
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:690)
org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:83)
org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:945)
org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:876)
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:961)
org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:852)
javax.servlet.http.HttpServlet.service(HttpServlet.java:621)
org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:837)
javax.servlet.http.HttpServlet.service(HttpServlet.java:722)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:304)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:240)
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:164)
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:498)
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:164)
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:100)
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:562)
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:394)
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:243)
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:188)
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:302)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:744)
``` | 2014/06/21 | [
"https://Stackoverflow.com/questions/24343943",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/953331/"
] | Safer still than HQL is using the [criteria API](http://www.mkyong.com/hibernate/hibernate-criteria-examples/):
```
@RequestMapping(value = "/books", method = RequestMethod.GET)
public String listBooks(@RequestParam("personId") String personId,
Model model) {
Criteria criteria = session.createCriteria(Person.class);
criteria.add(Restrictions.eq("personId", Integer.valueOf(personId)));
Person me = (Person) criteria.uniqueResult();
List<Book> myBooks = me.getBooks();
model.addAttribute("person", me);
model.addAttribute("books", myBooks);
return "view/books";
}
```
**UPDATE**
```
@Entity
@Table(name = "PERSON")
public class Person implements Serializable {
// Attributes.
@Id
@Column(name="PERSON_ID", unique=true, nullable=false)
@GeneratedValue(strategy=GenerationType.AUTO)
private Integer personId;
@Column(name="NAME", nullable=false, length=50)
private String name;
@Column(name="ADDRESS", nullable=false, length=100)
private String address;
@Column(name="TELEPHONE", nullable=false, length=10)
private String telephone;
@Column(name="EMAIL", nullable=false, length=50)
private String email;
@OneToMany(cascade=CascadeType.ALL, fetch=FetchType.EAGER)
private List<Book> books;
}
``` | If I understand correctly: after the controller gets the Person via `return (Person) session.get(Person.class, personId);`, the instance does not yet have its books, because they are loaded lazily. When you later call `person.getBooks()`, an open session is required to load them, but the session was already closed in your DAO's finally block, which internally causes `LazyInitializationException: failed to lazily initialize a collection of role: library.model.Person.books, could not initialize proxy - no Session error:`
Try loading the books EAGERLY.
Change your mapping:
```
@OneToMany(cascade=CascadeType.ALL, fetch=FetchType.EAGER)
private List<Book> books;
``` |
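The failure mode behind this exception is easier to see in isolation. The sketch below is a hypothetical, Hibernate-free model of a lazy collection proxy (all class names are invented): it materializes its contents only while the owning session is open, and fails once the session is closed, which is exactly what happens when the DAO closes the session in its finally block and the view later calls `person.getBooks()`.

```java
import java.util.Arrays;
import java.util.List;

public class LazyLoadingDemo {

    // Stand-in for a Hibernate Session: only tracks open/closed state.
    static class FakeSession {
        private boolean open = true;
        void close() { open = false; }
        boolean isOpen() { return open; }
    }

    // Stand-in for a lazy collection proxy: loads on first access,
    // but only while the owning session is still open.
    static class LazyBooks {
        private final FakeSession session;
        private List<String> loaded;

        LazyBooks(FakeSession session) { this.session = session; }

        List<String> get() {
            if (loaded == null) {
                if (!session.isOpen()) {
                    throw new IllegalStateException(
                            "could not initialize proxy - no Session");
                }
                loaded = Arrays.asList("Dune", "Emma"); // pretend DB fetch
            }
            return loaded;
        }
    }

    public static void main(String[] args) {
        FakeSession session = new FakeSession();
        LazyBooks books = new LazyBooks(session);
        session.close();          // the DAO's finally block
        try {
            books.get();          // the view calls person.getBooks()
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Eager fetching avoids this because the collection is populated while the session is still open; touching the collection inside the DAO, before the session closes, has the same effect.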
24,343,943 | I am currently getting absolutely bally nowhere with a problem with Hibernate where I am given the message:
```
Your page request has caused a QueryException: could not resolve property: PERSON_ID of: library.model.Person [FROM library.model.Person p JOIN Book b ON p.PERSON_ID = b.PERSON_ID WHERE p.PERSON_ID = 2] error:
```
In the method below:
```
@Override
public Person getPersonAndBooks(Integer personId) {
logger.info(PersonDAOImpl.class.getName() + ".listBooksForPerson() method called.");
Session session = sessionFactory.openSession();
try {
Query query = session.createQuery("FROM Person p JOIN Book b ON p.PERSON_ID = b.PERSON_ID WHERE p.PERSON_ID = " + personId);
List<Person> persons = query.setResultTransformer(Transformers.aliasToBean(Person.class)).list();
List<Book> books = persons.get(0).getBooks();
for (Book b : books) {
System.out.println("Here " + b.toString());
}
return persons.get(0);
}
finally {
session.close();
}
}
```
But I see nothing wrong in the SQL and it works perfectly well in Apache Derby.
I've tried a number of things on StackOverflow and elsewhere but nothing resolves the issue.
There are two classes in a simple application:
```
@Entity
@Table(name = "PERSON")
public class Person implements Serializable {
// Attributes.
@Id
@Column(name="PERSON_ID", unique=true, nullable=false)
@GeneratedValue(strategy=GenerationType.AUTO)
private Integer personId;
@Column(name="NAME", nullable=false, length=50)
private String name;
@Column(name="ADDRESS", nullable=false, length=100)
private String address;
@Column(name="TELEPHONE", nullable=false, length=10)
private String telephone;
@Column(name="EMAIL", nullable=false, length=50)
private String email;
@OneToMany(cascade=CascadeType.ALL, fetch=FetchType.LAZY)
private List<Book> books;
```
And Book:
```
@Entity
@Table(name = "BOOK")
public class Book implements Serializable {
// Attributes.
@Id
@Column(name="BOOK_ID", unique=true, nullable=false)
@GeneratedValue(strategy=GenerationType.AUTO)
private Integer bookId;
@Column(name="AUTHOR", nullable=false, length=50)
private String author;
@Column(name="TITLE", nullable=false, length=50)
private String title;
@Column(name="DESCRIPTION", nullable=false, length=500)
private String description;
@Column(name="ONLOAN", nullable=false, length=5)
private String onLoan;
@ManyToOne
@JoinColumn(name="person_id")
private Person person;
```
Each maps to database tables:
```
CREATE TABLE PERSON (
PERSON_ID INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
NAME VARCHAR(50) NOT NULL,
ADDRESS VARCHAR(100) NOT NULL,
TELEPHONE VARCHAR(10) NOT NULL,
EMAIL VARCHAR(50) NOT NULL,
CONSTRAINT PRIMARY_KEY_PERSON PRIMARY KEY(PERSON_ID)
)
```
And Book is:
```
CREATE TABLE BOOK (
BOOK_ID INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
AUTHOR VARCHAR(50) NOT NULL,
TITLE VARCHAR(100) NOT NULL,
DESCRIPTION VARCHAR(500) NOT NULL,
ONLOAN VARCHAR(5) NOT NULL,
PERSON_ID INTEGER,
CONSTRAINT PRIMARY_KEY_BOOK PRIMARY KEY(BOOK_ID),
CONSTRAINT FOREIGN_KEY_BOOK FOREIGN KEY(PERSON_ID) REFERENCES PERSON(PERSON_ID)
)
```
Can someone please tell me where I am going wrong?
And if when the SQL finally works, if I am using the right method to convert the output into a Person object where a Person has an arraylist of Book?
My method to get a books for a Person is:
```
// Calls books.jsp for a Person.
@RequestMapping(value = "/books", method = RequestMethod.GET)
public String listBooks(@RequestParam("personId") String personId,
Model model) {
logger.info(PersonController.class.getName() + ".listBooks() method called.");
Person person = personService.get(Integer.parseInt(personId));
List<Book> books = bookService.listBooksForPerson(Integer.parseInt(personId));
// Set view.
model.addAttribute("person", person);
model.addAttribute("books", books);
return "view/books";
}
```
Which does work.
Full stack trace follows:
>
> Your page request has caused a LazyInitializationException: failed to lazily initialize a collection of role: library.model.Person.books, could not initialize proxy - no Session error:
>
>
>
```
org.hibernate.collection.internal.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:575)
org.hibernate.collection.internal.AbstractPersistentCollection.withTemporarySessionIfNeeded(AbstractPersistentCollection.java:214)
org.hibernate.collection.internal.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:554)
org.hibernate.collection.internal.AbstractPersistentCollection.read(AbstractPersistentCollection.java:142)
org.hibernate.collection.internal.PersistentBag.iterator(PersistentBag.java:294)
library.controller.PersonController.getLogin(PersonController.java:104)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:483)
org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:215)
org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:132)
org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:749)
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:690)
org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:83)
org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:945)
org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:876)
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:961)
org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:852)
javax.servlet.http.HttpServlet.service(HttpServlet.java:621)
org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:837)
javax.servlet.http.HttpServlet.service(HttpServlet.java:722)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:304)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:240)
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:164)
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:498)
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:164)
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:100)
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:562)
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:394)
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:243)
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:188)
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:302)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:744)
``` | 2014/06/21 | [
"https://Stackoverflow.com/questions/24343943",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/953331/"
] | Safer still than HQL is using the [criteria API](http://www.mkyong.com/hibernate/hibernate-criteria-examples/):
```
@RequestMapping(value = "/books", method = RequestMethod.GET)
public String listBooks(@RequestParam("personId") String personId,
Model model) {
Criteria criteria = session.createCriteria(Person.class);
criteria.add(Restrictions.eq("personId", Integer.valueOf(personId)));
Person me = (Person) criteria.uniqueResult();
List<Book> myBooks = me.getBooks();
model.addAttribute("person", me);
model.addAttribute("books", myBooks);
return "view/books";
}
```
**UPDATE**
```
@Entity
@Table(name = "PERSON")
public class Person implements Serializable {
// Attributes.
@Id
@Column(name="PERSON_ID", unique=true, nullable=false)
@GeneratedValue(strategy=GenerationType.AUTO)
private Integer personId;
@Column(name="NAME", nullable=false, length=50)
private String name;
@Column(name="ADDRESS", nullable=false, length=100)
private String address;
@Column(name="TELEPHONE", nullable=false, length=10)
private String telephone;
@Column(name="EMAIL", nullable=false, length=50)
private String email;
@OneToMany(cascade=CascadeType.ALL, fetch=FetchType.EAGER)
private List<Book> books;
}
``` | In HQL there is no need to write an ON condition in joins: Hibernate generates it at runtime from the mapping. Also, only the POJO property names should be used in HQL:
FROM Person p JOIN p.books b WHERE p.personId = :personId |
16,435 | I'm working on *yet* another time-tracking web application. This application has, next to the normal site, a mobile version. Now I wonder, is it ok for the mobile version to be a limited subset of the full site?
For example, the mobile version shows a very limited overview of today's activities, while in the full site you get a full blown view of today's activities. Also, in the full site you can view statistics and manage *all* settings, which you can't in the mobile version.
Do users care about this? The way I see it (so far) is that the mobile version should just have the functionalities that you really need being on the road. Also, if I would make the mobile version contain all the features, I feel like it would be too bloated and I would end up with a lot of usability questions/problems. | 2012/01/22 | [
"https://ux.stackexchange.com/questions/16435",
"https://ux.stackexchange.com",
"https://ux.stackexchange.com/users/11078/"
] | Mobile websites should not be mobile versions of desktop websites - they should be a service or product *for mobile usage*. That means that yes, if certain content isn't relevant 'on the road', or if its inclusion makes it harder to provide an interface that works better in a mobile context, you should consider chopping it.
And remember, if needs be, you can always offer the ability to switch back into the non-mobile version. Try to set cookies or session data so users don't need to do so repeatedly, though.
Another, practical matter: delivering your entire desktop service to mobile is going to be a big, chunky, waterfall project. And waterfall sucks. Stay agile and deliver iteratively, and only adapt your mobile service to include desktop extras as and when you can prove the business need with empirical evidence. The alternative is madness. | **Certainly**.
Mobile users have much less attention and time to spend on your site (as discussed in the excellent resource [Mobile First](http://www.abookapart.com/products/mobile-first) by Luke Wroblewski). This means extraneous, rarely used actions will get in the way.
[Take a page from mobile apps](http://www.lukew.com/ff/entry.asp?870). Mobile apps are effective because they give the features a *mobile* user needs without the ads, sales pitches, and generally wasted space of a website.
Full featured sites on a 3.5 inch screen are simply overwhelming and hard to structure well. Your site's structure will benefit greatly from trimming the fat, allowing you to present everything much more elegantly. Including every feature forces you to hide things behind menus and decide what gets hidden and how--the research needed to make that work is much more complicated than just finding out the important thing: **what mobile users can do without**.
The facebook mobile site is a great example of this. What's my priority on mobile facebook? Sharing my status, photos and location--these are all handled in a nice top bar for instant posting of content. Reading others' content is also important; it's right there on the front page in a news feed.
Do mobile users really need some features? Things like managing privacy/account settings are pretty rare and can always wait until a user gets to their desktop machine. Extraneous features like "about us" might not be relevant if your mobile site is a web app.
Think like an app first, then think *very hard* if you need any features beyond the core set of actions that your mobile users *need* to do. |
16,435 | I'm working on *yet* another time-tracking web application. This application has, next to the normal site, a mobile version. Now I wonder, is it ok for the mobile version to be a limited subset of the full site?
For example, the mobile version shows a very limited overview of today's activities, while in the full site you get a full blown view of today's activities. Also, in the full site you can view statistics and manage *all* settings, which you can't in the mobile version.
Do users care about this? The way I see it (so far) is that the mobile version should just have the functionalities that you really need being on the road. Also, if I would make the mobile version contain all the features, I feel like it would be too bloated and I would end up with a lot of usability questions/problems. | 2012/01/22 | [
"https://ux.stackexchange.com/questions/16435",
"https://ux.stackexchange.com",
"https://ux.stackexchange.com/users/11078/"
] | Mobile websites should not be mobile versions of desktop websites - they should be a service or product *for mobile usage*. That means that yes, if certain content isn't relevant 'on the road', or if its inclusion makes it harder to provide an interface that works better in a mobile context, you should consider chopping it.
And remember, if needs be, you can always offer the ability to switch back into the non-mobile version. Try to set cookies or session data so users don't need to do so repeatedly, though.
Another, practical matter: delivering your entire desktop service to mobile is going to be a big, chunky, waterfall project. And waterfall sucks. Stay agile and deliver iteratively, and only adapt your mobile service to include desktop extras as and when you can prove the business need with empirical evidence. The alternative is madness. | Does "mobile" include "tablet"? If it does, then the answer is probably no. Expect tablets to become significant enough to replace desktops for at least some of your users.
However, even on tablets the screen space is still a bit scarce. You may need to move less frequently used features off the main screens. |
16,435 | I'm working on *yet* another time-tracking web application. This application has, next to the normal site, a mobile version. Now I wonder, is it ok for the mobile version to be a limited subset of the full site?
For example, the mobile version shows a very limited overview of today's activities, while in the full site you get a full blown view of today's activities. Also, in the full site you can view statistics and manage *all* settings, which you can't in the mobile version.
Do users care about this? The way I see it (so far) is that the mobile version should just have the functionalities that you really need being on the road. Also, if I would make the mobile version contain all the features, I feel like it would be too bloated and I would end up with a lot of usability questions/problems. | 2012/01/22 | [
"https://ux.stackexchange.com/questions/16435",
"https://ux.stackexchange.com",
"https://ux.stackexchange.com/users/11078/"
] | **Certainly**.
Mobile users have much less attention and time to spend on your site (as discussed in the excelent resource [Mobile First](http://www.abookapart.com/products/mobile-first) by Luke Wroblewski). This means extraneous, rarely used actions will get in the way.
[Take a page from mobile apps](http://www.lukew.com/ff/entry.asp?870). Mobile apps are effective because they give the features a *mobile* user needs without the adds, sales pitches and generally wasted space of a website.
Full-featured sites on a 3.5 inch screen are simply overwhelming and hard to structure well. Your site's structure will benefit greatly from trimming the fat, allowing you to present everything much more elegantly. Including every feature forces you to hide things behind menus and to decide what gets hidden and how--the research needed to make that work is much more complicated than just finding out the one important thing: **what mobile users can be without**.
The facebook mobile site is a great example of this. What's my priority on mobile facebook? Sharing my status, photos and location--these are all handled in a nice top bar for instant posting of content. Reading others' content is also important; it's right there on the front page in a news feed.
Do mobile users really need some features? Things like managing privacy/account settings are pretty rare and can always wait until a user gets to their desktop machine. Extraneous features like "about us" might not be relevant if your mobile site is a web app.
Think like an app first, then think *very hard* if you need any features beyond the core set of actions that your mobile users *need* to do. | Does "mobile" include "tablet"? If it does, then the answer is probably no. Expect tablets to become significant enough to replace desktops for at least some of your users.
However, even on tablets the screen space is still a bit scarce. You may need to move less frequently used features off the main screens. |
74,224,393 | So I was watching a course and stumbled on a method called `Object.assign()` which merges two objects together. But since objects are orderless, I was wondering in which order the properties of the objects would be merged. Kudos to any good answers. | 2022/10/27 | [
"https://Stackoverflow.com/questions/74224393",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You can use a `sequence` to produce a cycle of alphabetic values. Start with a sequence that cycles through values from `1` to `26`:
```
create sequence AlphaValue as Int MinValue 1 MaxValue 26 Cycle;
```
Then you can use `Char` to convert the returned value into the corresponding letter:
```
declare @Count as Int = 0;
declare @AlphaValue as Int;
while @Count < 30
begin
set @AlphaValue = next value for AlphaValue;
select @Count as Count, @AlphaValue as AlphaValueInt,
-- Convert the 1..26 value into the corresponding letter.
Char( ASCII( 'A' ) + @AlphaValue - 1 ) as AlphaValueChar;
set @Count += 1;
end;
```
[dbfiddle](https://dbfiddle.uk/FxqZev6L).
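For comparison outside SQL, the same cycling behaviour (values 1..26 wrapping around and mapped to letters) can be sketched in Python with `itertools.cycle`; this is illustrative only and not part of the T-SQL answer:

```python
import itertools
import string

# An endless A..Z stream that wraps, like a sequence declared with Cycle
letters = itertools.cycle(string.ascii_uppercase)
first_30 = [next(letters) for _ in range(30)]
# the 27th value wraps back to 'A', just as the sequence restarts at MinValue
```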
Note that the usual warnings about sequences sometimes skipping values apply. | Thank you, I was able to figure it out, with your help of course. Please see the code below; this will print A to Z individually every time I run this block of code:
```
Declare @Alphabet TABLE (Alpha1 VARCHAR(1), Alphaint VARCHAR(2), Count1 INT)
Declare @Count as Int = 0;
Declare @AlphaValue as Int;
while @Count < 30
begin
    set @AlphaValue = next value for AlphaValue;
    INSERT @Alphabet (Count1, Alphaint, Alpha1)
    select @Count as Count, @AlphaValue as AlphaValueInt,
           -- Convert the 1..26 value into the corresponding letter.
           Char( ASCII( 'A' ) + @AlphaValue - 1 ) as AlphaValueChar;
    set @Count += 1;
end;
Select Alpha1 from @Alphabet where Count1=0
``` |
38,469,634 | Is there any way to edit an $.ajax function to include the property "data" when it receives a value, and remove it when it doesn't? I mean, dynamically? E.g.:
When there's a value to the variable:
```
var parametro = "{id: "1"}";
$.ajax({
type: "POST",
url: url,
data: parametro ,
async: async,
...
```
When there's no variable, or no value for it:
```
var parametro = "";
$.ajax({
type: "POST",
url: url,
async: async,
...
```
So, my idea is something like that:
```
$.ajax({
type: "POST",
url: url,
if(parametro)
{
data: parametro,
},
async: async,
...
```
I tried other stuff like:
```
data: undefined
data: ""
data: null
```
But nothing seems to work. The only way to send an empty "data" property is to remove it completely. So, besides creating two different functions, one with a "data" property and another without it, is there any solution?
Thank you all for your attention. | 2016/07/19 | [
"https://Stackoverflow.com/questions/38469634",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1717381/"
] | Because both of them are objects, you can use `jQuery.extend()` to extend the options object.
Like this:
```
var parametro = {
id: "1"
};
var defaults = {
type: "POST",
url: url,
async: async,
...
};
// merge so that "data" is only present when parametro has a value
var options = $.extend({}, defaults, parametro ? { data: parametro } : {});
$.ajax(options);
```
Link:
<https://api.jquery.com/jquery.extend/> | As far as I know,
I would suggest putting the if condition outside of `$.ajax`, such as:
```
if(parametro){
$.ajax({
type: "POST",
url: url,
data: parametro ,
async: async,
...
});
}else{
$.ajax({
type: "POST",
url: url,
async: async,
...
  });
}
``` |
38,469,634 | Is there any way to edit an $.ajax function to include the property "data" when it receives a value, and remove it when it doesn't? I mean, dynamically? E.g.:
When there's a value to the variable:
```
var parametro = "{id: "1"}";
$.ajax({
type: "POST",
url: url,
data: parametro ,
async: async,
...
```
When there's no variable, or no value for it:
```
var parametro = "";
$.ajax({
type: "POST",
url: url,
async: async,
...
```
So, my idea is something like that:
```
$.ajax({
type: "POST",
url: url,
if(parametro)
{
data: parametro,
},
async: async,
...
```
I tried other stuff like:
```
data: undefined
data: ""
data: null
```
But nothing seems to work. The only way to send an empty "data" property is to remove it completely. So, besides creating two different functions, one with a "data" property and another without it, is there any solution?
Thank you all for your attention. | 2016/07/19 | [
"https://Stackoverflow.com/questions/38469634",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1717381/"
] | Because both of them are objects, you can use `jQuery.extend()` to extend the options object.
Like this:
```
var parametro = {
id: "1"
};
var defaults = {
type: "POST",
url: url,
async: async,
...
};
// merge so that "data" is only present when parametro has a value
var options = $.extend({}, defaults, parametro ? { data: parametro } : {});
$.ajax(options);
```
Link:
<https://api.jquery.com/jquery.extend/> | Try this way,
```
if(parametro)
{
$.ajax({
type: "POST",
url: url,
data: parametro,
async: false,
});
}
```
If there is no data then there's no need to do the ajax call, I think. |
3,472,124 | I'm working on learning Python; here is a simple program I wrote:
```
def guesser(var, num1,possible):
if var == 'n':
cutoff = len(possible)/2
possible = possible[0:cutoff]
cutoff = possible[len(possible)/2]
#print possible
if (len(possible) == 1):
print "Your Number is:", possible
else:
var = raw_input("Is Your Number Bigger Than %s? (y/n): " %cutoff)
guesser(var, cutoff,possible)
elif var == 'y':
cutoff = len(possible)/2
possible = possible[cutoff:len(possible)]
cutoff = possible[len(possible)/2]
#print possible
#print cutoff
if (len(possible) == 1):
print "Your Number is:", possible
else:
var = raw_input("Is Your Number Bigger Than %s? (y/n): " %cutoff)
guesser(var, cutoff,possible)
else:
var = raw_input("Is Your Number Bigger Than 50? (y/n): ")
guesser(var, 50, possible)
possible = []
possible = range(1,101)
guesser('a', 50, possible)
``` | 2010/08/12 | [
"https://Stackoverflow.com/questions/3472124",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/417449/"
] | Before making it more pythonic, I would first make it simpler... the algorithm is much more complex than necessary. No need to use a list when two ints are enough.
```
def guesser(low = 0, up = 100):
print("Choose a number between %d and %d" % (low, up-1))
while low < up - 1:
mid = (low+up)//2
yn = raw_input("Is Your Number Smaller Than %s? (y/n): " % mid)
if yn not in ['y', 'n']: continue
low, up = (low, mid) if yn == 'y' else (mid, up)
print "Your Number is:", low
guesser()
``` | Normally I would try to help with your code, but you have made it so complicated that I think it would be easier for you to look at some code.
```
def guesser( bounds ):
a, b = bounds
mid = ( a + b ) // 2
if a == b: return a
if input( "over {0}? ".format( mid ) ) == "y":
new_bounds = ( mid, b )
else:
new_bounds = ( a, mid )
return guesser( new_bounds )
```
You should think about how your algorithm will work in abstract terms before diving in.
EDIT: Simplified the code at the expense of brevity. |
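Both answers above drive the same bisection interactively. To exercise the logic without typing y/n at a prompt, the comparison can be passed in as a callback (a Python 3 sketch of the idea, not code from either answer; `guess_number` and `is_higher` are names made up here):

```python
def guess_number(is_higher, low=0, up=100):
    """Binary-search for a secret in [low, up).

    is_higher(mid) answers the question "is your number >= mid?".
    """
    while low < up - 1:
        mid = (low + up) // 2
        if is_higher(mid):
            low = mid
        else:
            up = mid
    return low
```

A deterministic oracle such as `lambda m: 73 >= m` then stands in for the human answering the prompts.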
3,472,124 | I'm working on learning Python; here is a simple program I wrote:
```
def guesser(var, num1,possible):
if var == 'n':
cutoff = len(possible)/2
possible = possible[0:cutoff]
cutoff = possible[len(possible)/2]
#print possible
if (len(possible) == 1):
print "Your Number is:", possible
else:
var = raw_input("Is Your Number Bigger Than %s? (y/n): " %cutoff)
guesser(var, cutoff,possible)
elif var == 'y':
cutoff = len(possible)/2
possible = possible[cutoff:len(possible)]
cutoff = possible[len(possible)/2]
#print possible
#print cutoff
if (len(possible) == 1):
print "Your Number is:", possible
else:
var = raw_input("Is Your Number Bigger Than %s? (y/n): " %cutoff)
guesser(var, cutoff,possible)
else:
var = raw_input("Is Your Number Bigger Than 50? (y/n): ")
guesser(var, 50, possible)
possible = []
possible = range(1,101)
guesser('a', 50, possible)
``` | 2010/08/12 | [
"https://Stackoverflow.com/questions/3472124",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/417449/"
] | Before making it more pythonic, I would first make it simpler... the algorithm is much more complex than necessary. No need to use a list when two ints are enough.
```
def guesser(low = 0, up = 100):
print("Choose a number between %d and %d" % (low, up-1))
while low < up - 1:
mid = (low+up)//2
yn = raw_input("Is Your Number Smaller Than %s? (y/n): " % mid)
if yn not in ['y', 'n']: continue
low, up = (low, mid) if yn == 'y' else (mid, up)
print "Your Number is:", low
guesser()
``` | This is not as elegant as katrielalex's recursion, but it illustrates a basic class.
```
class guesser:
def __init__(self, l_bound, u_bound):
self.u_bound = u_bound
self.l_bound = l_bound
self.nextguess()
def nextguess(self):
self.guess = int((self.u_bound + self.l_bound)/2)
print 'Higher or lower than %i?' % self.guess
def mynumberishigher(self):
self.l_bound = self.guess
self.nextguess()
def mynumberislower(self):
self.u_bound = self.guess
self.nextguess()
``` |
3,472,124 | I'm working on learning Python; here is a simple program I wrote:
```
def guesser(var, num1,possible):
if var == 'n':
cutoff = len(possible)/2
possible = possible[0:cutoff]
cutoff = possible[len(possible)/2]
#print possible
if (len(possible) == 1):
print "Your Number is:", possible
else:
var = raw_input("Is Your Number Bigger Than %s? (y/n): " %cutoff)
guesser(var, cutoff,possible)
elif var == 'y':
cutoff = len(possible)/2
possible = possible[cutoff:len(possible)]
cutoff = possible[len(possible)/2]
#print possible
#print cutoff
if (len(possible) == 1):
print "Your Number is:", possible
else:
var = raw_input("Is Your Number Bigger Than %s? (y/n): " %cutoff)
guesser(var, cutoff,possible)
else:
var = raw_input("Is Your Number Bigger Than 50? (y/n): ")
guesser(var, 50, possible)
possible = []
possible = range(1,101)
guesser('a', 50, possible)
``` | 2010/08/12 | [
"https://Stackoverflow.com/questions/3472124",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/417449/"
] | Before making it more pythonic, I would first make it simpler... the algorithm is much more complex than necessary. No need to use a list when two ints are enough.
```
def guesser(low = 0, up = 100):
print("Choose a number between %d and %d" % (low, up-1))
while low < up - 1:
mid = (low+up)//2
yn = raw_input("Is Your Number Smaller Than %s? (y/n): " % mid)
if yn not in ['y', 'n']: continue
low, up = (low, mid) if yn == 'y' else (mid, up)
print "Your Number is:", low
guesser()
``` | More pythonic to use the `bisect` module - and a `class` of course :)
```
import bisect
hival= 50
class Guesser(list):
def __getitem__(self, idx):
return 0 if raw_input("Is your number bigger than %s? (y/n)"%idx)=='y' else hival
g=Guesser()
print "Think of a number between 0 and %s"%hival
print "Your number is: %s"%bisect.bisect(g,0,hi=hival)
```
---
Here is the definition of `bisect.bisect` from the python library. As you can see, most of the algorithm is implemented here for you
```
def bisect_right(a, x, lo=0, hi=None):
"""Return the index where to insert item x in list a, assuming a is sorted.
The return value i is such that all e in a[:i] have e <= x, and all e in
a[i:] have e > x. So if x already appears in the list, a.insert(x) will
insert just after the rightmost x already there.
Optional args lo (default 0) and hi (default len(a)) bound the
slice of a to be searched.
"""
if lo < 0:
raise ValueError('lo must be non-negative')
if hi is None:
hi = len(a)
while lo < hi:
mid = (lo+hi)//2
if x < a[mid]: hi = mid
else: lo = mid+1
return lo
bisect = bisect_right # backward compatibility
``` |
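The `__getitem__` trick above can be checked deterministically by answering from a fixed secret instead of `raw_input` (an illustrative Python 3 sketch; the `Oracle` class is invented here and is not part of the answer):

```python
import bisect

class Oracle:
    """Stand-in for the interactive Guesser: pretends to be a sorted list
    whose elements encode "is your number bigger than idx?"."""
    def __init__(self, secret, hival):
        self.secret, self.hival = secret, hival

    def __getitem__(self, idx):
        # 0 ("not bigger yet") while secret > idx, hival afterwards
        return 0 if self.secret > idx else self.hival

hival = 50
# bisect homes in on the secret for every possible choice
for secret in range(hival):
    assert bisect.bisect(Oracle(secret, hival), 0, 0, hival) == secret
```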
3,472,124 | I'm working on learning Python; here is a simple program I wrote:
```
def guesser(var, num1,possible):
if var == 'n':
cutoff = len(possible)/2
possible = possible[0:cutoff]
cutoff = possible[len(possible)/2]
#print possible
if (len(possible) == 1):
print "Your Number is:", possible
else:
var = raw_input("Is Your Number Bigger Than %s? (y/n): " %cutoff)
guesser(var, cutoff,possible)
elif var == 'y':
cutoff = len(possible)/2
possible = possible[cutoff:len(possible)]
cutoff = possible[len(possible)/2]
#print possible
#print cutoff
if (len(possible) == 1):
print "Your Number is:", possible
else:
var = raw_input("Is Your Number Bigger Than %s? (y/n): " %cutoff)
guesser(var, cutoff,possible)
else:
var = raw_input("Is Your Number Bigger Than 50? (y/n): ")
guesser(var, 50, possible)
possible = []
possible = range(1,101)
guesser('a', 50, possible)
``` | 2010/08/12 | [
"https://Stackoverflow.com/questions/3472124",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/417449/"
] | Normally I would try to help with your code, but you have made it so complicated that I think it would be easier for you to look at some code.
```
def guesser( bounds ):
a, b = bounds
mid = ( a + b ) // 2
if a == b: return a
if input( "over {0}? ".format( mid ) ) == "y":
new_bounds = ( mid, b )
else:
new_bounds = ( a, mid )
return guesser( new_bounds )
```
You should think about how your algorithm will work in abstract terms before diving in.
EDIT: Simplified the code at the expense of brevity. | This is not as elegant as katrielalex's recursion, but it illustrates a basic class.
```
class guesser:
def __init__(self, l_bound, u_bound):
self.u_bound = u_bound
self.l_bound = l_bound
self.nextguess()
def nextguess(self):
self.guess = int((self.u_bound + self.l_bound)/2)
print 'Higher or lower than %i?' % self.guess
def mynumberishigher(self):
self.l_bound = self.guess
self.nextguess()
def mynumberislower(self):
self.u_bound = self.guess
self.nextguess()
``` |
3,472,124 | I'm working on learning Python; here is a simple program I wrote:
```
def guesser(var, num1,possible):
if var == 'n':
cutoff = len(possible)/2
possible = possible[0:cutoff]
cutoff = possible[len(possible)/2]
#print possible
if (len(possible) == 1):
print "Your Number is:", possible
else:
var = raw_input("Is Your Number Bigger Than %s? (y/n): " %cutoff)
guesser(var, cutoff,possible)
elif var == 'y':
cutoff = len(possible)/2
possible = possible[cutoff:len(possible)]
cutoff = possible[len(possible)/2]
#print possible
#print cutoff
if (len(possible) == 1):
print "Your Number is:", possible
else:
var = raw_input("Is Your Number Bigger Than %s? (y/n): " %cutoff)
guesser(var, cutoff,possible)
else:
var = raw_input("Is Your Number Bigger Than 50? (y/n): ")
guesser(var, 50, possible)
possible = []
possible = range(1,101)
guesser('a', 50, possible)
``` | 2010/08/12 | [
"https://Stackoverflow.com/questions/3472124",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/417449/"
] | More pythonic to use the `bisect` module - and a `class` of course :)
```
import bisect
hival= 50
class Guesser(list):
def __getitem__(self, idx):
return 0 if raw_input("Is your number bigger than %s? (y/n)"%idx)=='y' else hival
g=Guesser()
print "Think of a number between 0 and %s"%hival
print "Your number is: %s"%bisect.bisect(g,0,hi=hival)
```
---
Here is the definition of `bisect.bisect` from the python library. As you can see, most of the algorithm is implemented here for you
```
def bisect_right(a, x, lo=0, hi=None):
"""Return the index where to insert item x in list a, assuming a is sorted.
The return value i is such that all e in a[:i] have e <= x, and all e in
a[i:] have e > x. So if x already appears in the list, a.insert(x) will
insert just after the rightmost x already there.
Optional args lo (default 0) and hi (default len(a)) bound the
slice of a to be searched.
"""
if lo < 0:
raise ValueError('lo must be non-negative')
if hi is None:
hi = len(a)
while lo < hi:
mid = (lo+hi)//2
if x < a[mid]: hi = mid
else: lo = mid+1
return lo
bisect = bisect_right # backward compatibility
``` | This is not as elegant as katrielalex's recursion, but it illustrates a basic class.
```
class guesser:
def __init__(self, l_bound, u_bound):
self.u_bound = u_bound
self.l_bound = l_bound
self.nextguess()
def nextguess(self):
self.guess = int((self.u_bound + self.l_bound)/2)
print 'Higher or lower than %i?' % self.guess
def mynumberishigher(self):
self.l_bound = self.guess
self.nextguess()
def mynumberislower(self):
self.u_bound = self.guess
self.nextguess()
``` |
70,600,836 | I want it to show on the screen as soon as I enter the location. How can I do it?
I have to use Ctrl + S when I try this way:
```
Future<void> getData() async{
data = await client.getCurrentWeather(locatian.text.toString());
}
TextEditingController locatian = new TextEditingController();
body: FutureBuilder(
future: getData(),
builder: (context, snapshot){
if(snapshot.connectionState==ConnectionState.done){
return Column(
children: [
Padding(
padding: const EdgeInsets.all(9.0),
child: TextField(
controller: locatian,
decoration: InputDecoration(
border: OutlineInputBorder(),
hintText: "Sehir Giriniz",
prefixIcon: Icon(Icons.search),
),
),
),
GuncelVeri(Icons.wb_sunny_rounded, "${data?.derece}", "${data?.sehir}"),
bilgiler("${data?.humidity}","${data?.feels_like}", "${data?.pressure}"),],
);
``` | 2022/01/05 | [
"https://Stackoverflow.com/questions/70600836",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15332001/"
] | Use the builtin `process` class
```
begin
process pid;
pid = process::self();
...
end
```
See section *9.7 Fine-grain process control* in the [IEEE 1800-2017 SystemVerilog LRM](https://ieeexplore.ieee.org/document/8299595) | SystemVerilog does not have any facility to get the 'pid' of its process. It provides an object to do limited process control in a system-independent way. You can check LRM section 9.7 for the available controls.
However, it is possible to get the pid using DPI or PLI functions via 'C' calls, but the implementation could be system- and simulator-dependent.
For example, the following works with VCS on linux:
```
module pid();
import "DPI-C" function int getpid();
initial begin
$display("%d", getpid());
end
endmodule // pid
```
In the above `getpid()` is a standard *libc* function which is callable from the simulator. It also seems to work with vcs, mentor, and cadence in EDA playground, but fails with aldec.
Since the function is globally defined, there is no need to define a dpi function body at least for the three simulators. However, you might need to define a different dpi function with a 'c' body to make it more portable. |
39,546 | Next to an image of the Virgin in Buenos Aires was the following inscription:
[](https://i.stack.imgur.com/MST9h.jpg)
If the subject is María, shouldn't it say "ampárame y guíame"?
Is the subject not María? Can this apparently plural form be used with a singular addressee in this context? Was it used that way in another era? | 2021/08/23 | [
"https://spanish.stackexchange.com/questions/39546",
"https://spanish.stackexchange.com",
"https://spanish.stackexchange.com/users/33/"
] | First, let's see where it comes from.
>
> Amparadme: **enclitic** form of the singular imperative mood of the
> verb amparar
>
>
>
The meaning of amparar refers to availing oneself of someone's favor or protection.
In this case, he asks her to protect him, but he uses a device called **voseo**,
which, as mentioned earlier, is just an archaic way of addressing someone with great respect or of addressing a high authority; in the old days this is how servants or butlers addressed their ''Master or Lady''.
The case of '**Amparadme y Guiadme**' uses this '**voseo**'.
But what is voseo, really?
>
> Voseo is a linguistic phenomenon within the Spanish language in which
> the pronoun «vos» is used, together with certain particular verb
> conjugations, to address the interlocutor instead of using the
> pronoun «tú» in situations of familiarity.
>
>
>
Nowadays this **voseo** is no longer spoken; it is only used as a literary device for writing poetry, and **ustedeo** is what is used to address someone with respect *(but that is another topic)*.
So, answering your question: yes, it is correct to use 'Amparadme' in that way, because it addresses 'La Virgencita de Luján' with respect, since she is considered an authority.
I hope this helped, *good luck* | That form does not currently appear in the DLE in the conjugation of the verb [amparar](https://dle.rae.es/amparar), but it doesn't seem incorrect to me, just **archaic**. It dates from when the pronoun vos was used as a mark of **respect**.
>
> Amparadme vos
>
>
>
Something about the [evolution of voseo](https://es.wikipedia.org/wiki/Voseo) can be found on Wikipedia.
I can't find photos right now, but on the façade of one of the churches in my city (Málaga, Spain) I believe I could also find that verb form used to address the Virgin or God. The church in question is not old; it dates from the late 20th century.
I think that in this Catholic sphere it is still used today for that double reason: its archaic nuance highlights the antiquity of the Catholic Church and of the Bible, on the one hand, and its nuance of respect highlights the adoration of those figures and the distance between them and mere mortal sinners. |
39,546 | Next to an image of the Virgin in Buenos Aires was the following inscription:
[](https://i.stack.imgur.com/MST9h.jpg)
If the subject is María, shouldn't it say "ampárame y guíame"?
Is the subject not María? Can this apparently plural form be used with a singular addressee in this context? Was it used that way in another era? | 2021/08/23 | [
"https://spanish.stackexchange.com/questions/39546",
"https://spanish.stackexchange.com",
"https://spanish.stackexchange.com/users/33/"
] | First, let's see where it comes from.
>
> Amparadme: **enclitic** form of the singular imperative mood of the
> verb amparar
>
>
>
The meaning of amparar refers to availing oneself of someone's favor or protection.
In this case, he asks her to protect him, but he uses a device called **voseo**,
which, as mentioned earlier, is just an archaic way of addressing someone with great respect or of addressing a high authority; in the old days this is how servants or butlers addressed their ''Master or Lady''.
The case of '**Amparadme y Guiadme**' uses this '**voseo**'.
But what is voseo, really?
>
> Voseo is a linguistic phenomenon within the Spanish language in which
> the pronoun «vos» is used, together with certain particular verb
> conjugations, to address the interlocutor instead of using the
> pronoun «tú» in situations of familiarity.
>
>
>
Nowadays this **voseo** is no longer spoken; it is only used as a literary device for writing poetry, and **ustedeo** is what is used to address someone with respect *(but that is another topic)*.
So, answering your question: yes, it is correct to use 'Amparadme' in that way, because it addresses 'La Virgencita de Luján' with respect, since she is considered an authority.
I hope this helped, *good luck* | It is an example of the [plural mayestático](https://blog.lengua-e.com/2011/plural-mayestatico/) (majestic plural), an archaic way of addressing a high authority, or of such an authority speaking of itself.
That very example is part of a [well-known traditional song](https://funjdiaz.net/a_canciones2.php?id=42). |
24,032,282 | I have the following Pandas DataFrame:
```
In [66]: hdf.size()
Out[66]:
a b
0 0.0 21004
0.1 119903
0.2 186579
0.3 417349
0.4 202723
0.5 100906
0.6 56386
0.7 6080
0.8 3596
0.9 2391
1.0 1963
1.1 1730
1.2 1663
1.3 1614
1.4 1309
...
186 0.2 15
0.3 9
0.4 21
0.5 4
187 0.2 3
0.3 10
0.4 22
0.5 10
188 0.0 11
0.1 19
0.2 20
0.3 13
0.4 7
0.5 5
0.6 1
Length: 4572, dtype: int64
```
As you can see, a goes from 0...188 and, within each group, b goes from some value to some value. The designated Z-value is the count of occurrences of the pair a/b.
How do I get a contour or heatmap plot out of the grouped dataframe?
I have this (asking for the ?):
```
numcols, numrows = 30, 30
xi = np.linspace(0, 200, numcols)
yi = np.linspace(0, 6, numrows)
xi, yi = np.meshgrid(xi, yi)
zi = griddata(?, ?, hdf.size().values, xi, yi)
```
How do I get the x and y values out of the GroupBy object and plot a contour? | 2014/06/04 | [
"https://Stackoverflow.com/questions/24032282",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3706049/"
] | Thanks a lot! My mistake was that I did not realize I had to apply some function to the groupby dataframe, like `.size()`, to work with it...
```
hdf = aggdf.groupby(['a','b']).size()
hdf
```
gives me
```
a b
1 -2.0 1
-1.9 1
-1.8 1
-1.7 2
-1.6 5
-1.5 10
-1.4 9
-1.3 21
-1.2 34
-1.1 67
-1.0 65
-0.9 94
-0.8 180
-0.7 242
-0.6 239
...
187 0.4 22
0.5 10
188 -0.6 2
-0.5 2
-0.4 1
-0.3 2
-0.2 5
-0.1 10
-0.0 18
0.1 19
0.2 20
0.3 13
0.4 7
0.5 5
0.6 1
Length: 8844, dtype: int64
```
With that, and your help CT Zhu, I could then do
```
hdfreset = hdf.reset_index()
hdfreset.columns = ['a', 'b', 'occurrence']
hdfpivot=hdfreset.pivot('a', 'b')
```
and this finally gave me the correct values to
```
X=hdfpivot.columns.levels[1].values
Y=hdfpivot.index.values
Z=hdfpivot.values
Xi,Yi = np.meshgrid(X, Y)
plt.contourf(Yi, Xi, Z, alpha=0.7, cmap=plt.cm.jet);
```
which leads to this beautiful contourf:
 | Welcome to SO.
It looks quite clear that for each of your 'a' levels the number of 'b' levels is not the same, thus I will suggest the following solution:
```
In [44]:
print df #an example; you can get your dataframe into this form with reset_index()
a b value
0 0 1 0.336885
1 0 2 0.276750
2 0 3 0.796488
3 1 1 0.156050
4 1 2 0.401942
5 1 3 0.252651
6 2 1 0.861911
7 2 2 0.914803
8 2 3 0.869331
9 3 1 0.284757
10 3 2 0.488330
[11 rows x 3 columns]
In [45]:
#notice that you will have some 'NAN' values
df=df.pivot('a', 'b', 'value')
In [46]:
X=df.columns.values
Y=df.index.values
Z=df.values
x,y=np.meshgrid(X, Y)
plt.contourf(x, y, Z) #the NAN will be plotted as white spaces
Out[46]:
<matplotlib.contour.QuadContourSet instance at 0x1081385a8>
```
 |
41,802,181 | So basically I want to add some custom properties to a Word document.
Is this possible yet with the Word API 1.3?
I found something along the lines of:
```
context.document.workbook.properties
```
but that only seems to work for Excel.
Thanks! | 2017/01/23 | [
"https://Stackoverflow.com/questions/41802181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7047202/"
] | To add more detail to the previous answer: yes, Word.js 1.3 introduces creation and retrieval of custom and built-in document properties. The API is still in preview; you need at least the December fork build for this feature to work. Make sure you try it on 16.0.7766+ builds. Also please make sure to use our preview CDN for Office.js: <https://appsforoffice.microsoft.com/lib/beta/hosted/office.js>
Here is a code sample on how to create a custom property in Word:
```js
function createCustomProperty(){
Word.run(function (context) {
//method accepts property name plus value
context.document.properties.customProperties.add("property_name", 123);
return context.sync()
.catch(function (e) {
console.log(e.message);
})
})
}
```
Check out the documentation to see other functionalities, including getting built-in properties
<https://github.com/OfficeDev/office-js-docs/blob/WordJs_1.3_Openspec/reference/word/documentproperties.md>
Hope this helps,
Thanks!
Juan. | [Word API 1.3](https://dev.office.com/reference/add-ins/requirement-sets/word-api-requirement-sets?product=word) introduces documentProperties and customProperty, but the status is still listed as Preview, and requires Word 2016 Desktop Version 1605 (Build 6925.1000) or later or the mobile apps (not yet available online). |
793,192 | August 2015 Summary
===================
Please note, this is still happening. This is **not** related to linuxatemyram.com - the memory is not used for disk cache/buffers. This is what it looks like in NewRelic - the system leaks all the memory, uses up all swap space and then crashes. In this screenshot I rebooted the server before it crashed:
[](https://i.stack.imgur.com/vIkEa.png)
It is impossible to identify the source of the leak using common userspace tools. There is now a chat room to discuss this issue: <http://chat.stackexchange.com/rooms/27309/invisible-memory-leak-on-linux>
The only way to recover the "missing" memory appears to be rebooting the server. This has been a long-standing issue, reproduced in Ubuntu Server 14.04, 14.10 and 15.04.
Top
===
The memory use does not show in top and cannot be recovered even after killing just about every process (excluding things like kernel processes and ssh). Look at the "cached Mem", "buffers" and "free" fields in top: they are not using up the memory; the used memory is "missing" and unrecoverable without a reboot.
Attempting to use this "missing" memory causes the server to swap, slow to a crawl and eventually freeze.
```
root@XanBox:~# top -o +%MEM
top - 12:12:13 up 15 days, 20:39, 3 users, load average: 0.00, 0.06, 0.77
Tasks: 126 total, 1 running, 125 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 0.2 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.1 hi, 0.0 si, 0.0 st
KiB Mem: 2,040,256 total, 1,881,228 used, 159,028 free, 1,348 buffers
KiB Swap: 1,999,868 total, 27,436 used, 1,972,432 free. 67,228 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
11502 root 20 0 107692 4252 3240 S 0.0 0.2 0:00.06 sshd: deployer [priv]
11336 root 20 0 107692 4248 3240 S 0.0 0.2 0:00.06 sshd: deployer [priv]
11841 root 20 0 107692 4248 3240 S 0.0 0.2 0:00.06 sshd: deployer [priv]
11301 root 20 0 26772 3436 2688 S 0.7 0.2 0:01.30 /usr/sbin/openvpn --writepid /var/run/openvpn.zanview.com.pid --status /var/run/openvpn.zanview.com.status 10 --cd /etc/openvpn --config /etc/openvpn/z+
11385 deployer 20 0 19972 2392 1708 S 0.0 0.1 0:00.03 -bash
11553 deployer 20 0 19972 2388 1708 S 0.0 0.1 0:00.03 -bash
11890 deployer 20 0 19972 2388 1708 S 0.0 0.1 0:00.02 -bash
11889 deployer 20 0 108008 2280 944 S 0.0 0.1 0:00.25 sshd: deployer@pts/3
12009 root 20 0 18308 2228 1608 S 0.0 0.1 0:00.09 -su
12114 root 20 0 18308 2192 1564 S 0.0 0.1 0:00.04 -su
12007 root 20 0 67796 2136 1644 S 0.0 0.1 0:00.01 sudo su -
12112 root 20 0 67796 2136 1644 S 0.0 0.1 0:00.01 sudo su -
12008 root 20 0 67376 2016 1528 S 0.0 0.1 0:00.01 su -
12113 root 20 0 67376 2012 1528 S 0.0 0.1 0:00.01 su -
1 root 20 0 33644 1988 764 S 0.0 0.1 2:29.77 /sbin/init
11552 deployer 20 0 107692 1952 936 S 0.0 0.1 0:00.07 sshd: deployer@pts/2
11384 deployer 20 0 107692 1948 936 S 0.0 0.1 0:00.06 sshd: deployer@pts/0
12182 root 20 0 20012 1516 1012 R 0.7 0.1 0:00.08 top -o +%MEM
1152 message+ 20 0 39508 1448 920 S 0.0 0.1 1:40.01 dbus-daemon --system --fork
1791 root 20 0 279832 1312 816 S 0.0 0.1 1:16.18 /usr/lib/policykit-1/polkitd --no-debug
1186 root 20 0 43736 984 796 S 0.0 0.0 1:13.07 /lib/systemd/systemd-logind
1212 syslog 20 0 256228 688 184 S 0.0 0.0 1:41.29 rsyslogd
5077 root 20 0 25324 648 520 S 0.0 0.0 0:34.35 /usr/sbin/hostapd -B -P /var/run/hostapd.pid /etc/hostapd/hostapd.conf
336 root 20 0 19476 512 376 S 0.0 0.0 0:07.40 upstart-udev-bridge --daemon
342 root 20 0 51228 468 344 S 0.0 0.0 0:00.85 /lib/systemd/systemd-udevd --daemon
1097 root 20 0 15276 364 256 S 0.0 0.0 0:06.39 upstart-file-bridge --daemon
4921 root 20 0 61364 364 240 S 0.0 0.0 0:00.05 /usr/sbin/sshd -D
745 root 20 0 15364 252 180 S 0.0 0.0 0:06.51 upstart-socket-bridge --daemon
4947 root 20 0 23656 168 100 S 0.0 0.0 0:14.70 cron
11290 daemon 20 0 19140 164 0 S 0.0 0.0 0:00.00 atd
850 root 20 0 23420 80 16 S 0.0 0.0 0:11.00 rpcbind
872 statd 20 0 21544 8 4 S 0.0 0.0 0:00.00 rpc.statd -L
4880 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty4
4883 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty5
4890 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty2
4891 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty3
4894 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty6
4919 root 20 0 4368 4 0 S 0.0 0.0 0:00.00 acpid -c /etc/acpi/events -s /var/run/acpid.socket
5224 root 20 0 24048 4 0 S 0.0 0.0 0:00.00 /usr/sbin/rpc.mountd --manage-gids
6160 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty1
2 root 20 0 0 0 0 S 0.0 0.0 0:03.44 [kthreadd]
3 root 20 0 0 0 0 S 0.0 0.0 1:04.63 [ksoftirqd/0]
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kworker/0:0H]
7 root 20 0 0 0 0 S 0.0 0.0 16:03.32 [rcu_sched]
8 root 20 0 0 0 0 S 0.0 0.0 4:08.79 [rcuos/0]
9 root 20 0 0 0 0 S 0.0 0.0 4:10.42 [rcuos/1]
10 root 20 0 0 0 0 S 0.0 0.0 4:30.71 [rcuos/2]
```
Hardware
========
I have observed this on 3 servers out of around 100 so far (though others may be affected). One is an Intel Atom D525 @ 1.8GHz and the other two are a Core2Duo E4600 and a Q6600. One is using a JMicron Technology Corp. JMC250 PCI Express Gigabit Ethernet Controller, the others are using a Qualcomm Atheros Attansic L1 Gigabit Ethernet (rev b0).
I ran lshw on the trouble servers as well as on an example OK server. Problem Servers: <http://pastie.org/10370534> <http://pastie.org/10370537> and <http://pastie.org/10370541> -- OK Server: <http://pastie.org/10370544>
Application
===========
This is an entirely headless application. There is no monitor connected and in fact no XServer installed at all. This should rule out graphics drivers/issues.
The server is used to proxy and analyse RTSP video using live555ProxyServer, ffmpeg and openCV. These servers do crunch through a lot of traffic because this is a CCTV application: <http://pastie.org/9558324>
I have tried both very old and latest trunk versions of live555, ffmpeg and openCV without change. I have also tried using opencv through the python2 and python3 modules, no change.
The exact same software/configuration has been loaded onto close to 100 servers; so far 3 are confirmed to leak memory. The servers slowly and stealthily leak around xMB per hour (one leaking 8MB/hour, one slower, one faster) until all RAM is gone, at which point the servers start swapping heavily, slow to a crawl and require a reboot.
Meminfo
=======
Again, you can see the Cached and Buffers not using up much memory at all. HugePages are also disabled so this is not the culprit.
```
root@XanBox:~# cat /proc/meminfo
MemTotal: 2,040,256 kB
MemFree: 159,004 kB
Buffers: 1,348 kB
Cached: 67,228 kB
SwapCached: 9,940 kB
Active: 10,788 kB
Inactive: 81,120 kB
Active(anon): 1,900 kB
Inactive(anon): 21,512 kB
Active(file): 8,888 kB
Inactive(file): 59,608 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 1,999,868 kB
SwapFree: 1,972,432 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 14,496 kB
Mapped: 8,160 kB
Shmem: 80 kB
Slab: 33,472 kB
SReclaimable: 17,660 kB
SUnreclaim: 15,812 kB
KernelStack: 1,064 kB
PageTables: 3,992 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 3,019,996 kB
Committed_AS: 94,520 kB
VmallocTotal: 34,359,738,367 kB
VmallocUsed: 535,936 kB
VmallocChunk: 34,359,147,772 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2,048 kB
DirectMap4k: 62,144 kB
DirectMap2M: 2,025,472 kB
```
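To see the gap these counters leave, we can sum everything `/proc/meminfo` explicitly attributes to free memory, caches and known kernel structures, and subtract from MemTotal (a sketch using the kB values above; `VmallocUsed` is deliberately left out since it largely covers ioremap mappings rather than allocated RAM):

```python
# Values in kB, taken from the /proc/meminfo dump above.
meminfo = {
    "MemTotal": 2040256, "MemFree": 159004, "Buffers": 1348,
    "Cached": 67228, "Slab": 33472, "AnonPages": 14496,
    "PageTables": 3992, "KernelStack": 1064,
}

# Sum every counter except MemTotal, then see what is left unexplained.
accounted_kb = sum(v for k, v in meminfo.items() if k != "MemTotal")
missing_kb = meminfo["MemTotal"] - accounted_kb
print(f"unaccounted: {missing_kb} kB (~{missing_kb // 1024} MB)")
# → unaccounted: 1759652 kB (~1718 MB), the "invisible" ~1.7GB
```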
Free Output
===========
Free shows the following (note cached and buffers are both low so this is not disk cache or buffers!) - the memory is not recoverable without a reboot:
```
root@XanBox:~# free -m
total used free shared buffers cached
Mem: 1,992 1,838 153 0 1 66
```
If we subtract/add the buffers/cache to Used and Free, we see:
* 1,772MB Really Used (- Buffers/Cache) = 1,838MB used - 1MB buffers - 66MB cache
* 220MB Really Free (+ Buffers/Cache) = 154MB free + 1MB buffers + 66MB cache
Exactly as we expect:
```
-/+ buffers/cache: 1,772 220
```
So around 1.7GB is not used by userspace and must in fact be used by the kernel, since userspace is actually using only 53.7MB (see PS Mem output below).
I'm surprised by the number of comments assuming 1.7GB is used for caching/buffers - this is **fundamentally misreading the output!** - this line means used memory **excluding buffers/cache**, see linuxatemyram.com for details.
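The `-/+ buffers/cache` line is simple arithmetic over the first row of `free` output, and easy to reproduce (a sketch using the MB-rounded figures above, hence the one-MB drift versus the kB-exact 1,772):

```python
# Figures (MB) from the `free -m` output above.
used, free_mb, buffers, cached = 1838, 153, 1, 66

really_used = used - buffers - cached     # memory applications actually hold
really_free = free_mb + buffers + cached  # memory reclaimable on demand
print(really_used, really_free)  # → 1771 220 (kB-exact values give 1772/220)
```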
PS Output
=========
Here is a full list of running processes sorted by memory:
```
# ps -e -o pid,vsz,comm= | sort -n -k 2
2 0 kthreadd
3 0 ksoftirqd/0
5 0 kworker/0:0H
7 0 rcu_sched
8 0 rcuos/0
9 0 rcuos/1
10 0 rcuos/2
11 0 rcuos/3
12 0 rcu_bh
13 0 rcuob/0
14 0 rcuob/1
15 0 rcuob/2
16 0 rcuob/3
17 0 migration/0
18 0 watchdog/0
19 0 watchdog/1
20 0 migration/1
21 0 ksoftirqd/1
23 0 kworker/1:0H
24 0 watchdog/2
25 0 migration/2
26 0 ksoftirqd/2
28 0 kworker/2:0H
29 0 watchdog/3
30 0 migration/3
31 0 ksoftirqd/3
32 0 kworker/3:0
33 0 kworker/3:0H
34 0 khelper
35 0 kdevtmpfs
36 0 netns
37 0 writeback
38 0 kintegrityd
39 0 bioset
41 0 kblockd
42 0 ata_sff
43 0 khubd
44 0 md
45 0 devfreq_wq
46 0 kworker/0:1
47 0 kworker/1:1
48 0 kworker/2:1
50 0 khungtaskd
51 0 kswapd0
52 0 ksmd
53 0 khugepaged
54 0 fsnotify_mark
55 0 ecryptfs-kthrea
56 0 crypto
68 0 kthrotld
70 0 scsi_eh_0
71 0 scsi_eh_1
92 0 deferwq
93 0 charger_manager
94 0 kworker/1:2
95 0 kworker/3:2
149 0 kpsmoused
155 0 jbd2/sda1-8
156 0 ext4-rsv-conver
316 0 jbd2/sda3-8
317 0 ext4-rsv-conver
565 0 kmemstick
770 0 cfg80211
818 0 hd-audio0
853 0 kworker/2:2
953 0 rpciod
PID VSZ
1714 0 kauditd
11335 0 kworker/0:2
12202 0 kworker/u8:2
20228 0 kworker/u8:0
25529 0 kworker/u9:1
28305 0 kworker/u9:2
29822 0 lockd
4919 4368 acpid
4074 7136 ps
6681 10232 dhclient
4880 14540 getty
4883 14540 getty
4890 14540 getty
4891 14540 getty
4894 14540 getty
6160 14540 getty
14486 15260 upstart-socket-
14489 15276 upstart-file-br
12009 18308 bash
12114 18308 bash
12289 18308 bash
4075 19008 sort
11290 19140 atd
14483 19476 upstart-udev-br
11385 19972 bash
11553 19972 bash
11890 19972 bash
29503 21544 rpc.statd
2847 23384 htop
850 23420 rpcbind
29588 23480 rpc.idmapd
4947 23656 cron
29833 24048 rpc.mountd
5077 25324 hostapd
11301 26912 openvpn
1 37356 init
1152 39508 dbus-daemon
14673 43452 systemd-logind
14450 51204 systemd-udevd
4921 61364 sshd
12008 67376 su
12113 67376 su
12288 67376 su
12007 67796 sudo
12112 67796 sudo
12287 67796 sudo
11336 107692 sshd
11384 107692 sshd
11502 107692 sshd
11841 107692 sshd
11552 108008 sshd
11889 108008 sshd
1212 256228 rsyslogd
1791 279832 polkitd
4064 335684 whoopsie
```
Here is a full list of all running processes:
```
root@XanBox:~# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 33644 1988 ? Ss Jul21 2:29 /sbin/init
root 2 0.0 0.0 0 0 ? S Jul21 0:03 [kthreadd]
root 3 0.0 0.0 0 0 ? S Jul21 1:04 [ksoftirqd/0]
root 5 0.0 0.0 0 0 ? S< Jul21 0:00 [kworker/0:0H]
root 7 0.0 0.0 0 0 ? S Jul21 16:03 [rcu_sched]
root 8 0.0 0.0 0 0 ? S Jul21 4:08 [rcuos/0]
root 9 0.0 0.0 0 0 ? S Jul21 4:10 [rcuos/1]
root 10 0.0 0.0 0 0 ? S Jul21 4:30 [rcuos/2]
root 11 0.0 0.0 0 0 ? S Jul21 4:28 [rcuos/3]
root 12 0.0 0.0 0 0 ? S Jul21 0:00 [rcu_bh]
root 13 0.0 0.0 0 0 ? S Jul21 0:00 [rcuob/0]
root 14 0.0 0.0 0 0 ? S Jul21 0:00 [rcuob/1]
root 15 0.0 0.0 0 0 ? S Jul21 0:00 [rcuob/2]
root 16 0.0 0.0 0 0 ? S Jul21 0:00 [rcuob/3]
root 17 0.0 0.0 0 0 ? S Jul21 0:13 [migration/0]
root 18 0.0 0.0 0 0 ? S Jul21 0:08 [watchdog/0]
root 19 0.0 0.0 0 0 ? S Jul21 0:07 [watchdog/1]
root 20 0.0 0.0 0 0 ? S Jul21 0:13 [migration/1]
root 21 0.0 0.0 0 0 ? S Jul21 1:03 [ksoftirqd/1]
root 23 0.0 0.0 0 0 ? S< Jul21 0:00 [kworker/1:0H]
root 24 0.0 0.0 0 0 ? S Jul21 0:07 [watchdog/2]
root 25 0.0 0.0 0 0 ? S Jul21 0:23 [migration/2]
root 26 0.0 0.0 0 0 ? S Jul21 1:01 [ksoftirqd/2]
root 28 0.0 0.0 0 0 ? S< Jul21 0:00 [kworker/2:0H]
root 29 0.0 0.0 0 0 ? S Jul21 0:07 [watchdog/3]
root 30 0.0 0.0 0 0 ? S Jul21 0:23 [migration/3]
root 31 0.0 0.0 0 0 ? S Jul21 1:03 [ksoftirqd/3]
root 32 0.0 0.0 0 0 ? S Jul21 0:00 [kworker/3:0]
root 33 0.0 0.0 0 0 ? S< Jul21 0:00 [kworker/3:0H]
root 34 0.0 0.0 0 0 ? S< Jul21 0:00 [khelper]
root 35 0.0 0.0 0 0 ? S Jul21 0:00 [kdevtmpfs]
root 36 0.0 0.0 0 0 ? S< Jul21 0:00 [netns]
root 37 0.0 0.0 0 0 ? S< Jul21 0:00 [writeback]
root 38 0.0 0.0 0 0 ? S< Jul21 0:00 [kintegrityd]
root 39 0.0 0.0 0 0 ? S< Jul21 0:00 [bioset]
root 41 0.0 0.0 0 0 ? S< Jul21 0:00 [kblockd]
root 42 0.0 0.0 0 0 ? S< Jul21 0:00 [ata_sff]
root 43 0.0 0.0 0 0 ? S Jul21 0:00 [khubd]
root 44 0.0 0.0 0 0 ? S< Jul21 0:00 [md]
root 45 0.0 0.0 0 0 ? S< Jul21 0:00 [devfreq_wq]
root 46 0.0 0.0 0 0 ? S Jul21 18:51 [kworker/0:1]
root 47 0.0 0.0 0 0 ? S Jul21 0:00 [kworker/1:1]
root 48 0.0 0.0 0 0 ? S Jul21 1:14 [kworker/2:1]
root 50 0.0 0.0 0 0 ? S Jul21 0:01 [khungtaskd]
root 51 0.4 0.0 0 0 ? S Jul21 95:51 [kswapd0]
root 52 0.0 0.0 0 0 ? SN Jul21 0:00 [ksmd]
root 53 0.0 0.0 0 0 ? SN Jul21 0:28 [khugepaged]
root 54 0.0 0.0 0 0 ? S Jul21 0:00 [fsnotify_mark]
root 55 0.0 0.0 0 0 ? S Jul21 0:00 [ecryptfs-kthrea]
root 56 0.0 0.0 0 0 ? S< Jul21 0:00 [crypto]
root 68 0.0 0.0 0 0 ? S< Jul21 0:00 [kthrotld]
root 70 0.0 0.0 0 0 ? S Jul21 0:00 [scsi_eh_0]
root 71 0.0 0.0 0 0 ? S Jul21 0:00 [scsi_eh_1]
root 92 0.0 0.0 0 0 ? S< Jul21 0:00 [deferwq]
root 93 0.0 0.0 0 0 ? S< Jul21 0:00 [charger_manager]
root 94 0.0 0.0 0 0 ? S Jul21 1:05 [kworker/1:2]
root 95 0.0 0.0 0 0 ? S Jul21 1:08 [kworker/3:2]
root 149 0.0 0.0 0 0 ? S< Jul21 0:00 [kpsmoused]
root 155 0.0 0.0 0 0 ? S Jul21 3:39 [jbd2/sda1-8]
root 156 0.0 0.0 0 0 ? S< Jul21 0:00 [ext4-rsv-conver]
root 316 0.0 0.0 0 0 ? S Jul21 1:28 [jbd2/sda3-8]
root 317 0.0 0.0 0 0 ? S< Jul21 0:00 [ext4-rsv-conver]
root 336 0.0 0.0 19476 512 ? S Jul21 0:07 upstart-udev-bridge --daemon
root 342 0.0 0.0 51228 468 ? Ss Jul21 0:00 /lib/systemd/systemd-udevd --daemon
root 565 0.0 0.0 0 0 ? S< Jul21 0:00 [kmemstick]
root 745 0.0 0.0 15364 252 ? S Jul21 0:06 upstart-socket-bridge --daemon
root 770 0.0 0.0 0 0 ? S< Jul21 0:00 [cfg80211]
root 818 0.0 0.0 0 0 ? S< Jul21 0:00 [hd-audio0]
root 850 0.0 0.0 23420 80 ? Ss Jul21 0:11 rpcbind
root 853 0.0 0.0 0 0 ? S Jul21 0:00 [kworker/2:2]
statd 872 0.0 0.0 21544 8 ? Ss Jul21 0:00 rpc.statd -L
root 953 0.0 0.0 0 0 ? S< Jul21 0:00 [rpciod]
root 1097 0.0 0.0 15276 364 ? S Jul21 0:06 upstart-file-bridge --daemon
message+ 1152 0.0 0.0 39508 1448 ? Ss Jul21 1:40 dbus-daemon --system --fork
root 1157 0.0 0.0 23480 0 ? Ss Jul21 0:00 rpc.idmapd
root 1186 0.0 0.0 43736 984 ? Ss Jul21 1:13 /lib/systemd/systemd-logind
syslog 1212 0.0 0.0 256228 688 ? Ssl Jul21 1:41 rsyslogd
root 1714 0.0 0.0 0 0 ? S Jul21 0:00 [kauditd]
root 1791 0.0 0.0 279832 1312 ? Sl Jul21 1:16 /usr/lib/policykit-1/polkitd --no-debug
root 4880 0.0 0.0 14540 4 tty4 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty4
root 4883 0.0 0.0 14540 4 tty5 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty5
root 4890 0.0 0.0 14540 4 tty2 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty2
root 4891 0.0 0.0 14540 4 tty3 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty3
root 4894 0.0 0.0 14540 4 tty6 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty6
root 4919 0.0 0.0 4368 4 ? Ss Jul21 0:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket
root 4921 0.0 0.0 61364 364 ? Ss Jul21 0:00 /usr/sbin/sshd -D
root 4947 0.0 0.0 23656 168 ? Ss Jul21 0:14 cron
root 5077 0.0 0.0 25324 648 ? Ss Jul21 0:34 /usr/sbin/hostapd -B -P /var/run/hostapd.pid /etc/hostapd/hostapd.conf
root 5192 0.0 0.0 0 0 ? S Jul21 0:00 [lockd]
root 5224 0.0 0.0 24048 4 ? Ss Jul21 0:00 /usr/sbin/rpc.mountd --manage-gids
root 6160 0.0 0.0 14540 4 tty1 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty1
root 6681 0.0 0.0 10232 0 ? Ss 11:07 0:00 dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases eth0
root 9452 0.0 0.0 0 0 ? S 11:28 0:00 [kworker/u8:1]
root 9943 0.0 0.0 0 0 ? S 11:42 0:00 [kworker/u8:0]
daemon 11290 0.0 0.0 19140 164 ? Ss 11:59 0:00 atd
root 11301 0.2 0.1 26772 3436 ? Ss 12:00 0:01 /usr/sbin/openvpn --writepid /var/run/openvpn.zanview.com.pid --status /var/run/openvpn.zanview.com.status 10 --cd /etc/openvpn --config /etc/openvpn/zanvie
root 11335 0.0 0.0 0 0 ? S 12:01 0:00 [kworker/0:2]
root 11336 0.0 0.2 107692 4248 ? Ss 12:01 0:00 sshd: deployer [priv]
deployer 11384 0.0 0.0 107692 1948 ? S 12:01 0:00 sshd: deployer@pts/0
deployer 11385 0.0 0.1 19972 2392 pts/0 Ss+ 12:01 0:00 -bash
root 11502 0.0 0.2 107692 4252 ? Ss 12:01 0:00 sshd: deployer [priv]
deployer 11552 0.0 0.0 107692 1952 ? S 12:01 0:00 sshd: deployer@pts/2
deployer 11553 0.0 0.1 19972 2388 pts/2 Ss 12:01 0:00 -bash
root 11841 0.0 0.2 107692 4248 ? Ss 12:02 0:00 sshd: deployer [priv]
deployer 11889 0.0 0.1 108008 2280 ? S 12:02 0:00 sshd: deployer@pts/3
deployer 11890 0.0 0.1 19972 2388 pts/3 Ss 12:02 0:00 -bash
root 12007 0.0 0.1 67796 2136 pts/3 S 12:02 0:00 sudo su -
root 12008 0.0 0.0 67376 2016 pts/3 S 12:02 0:00 su -
root 12009 0.0 0.1 18308 2228 pts/3 S+ 12:02 0:00 -su
root 12112 0.0 0.1 67796 2136 pts/2 S 12:08 0:00 sudo su -
root 12113 0.0 0.0 67376 2012 pts/2 S 12:08 0:00 su -
root 12114 0.0 0.1 18308 2192 pts/2 S 12:08 0:00 -su
root 12180 0.0 0.0 15568 1160 pts/2 R+ 12:09 0:00 ps aux
root 25529 0.0 0.0 0 0 ? S< Jul28 0:09 [kworker/u9:1]
root 28305 0.0 0.0 0 0 ? S< Aug05 0:00 [kworker/u9:2]
```
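Summing the RSS column (field 6) over this table confirms how little resident memory userspace holds; a sketch over three representative rows from the listing above:

```python
rows = """\
root         1  0.0  0.0  33644  1988 ?  Ss  Jul21  2:29 /sbin/init
syslog    1212  0.0  0.0 256228   688 ?  Ssl Jul21  1:41 rsyslogd
root      1791  0.0  0.0 279832  1312 ?  Sl  Jul21  1:16 polkitd
""".splitlines()

# RSS is the 6th whitespace-separated field of each ps row.
rss_kb = sum(int(line.split()[5]) for line in rows)
print(f"RSS for these rows: {rss_kb} kB")  # → 3988 kB; the full table sums to only a few MB
```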
PS Mem Output
=============
I also tried the ps\_mem.py from <https://github.com/pixelb/ps_mem>
```
root@XanBox:~/ps_mem# python ps_mem.py
Private + Shared = RAM used Program
144.0 KiB + 9.5 KiB = 153.5 KiB acpid
172.0 KiB + 29.5 KiB = 201.5 KiB atd
248.0 KiB + 35.0 KiB = 283.0 KiB cron
272.0 KiB + 84.0 KiB = 356.0 KiB upstart-file-bridge
276.0 KiB + 84.5 KiB = 360.5 KiB upstart-socket-bridge
280.0 KiB + 102.5 KiB = 382.5 KiB upstart-udev-bridge
332.0 KiB + 54.5 KiB = 386.5 KiB rpc.idmapd
368.0 KiB + 91.5 KiB = 459.5 KiB rpcbind
388.0 KiB + 251.5 KiB = 639.5 KiB systemd-logind
668.0 KiB + 43.5 KiB = 711.5 KiB hostapd
576.0 KiB + 157.5 KiB = 733.5 KiB systemd-udevd
676.0 KiB + 65.5 KiB = 741.5 KiB rpc.mountd
604.0 KiB + 163.0 KiB = 767.0 KiB rpc.statd
908.0 KiB + 62.5 KiB = 970.5 KiB dbus-daemon [updated]
932.0 KiB + 117.0 KiB = 1.0 MiB getty [updated] (6)
1.0 MiB + 69.5 KiB = 1.1 MiB openvpn
1.0 MiB + 137.0 KiB = 1.2 MiB polkitd
1.5 MiB + 202.0 KiB = 1.7 MiB htop
1.4 MiB + 306.5 KiB = 1.7 MiB whoopsie
1.4 MiB + 279.0 KiB = 1.7 MiB su (3)
1.5 MiB + 268.5 KiB = 1.8 MiB sudo (3)
2.2 MiB + 11.5 KiB = 2.3 MiB dhclient
3.9 MiB + 741.0 KiB = 4.6 MiB bash (6)
5.3 MiB + 254.5 KiB = 5.5 MiB init
2.7 MiB + 3.3 MiB = 6.1 MiB sshd (7)
18.1 MiB + 56.5 KiB = 18.2 MiB rsyslogd
---------------------------------
53.7 MiB
=================================
```
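Comparing this userspace total with the `-/+ buffers/cache` figure isolates how much memory must be held kernel-side (a sketch with the two numbers above):

```python
userspace_mib = 53.7      # ps_mem total above
really_used_mib = 1772    # free's -/+ buffers/cache "used" figure above

kernel_side_mib = really_used_mib - userspace_mib
print(f"{kernel_side_mib:.1f} MiB unexplained")  # → 1718.3 MiB, matching the meminfo gap
```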
Slabtop Output
==============
I also tried slabtop:
```
root@XanBox:~# slabtop -sc
Active / Total Objects (% used) : 131306 / 137558 (95.5%)
Active / Total Slabs (% used) : 3888 / 3888 (100.0%)
Active / Total Caches (% used) : 63 / 105 (60.0%)
Active / Total Size (% used) : 27419.31K / 29580.53K (92.7%)
Minimum / Average / Maximum Object : 0.01K / 0.21K / 8.00K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
8288 7975 96% 0.57K 296 28 4736K inode_cache
14259 12858 90% 0.19K 679 21 2716K dentry
2384 1943 81% 0.96K 149 16 2384K ext4_inode_cache
20916 20494 97% 0.11K 581 36 2324K sysfs_dir_cache
624 554 88% 2.00K 39 16 1248K kmalloc-2048
195 176 90% 5.98K 39 5 1248K task_struct
6447 6387 99% 0.19K 307 21 1228K kmalloc-192
2128 1207 56% 0.55K 76 28 1216K radix_tree_node
768 761 99% 1.00K 48 16 768K kmalloc-1024
176 155 88% 4.00K 22 8 704K kmalloc-4096
1100 1100 100% 0.63K 44 25 704K proc_inode_cache
1008 1008 100% 0.66K 42 24 672K shmem_inode_cache
2640 2262 85% 0.25K 165 16 660K kmalloc-256
300 300 100% 2.06K 20 15 640K sighand_cache
5967 5967 100% 0.10K 153 39 612K buffer_head
1152 1053 91% 0.50K 72 16 576K kmalloc-512
3810 3810 100% 0.13K 127 30 508K ext4_allocation_context
60 60 100% 8.00K 15 4 480K kmalloc-8192
225 225 100% 2.06K 15 15 480K idr_layer_cache
7616 7324 96% 0.06K 119 64 476K kmalloc-64
700 700 100% 0.62K 28 25 448K sock_inode_cache
252 252 100% 1.75K 14 18 448K TCP
8925 8544 95% 0.05K 105 85 420K shared_policy_node
3072 2351 76% 0.12K 96 32 384K kmalloc-128
360 360 100% 1.06K 12 30 384K signal_cache
432 337 78% 0.88K 24 18 384K mm_struct
```
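The header of that output already rules slab out: parsing the "Active / Total Size" line shows the whole slab allocator holds under 30MB (a small parsing sketch):

```python
line = "Active / Total Size (% used) : 27419.31K / 29580.53K (92.7%)"

# Grab the two sizes between the colon and the trailing percentage.
active_k, total_k = (float(part.strip().rstrip("K"))
                     for part in line.split(":")[1].split("(")[0].split("/"))
print(f"slab total ≈ {total_k / 1024:.1f} MB")  # ≈ 28.9 MB, nowhere near 1.7GB
```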
Other
=====
I also tried scanning for a rootkit with rkhunter - it found nothing. And I tried to sync and drop the caches with:
```
sync; sync; sync; echo 3 > /proc/sys/vm/drop_caches
```
It made no difference also.
I also tried to force swap or disable swap with:
```
sudo sysctl -w vm.swappiness=100
sudo swapoff /dev/sda2
```
I also tried using htop and sorting by memory and it is not showing where the memory is going either. The kernel version is Linux 3.13.0-40-generic #69-Ubuntu SMP.
Dmesg output: <http://pastie.org/9558255>
smem output: <http://pastie.org/9558290>
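One way to narrow a leak like this down is to snapshot `/proc/meminfo` periodically and diff the counters: if nothing tracked grows while MemFree falls, the memory is going to untracked kernel allocations (e.g. a driver calling `alloc_pages` directly, which kmemleak or `/proc/vmallocinfo` might catch). A sketch of the diff step, using synthetic snapshots:

```python
def growing_fields(before: dict, after: dict, min_kb: int = 1024) -> dict:
    """Return meminfo counters that grew by at least min_kb between snapshots."""
    return {k: after[k] - before[k]
            for k in before if k in after and after[k] - before[k] >= min_kb}

# Synthetic snapshots an hour apart: MemFree fell 100MB yet no counter grew.
before = {"MemFree": 500000, "Slab": 33000, "AnonPages": 14000}
after  = {"MemFree": 400000, "Slab": 33500, "AnonPages": 14100}
print(growing_fields(before, after))  # → {} : the leak is invisible to meminfo too
```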
Conclusion
==========
What is going on? - Where is all the memory going? - How do I find out? | 2014/08/06 | [
"https://superuser.com/questions/793192",
"https://superuser.com",
"https://superuser.com/users/50300/"
] | Did you change the [swappiness](http://en.wikipedia.org/wiki/Swappiness) of your kernel manually, or disable it?
You can check your current swappiness level with
```
cat /proc/sys/vm/swappiness
```
You could try to force your kernel to swap aggressively with
```
sudo sysctl -w vm.swappiness=100
```
If this reduces your problems, find a good value between 1 and 100 that fits your requirements. | You are not quite right – yes, your `free -m` command shows 220MB free, but it also shows that 1,771MB is used as buffers.
Buffers and Cached are memory the kernel uses to optimize access to slow storage, usually disks.
So you should consider all memory marked as buffers to be free memory, because the kernel can take it back whenever it is required.
See: <https://serverfault.com/questions/23433/in-linux-what-is-the-difference-between-buffers-and-cache-reported-by-the-f> |
793192 | August 2015 Summary
===================
Please note, this is still happening. This is **not** related to linuxatemyram.com - the memory is not used for disk cache/buffers. This is what it looks like in NewRelic - the system leaks all the memory, uses up all swap space and then crashes. In this screenshot I rebooted the server before it crashed:
[](https://i.stack.imgur.com/vIkEa.png)
It is impossible to identify the source of the leak using common userspace tools. There is now a chat room to discuss this issue: <http://chat.stackexchange.com/rooms/27309/invisible-memory-leak-on-linux>
The only way to recover the "missing" memory appears to be rebooting the server. This is a long-standing issue, reproduced on Ubuntu Server 14.04, 14.10 and 15.04.
Top
===
The memory use does not show in top and cannot be recovered even after killing just about every process (excluding things like kernel processes and ssh). Look at the "cached Mem", "buffers" and "free" fields in top, they are not using up the memory, the memory used is "missing" and unrecoverable without a reboot.
Attempting to use this "missing" memory causes the server to swap, slow to a crawl and eventually freeze.
Here is a full list of all running processes:
```
root 6681 0.0 0.0 10232 0 ? Ss 11:07 0:00 dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases eth0
root 9452 0.0 0.0 0 0 ? S 11:28 0:00 [kworker/u8:1]
root 9943 0.0 0.0 0 0 ? S 11:42 0:00 [kworker/u8:0]
daemon 11290 0.0 0.0 19140 164 ? Ss 11:59 0:00 atd
root 11301 0.2 0.1 26772 3436 ? Ss 12:00 0:01 /usr/sbin/openvpn --writepid /var/run/openvpn.zanview.com.pid --status /var/run/openvpn.zanview.com.status 10 --cd /etc/openvpn --config /etc/openvpn/zanvie
root 11335 0.0 0.0 0 0 ? S 12:01 0:00 [kworker/0:2]
root 11336 0.0 0.2 107692 4248 ? Ss 12:01 0:00 sshd: deployer [priv]
deployer 11384 0.0 0.0 107692 1948 ? S 12:01 0:00 sshd: deployer@pts/0
deployer 11385 0.0 0.1 19972 2392 pts/0 Ss+ 12:01 0:00 -bash
root 11502 0.0 0.2 107692 4252 ? Ss 12:01 0:00 sshd: deployer [priv]
deployer 11552 0.0 0.0 107692 1952 ? S 12:01 0:00 sshd: deployer@pts/2
deployer 11553 0.0 0.1 19972 2388 pts/2 Ss 12:01 0:00 -bash
root 11841 0.0 0.2 107692 4248 ? Ss 12:02 0:00 sshd: deployer [priv]
deployer 11889 0.0 0.1 108008 2280 ? S 12:02 0:00 sshd: deployer@pts/3
deployer 11890 0.0 0.1 19972 2388 pts/3 Ss 12:02 0:00 -bash
root 12007 0.0 0.1 67796 2136 pts/3 S 12:02 0:00 sudo su -
root 12008 0.0 0.0 67376 2016 pts/3 S 12:02 0:00 su -
root 12009 0.0 0.1 18308 2228 pts/3 S+ 12:02 0:00 -su
root 12112 0.0 0.1 67796 2136 pts/2 S 12:08 0:00 sudo su -
root 12113 0.0 0.0 67376 2012 pts/2 S 12:08 0:00 su -
root 12114 0.0 0.1 18308 2192 pts/2 S 12:08 0:00 -su
root 12180 0.0 0.0 15568 1160 pts/2 R+ 12:09 0:00 ps aux
root 25529 0.0 0.0 0 0 ? S< Jul28 0:09 [kworker/u9:1]
root 28305 0.0 0.0 0 0 ? S< Aug05 0:00 [kworker/u9:2]
```
PS Mem Output
=============
I also tried the ps\_mem.py from <https://github.com/pixelb/ps_mem>
```
root@XanBox:~/ps_mem# python ps_mem.py
Private + Shared = RAM used Program
144.0 KiB + 9.5 KiB = 153.5 KiB acpid
172.0 KiB + 29.5 KiB = 201.5 KiB atd
248.0 KiB + 35.0 KiB = 283.0 KiB cron
272.0 KiB + 84.0 KiB = 356.0 KiB upstart-file-bridge
276.0 KiB + 84.5 KiB = 360.5 KiB upstart-socket-bridge
280.0 KiB + 102.5 KiB = 382.5 KiB upstart-udev-bridge
332.0 KiB + 54.5 KiB = 386.5 KiB rpc.idmapd
368.0 KiB + 91.5 KiB = 459.5 KiB rpcbind
388.0 KiB + 251.5 KiB = 639.5 KiB systemd-logind
668.0 KiB + 43.5 KiB = 711.5 KiB hostapd
576.0 KiB + 157.5 KiB = 733.5 KiB systemd-udevd
676.0 KiB + 65.5 KiB = 741.5 KiB rpc.mountd
604.0 KiB + 163.0 KiB = 767.0 KiB rpc.statd
908.0 KiB + 62.5 KiB = 970.5 KiB dbus-daemon [updated]
932.0 KiB + 117.0 KiB = 1.0 MiB getty [updated] (6)
1.0 MiB + 69.5 KiB = 1.1 MiB openvpn
1.0 MiB + 137.0 KiB = 1.2 MiB polkitd
1.5 MiB + 202.0 KiB = 1.7 MiB htop
1.4 MiB + 306.5 KiB = 1.7 MiB whoopsie
1.4 MiB + 279.0 KiB = 1.7 MiB su (3)
1.5 MiB + 268.5 KiB = 1.8 MiB sudo (3)
2.2 MiB + 11.5 KiB = 2.3 MiB dhclient
3.9 MiB + 741.0 KiB = 4.6 MiB bash (6)
5.3 MiB + 254.5 KiB = 5.5 MiB init
2.7 MiB + 3.3 MiB = 6.1 MiB sshd (7)
18.1 MiB + 56.5 KiB = 18.2 MiB rsyslogd
---------------------------------
53.7 MiB
=================================
```
Slabtop Output
==============
I also tried slabtop:
```
root@XanBox:~# slabtop -sc
Active / Total Objects (% used) : 131306 / 137558 (95.5%)
Active / Total Slabs (% used) : 3888 / 3888 (100.0%)
Active / Total Caches (% used) : 63 / 105 (60.0%)
Active / Total Size (% used) : 27419.31K / 29580.53K (92.7%)
Minimum / Average / Maximum Object : 0.01K / 0.21K / 8.00K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
8288 7975 96% 0.57K 296 28 4736K inode_cache
14259 12858 90% 0.19K 679 21 2716K dentry
2384 1943 81% 0.96K 149 16 2384K ext4_inode_cache
20916 20494 97% 0.11K 581 36 2324K sysfs_dir_cache
624 554 88% 2.00K 39 16 1248K kmalloc-2048
195 176 90% 5.98K 39 5 1248K task_struct
6447 6387 99% 0.19K 307 21 1228K kmalloc-192
2128 1207 56% 0.55K 76 28 1216K radix_tree_node
768 761 99% 1.00K 48 16 768K kmalloc-1024
176 155 88% 4.00K 22 8 704K kmalloc-4096
1100 1100 100% 0.63K 44 25 704K proc_inode_cache
1008 1008 100% 0.66K 42 24 672K shmem_inode_cache
2640 2262 85% 0.25K 165 16 660K kmalloc-256
300 300 100% 2.06K 20 15 640K sighand_cache
5967 5967 100% 0.10K 153 39 612K buffer_head
1152 1053 91% 0.50K 72 16 576K kmalloc-512
3810 3810 100% 0.13K 127 30 508K ext4_allocation_context
60 60 100% 8.00K 15 4 480K kmalloc-8192
225 225 100% 2.06K 15 15 480K idr_layer_cache
7616 7324 96% 0.06K 119 64 476K kmalloc-64
700 700 100% 0.62K 28 25 448K sock_inode_cache
252 252 100% 1.75K 14 18 448K TCP
8925 8544 95% 0.05K 105 85 420K shared_policy_node
3072 2351 76% 0.12K 96 32 384K kmalloc-128
360 360 100% 1.06K 12 30 384K signal_cache
432 337 78% 0.88K 24 18 384K mm_struct
```
Other
=====
I also tried scanning for a rootkit with rkhunter - it found nothing. I also tried to sync and drop the caches with:
```
sync; sync; sync; echo 3 > /proc/sys/vm/drop_caches
```
It also made no difference.
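The effect of that drop can be quantified by diffing `MemFree` around it — a minimal sketch (the `memfree_kb` helper name is ours, and the drop itself needs root). If the "missing" memory really were cache or buffers, the reclaimed figure would be large:

```
# Diff MemFree across a cache drop; a small delta means the used memory
# is not reclaimable page cache.
memfree_kb() { awk '/^MemFree:/ {print $2}' /proc/meminfo; }
before=$(memfree_kb)
sync
echo 3 > /proc/sys/vm/drop_caches 2>/dev/null || echo 'need root to drop caches' >&2
after=$(memfree_kb)
echo "reclaimed: $(( after - before )) kB"
```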
I also tried to force swap or disable swap with:
```
sudo sysctl -w vm.swappiness=100
sudo swapoff /dev/sda2
```
I also tried using htop and sorting by memory and it is not showing where the memory is going either. The kernel version is Linux 3.13.0-40-generic #69-Ubuntu SMP.
Dmesg output: <http://pastie.org/9558255>
smem output: <http://pastie.org/9558290>
Conclusion
==========
What is going on? - Where is all the memory going? - How do I find out? | 2014/08/06 | [
"https://superuser.com/questions/793192",
"https://superuser.com",
"https://superuser.com/users/50300/"
] | Story
=====
I can reproduce your issue using [ZFS on Linux](http://zfsonlinux.org/).
Here is a server called `node51` with `20GB` of RAM. I marked `16GiB` of RAM to be allocatable to the [ZFS adaptive replacement cache (ARC)](http://open-zfs.org/wiki/Performance_tuning#Adaptive_Replacement_Cache):
```
root@node51 [~]# echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max
root@node51 [~]# grep c_max /proc/spl/kstat/zfs/arcstats
c_max 4 17179869184
```
Then, I read a `45GiB` file using [Pipe Viewer](http://www.ivarch.com/programs/pv.shtml) in my ZFS pool `zeltik` to fill up the ARC:
```
root@node51 [~]# pv /zeltik/backup-backups/2014.04.11.squashfs > /dev/zero
45GB 0:01:20 [ 575MB/s] [==================================>] 100%
```
Now look at the free memory:
```
root@node51 [~]# free -m
total used free shared buffers cached
Mem: 20013 19810 203 1 51 69
-/+ buffers/cache: 19688 324
Swap: 7557 0 7556
```
Look!

* `51MiB` in buffers
* `69MiB` in cache
* `120MiB` in buffers and cache combined
* `19688MiB` of RAM in use, including buffers and cache
* `19568MiB` of RAM in use, excluding buffers and cache
The Python script that you referenced reports that applications are only using a small amount of RAM:
```
root@node51 [~]# python ps_mem.py
Private + Shared = RAM used Program
148.0 KiB + 54.0 KiB = 202.0 KiB acpid
176.0 KiB + 47.0 KiB = 223.0 KiB swapspace
184.0 KiB + 51.0 KiB = 235.0 KiB atd
220.0 KiB + 57.0 KiB = 277.0 KiB rpc.idmapd
304.0 KiB + 62.0 KiB = 366.0 KiB irqbalance
312.0 KiB + 64.0 KiB = 376.0 KiB sftp-server
308.0 KiB + 89.0 KiB = 397.0 KiB rpcbind
300.0 KiB + 104.5 KiB = 404.5 KiB cron
368.0 KiB + 99.0 KiB = 467.0 KiB upstart-socket-bridge
560.0 KiB + 180.0 KiB = 740.0 KiB systemd-logind
724.0 KiB + 93.0 KiB = 817.0 KiB dbus-daemon
720.0 KiB + 136.0 KiB = 856.0 KiB systemd-udevd
912.0 KiB + 118.5 KiB = 1.0 MiB upstart-udev-bridge
920.0 KiB + 180.0 KiB = 1.1 MiB rpc.statd (2)
1.0 MiB + 129.5 KiB = 1.1 MiB screen
1.1 MiB + 84.5 KiB = 1.2 MiB upstart-file-bridge
960.0 KiB + 452.0 KiB = 1.4 MiB getty (6)
1.6 MiB + 143.0 KiB = 1.7 MiB init
5.1 MiB + 1.5 MiB = 6.5 MiB bash (3)
5.7 MiB + 5.2 MiB = 10.9 MiB sshd (8)
11.7 MiB + 322.0 KiB = 12.0 MiB glusterd
27.3 MiB + 99.0 KiB = 27.4 MiB rsyslogd
67.4 MiB + 453.0 KiB = 67.8 MiB glusterfsd (2)
---------------------------------
137.4 MiB
=================================
```
**`19568MiB - 137.4MiB ≈ 19431MiB` of unaccounted RAM**
Explanation
===========
The `120MiB` of buffers and cache used that you saw in the story above account for the kernel's efficient behavior of caching data sent to or received from an external device.
>
> The first row, labeled *Mem*, displays physical memory utilization,
> including the amount of memory allocated to buffers and caches. A
> buffer, also called *buffer memory*, is usually defined as a portion of
> memory that is set aside as a temporary holding place for data that is
> being sent to or received from an external device, such as a HDD,
> keyboard, printer or network.
>
>
> The second line of data, which begins with *-/+ buffers/cache*, shows
> the amount of physical memory currently devoted to system *buffer
> cache*. This is particularly meaningful with regard to application
> programs, as all data accessed from files on the system that are
> performed through the use of *read()* and *write()* *system calls* pass
> through this cache. This cache can greatly speed up access to data by
> reducing or eliminating the need to read from or write to the HDD or
> other disk.
>
>
>
Source: <http://www.linfo.org/free.html>
Now how do we account for the missing `19431MiB`?
In the `free -m` output above, the `19688MiB` "*used*" in "*-/+ buffers/cache*" comes from this formula:
```
(kb_main_used) - (buffers_plus_cached) =
(kb_main_total - kb_main_free) - (kb_main_buffers + kb_main_cached)
kb_main_total: MemTotal from /proc/meminfo
kb_main_free: MemFree from /proc/meminfo
kb_main_buffers: Buffers from /proc/meminfo
kb_main_cached: Cached from /proc/meminfo
```
Source: [procps/free.c](http://procps.cvs.sourceforge.net/viewvc/procps/procps/free.c?revision=1.2&view=markup) and [procps/proc/sysinfo.c](http://procps.cvs.sourceforge.net/viewvc/procps/procps/proc/sysinfo.c?revision=1.41&view=markup)
(If you do the numbers based on my `free -m` output, you'll notice that `2MiB` aren't accounted for, but that's because of rounding errors introduced by this code: `#define S(X) ( ((unsigned long long)(X) << 10) >> shift)`)
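Plugging the MiB values from the `free -m` output above into that formula reproduces the figures (a quick sketch; the ≈2 MiB difference from free's printed `19688` is exactly the rounding just described):

```
# MiB values taken from the free -m output above
total=20013; freemem=203; buffers=51; cached=69
used=$(( total - freemem ))                    # kb_main_total - kb_main_free
used_minus_bc=$(( used - buffers - cached ))   # the "-/+ buffers/cache" used figure
echo "$used $used_minus_bc"                    # free itself showed 19810 and 19688
```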
The numbers don't add up in `/proc/meminfo`, either (I didn't record `/proc/meminfo` when I ran `free -m`, but we can see from your question that `/proc/meminfo` doesn't show where the missing RAM is), so we can conclude from the above that `/proc/meminfo` doesn't tell the whole story.
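One way to make `/proc/meminfo` confess at least part of the story is to sum its major consumers and compare against `MemTotal` — a rough sketch (the `meminfo_gap` name and the field selection are ours, and the list is deliberately not exhaustive). A large gap is the signature of memory held outside the usual counters, e.g. by a kernel module:

```
# Sum the big /proc/meminfo consumers; a large gap points at memory the
# kernel holds outside these counters.
meminfo_gap() {
  awk '
    /^MemTotal:/ { total = $2 }
    /^(MemFree|Buffers|Cached|SwapCached|AnonPages|Slab|KernelStack|PageTables):/ { sum += $2 }
    END { printf "accounted: %d kB, gap: %d kB\n", sum, total - sum }
  ' "${1:-/proc/meminfo}"
}
meminfo_gap
```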
In my testing conditions, I know as a control that ZFS on Linux is responsible for the high RAM usage. I told its ARC that it could use up to `16GiB` of the server's RAM.
ZFS on Linux isn't a process. It's a kernel module.
From what I've found so far, the RAM usage of a kernel module wouldn't show up using process information tools because the module isn't a process.
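ZFS is a good example: the ARC publishes its own counters, so its footprint can be read even though no process owns it (a sketch assuming the `zfs` module is loaded; the `arc_size` helper name is ours):

```
# arcstats rows are "name type value"; the "size" row is the ARC's current
# footprint in bytes (same file as the c_max check earlier).
arc_size() { awk '$1 == "size" {print $3}' "${1:-/proc/spl/kstat/zfs/arcstats}"; }
# e.g. on a ZFS box: echo "ARC: $(( $(arc_size) / 1048576 )) MiB"
```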
Troubleshooting
===============
Unfortunately, I don't know enough about Linux to offer you a way to build a list of how much RAM non-process components (like the kernel and its modules) are using.
At this point, we can speculate, guess, and check.
You provided a `dmesg` output. Well-designed kernel modules would log some of their details to `dmesg`.
After looking through `dmesg`, one item stood out to me: `FS-Cache`
`FS-Cache` is part of the `cachefiles` kernel module and relates to the package `cachefilesd` on Debian and Red Hat Enterprise Linux.
Perhaps some time ago, you configured `FS-Cache` on a RAM disk to reduce the impact of network I/O as your server analyzes the video data.
Try disabling any suspicious kernel modules that could be eating up RAM. They can probably be disabled with [`blacklist`](https://wiki.debian.org/KernelModuleBlacklisting) in `/etc/modprobe.d/`, followed by a `sudo update-initramfs -u` (commands and locations may vary by Linux distribution).
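For example, to blacklist `cachefiles` (used here purely because it is the suspect named above; `CONF_DIR` is parameterized so the sketch can be run without root):

```
CONF_DIR=${CONF_DIR:-$(mktemp -d)}   # on a real system: /etc/modprobe.d
printf 'blacklist cachefiles\n' > "$CONF_DIR/blacklist-cachefiles.conf"
cat "$CONF_DIR/blacklist-cachefiles.conf"
# Then rebuild the initramfs so the blacklist applies at boot (Debian/Ubuntu):
#   sudo update-initramfs -u && sudo reboot
```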
Conclusion
==========
A memory leak is eating up `8MB/hr` of your RAM and won't release the RAM, seemingly no matter what you do. I was not able to determine the source of your memory leak based on the information that you provided, nor was I able to offer a way to find that memory leak.
Someone who is more experienced with Linux than I will need to provide input on how we can determine where the "other" RAM usage is going.
I have started a bounty on this question to see if we can get a better answer than "speculate, guess, and check". | Did you change the [swappiness](http://en.wikipedia.org/wiki/Swappiness) of your kernel manually, or disable it?
You can check your current swappiness level with
```
cat /proc/sys/vm/swappiness
```
You could try to force your kernel to swap aggressively with
```
sudo sysctl -w vm.swappiness=100
```
If this reduces your problems, find a good value between 1 and 100 that fits your requirements. |
793,192 | August 2015 Summary
===================
Please note, this is still happening. This is **not** related to linuxatemyram.com - the memory is not used for disk cache/buffers. This is what it looks like in NewRelic - the system leaks all the memory, uses up all swap space and then crashes. In this screenshot I rebooted the server before it crashed:
[](https://i.stack.imgur.com/vIkEa.png)
It is impossible to identify the source of the leak using common userspace tools. There is now a chat room to discuss this issue: <http://chat.stackexchange.com/rooms/27309/invisible-memory-leak-on-linux>
The only way to recover the "missing" memory appears to be rebooting the server. This has been a long-standing issue, reproduced in Ubuntu Server 14.04, 14.10 and 15.04.
Top
===
The memory use does not show in top and cannot be recovered even after killing just about every process (excluding things like kernel processes and ssh). Look at the "cached Mem", "buffers" and "free" fields in top, they are not using up the memory, the memory used is "missing" and unrecoverable without a reboot.
Attempting to use this "missing" memory causes the server to swap, slow to a crawl and eventually freeze.
```
root@XanBox:~# top -o +%MEM
top - 12:12:13 up 15 days, 20:39, 3 users, load average: 0.00, 0.06, 0.77
Tasks: 126 total, 1 running, 125 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 0.2 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.1 hi, 0.0 si, 0.0 st
KiB Mem: 2,040,256 total, 1,881,228 used, 159,028 free, 1,348 buffers
KiB Swap: 1,999,868 total, 27,436 used, 1,972,432 free. 67,228 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
11502 root 20 0 107692 4252 3240 S 0.0 0.2 0:00.06 sshd: deployer [priv]
11336 root 20 0 107692 4248 3240 S 0.0 0.2 0:00.06 sshd: deployer [priv]
11841 root 20 0 107692 4248 3240 S 0.0 0.2 0:00.06 sshd: deployer [priv]
11301 root 20 0 26772 3436 2688 S 0.7 0.2 0:01.30 /usr/sbin/openvpn --writepid /var/run/openvpn.zanview.com.pid --status /var/run/openvpn.zanview.com.status 10 --cd /etc/openvpn --config /etc/openvpn/z+
11385 deployer 20 0 19972 2392 1708 S 0.0 0.1 0:00.03 -bash
11553 deployer 20 0 19972 2388 1708 S 0.0 0.1 0:00.03 -bash
11890 deployer 20 0 19972 2388 1708 S 0.0 0.1 0:00.02 -bash
11889 deployer 20 0 108008 2280 944 S 0.0 0.1 0:00.25 sshd: deployer@pts/3
12009 root 20 0 18308 2228 1608 S 0.0 0.1 0:00.09 -su
12114 root 20 0 18308 2192 1564 S 0.0 0.1 0:00.04 -su
12007 root 20 0 67796 2136 1644 S 0.0 0.1 0:00.01 sudo su -
12112 root 20 0 67796 2136 1644 S 0.0 0.1 0:00.01 sudo su -
12008 root 20 0 67376 2016 1528 S 0.0 0.1 0:00.01 su -
12113 root 20 0 67376 2012 1528 S 0.0 0.1 0:00.01 su -
1 root 20 0 33644 1988 764 S 0.0 0.1 2:29.77 /sbin/init
11552 deployer 20 0 107692 1952 936 S 0.0 0.1 0:00.07 sshd: deployer@pts/2
11384 deployer 20 0 107692 1948 936 S 0.0 0.1 0:00.06 sshd: deployer@pts/0
12182 root 20 0 20012 1516 1012 R 0.7 0.1 0:00.08 top -o +%MEM
1152 message+ 20 0 39508 1448 920 S 0.0 0.1 1:40.01 dbus-daemon --system --fork
1791 root 20 0 279832 1312 816 S 0.0 0.1 1:16.18 /usr/lib/policykit-1/polkitd --no-debug
1186 root 20 0 43736 984 796 S 0.0 0.0 1:13.07 /lib/systemd/systemd-logind
1212 syslog 20 0 256228 688 184 S 0.0 0.0 1:41.29 rsyslogd
5077 root 20 0 25324 648 520 S 0.0 0.0 0:34.35 /usr/sbin/hostapd -B -P /var/run/hostapd.pid /etc/hostapd/hostapd.conf
336 root 20 0 19476 512 376 S 0.0 0.0 0:07.40 upstart-udev-bridge --daemon
342 root 20 0 51228 468 344 S 0.0 0.0 0:00.85 /lib/systemd/systemd-udevd --daemon
1097 root 20 0 15276 364 256 S 0.0 0.0 0:06.39 upstart-file-bridge --daemon
4921 root 20 0 61364 364 240 S 0.0 0.0 0:00.05 /usr/sbin/sshd -D
745 root 20 0 15364 252 180 S 0.0 0.0 0:06.51 upstart-socket-bridge --daemon
4947 root 20 0 23656 168 100 S 0.0 0.0 0:14.70 cron
11290 daemon 20 0 19140 164 0 S 0.0 0.0 0:00.00 atd
850 root 20 0 23420 80 16 S 0.0 0.0 0:11.00 rpcbind
872 statd 20 0 21544 8 4 S 0.0 0.0 0:00.00 rpc.statd -L
4880 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty4
4883 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty5
4890 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty2
4891 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty3
4894 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty6
4919 root 20 0 4368 4 0 S 0.0 0.0 0:00.00 acpid -c /etc/acpi/events -s /var/run/acpid.socket
5224 root 20 0 24048 4 0 S 0.0 0.0 0:00.00 /usr/sbin/rpc.mountd --manage-gids
6160 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty1
2 root 20 0 0 0 0 S 0.0 0.0 0:03.44 [kthreadd]
3 root 20 0 0 0 0 S 0.0 0.0 1:04.63 [ksoftirqd/0]
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kworker/0:0H]
7 root 20 0 0 0 0 S 0.0 0.0 16:03.32 [rcu_sched]
8 root 20 0 0 0 0 S 0.0 0.0 4:08.79 [rcuos/0]
9 root 20 0 0 0 0 S 0.0 0.0 4:10.42 [rcuos/1]
10 root 20 0 0 0 0 S 0.0 0.0 4:30.71 [rcuos/2]
```
Hardware
========
I have observed this on 3 servers out of around 100 so far (though others may be affected). One is an Intel Atom D525 @ 1.8GHz and the other two are a Core2Duo E4600 and a Q6600. One is using a JMicron Technology Corp. JMC250 PCI Express Gigabit Ethernet Controller, the others are using a Qualcomm Atheros Attansic L1 Gigabit Ethernet (rev b0).
I ran lshw on the trouble servers as well as on an example OK server. Problem Servers: <http://pastie.org/10370534> <http://pastie.org/10370537> and <http://pastie.org/10370541> -- OK Server: <http://pastie.org/10370544>
Application
===========
This is an entirely headless application. There is no monitor connected and in fact no XServer installed at all. This should rule out graphics drivers/issues.
The server is used to proxy and analyse RTSP video using live555ProxyServer, ffmpeg and openCV. These servers do crunch through a lot of traffic because this is a CCTV application: <http://pastie.org/9558324>
I have tried both very old and latest trunk versions of live555, ffmpeg and openCV without change. I have also tried using opencv through the python2 and python3 modules, no change.
The exact same software/configuration has been loaded onto close to 100 servers, so far 3 are confirmed to leak memory. The servers slowly and stealthily leak around xMB (one leaking 8MB, one is slower, one is faster) per hour until all ram is gone, the servers start swapping heavily, slow to a crawl and require a reboot.
Meminfo
=======
Again, you can see the Cached and Buffers not using up much memory at all. HugePages are also disabled so this is not the culprit.
```
root@XanBox:~# cat /proc/meminfo
MemTotal: 2,040,256 kB
MemFree: 159,004 kB
Buffers: 1,348 kB
Cached: 67,228 kB
SwapCached: 9,940 kB
Active: 10,788 kB
Inactive: 81,120 kB
Active(anon): 1,900 kB
Inactive(anon): 21,512 kB
Active(file): 8,888 kB
Inactive(file): 59,608 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 1,999,868 kB
SwapFree: 1,972,432 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 14,496 kB
Mapped: 8,160 kB
Shmem: 80 kB
Slab: 33,472 kB
SReclaimable: 17,660 kB
SUnreclaim: 15,812 kB
KernelStack: 1,064 kB
PageTables: 3,992 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 3,019,996 kB
Committed_AS: 94,520 kB
VmallocTotal: 34,359,738,367 kB
VmallocUsed: 535,936 kB
VmallocChunk: 34,359,147,772 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2,048 kB
DirectMap4k: 62,144 kB
DirectMap2M: 2,025,472 kB
```
Free Output
===========
Free shows the following (note cached and buffers are both low so this is not disk cache or buffers!) - the memory is not recoverable without a reboot:
```
root@XanBox:~# free -m
total used free shared buffers cached
Mem: 1,992 1,838 153 0 1 66
```
If we subtract/add the buffers/cache to Used and Free, we see:
* 1,772MB Really Used (- Buffers/Cache) = 1,838MB used - 1MB buffers - 66MB cache
* 220MB Really Free (+ Buffers/Cache) = 153MB free + 1MB buffers + 66MB cache
Exactly as we expect:
```
-/+ buffers/cache: 1,772 220
```
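The same numbers check out in shell (MiB values from the `free -m` output above; the 1 MiB difference on the used figure is just free rounding KiB to MiB):

```
# MiB values from the free -m output above
used=1838; freemem=153; buffers=1; cached=66
echo "really used: $(( used - buffers - cached ))"   # free itself printed 1772 (KiB rounding)
echo "really free: $(( freemem + buffers + cached ))"
```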
So around 1.7GB is not used by userspace and is in fact used by the kernel, since userspace is actually using only 53.7MB (see the PS Mem output below).
I'm surprised by the number of comments suggesting that the 1.7GB is used for caching/buffers - this is **fundamentally misreading the output!** This line means used memory **excluding buffers/cache**; see linuxatemyram.com for details.
PS Output
=========
Here is a full list of running processes sorted by memory:
```
# ps -e -o pid,vsz,comm= | sort -n -k 2
2 0 kthreadd
3 0 ksoftirqd/0
5 0 kworker/0:0H
7 0 rcu_sched
8 0 rcuos/0
9 0 rcuos/1
10 0 rcuos/2
11 0 rcuos/3
12 0 rcu_bh
13 0 rcuob/0
14 0 rcuob/1
15 0 rcuob/2
16 0 rcuob/3
17 0 migration/0
18 0 watchdog/0
19 0 watchdog/1
20 0 migration/1
21 0 ksoftirqd/1
23 0 kworker/1:0H
24 0 watchdog/2
25 0 migration/2
26 0 ksoftirqd/2
28 0 kworker/2:0H
29 0 watchdog/3
30 0 migration/3
31 0 ksoftirqd/3
32 0 kworker/3:0
33 0 kworker/3:0H
34 0 khelper
35 0 kdevtmpfs
36 0 netns
37 0 writeback
38 0 kintegrityd
39 0 bioset
41 0 kblockd
42 0 ata_sff
43 0 khubd
44 0 md
45 0 devfreq_wq
46 0 kworker/0:1
47 0 kworker/1:1
48 0 kworker/2:1
50 0 khungtaskd
51 0 kswapd0
52 0 ksmd
53 0 khugepaged
54 0 fsnotify_mark
55 0 ecryptfs-kthrea
56 0 crypto
68 0 kthrotld
70 0 scsi_eh_0
71 0 scsi_eh_1
92 0 deferwq
93 0 charger_manager
94 0 kworker/1:2
95 0 kworker/3:2
149 0 kpsmoused
155 0 jbd2/sda1-8
156 0 ext4-rsv-conver
316 0 jbd2/sda3-8
317 0 ext4-rsv-conver
565 0 kmemstick
770 0 cfg80211
818 0 hd-audio0
853 0 kworker/2:2
953 0 rpciod
PID VSZ
1714 0 kauditd
11335 0 kworker/0:2
12202 0 kworker/u8:2
20228 0 kworker/u8:0
25529 0 kworker/u9:1
28305 0 kworker/u9:2
29822 0 lockd
4919 4368 acpid
4074 7136 ps
6681 10232 dhclient
4880 14540 getty
4883 14540 getty
4890 14540 getty
4891 14540 getty
4894 14540 getty
6160 14540 getty
14486 15260 upstart-socket-
14489 15276 upstart-file-br
12009 18308 bash
12114 18308 bash
12289 18308 bash
4075 19008 sort
11290 19140 atd
14483 19476 upstart-udev-br
11385 19972 bash
11553 19972 bash
11890 19972 bash
29503 21544 rpc.statd
2847 23384 htop
850 23420 rpcbind
29588 23480 rpc.idmapd
4947 23656 cron
29833 24048 rpc.mountd
5077 25324 hostapd
11301 26912 openvpn
1 37356 init
1152 39508 dbus-daemon
14673 43452 systemd-logind
14450 51204 systemd-udevd
4921 61364 sshd
12008 67376 su
12113 67376 su
12288 67376 su
12007 67796 sudo
12112 67796 sudo
12287 67796 sudo
11336 107692 sshd
11384 107692 sshd
11502 107692 sshd
11841 107692 sshd
11552 108008 sshd
11889 108008 sshd
1212 256228 rsyslogd
1791 279832 polkitd
4064 335684 whoopsie
```
Here is a full list of all running processes:
```
root@XanBox:~# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 33644 1988 ? Ss Jul21 2:29 /sbin/init
root 2 0.0 0.0 0 0 ? S Jul21 0:03 [kthreadd]
root 3 0.0 0.0 0 0 ? S Jul21 1:04 [ksoftirqd/0]
root 5 0.0 0.0 0 0 ? S< Jul21 0:00 [kworker/0:0H]
root 7 0.0 0.0 0 0 ? S Jul21 16:03 [rcu_sched]
root 8 0.0 0.0 0 0 ? S Jul21 4:08 [rcuos/0]
root 9 0.0 0.0 0 0 ? S Jul21 4:10 [rcuos/1]
root 10 0.0 0.0 0 0 ? S Jul21 4:30 [rcuos/2]
root 11 0.0 0.0 0 0 ? S Jul21 4:28 [rcuos/3]
root 12 0.0 0.0 0 0 ? S Jul21 0:00 [rcu_bh]
root 13 0.0 0.0 0 0 ? S Jul21 0:00 [rcuob/0]
root 14 0.0 0.0 0 0 ? S Jul21 0:00 [rcuob/1]
root 15 0.0 0.0 0 0 ? S Jul21 0:00 [rcuob/2]
root 16 0.0 0.0 0 0 ? S Jul21 0:00 [rcuob/3]
root 17 0.0 0.0 0 0 ? S Jul21 0:13 [migration/0]
root 18 0.0 0.0 0 0 ? S Jul21 0:08 [watchdog/0]
root 19 0.0 0.0 0 0 ? S Jul21 0:07 [watchdog/1]
root 20 0.0 0.0 0 0 ? S Jul21 0:13 [migration/1]
root 21 0.0 0.0 0 0 ? S Jul21 1:03 [ksoftirqd/1]
root 23 0.0 0.0 0 0 ? S< Jul21 0:00 [kworker/1:0H]
root 24 0.0 0.0 0 0 ? S Jul21 0:07 [watchdog/2]
root 25 0.0 0.0 0 0 ? S Jul21 0:23 [migration/2]
root 26 0.0 0.0 0 0 ? S Jul21 1:01 [ksoftirqd/2]
root 28 0.0 0.0 0 0 ? S< Jul21 0:00 [kworker/2:0H]
root 29 0.0 0.0 0 0 ? S Jul21 0:07 [watchdog/3]
root 30 0.0 0.0 0 0 ? S Jul21 0:23 [migration/3]
root 31 0.0 0.0 0 0 ? S Jul21 1:03 [ksoftirqd/3]
root 32 0.0 0.0 0 0 ? S Jul21 0:00 [kworker/3:0]
root 33 0.0 0.0 0 0 ? S< Jul21 0:00 [kworker/3:0H]
root 34 0.0 0.0 0 0 ? S< Jul21 0:00 [khelper]
root 35 0.0 0.0 0 0 ? S Jul21 0:00 [kdevtmpfs]
root 36 0.0 0.0 0 0 ? S< Jul21 0:00 [netns]
root 37 0.0 0.0 0 0 ? S< Jul21 0:00 [writeback]
root 38 0.0 0.0 0 0 ? S< Jul21 0:00 [kintegrityd]
root 39 0.0 0.0 0 0 ? S< Jul21 0:00 [bioset]
root 41 0.0 0.0 0 0 ? S< Jul21 0:00 [kblockd]
root 42 0.0 0.0 0 0 ? S< Jul21 0:00 [ata_sff]
root 43 0.0 0.0 0 0 ? S Jul21 0:00 [khubd]
root 44 0.0 0.0 0 0 ? S< Jul21 0:00 [md]
root 45 0.0 0.0 0 0 ? S< Jul21 0:00 [devfreq_wq]
root 46 0.0 0.0 0 0 ? S Jul21 18:51 [kworker/0:1]
root 47 0.0 0.0 0 0 ? S Jul21 0:00 [kworker/1:1]
root 48 0.0 0.0 0 0 ? S Jul21 1:14 [kworker/2:1]
root 50 0.0 0.0 0 0 ? S Jul21 0:01 [khungtaskd]
root 51 0.4 0.0 0 0 ? S Jul21 95:51 [kswapd0]
root 52 0.0 0.0 0 0 ? SN Jul21 0:00 [ksmd]
root 53 0.0 0.0 0 0 ? SN Jul21 0:28 [khugepaged]
root 54 0.0 0.0 0 0 ? S Jul21 0:00 [fsnotify_mark]
root 55 0.0 0.0 0 0 ? S Jul21 0:00 [ecryptfs-kthrea]
root 56 0.0 0.0 0 0 ? S< Jul21 0:00 [crypto]
root 68 0.0 0.0 0 0 ? S< Jul21 0:00 [kthrotld]
root 70 0.0 0.0 0 0 ? S Jul21 0:00 [scsi_eh_0]
root 71 0.0 0.0 0 0 ? S Jul21 0:00 [scsi_eh_1]
root 92 0.0 0.0 0 0 ? S< Jul21 0:00 [deferwq]
root 93 0.0 0.0 0 0 ? S< Jul21 0:00 [charger_manager]
root 94 0.0 0.0 0 0 ? S Jul21 1:05 [kworker/1:2]
root 95 0.0 0.0 0 0 ? S Jul21 1:08 [kworker/3:2]
root 149 0.0 0.0 0 0 ? S< Jul21 0:00 [kpsmoused]
root 155 0.0 0.0 0 0 ? S Jul21 3:39 [jbd2/sda1-8]
root 156 0.0 0.0 0 0 ? S< Jul21 0:00 [ext4-rsv-conver]
root 316 0.0 0.0 0 0 ? S Jul21 1:28 [jbd2/sda3-8]
root 317 0.0 0.0 0 0 ? S< Jul21 0:00 [ext4-rsv-conver]
root 336 0.0 0.0 19476 512 ? S Jul21 0:07 upstart-udev-bridge --daemon
root 342 0.0 0.0 51228 468 ? Ss Jul21 0:00 /lib/systemd/systemd-udevd --daemon
root 565 0.0 0.0 0 0 ? S< Jul21 0:00 [kmemstick]
root 745 0.0 0.0 15364 252 ? S Jul21 0:06 upstart-socket-bridge --daemon
root 770 0.0 0.0 0 0 ? S< Jul21 0:00 [cfg80211]
root 818 0.0 0.0 0 0 ? S< Jul21 0:00 [hd-audio0]
root 850 0.0 0.0 23420 80 ? Ss Jul21 0:11 rpcbind
root 853 0.0 0.0 0 0 ? S Jul21 0:00 [kworker/2:2]
statd 872 0.0 0.0 21544 8 ? Ss Jul21 0:00 rpc.statd -L
root 953 0.0 0.0 0 0 ? S< Jul21 0:00 [rpciod]
root 1097 0.0 0.0 15276 364 ? S Jul21 0:06 upstart-file-bridge --daemon
message+ 1152 0.0 0.0 39508 1448 ? Ss Jul21 1:40 dbus-daemon --system --fork
root 1157 0.0 0.0 23480 0 ? Ss Jul21 0:00 rpc.idmapd
root 1186 0.0 0.0 43736 984 ? Ss Jul21 1:13 /lib/systemd/systemd-logind
syslog 1212 0.0 0.0 256228 688 ? Ssl Jul21 1:41 rsyslogd
root 1714 0.0 0.0 0 0 ? S Jul21 0:00 [kauditd]
root 1791 0.0 0.0 279832 1312 ? Sl Jul21 1:16 /usr/lib/policykit-1/polkitd --no-debug
root 4880 0.0 0.0 14540 4 tty4 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty4
root 4883 0.0 0.0 14540 4 tty5 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty5
root 4890 0.0 0.0 14540 4 tty2 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty2
root 4891 0.0 0.0 14540 4 tty3 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty3
root 4894 0.0 0.0 14540 4 tty6 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty6
root 4919 0.0 0.0 4368 4 ? Ss Jul21 0:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket
root 4921 0.0 0.0 61364 364 ? Ss Jul21 0:00 /usr/sbin/sshd -D
root 4947 0.0 0.0 23656 168 ? Ss Jul21 0:14 cron
root 5077 0.0 0.0 25324 648 ? Ss Jul21 0:34 /usr/sbin/hostapd -B -P /var/run/hostapd.pid /etc/hostapd/hostapd.conf
root 5192 0.0 0.0 0 0 ? S Jul21 0:00 [lockd]
root 5224 0.0 0.0 24048 4 ? Ss Jul21 0:00 /usr/sbin/rpc.mountd --manage-gids
root 6160 0.0 0.0 14540 4 tty1 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty1
root 6681 0.0 0.0 10232 0 ? Ss 11:07 0:00 dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases eth0
root 9452 0.0 0.0 0 0 ? S 11:28 0:00 [kworker/u8:1]
root 9943 0.0 0.0 0 0 ? S 11:42 0:00 [kworker/u8:0]
daemon 11290 0.0 0.0 19140 164 ? Ss 11:59 0:00 atd
root 11301 0.2 0.1 26772 3436 ? Ss 12:00 0:01 /usr/sbin/openvpn --writepid /var/run/openvpn.zanview.com.pid --status /var/run/openvpn.zanview.com.status 10 --cd /etc/openvpn --config /etc/openvpn/zanvie
root 11335 0.0 0.0 0 0 ? S 12:01 0:00 [kworker/0:2]
root 11336 0.0 0.2 107692 4248 ? Ss 12:01 0:00 sshd: deployer [priv]
deployer 11384 0.0 0.0 107692 1948 ? S 12:01 0:00 sshd: deployer@pts/0
deployer 11385 0.0 0.1 19972 2392 pts/0 Ss+ 12:01 0:00 -bash
root 11502 0.0 0.2 107692 4252 ? Ss 12:01 0:00 sshd: deployer [priv]
deployer 11552 0.0 0.0 107692 1952 ? S 12:01 0:00 sshd: deployer@pts/2
deployer 11553 0.0 0.1 19972 2388 pts/2 Ss 12:01 0:00 -bash
root 11841 0.0 0.2 107692 4248 ? Ss 12:02 0:00 sshd: deployer [priv]
deployer 11889 0.0 0.1 108008 2280 ? S 12:02 0:00 sshd: deployer@pts/3
deployer 11890 0.0 0.1 19972 2388 pts/3 Ss 12:02 0:00 -bash
root 12007 0.0 0.1 67796 2136 pts/3 S 12:02 0:00 sudo su -
root 12008 0.0 0.0 67376 2016 pts/3 S 12:02 0:00 su -
root 12009 0.0 0.1 18308 2228 pts/3 S+ 12:02 0:00 -su
root 12112 0.0 0.1 67796 2136 pts/2 S 12:08 0:00 sudo su -
root 12113 0.0 0.0 67376 2012 pts/2 S 12:08 0:00 su -
root 12114 0.0 0.1 18308 2192 pts/2 S 12:08 0:00 -su
root 12180 0.0 0.0 15568 1160 pts/2 R+ 12:09 0:00 ps aux
root 25529 0.0 0.0 0 0 ? S< Jul28 0:09 [kworker/u9:1]
root 28305 0.0 0.0 0 0 ? S< Aug05 0:00 [kworker/u9:2]
```
PS Mem Output
=============
I also tried the ps\_mem.py from <https://github.com/pixelb/ps_mem>
```
root@XanBox:~/ps_mem# python ps_mem.py
Private + Shared = RAM used Program
144.0 KiB + 9.5 KiB = 153.5 KiB acpid
172.0 KiB + 29.5 KiB = 201.5 KiB atd
248.0 KiB + 35.0 KiB = 283.0 KiB cron
272.0 KiB + 84.0 KiB = 356.0 KiB upstart-file-bridge
276.0 KiB + 84.5 KiB = 360.5 KiB upstart-socket-bridge
280.0 KiB + 102.5 KiB = 382.5 KiB upstart-udev-bridge
332.0 KiB + 54.5 KiB = 386.5 KiB rpc.idmapd
368.0 KiB + 91.5 KiB = 459.5 KiB rpcbind
388.0 KiB + 251.5 KiB = 639.5 KiB systemd-logind
668.0 KiB + 43.5 KiB = 711.5 KiB hostapd
576.0 KiB + 157.5 KiB = 733.5 KiB systemd-udevd
676.0 KiB + 65.5 KiB = 741.5 KiB rpc.mountd
604.0 KiB + 163.0 KiB = 767.0 KiB rpc.statd
908.0 KiB + 62.5 KiB = 970.5 KiB dbus-daemon [updated]
932.0 KiB + 117.0 KiB = 1.0 MiB getty [updated] (6)
1.0 MiB + 69.5 KiB = 1.1 MiB openvpn
1.0 MiB + 137.0 KiB = 1.2 MiB polkitd
1.5 MiB + 202.0 KiB = 1.7 MiB htop
1.4 MiB + 306.5 KiB = 1.7 MiB whoopsie
1.4 MiB + 279.0 KiB = 1.7 MiB su (3)
1.5 MiB + 268.5 KiB = 1.8 MiB sudo (3)
2.2 MiB + 11.5 KiB = 2.3 MiB dhclient
3.9 MiB + 741.0 KiB = 4.6 MiB bash (6)
5.3 MiB + 254.5 KiB = 5.5 MiB init
2.7 MiB + 3.3 MiB = 6.1 MiB sshd (7)
18.1 MiB + 56.5 KiB = 18.2 MiB rsyslogd
---------------------------------
53.7 MiB
=================================
```
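The "Private + Shared" figures ps_mem reports come from per-process memory accounting in `/proc/<pid>/smaps`. As a simplified, hedged sketch of the idea (this is not ps_mem's actual implementation, which separates private and shared pages), one can sum the proportional set size (Pss) entries from smaps-format text:

```python
# Simplified sketch: sum the proportional set size (Pss) entries from
# text in /proc/<pid>/smaps format. This is NOT ps_mem's actual code -
# ps_mem splits usage into private and shared components - it only
# illustrates the kind of per-process accounting involved.
def total_pss_kib(smaps_text):
    total = 0
    for line in smaps_text.splitlines():
        if line.startswith("Pss:"):
            # lines look like: "Pss:                 164 kB"
            total += int(line.split()[1])
    return total

sample = """\
Size:                132 kB
Pss:                  40 kB
Private_Dirty:        12 kB
Pss:                  24 kB
"""
print(total_pss_kib(sample))  # 64
```

The key point either way: summing this accounting over every process comes nowhere near the "used" figure free reports, which is what makes the leak invisible to userspace tools.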
Slabtop Output
==============
I also tried slabtop:
```
root@XanBox:~# slabtop -sc
Active / Total Objects (% used) : 131306 / 137558 (95.5%)
Active / Total Slabs (% used) : 3888 / 3888 (100.0%)
Active / Total Caches (% used) : 63 / 105 (60.0%)
Active / Total Size (% used) : 27419.31K / 29580.53K (92.7%)
Minimum / Average / Maximum Object : 0.01K / 0.21K / 8.00K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
8288 7975 96% 0.57K 296 28 4736K inode_cache
14259 12858 90% 0.19K 679 21 2716K dentry
2384 1943 81% 0.96K 149 16 2384K ext4_inode_cache
20916 20494 97% 0.11K 581 36 2324K sysfs_dir_cache
624 554 88% 2.00K 39 16 1248K kmalloc-2048
195 176 90% 5.98K 39 5 1248K task_struct
6447 6387 99% 0.19K 307 21 1228K kmalloc-192
2128 1207 56% 0.55K 76 28 1216K radix_tree_node
768 761 99% 1.00K 48 16 768K kmalloc-1024
176 155 88% 4.00K 22 8 704K kmalloc-4096
1100 1100 100% 0.63K 44 25 704K proc_inode_cache
1008 1008 100% 0.66K 42 24 672K shmem_inode_cache
2640 2262 85% 0.25K 165 16 660K kmalloc-256
300 300 100% 2.06K 20 15 640K sighand_cache
5967 5967 100% 0.10K 153 39 612K buffer_head
1152 1053 91% 0.50K 72 16 576K kmalloc-512
3810 3810 100% 0.13K 127 30 508K ext4_allocation_context
60 60 100% 8.00K 15 4 480K kmalloc-8192
225 225 100% 2.06K 15 15 480K idr_layer_cache
7616 7324 96% 0.06K 119 64 476K kmalloc-64
700 700 100% 0.62K 28 25 448K sock_inode_cache
252 252 100% 1.75K 14 18 448K TCP
8925 8544 95% 0.05K 105 85 420K shared_policy_node
3072 2351 76% 0.12K 96 32 384K kmalloc-128
360 360 100% 1.06K 12 30 384K signal_cache
432 337 78% 0.88K 24 18 384K mm_struct
```
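slabtop itself just formats `/proc/slabinfo`. As a rough, hedged sketch (assuming the slabinfo 2.x column layout; this is not slabtop's code, and it computes `num_objs * objsize` rather than slabtop's page-based CACHE SIZE column, so the totals differ slightly), the per-cache figures can be derived like this:

```python
# Rough sketch of per-cache totals from /proc/slabinfo-format text.
# Assumes the slabinfo version 2.x column layout:
#   name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> ...
# Not slabtop's actual code; slabtop's CACHE SIZE column is computed
# from slab pages, so its numbers are slightly larger.
def cache_sizes_kib(slabinfo_text):
    sizes = {}
    for line in slabinfo_text.splitlines():
        if line.startswith(("slabinfo", "#")) or not line.strip():
            continue  # skip the version header and column-name comment
        fields = line.split()
        num_objs, objsize = int(fields[2]), int(fields[3])
        sizes[fields[0]] = num_objs * objsize // 1024
    return sizes

sample = """\
slabinfo - version: 2.1
# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> ...
dentry 12858 14259 192 21 1 : tunables 0 0 0 : slabdata 679 679 0
"""
print(cache_sizes_kib(sample))  # {'dentry': 2673}
```

Either way, the Slab total here is only about 33MB, so slab caches cannot account for the missing 1.7GB.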
Other
=====
I also tried scanning for a rootkit with rkhunter - it found nothing. And I tried to sync and dump cache with:
```
sync; sync; sync; echo 3 > /proc/sys/vm/drop_caches
```
That made no difference either.
I also tried to force swap or disable swap with:
```
sudo sysctl -w vm.swappiness=100
sudo swapoff /dev/sda2
```
I also tried using htop and sorting by memory and it is not showing where the memory is going either. The kernel version is Linux 3.13.0-40-generic #69-Ubuntu SMP.
Dmesg output: <http://pastie.org/9558255>
smem output: <http://pastie.org/9558290>
Conclusion
==========
What is going on? - Where is all the memory going? - How do I find out? | 2014/08/06 | [
"https://superuser.com/questions/793192",
"https://superuser.com",
"https://superuser.com/users/50300/"
] | My conclusion is that it is a kernel memory leak somewhere in the Linux kernel, which is why none of the userspace tools are able to show where the memory is being leaked. Maybe it is related to this question: <https://serverfault.com/questions/670423/linux-memory-usage-higher-than-sum-of-processes>
I upgraded the kernel version from 3.13 to 3.19 and it seems the memory leak has stopped! - I will report back if I see a leak again.
It would still be useful to have an easier way to see how much memory is used by the different parts of the Linux kernel. It is still a mystery what was causing the leak in 3.13. | Did you change the [swappiness](http://en.wikipedia.org/wiki/Swappiness) of your kernel manually, or disable it?
You can check your current swappiness level with
```
cat /proc/sys/vm/swappiness
```
You could try to force your kernel to swap aggressively with
```
sudo sysctl -w vm.swappiness=100
```
If this decreases your problems, find a good value between 1 and 100 that fits your requirements.
793,192 | August 2015 Summary
===================
Please note, this is still happening. This is **not** related to linuxatemyram.com - the memory is not used for disk cache/buffers. This is what it looks like in NewRelic - the system leaks all the memory, uses up all swap space and then crashes. In this screenshot I rebooted the server before it crashed:
[](https://i.stack.imgur.com/vIkEa.png)
It is impossible to identify the source of the leak using common userspace tools. There is now a chat room to discuss this issue: <http://chat.stackexchange.com/rooms/27309/invisible-memory-leak-on-linux>
The only way to recover the "missing" memory appears to be rebooting the server. This has been a long-standing issue reproduced in Ubuntu Server 14.04, 14.10 and 15.04.
Top
===
The memory use does not show in top and cannot be recovered even after killing just about every process (excluding things like kernel processes and ssh). Look at the "cached Mem", "buffers" and "free" fields in top: they are not using up the memory. The used memory is "missing" and unrecoverable without a reboot.
Attempting to use this "missing" memory causes the server to swap, slow to a crawl and eventually freeze.
```
root@XanBox:~# top -o +%MEM
top - 12:12:13 up 15 days, 20:39, 3 users, load average: 0.00, 0.06, 0.77
Tasks: 126 total, 1 running, 125 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 0.2 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.1 hi, 0.0 si, 0.0 st
KiB Mem: 2,040,256 total, 1,881,228 used, 159,028 free, 1,348 buffers
KiB Swap: 1,999,868 total, 27,436 used, 1,972,432 free. 67,228 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
11502 root 20 0 107692 4252 3240 S 0.0 0.2 0:00.06 sshd: deployer [priv]
11336 root 20 0 107692 4248 3240 S 0.0 0.2 0:00.06 sshd: deployer [priv]
11841 root 20 0 107692 4248 3240 S 0.0 0.2 0:00.06 sshd: deployer [priv]
11301 root 20 0 26772 3436 2688 S 0.7 0.2 0:01.30 /usr/sbin/openvpn --writepid /var/run/openvpn.zanview.com.pid --status /var/run/openvpn.zanview.com.status 10 --cd /etc/openvpn --config /etc/openvpn/z+
11385 deployer 20 0 19972 2392 1708 S 0.0 0.1 0:00.03 -bash
11553 deployer 20 0 19972 2388 1708 S 0.0 0.1 0:00.03 -bash
11890 deployer 20 0 19972 2388 1708 S 0.0 0.1 0:00.02 -bash
11889 deployer 20 0 108008 2280 944 S 0.0 0.1 0:00.25 sshd: deployer@pts/3
12009 root 20 0 18308 2228 1608 S 0.0 0.1 0:00.09 -su
12114 root 20 0 18308 2192 1564 S 0.0 0.1 0:00.04 -su
12007 root 20 0 67796 2136 1644 S 0.0 0.1 0:00.01 sudo su -
12112 root 20 0 67796 2136 1644 S 0.0 0.1 0:00.01 sudo su -
12008 root 20 0 67376 2016 1528 S 0.0 0.1 0:00.01 su -
12113 root 20 0 67376 2012 1528 S 0.0 0.1 0:00.01 su -
1 root 20 0 33644 1988 764 S 0.0 0.1 2:29.77 /sbin/init
11552 deployer 20 0 107692 1952 936 S 0.0 0.1 0:00.07 sshd: deployer@pts/2
11384 deployer 20 0 107692 1948 936 S 0.0 0.1 0:00.06 sshd: deployer@pts/0
12182 root 20 0 20012 1516 1012 R 0.7 0.1 0:00.08 top -o +%MEM
1152 message+ 20 0 39508 1448 920 S 0.0 0.1 1:40.01 dbus-daemon --system --fork
1791 root 20 0 279832 1312 816 S 0.0 0.1 1:16.18 /usr/lib/policykit-1/polkitd --no-debug
1186 root 20 0 43736 984 796 S 0.0 0.0 1:13.07 /lib/systemd/systemd-logind
1212 syslog 20 0 256228 688 184 S 0.0 0.0 1:41.29 rsyslogd
5077 root 20 0 25324 648 520 S 0.0 0.0 0:34.35 /usr/sbin/hostapd -B -P /var/run/hostapd.pid /etc/hostapd/hostapd.conf
336 root 20 0 19476 512 376 S 0.0 0.0 0:07.40 upstart-udev-bridge --daemon
342 root 20 0 51228 468 344 S 0.0 0.0 0:00.85 /lib/systemd/systemd-udevd --daemon
1097 root 20 0 15276 364 256 S 0.0 0.0 0:06.39 upstart-file-bridge --daemon
4921 root 20 0 61364 364 240 S 0.0 0.0 0:00.05 /usr/sbin/sshd -D
745 root 20 0 15364 252 180 S 0.0 0.0 0:06.51 upstart-socket-bridge --daemon
4947 root 20 0 23656 168 100 S 0.0 0.0 0:14.70 cron
11290 daemon 20 0 19140 164 0 S 0.0 0.0 0:00.00 atd
850 root 20 0 23420 80 16 S 0.0 0.0 0:11.00 rpcbind
872 statd 20 0 21544 8 4 S 0.0 0.0 0:00.00 rpc.statd -L
4880 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty4
4883 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty5
4890 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty2
4891 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty3
4894 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty6
4919 root 20 0 4368 4 0 S 0.0 0.0 0:00.00 acpid -c /etc/acpi/events -s /var/run/acpid.socket
5224 root 20 0 24048 4 0 S 0.0 0.0 0:00.00 /usr/sbin/rpc.mountd --manage-gids
6160 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty1
2 root 20 0 0 0 0 S 0.0 0.0 0:03.44 [kthreadd]
3 root 20 0 0 0 0 S 0.0 0.0 1:04.63 [ksoftirqd/0]
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kworker/0:0H]
7 root 20 0 0 0 0 S 0.0 0.0 16:03.32 [rcu_sched]
8 root 20 0 0 0 0 S 0.0 0.0 4:08.79 [rcuos/0]
9 root 20 0 0 0 0 S 0.0 0.0 4:10.42 [rcuos/1]
10 root 20 0 0 0 0 S 0.0 0.0 4:30.71 [rcuos/2]
```
Hardware
========
I have observed this on 3 servers out of around 100 so far (though others may be affected). One is an Intel Atom D525 @1.8ghz and the other 2 are Core2Duo E4600 and Q6600. One is using a JMicron Technology Corp. JMC250 PCI Express Gigabit Ethernet Controller, the others are using Qualcomm Atheros Attansic L1 Gigabit Ethernet (rev b0).
I ran lshw on the trouble servers as well as on an example OK server. Problem Servers: <http://pastie.org/10370534> <http://pastie.org/10370537> and <http://pastie.org/10370541> -- OK Server: <http://pastie.org/10370544>
Application
===========
This is an entirely headless application. There is no monitor connected and in fact no XServer installed at all. This should rule out graphics drivers/issues.
The server is used to proxy and analyse RTSP video using live555ProxyServer, ffmpeg and openCV. These servers do crunch through a lot of traffic because this is a CCTV application: <http://pastie.org/9558324>
I have tried both very old and latest trunk versions of live555, ffmpeg and openCV without change. I have also tried using opencv through the python2 and python3 modules, no change.
The exact same software/configuration has been loaded onto close to 100 servers; so far 3 are confirmed to leak memory. The servers slowly and stealthily leak around xMB per hour (one leaks 8MB, one is slower, one is faster) until all RAM is gone, then start swapping heavily, slow to a crawl and require a reboot.
Meminfo
=======
Again, you can see the Cached and Buffers not using up much memory at all. HugePages are also disabled so this is not the culprit.
```
root@XanBox:~# cat /proc/meminfo
MemTotal: 2,040,256 kB
MemFree: 159,004 kB
Buffers: 1,348 kB
Cached: 67,228 kB
SwapCached: 9,940 kB
Active: 10,788 kB
Inactive: 81,120 kB
Active(anon): 1,900 kB
Inactive(anon): 21,512 kB
Active(file): 8,888 kB
Inactive(file): 59,608 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 1,999,868 kB
SwapFree: 1,972,432 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 14,496 kB
Mapped: 8,160 kB
Shmem: 80 kB
Slab: 33,472 kB
SReclaimable: 17,660 kB
SUnreclaim: 15,812 kB
KernelStack: 1,064 kB
PageTables: 3,992 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 3,019,996 kB
Committed_AS: 94,520 kB
VmallocTotal: 34,359,738,367 kB
VmallocUsed: 535,936 kB
VmallocChunk: 34,359,147,772 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2,048 kB
DirectMap4k: 62,144 kB
DirectMap2M: 2,025,472 kB
```
Free Output
===========
Free shows the following (note cached and buffers are both low so this is not disk cache or buffers!) - the memory is not recoverable without a reboot:
```
root@XanBox:~# free -m
total used free shared buffers cached
Mem: 1,992 1,838 153 0 1 66
```
If we subtract/add the buffers/cache to Used and Free, we see:
* 1,772MB Really Used (- Buffers/Cache) = 1,838MB used - 1MB buffers - 66MB cache
* 220MB Really Free (+ Buffers/Cache) = 154MB free + 1MB buffers + 66MB cache
Exactly as we expect:
```
-/+ buffers/cache: 1,772 220
```
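That row is pure arithmetic on the first row of free. A minimal sketch using the MB values from the `free -m` output above (the 1MB difference from free's own 1,772 figure is its internal rounding):

```python
# The "-/+ buffers/cache" row of free is just this arithmetic.
# Values in MB, taken from the `free -m` output above.
used, free_mb, buffers, cached = 1838, 153, 1, 66

really_used = used - buffers - cached     # memory nothing can reclaim
really_free = free_mb + buffers + cached  # free once caches are dropped

print(really_used, really_free)  # 1771 220
```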
So around 1.7GB is not used by userspace and is in fact used by the kernel, since userspace is actually using only 53.7MB (see the PS Mem output below).
I'm surprised by the number of comments claiming that 1.7GB is used for caching/buffers - this is **a fundamental misreading of the output!** - that line means memory used **excluding buffers/cache**; see linuxatemyram.com for details.
PS Output
=========
Here is a full list of running processes sorted by memory:
```
# ps -e -o pid,vsz,comm= | sort -n -k 2
2 0 kthreadd
3 0 ksoftirqd/0
5 0 kworker/0:0H
7 0 rcu_sched
8 0 rcuos/0
9 0 rcuos/1
10 0 rcuos/2
11 0 rcuos/3
12 0 rcu_bh
13 0 rcuob/0
14 0 rcuob/1
15 0 rcuob/2
16 0 rcuob/3
17 0 migration/0
18 0 watchdog/0
19 0 watchdog/1
20 0 migration/1
21 0 ksoftirqd/1
23 0 kworker/1:0H
24 0 watchdog/2
25 0 migration/2
26 0 ksoftirqd/2
28 0 kworker/2:0H
29 0 watchdog/3
30 0 migration/3
31 0 ksoftirqd/3
32 0 kworker/3:0
33 0 kworker/3:0H
34 0 khelper
35 0 kdevtmpfs
36 0 netns
37 0 writeback
38 0 kintegrityd
39 0 bioset
41 0 kblockd
42 0 ata_sff
43 0 khubd
44 0 md
45 0 devfreq_wq
46 0 kworker/0:1
47 0 kworker/1:1
48 0 kworker/2:1
50 0 khungtaskd
51 0 kswapd0
52 0 ksmd
53 0 khugepaged
54 0 fsnotify_mark
55 0 ecryptfs-kthrea
56 0 crypto
68 0 kthrotld
70 0 scsi_eh_0
71 0 scsi_eh_1
92 0 deferwq
93 0 charger_manager
94 0 kworker/1:2
95 0 kworker/3:2
149 0 kpsmoused
155 0 jbd2/sda1-8
156 0 ext4-rsv-conver
316 0 jbd2/sda3-8
317 0 ext4-rsv-conver
565 0 kmemstick
770 0 cfg80211
818 0 hd-audio0
853 0 kworker/2:2
953 0 rpciod
PID VSZ
1714 0 kauditd
11335 0 kworker/0:2
12202 0 kworker/u8:2
20228 0 kworker/u8:0
25529 0 kworker/u9:1
28305 0 kworker/u9:2
29822 0 lockd
4919 4368 acpid
4074 7136 ps
6681 10232 dhclient
4880 14540 getty
4883 14540 getty
4890 14540 getty
4891 14540 getty
4894 14540 getty
6160 14540 getty
14486 15260 upstart-socket-
14489 15276 upstart-file-br
12009 18308 bash
12114 18308 bash
12289 18308 bash
4075 19008 sort
11290 19140 atd
14483 19476 upstart-udev-br
11385 19972 bash
11553 19972 bash
11890 19972 bash
29503 21544 rpc.statd
2847 23384 htop
850 23420 rpcbind
29588 23480 rpc.idmapd
4947 23656 cron
29833 24048 rpc.mountd
5077 25324 hostapd
11301 26912 openvpn
1 37356 init
1152 39508 dbus-daemon
14673 43452 systemd-logind
14450 51204 systemd-udevd
4921 61364 sshd
12008 67376 su
12113 67376 su
12288 67376 su
12007 67796 sudo
12112 67796 sudo
12287 67796 sudo
11336 107692 sshd
11384 107692 sshd
11502 107692 sshd
11841 107692 sshd
11552 108008 sshd
11889 108008 sshd
1212 256228 rsyslogd
1791 279832 polkitd
4064 335684 whoopsie
```
Here is a full list of all running processes:
```
root@XanBox:~# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 33644 1988 ? Ss Jul21 2:29 /sbin/init
root 2 0.0 0.0 0 0 ? S Jul21 0:03 [kthreadd]
root 3 0.0 0.0 0 0 ? S Jul21 1:04 [ksoftirqd/0]
root 5 0.0 0.0 0 0 ? S< Jul21 0:00 [kworker/0:0H]
root 7 0.0 0.0 0 0 ? S Jul21 16:03 [rcu_sched]
root 8 0.0 0.0 0 0 ? S Jul21 4:08 [rcuos/0]
root 9 0.0 0.0 0 0 ? S Jul21 4:10 [rcuos/1]
root 10 0.0 0.0 0 0 ? S Jul21 4:30 [rcuos/2]
root 11 0.0 0.0 0 0 ? S Jul21 4:28 [rcuos/3]
root 12 0.0 0.0 0 0 ? S Jul21 0:00 [rcu_bh]
root 13 0.0 0.0 0 0 ? S Jul21 0:00 [rcuob/0]
root 14 0.0 0.0 0 0 ? S Jul21 0:00 [rcuob/1]
root 15 0.0 0.0 0 0 ? S Jul21 0:00 [rcuob/2]
root 16 0.0 0.0 0 0 ? S Jul21 0:00 [rcuob/3]
root 17 0.0 0.0 0 0 ? S Jul21 0:13 [migration/0]
root 18 0.0 0.0 0 0 ? S Jul21 0:08 [watchdog/0]
root 19 0.0 0.0 0 0 ? S Jul21 0:07 [watchdog/1]
root 20 0.0 0.0 0 0 ? S Jul21 0:13 [migration/1]
root 21 0.0 0.0 0 0 ? S Jul21 1:03 [ksoftirqd/1]
root 23 0.0 0.0 0 0 ? S< Jul21 0:00 [kworker/1:0H]
root 24 0.0 0.0 0 0 ? S Jul21 0:07 [watchdog/2]
root 25 0.0 0.0 0 0 ? S Jul21 0:23 [migration/2]
root 26 0.0 0.0 0 0 ? S Jul21 1:01 [ksoftirqd/2]
root 28 0.0 0.0 0 0 ? S< Jul21 0:00 [kworker/2:0H]
root 29 0.0 0.0 0 0 ? S Jul21 0:07 [watchdog/3]
root 30 0.0 0.0 0 0 ? S Jul21 0:23 [migration/3]
root 31 0.0 0.0 0 0 ? S Jul21 1:03 [ksoftirqd/3]
root 32 0.0 0.0 0 0 ? S Jul21 0:00 [kworker/3:0]
root 33 0.0 0.0 0 0 ? S< Jul21 0:00 [kworker/3:0H]
root 34 0.0 0.0 0 0 ? S< Jul21 0:00 [khelper]
root 35 0.0 0.0 0 0 ? S Jul21 0:00 [kdevtmpfs]
root 36 0.0 0.0 0 0 ? S< Jul21 0:00 [netns]
root 37 0.0 0.0 0 0 ? S< Jul21 0:00 [writeback]
root 38 0.0 0.0 0 0 ? S< Jul21 0:00 [kintegrityd]
root 39 0.0 0.0 0 0 ? S< Jul21 0:00 [bioset]
root 41 0.0 0.0 0 0 ? S< Jul21 0:00 [kblockd]
root 42 0.0 0.0 0 0 ? S< Jul21 0:00 [ata_sff]
root 43 0.0 0.0 0 0 ? S Jul21 0:00 [khubd]
root 44 0.0 0.0 0 0 ? S< Jul21 0:00 [md]
root 45 0.0 0.0 0 0 ? S< Jul21 0:00 [devfreq_wq]
root 46 0.0 0.0 0 0 ? S Jul21 18:51 [kworker/0:1]
root 47 0.0 0.0 0 0 ? S Jul21 0:00 [kworker/1:1]
root 48 0.0 0.0 0 0 ? S Jul21 1:14 [kworker/2:1]
root 50 0.0 0.0 0 0 ? S Jul21 0:01 [khungtaskd]
root 51 0.4 0.0 0 0 ? S Jul21 95:51 [kswapd0]
root 52 0.0 0.0 0 0 ? SN Jul21 0:00 [ksmd]
root 53 0.0 0.0 0 0 ? SN Jul21 0:28 [khugepaged]
root 54 0.0 0.0 0 0 ? S Jul21 0:00 [fsnotify_mark]
root 55 0.0 0.0 0 0 ? S Jul21 0:00 [ecryptfs-kthrea]
root 56 0.0 0.0 0 0 ? S< Jul21 0:00 [crypto]
root 68 0.0 0.0 0 0 ? S< Jul21 0:00 [kthrotld]
root 70 0.0 0.0 0 0 ? S Jul21 0:00 [scsi_eh_0]
root 71 0.0 0.0 0 0 ? S Jul21 0:00 [scsi_eh_1]
root 92 0.0 0.0 0 0 ? S< Jul21 0:00 [deferwq]
root 93 0.0 0.0 0 0 ? S< Jul21 0:00 [charger_manager]
root 94 0.0 0.0 0 0 ? S Jul21 1:05 [kworker/1:2]
root 95 0.0 0.0 0 0 ? S Jul21 1:08 [kworker/3:2]
root 149 0.0 0.0 0 0 ? S< Jul21 0:00 [kpsmoused]
root 155 0.0 0.0 0 0 ? S Jul21 3:39 [jbd2/sda1-8]
root 156 0.0 0.0 0 0 ? S< Jul21 0:00 [ext4-rsv-conver]
root 316 0.0 0.0 0 0 ? S Jul21 1:28 [jbd2/sda3-8]
root 317 0.0 0.0 0 0 ? S< Jul21 0:00 [ext4-rsv-conver]
root 336 0.0 0.0 19476 512 ? S Jul21 0:07 upstart-udev-bridge --daemon
root 342 0.0 0.0 51228 468 ? Ss Jul21 0:00 /lib/systemd/systemd-udevd --daemon
root 565 0.0 0.0 0 0 ? S< Jul21 0:00 [kmemstick]
root 745 0.0 0.0 15364 252 ? S Jul21 0:06 upstart-socket-bridge --daemon
root 770 0.0 0.0 0 0 ? S< Jul21 0:00 [cfg80211]
root 818 0.0 0.0 0 0 ? S< Jul21 0:00 [hd-audio0]
root 850 0.0 0.0 23420 80 ? Ss Jul21 0:11 rpcbind
root 853 0.0 0.0 0 0 ? S Jul21 0:00 [kworker/2:2]
statd 872 0.0 0.0 21544 8 ? Ss Jul21 0:00 rpc.statd -L
root 953 0.0 0.0 0 0 ? S< Jul21 0:00 [rpciod]
root 1097 0.0 0.0 15276 364 ? S Jul21 0:06 upstart-file-bridge --daemon
message+ 1152 0.0 0.0 39508 1448 ? Ss Jul21 1:40 dbus-daemon --system --fork
root 1157 0.0 0.0 23480 0 ? Ss Jul21 0:00 rpc.idmapd
root 1186 0.0 0.0 43736 984 ? Ss Jul21 1:13 /lib/systemd/systemd-logind
syslog 1212 0.0 0.0 256228 688 ? Ssl Jul21 1:41 rsyslogd
root 1714 0.0 0.0 0 0 ? S Jul21 0:00 [kauditd]
root 1791 0.0 0.0 279832 1312 ? Sl Jul21 1:16 /usr/lib/policykit-1/polkitd --no-debug
root 4880 0.0 0.0 14540 4 tty4 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty4
root 4883 0.0 0.0 14540 4 tty5 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty5
root 4890 0.0 0.0 14540 4 tty2 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty2
root 4891 0.0 0.0 14540 4 tty3 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty3
root 4894 0.0 0.0 14540 4 tty6 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty6
root 4919 0.0 0.0 4368 4 ? Ss Jul21 0:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket
root 4921 0.0 0.0 61364 364 ? Ss Jul21 0:00 /usr/sbin/sshd -D
root 4947 0.0 0.0 23656 168 ? Ss Jul21 0:14 cron
root 5077 0.0 0.0 25324 648 ? Ss Jul21 0:34 /usr/sbin/hostapd -B -P /var/run/hostapd.pid /etc/hostapd/hostapd.conf
root 5192 0.0 0.0 0 0 ? S Jul21 0:00 [lockd]
root 5224 0.0 0.0 24048 4 ? Ss Jul21 0:00 /usr/sbin/rpc.mountd --manage-gids
root 6160 0.0 0.0 14540 4 tty1 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty1
root 6681 0.0 0.0 10232 0 ? Ss 11:07 0:00 dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases eth0
root 9452 0.0 0.0 0 0 ? S 11:28 0:00 [kworker/u8:1]
root 9943 0.0 0.0 0 0 ? S 11:42 0:00 [kworker/u8:0]
daemon 11290 0.0 0.0 19140 164 ? Ss 11:59 0:00 atd
root 11301 0.2 0.1 26772 3436 ? Ss 12:00 0:01 /usr/sbin/openvpn --writepid /var/run/openvpn.zanview.com.pid --status /var/run/openvpn.zanview.com.status 10 --cd /etc/openvpn --config /etc/openvpn/zanvie
root 11335 0.0 0.0 0 0 ? S 12:01 0:00 [kworker/0:2]
root 11336 0.0 0.2 107692 4248 ? Ss 12:01 0:00 sshd: deployer [priv]
deployer 11384 0.0 0.0 107692 1948 ? S 12:01 0:00 sshd: deployer@pts/0
deployer 11385 0.0 0.1 19972 2392 pts/0 Ss+ 12:01 0:00 -bash
root 11502 0.0 0.2 107692 4252 ? Ss 12:01 0:00 sshd: deployer [priv]
deployer 11552 0.0 0.0 107692 1952 ? S 12:01 0:00 sshd: deployer@pts/2
deployer 11553 0.0 0.1 19972 2388 pts/2 Ss 12:01 0:00 -bash
root 11841 0.0 0.2 107692 4248 ? Ss 12:02 0:00 sshd: deployer [priv]
deployer 11889 0.0 0.1 108008 2280 ? S 12:02 0:00 sshd: deployer@pts/3
deployer 11890 0.0 0.1 19972 2388 pts/3 Ss 12:02 0:00 -bash
root 12007 0.0 0.1 67796 2136 pts/3 S 12:02 0:00 sudo su -
root 12008 0.0 0.0 67376 2016 pts/3 S 12:02 0:00 su -
root 12009 0.0 0.1 18308 2228 pts/3 S+ 12:02 0:00 -su
root 12112 0.0 0.1 67796 2136 pts/2 S 12:08 0:00 sudo su -
root 12113 0.0 0.0 67376 2012 pts/2 S 12:08 0:00 su -
root 12114 0.0 0.1 18308 2192 pts/2 S 12:08 0:00 -su
root 12180 0.0 0.0 15568 1160 pts/2 R+ 12:09 0:00 ps aux
root 25529 0.0 0.0 0 0 ? S< Jul28 0:09 [kworker/u9:1]
root 28305 0.0 0.0 0 0 ? S< Aug05 0:00 [kworker/u9:2]
```
PS Mem Output
=============
I also tried the ps\_mem.py from <https://github.com/pixelb/ps_mem>
```
root@XanBox:~/ps_mem# python ps_mem.py
Private + Shared = RAM used Program
144.0 KiB + 9.5 KiB = 153.5 KiB acpid
172.0 KiB + 29.5 KiB = 201.5 KiB atd
248.0 KiB + 35.0 KiB = 283.0 KiB cron
272.0 KiB + 84.0 KiB = 356.0 KiB upstart-file-bridge
276.0 KiB + 84.5 KiB = 360.5 KiB upstart-socket-bridge
280.0 KiB + 102.5 KiB = 382.5 KiB upstart-udev-bridge
332.0 KiB + 54.5 KiB = 386.5 KiB rpc.idmapd
368.0 KiB + 91.5 KiB = 459.5 KiB rpcbind
388.0 KiB + 251.5 KiB = 639.5 KiB systemd-logind
668.0 KiB + 43.5 KiB = 711.5 KiB hostapd
576.0 KiB + 157.5 KiB = 733.5 KiB systemd-udevd
676.0 KiB + 65.5 KiB = 741.5 KiB rpc.mountd
604.0 KiB + 163.0 KiB = 767.0 KiB rpc.statd
908.0 KiB + 62.5 KiB = 970.5 KiB dbus-daemon [updated]
932.0 KiB + 117.0 KiB = 1.0 MiB getty [updated] (6)
1.0 MiB + 69.5 KiB = 1.1 MiB openvpn
1.0 MiB + 137.0 KiB = 1.2 MiB polkitd
1.5 MiB + 202.0 KiB = 1.7 MiB htop
1.4 MiB + 306.5 KiB = 1.7 MiB whoopsie
1.4 MiB + 279.0 KiB = 1.7 MiB su (3)
1.5 MiB + 268.5 KiB = 1.8 MiB sudo (3)
2.2 MiB + 11.5 KiB = 2.3 MiB dhclient
3.9 MiB + 741.0 KiB = 4.6 MiB bash (6)
5.3 MiB + 254.5 KiB = 5.5 MiB init
2.7 MiB + 3.3 MiB = 6.1 MiB sshd (7)
18.1 MiB + 56.5 KiB = 18.2 MiB rsyslogd
---------------------------------
53.7 MiB
=================================
```
Slabtop Output
==============
I also tried slabtop:
```
root@XanBox:~# slabtop -sc
Active / Total Objects (% used) : 131306 / 137558 (95.5%)
Active / Total Slabs (% used) : 3888 / 3888 (100.0%)
Active / Total Caches (% used) : 63 / 105 (60.0%)
Active / Total Size (% used) : 27419.31K / 29580.53K (92.7%)
Minimum / Average / Maximum Object : 0.01K / 0.21K / 8.00K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
8288 7975 96% 0.57K 296 28 4736K inode_cache
14259 12858 90% 0.19K 679 21 2716K dentry
2384 1943 81% 0.96K 149 16 2384K ext4_inode_cache
20916 20494 97% 0.11K 581 36 2324K sysfs_dir_cache
624 554 88% 2.00K 39 16 1248K kmalloc-2048
195 176 90% 5.98K 39 5 1248K task_struct
6447 6387 99% 0.19K 307 21 1228K kmalloc-192
2128 1207 56% 0.55K 76 28 1216K radix_tree_node
768 761 99% 1.00K 48 16 768K kmalloc-1024
176 155 88% 4.00K 22 8 704K kmalloc-4096
1100 1100 100% 0.63K 44 25 704K proc_inode_cache
1008 1008 100% 0.66K 42 24 672K shmem_inode_cache
2640 2262 85% 0.25K 165 16 660K kmalloc-256
300 300 100% 2.06K 20 15 640K sighand_cache
5967 5967 100% 0.10K 153 39 612K buffer_head
1152 1053 91% 0.50K 72 16 576K kmalloc-512
3810 3810 100% 0.13K 127 30 508K ext4_allocation_context
60 60 100% 8.00K 15 4 480K kmalloc-8192
225 225 100% 2.06K 15 15 480K idr_layer_cache
7616 7324 96% 0.06K 119 64 476K kmalloc-64
700 700 100% 0.62K 28 25 448K sock_inode_cache
252 252 100% 1.75K 14 18 448K TCP
8925 8544 95% 0.05K 105 85 420K shared_policy_node
3072 2351 76% 0.12K 96 32 384K kmalloc-128
360 360 100% 1.06K 12 30 384K signal_cache
432 337 78% 0.88K 24 18 384K mm_struct
```
Other
=====
I also tried scanning for a rootkit with rkhunter - it found nothing. And I tried to sync and dump cache with:
```
sync; sync; sync; echo 3 > /proc/sys/vm/drop_caches
```
That made no difference either.
I also tried to force swap or disable swap with:
```
sudo sysctl -w vm.swappiness=100
sudo swapoff /dev/sda2
```
I also tried using htop and sorting by memory and it is not showing where the memory is going either. The kernel version is Linux 3.13.0-40-generic #69-Ubuntu SMP.
Dmesg output: <http://pastie.org/9558255>
smem output: <http://pastie.org/9558290>
Conclusion
==========
What is going on? - Where is all the memory going? - How do I find out? | 2014/08/06 | [
"https://superuser.com/questions/793192",
"https://superuser.com",
"https://superuser.com/users/50300/"
] | Story
=====
I can reproduce your issue using [ZFS on Linux](http://zfsonlinux.org/).
Here is a server called `node51` with `20GB` of RAM. I marked `16GiB` of RAM to be allocatable to the [ZFS adaptive replacement cache (ARC)](http://open-zfs.org/wiki/Performance_tuning#Adaptive_Replacement_Cache):
```
root@node51 [~]# echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max
root@node51 [~]# grep c_max /proc/spl/kstat/zfs/arcstats
c_max 4 17179869184
```
Then, I read a `45GiB` file using [Pipe Viewer](http://www.ivarch.com/programs/pv.shtml) in my ZFS pool `zeltik` to fill up the ARC:
```
root@node51 [~]# pv /zeltik/backup-backups/2014.04.11.squashfs > /dev/zero
45GB 0:01:20 [ 575MB/s] [==================================>] 100%
```
Now look at the free memory:
```
root@node51 [~]# free -m
total used free shared buffers cached
Mem: 20013 19810 203 1 51 69
-/+ buffers/cache: 19688 324
Swap: 7557 0 7556
```
Look!
* `51MiB` in buffers
* `69MiB` in cache
* `120MiB` in both
* `19688MiB` of RAM in use, including buffers and cache
* `19568MiB` of RAM in use, excluding buffers and cache
The Python script that you referenced reports that applications are only using a small amount of RAM:
```
root@node51 [~]# python ps_mem.py
Private + Shared = RAM used Program
148.0 KiB + 54.0 KiB = 202.0 KiB acpid
176.0 KiB + 47.0 KiB = 223.0 KiB swapspace
184.0 KiB + 51.0 KiB = 235.0 KiB atd
220.0 KiB + 57.0 KiB = 277.0 KiB rpc.idmapd
304.0 KiB + 62.0 KiB = 366.0 KiB irqbalance
312.0 KiB + 64.0 KiB = 376.0 KiB sftp-server
308.0 KiB + 89.0 KiB = 397.0 KiB rpcbind
300.0 KiB + 104.5 KiB = 404.5 KiB cron
368.0 KiB + 99.0 KiB = 467.0 KiB upstart-socket-bridge
560.0 KiB + 180.0 KiB = 740.0 KiB systemd-logind
724.0 KiB + 93.0 KiB = 817.0 KiB dbus-daemon
720.0 KiB + 136.0 KiB = 856.0 KiB systemd-udevd
912.0 KiB + 118.5 KiB = 1.0 MiB upstart-udev-bridge
920.0 KiB + 180.0 KiB = 1.1 MiB rpc.statd (2)
1.0 MiB + 129.5 KiB = 1.1 MiB screen
1.1 MiB + 84.5 KiB = 1.2 MiB upstart-file-bridge
960.0 KiB + 452.0 KiB = 1.4 MiB getty (6)
1.6 MiB + 143.0 KiB = 1.7 MiB init
5.1 MiB + 1.5 MiB = 6.5 MiB bash (3)
5.7 MiB + 5.2 MiB = 10.9 MiB sshd (8)
11.7 MiB + 322.0 KiB = 12.0 MiB glusterd
27.3 MiB + 99.0 KiB = 27.4 MiB rsyslogd
67.4 MiB + 453.0 KiB = 67.8 MiB glusterfsd (2)
---------------------------------
137.4 MiB
=================================
```
**`19568MiB - 137.4MiB ≈ 19431MiB` of unaccounted RAM**
Explanation
===========
The `120MiB` of buffers and cache used that you saw in the story above account for the kernel's efficient behavior of caching data sent to or received from an external device.
>
> The first row, labeled *Mem*, displays physical memory utilization,
> including the amount of memory allocated to buffers and caches. A
> buffer, also called *buffer memory*, is usually defined as a portion of
> memory that is set aside as a temporary holding place for data that is
> being sent to or received from an external device, such as a HDD,
> keyboard, printer or network.
>
>
> The second line of data, which begins with *-/+ buffers/cache*, shows
> the amount of physical memory currently devoted to system *buffer
> cache*. This is particularly meaningful with regard to application
> programs, as all data accessed from files on the system that are
> performed through the use of *read()* and *write()* *system calls* pass
> through this cache. This cache can greatly speed up access to data by
> reducing or eliminating the need to read from or write to the HDD or
> other disk.
>
>
>
Source: <http://www.linfo.org/free.html>
Now how do we account for the missing `19431MiB`?
In the `free -m` output above, the `19688MiB` "*used*" in "*-/+ buffers/cache*" comes from this formula:
```
(kb_main_used) - (buffers_plus_cached) =
(kb_main_total - kb_main_free) - (kb_main_buffers + kb_main_cached)
kb_main_total: MemTotal from /proc/meminfo
kb_main_free: MemFree from /proc/meminfo
kb_main_buffers: Buffers from /proc/meminfo
kb_main_cached: Cached from /proc/meminfo
```
Source: [procps/free.c](http://procps.cvs.sourceforge.net/viewvc/procps/procps/free.c?revision=1.2&view=markup) and [procps/proc/sysinfo.c](http://procps.cvs.sourceforge.net/viewvc/procps/procps/proc/sysinfo.c?revision=1.41&view=markup)
(If you do the numbers based on my `free -m` output, you'll notice that `2MiB` aren't accounted for, but that's because of rounding errors introduced by this code: `#define S(X) ( ((unsigned long long)(X) << 10) >> shift)`)
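The same formula can be applied to `/proc/meminfo` directly. A hedged sketch (this is an illustration of the free.c formula, not procps code; the sample values are chosen to roughly match the node51 `free -m` output above, and the small gap versus free's rounded 19,688MB is again rounding):

```python
# Sketch of the free.c formula applied to /proc/meminfo-format text.
# Field values are in kB. Illustration only, not procps code.
def used_minus_buffers_cached(meminfo_text):
    kb = {}
    for line in meminfo_text.splitlines():
        name, rest = line.split(":", 1)
        kb[name] = int(rest.split()[0])
    used = kb["MemTotal"] - kb["MemFree"]
    return used - (kb["Buffers"] + kb["Cached"])

sample = """\
MemTotal:       20493312 kB
MemFree:          207872 kB
Buffers:           52224 kB
Cached:            70656 kB
"""
print(used_minus_buffers_cached(sample), "kB")  # 20162560 kB (about 19690 MB)
```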
The numbers don't add up in `/proc/meminfo`, either (I didn't record `/proc/meminfo` when I ran `free -m`, but we can see from your question that `/proc/meminfo` doesn't show where the missing RAM is), so we can conclude from the above that `/proc/meminfo` doesn't tell the whole story.
In my testing conditions, I know as a control that ZFS on Linux is responsible for the high RAM usage. I told its ARC that it could use up to `16GiB` of the server's RAM.
ZFS on Linux isn't a process. It's a kernel module.
From what I've found so far, the RAM usage of a kernel module wouldn't show up using process information tools because the module isn't a process.
Troubleshooting
===============
Unfortunately, I don't know enough about Linux to offer you a way to build a list of how much RAM non-process components (like the kernel and its modules) are using.
At this point, we can speculate, guess, and check.
You provided a `dmesg` output. Well-designed kernel modules would log some of their details to `dmesg`.
After looking through `dmesg`, one item stood out to me: `FS-Cache`
`FS-Cache` is part of the `cachefiles` kernel module and relates to the package `cachefilesd` on Debian and Red Hat Enterprise Linux.
Perhaps some time ago, you configured `FS-Cache` on a RAM disk to reduce the impact of network I/O as your server analyzes the video data.
Try disabling any suspicious kernel modules that could be eating up RAM. They can probably be disabled with [`blacklist`](https://wiki.debian.org/KernelModuleBlacklisting) in `/etc/modprobe.d/`, followed by a `sudo update-initramfs -u` (commands and locations may vary by Linux distribution).
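As a sketch - assuming, purely for illustration, that `cachefiles` turned out to be the suspect module - the blacklist entry on Debian/Ubuntu would look something like:

```
# /etc/modprobe.d/blacklist-cachefiles.conf   (hypothetical example)
blacklist cachefiles
```

followed by `sudo update-initramfs -u` and a reboot, then watching whether the leak continues.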
Conclusion
==========
A memory leak is eating up `8MB/hr` of your RAM and won't release the RAM, seemingly no matter what you do. I was not able to determine the source of your memory leak based on the information that you provided, nor was I able to offer a way to find that memory leak.
Someone who is more experienced with Linux than I will need to provide input on how we can determine where the "other" RAM usage is going.
I have started a bounty on this question to see if we can get a better answer than "speculate, guess, and check". | You are not quite right - yes, your `free -m` command shows 220MB free, but it also shows that 1771MB is used as buffers.
Buffers and cache are memory used by the kernel to optimize access to slow-access data, usually on disks.
So you should consider all memory marked as buffers to be free memory, because the kernel can take it back whenever it is required.
See: <https://serverfault.com/questions/23433/in-linux-what-is-the-difference-between-buffers-and-cache-reported-by-the-f> |
793192 | August 2015 Summary
===================
Please note, this is still happening. This is **not** related to linuxatemyram.com - the memory is not used for disk cache/buffers. This is what it looks like in NewRelic - the system leaks all the memory, uses up all swap space and then crashes. In this screenshot I rebooted the server before it crashed:
[](https://i.stack.imgur.com/vIkEa.png)
It is impossible to identify the source of the leak using common userspace tools. There is now a chat room to discuss this issue: <http://chat.stackexchange.com/rooms/27309/invisible-memory-leak-on-linux>
The only way to recover the "missing" memory appears to be rebooting the server. This has been a long-standing issue reproduced in Ubuntu Server 14.04, 14.10 and 15.04.
Top
===
The memory use does not show in top and cannot be recovered even after killing just about every process (excluding things like kernel processes and ssh). Look at the "cached Mem", "buffers" and "free" fields in top, they are not using up the memory, the memory used is "missing" and unrecoverable without a reboot.
Attempting to use this "missing" memory causes the server to swap, slow to a crawl and eventually freeze.
```
root@XanBox:~# top -o +%MEM
top - 12:12:13 up 15 days, 20:39, 3 users, load average: 0.00, 0.06, 0.77
Tasks: 126 total, 1 running, 125 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 0.2 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.1 hi, 0.0 si, 0.0 st
KiB Mem:   2040256 total,  1881228 used,   159028 free,     1348 buffers
KiB Swap:  1999868 total,    27436 used,  1972432 free.    67228 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
11502 root 20 0 107692 4252 3240 S 0.0 0.2 0:00.06 sshd: deployer [priv]
11336 root 20 0 107692 4248 3240 S 0.0 0.2 0:00.06 sshd: deployer [priv]
11841 root 20 0 107692 4248 3240 S 0.0 0.2 0:00.06 sshd: deployer [priv]
11301 root 20 0 26772 3436 2688 S 0.7 0.2 0:01.30 /usr/sbin/openvpn --writepid /var/run/openvpn.zanview.com.pid --status /var/run/openvpn.zanview.com.status 10 --cd /etc/openvpn --config /etc/openvpn/z+
11385 deployer 20 0 19972 2392 1708 S 0.0 0.1 0:00.03 -bash
11553 deployer 20 0 19972 2388 1708 S 0.0 0.1 0:00.03 -bash
11890 deployer 20 0 19972 2388 1708 S 0.0 0.1 0:00.02 -bash
11889 deployer 20 0 108008 2280 944 S 0.0 0.1 0:00.25 sshd: deployer@pts/3
12009 root 20 0 18308 2228 1608 S 0.0 0.1 0:00.09 -su
12114 root 20 0 18308 2192 1564 S 0.0 0.1 0:00.04 -su
12007 root 20 0 67796 2136 1644 S 0.0 0.1 0:00.01 sudo su -
12112 root 20 0 67796 2136 1644 S 0.0 0.1 0:00.01 sudo su -
12008 root 20 0 67376 2016 1528 S 0.0 0.1 0:00.01 su -
12113 root 20 0 67376 2012 1528 S 0.0 0.1 0:00.01 su -
1 root 20 0 33644 1988 764 S 0.0 0.1 2:29.77 /sbin/init
11552 deployer 20 0 107692 1952 936 S 0.0 0.1 0:00.07 sshd: deployer@pts/2
11384 deployer 20 0 107692 1948 936 S 0.0 0.1 0:00.06 sshd: deployer@pts/0
12182 root 20 0 20012 1516 1012 R 0.7 0.1 0:00.08 top -o +%MEM
1152 message+ 20 0 39508 1448 920 S 0.0 0.1 1:40.01 dbus-daemon --system --fork
1791 root 20 0 279832 1312 816 S 0.0 0.1 1:16.18 /usr/lib/policykit-1/polkitd --no-debug
1186 root 20 0 43736 984 796 S 0.0 0.0 1:13.07 /lib/systemd/systemd-logind
1212 syslog 20 0 256228 688 184 S 0.0 0.0 1:41.29 rsyslogd
5077 root 20 0 25324 648 520 S 0.0 0.0 0:34.35 /usr/sbin/hostapd -B -P /var/run/hostapd.pid /etc/hostapd/hostapd.conf
336 root 20 0 19476 512 376 S 0.0 0.0 0:07.40 upstart-udev-bridge --daemon
342 root 20 0 51228 468 344 S 0.0 0.0 0:00.85 /lib/systemd/systemd-udevd --daemon
1097 root 20 0 15276 364 256 S 0.0 0.0 0:06.39 upstart-file-bridge --daemon
4921 root 20 0 61364 364 240 S 0.0 0.0 0:00.05 /usr/sbin/sshd -D
745 root 20 0 15364 252 180 S 0.0 0.0 0:06.51 upstart-socket-bridge --daemon
4947 root 20 0 23656 168 100 S 0.0 0.0 0:14.70 cron
11290 daemon 20 0 19140 164 0 S 0.0 0.0 0:00.00 atd
850 root 20 0 23420 80 16 S 0.0 0.0 0:11.00 rpcbind
872 statd 20 0 21544 8 4 S 0.0 0.0 0:00.00 rpc.statd -L
4880 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty4
4883 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty5
4890 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty2
4891 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty3
4894 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty6
4919 root 20 0 4368 4 0 S 0.0 0.0 0:00.00 acpid -c /etc/acpi/events -s /var/run/acpid.socket
5224 root 20 0 24048 4 0 S 0.0 0.0 0:00.00 /usr/sbin/rpc.mountd --manage-gids
6160 root 20 0 14540 4 0 S 0.0 0.0 0:00.00 /sbin/getty -8 38400 tty1
2 root 20 0 0 0 0 S 0.0 0.0 0:03.44 [kthreadd]
3 root 20 0 0 0 0 S 0.0 0.0 1:04.63 [ksoftirqd/0]
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kworker/0:0H]
7 root 20 0 0 0 0 S 0.0 0.0 16:03.32 [rcu_sched]
8 root 20 0 0 0 0 S 0.0 0.0 4:08.79 [rcuos/0]
9 root 20 0 0 0 0 S 0.0 0.0 4:10.42 [rcuos/1]
10 root 20 0 0 0 0 S 0.0 0.0 4:30.71 [rcuos/2]
```
Hardware
========
I have observed this on 3 servers out of around 100 so far (though others may be affected). One is an Intel Atom D525 @1.8GHz and the other 2 are Core2Duo E4600 and Q6600. One is using a JMicron Technology Corp. JMC250 PCI Express Gigabit Ethernet Controller, the others are using Qualcomm Atheros Attansic L1 Gigabit Ethernet (rev b0).
I ran lshw on the trouble servers as well as on an example OK server. Problem Servers: <http://pastie.org/10370534> <http://pastie.org/10370537> and <http://pastie.org/10370541> -- OK Server: <http://pastie.org/10370544>
Application
===========
This is an entirely headless application. There is no monitor connected and in fact no XServer installed at all. This should rule out graphics drivers/issues.
The server is used to proxy and analyse RTSP video using live555ProxyServer, ffmpeg and OpenCV. These servers crunch through a lot of traffic because this is a CCTV application: <http://pastie.org/9558324>
I have tried both very old and the latest trunk versions of live555, ffmpeg and OpenCV without change. I have also tried using OpenCV through the python2 and python3 modules, no change.
The exact same software/configuration has been loaded onto close to 100 servers; so far 3 are confirmed to leak memory. The servers slowly and stealthily leak a few MB per hour (one leaks 8MB/hr, one is slower, one is faster) until all RAM is gone, at which point the servers start swapping heavily, slow to a crawl and require a reboot.
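At a steady rate like that, the time until the machine exhausts its memory is simple arithmetic. A sketch, assuming the figures described above (~220MB really free, ~1926MB of free swap, leaking 8MB/hr):

```python
def hours_until_oom(free_mb, swap_free_mb, leak_mb_per_hr):
    """Hours until a steady leak consumes the remaining free RAM plus swap."""
    return (free_mb + swap_free_mb) / leak_mb_per_hr

# ~220MB really free plus ~1926MB free swap, leaking 8MB/hr
hours = hours_until_oom(220, 1926, 8)
print(round(hours), "hours, i.e. about", round(hours / 24), "days")  # 268 hours, about 11 days
```

That matches the observed pattern of the boxes needing a reboot roughly every couple of weeks.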
Meminfo
=======
Again, you can see the Cached and Buffers not using up much memory at all. HugePages are also disabled so this is not the culprit.
```
root@XanBox:~# cat /proc/meminfo
MemTotal:        2040256 kB
MemFree:          159004 kB
Buffers:            1348 kB
Cached:            67228 kB
SwapCached:         9940 kB
Active:            10788 kB
Inactive:          81120 kB
Active(anon):       1900 kB
Inactive(anon):    21512 kB
Active(file):       8888 kB
Inactive(file):    59608 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       1999868 kB
SwapFree:        1972432 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:         14496 kB
Mapped:             8160 kB
Shmem:                80 kB
Slab:              33472 kB
SReclaimable:      17660 kB
SUnreclaim:        15812 kB
KernelStack:        1064 kB
PageTables:         3992 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     3019996 kB
Committed_AS:      94520 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      535936 kB
VmallocChunk:   34359147772 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       62144 kB
DirectMap2M:     2025472 kB
```
Free Output
===========
Free shows the following (note cached and buffers are both low so this is not disk cache or buffers!) - the memory is not recoverable without a reboot:
```
root@XanBox:~# free -m
total used free shared buffers cached
Mem:          1992       1838        153          0          1         66
```
If we subtract/add the buffers/cache to Used and Free, we see:
* 1772MB Really Used (- Buffers/Cache) = 1838MB used - 1MB buffers - 66MB cache
* 220MB Really Free (+ Buffers/Cache) = 154MB free + 1MB buffers + 66MB cache
Exactly as we expect:
```
-/+ buffers/cache:       1772        220
```
So around 1.7GB is not used by userspace and must in fact be used by the kernel, since userspace is only using 53.7MB in total (see PS Mem output below).
I'm surprised by the number of comments suggesting that the 1.7GB is used for caching/buffers - this is **fundamentally misreading the output!** The `-/+ buffers/cache` line means used memory **excluding buffers/cache**; see linuxatemyram.com for details.
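The `-/+ buffers/cache` line is just this arithmetic over the raw fields. A sketch using the kB values from the `/proc/meminfo` output above (`free -m` rounds to MB and was run at a slightly different moment, hence its 1772/220):

```python
def buffers_cache_adjusted(meminfo_kb):
    """Replicate free(1)'s '-/+ buffers/cache' line from /proc/meminfo fields."""
    used = meminfo_kb["MemTotal"] - meminfo_kb["MemFree"]
    really_used = used - meminfo_kb["Buffers"] - meminfo_kb["Cached"]
    really_free = meminfo_kb["MemFree"] + meminfo_kb["Buffers"] + meminfo_kb["Cached"]
    return really_used, really_free

sample = {"MemTotal": 2040256, "MemFree": 159004, "Buffers": 1348, "Cached": 67228}
used_kb, free_kb = buffers_cache_adjusted(sample)
print(used_kb, free_kb)  # 1812676 227580 -> ~1770MB really used, ~222MB really free
```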
PS Output
=========
Here is a full list of running processes sorted by memory:
```
# ps -e -o pid,vsz,comm= | sort -n -k 2
2 0 kthreadd
3 0 ksoftirqd/0
5 0 kworker/0:0H
7 0 rcu_sched
8 0 rcuos/0
9 0 rcuos/1
10 0 rcuos/2
11 0 rcuos/3
12 0 rcu_bh
13 0 rcuob/0
14 0 rcuob/1
15 0 rcuob/2
16 0 rcuob/3
17 0 migration/0
18 0 watchdog/0
19 0 watchdog/1
20 0 migration/1
21 0 ksoftirqd/1
23 0 kworker/1:0H
24 0 watchdog/2
25 0 migration/2
26 0 ksoftirqd/2
28 0 kworker/2:0H
29 0 watchdog/3
30 0 migration/3
31 0 ksoftirqd/3
32 0 kworker/3:0
33 0 kworker/3:0H
34 0 khelper
35 0 kdevtmpfs
36 0 netns
37 0 writeback
38 0 kintegrityd
39 0 bioset
41 0 kblockd
42 0 ata_sff
43 0 khubd
44 0 md
45 0 devfreq_wq
46 0 kworker/0:1
47 0 kworker/1:1
48 0 kworker/2:1
50 0 khungtaskd
51 0 kswapd0
52 0 ksmd
53 0 khugepaged
54 0 fsnotify_mark
55 0 ecryptfs-kthrea
56 0 crypto
68 0 kthrotld
70 0 scsi_eh_0
71 0 scsi_eh_1
92 0 deferwq
93 0 charger_manager
94 0 kworker/1:2
95 0 kworker/3:2
149 0 kpsmoused
155 0 jbd2/sda1-8
156 0 ext4-rsv-conver
316 0 jbd2/sda3-8
317 0 ext4-rsv-conver
565 0 kmemstick
770 0 cfg80211
818 0 hd-audio0
853 0 kworker/2:2
953 0 rpciod
PID VSZ
1714 0 kauditd
11335 0 kworker/0:2
12202 0 kworker/u8:2
20228 0 kworker/u8:0
25529 0 kworker/u9:1
28305 0 kworker/u9:2
29822 0 lockd
4919 4368 acpid
4074 7136 ps
6681 10232 dhclient
4880 14540 getty
4883 14540 getty
4890 14540 getty
4891 14540 getty
4894 14540 getty
6160 14540 getty
14486 15260 upstart-socket-
14489 15276 upstart-file-br
12009 18308 bash
12114 18308 bash
12289 18308 bash
4075 19008 sort
11290 19140 atd
14483 19476 upstart-udev-br
11385 19972 bash
11553 19972 bash
11890 19972 bash
29503 21544 rpc.statd
2847 23384 htop
850 23420 rpcbind
29588 23480 rpc.idmapd
4947 23656 cron
29833 24048 rpc.mountd
5077 25324 hostapd
11301 26912 openvpn
1 37356 init
1152 39508 dbus-daemon
14673 43452 systemd-logind
14450 51204 systemd-udevd
4921 61364 sshd
12008 67376 su
12113 67376 su
12288 67376 su
12007 67796 sudo
12112 67796 sudo
12287 67796 sudo
11336 107692 sshd
11384 107692 sshd
11502 107692 sshd
11841 107692 sshd
11552 108008 sshd
11889 108008 sshd
1212 256228 rsyslogd
1791 279832 polkitd
4064 335684 whoopsie
```
Here is a full list of all running processes:
```
root@XanBox:~# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 33644 1988 ? Ss Jul21 2:29 /sbin/init
root 2 0.0 0.0 0 0 ? S Jul21 0:03 [kthreadd]
root 3 0.0 0.0 0 0 ? S Jul21 1:04 [ksoftirqd/0]
root 5 0.0 0.0 0 0 ? S< Jul21 0:00 [kworker/0:0H]
root 7 0.0 0.0 0 0 ? S Jul21 16:03 [rcu_sched]
root 8 0.0 0.0 0 0 ? S Jul21 4:08 [rcuos/0]
root 9 0.0 0.0 0 0 ? S Jul21 4:10 [rcuos/1]
root 10 0.0 0.0 0 0 ? S Jul21 4:30 [rcuos/2]
root 11 0.0 0.0 0 0 ? S Jul21 4:28 [rcuos/3]
root 12 0.0 0.0 0 0 ? S Jul21 0:00 [rcu_bh]
root 13 0.0 0.0 0 0 ? S Jul21 0:00 [rcuob/0]
root 14 0.0 0.0 0 0 ? S Jul21 0:00 [rcuob/1]
root 15 0.0 0.0 0 0 ? S Jul21 0:00 [rcuob/2]
root 16 0.0 0.0 0 0 ? S Jul21 0:00 [rcuob/3]
root 17 0.0 0.0 0 0 ? S Jul21 0:13 [migration/0]
root 18 0.0 0.0 0 0 ? S Jul21 0:08 [watchdog/0]
root 19 0.0 0.0 0 0 ? S Jul21 0:07 [watchdog/1]
root 20 0.0 0.0 0 0 ? S Jul21 0:13 [migration/1]
root 21 0.0 0.0 0 0 ? S Jul21 1:03 [ksoftirqd/1]
root 23 0.0 0.0 0 0 ? S< Jul21 0:00 [kworker/1:0H]
root 24 0.0 0.0 0 0 ? S Jul21 0:07 [watchdog/2]
root 25 0.0 0.0 0 0 ? S Jul21 0:23 [migration/2]
root 26 0.0 0.0 0 0 ? S Jul21 1:01 [ksoftirqd/2]
root 28 0.0 0.0 0 0 ? S< Jul21 0:00 [kworker/2:0H]
root 29 0.0 0.0 0 0 ? S Jul21 0:07 [watchdog/3]
root 30 0.0 0.0 0 0 ? S Jul21 0:23 [migration/3]
root 31 0.0 0.0 0 0 ? S Jul21 1:03 [ksoftirqd/3]
root 32 0.0 0.0 0 0 ? S Jul21 0:00 [kworker/3:0]
root 33 0.0 0.0 0 0 ? S< Jul21 0:00 [kworker/3:0H]
root 34 0.0 0.0 0 0 ? S< Jul21 0:00 [khelper]
root 35 0.0 0.0 0 0 ? S Jul21 0:00 [kdevtmpfs]
root 36 0.0 0.0 0 0 ? S< Jul21 0:00 [netns]
root 37 0.0 0.0 0 0 ? S< Jul21 0:00 [writeback]
root 38 0.0 0.0 0 0 ? S< Jul21 0:00 [kintegrityd]
root 39 0.0 0.0 0 0 ? S< Jul21 0:00 [bioset]
root 41 0.0 0.0 0 0 ? S< Jul21 0:00 [kblockd]
root 42 0.0 0.0 0 0 ? S< Jul21 0:00 [ata_sff]
root 43 0.0 0.0 0 0 ? S Jul21 0:00 [khubd]
root 44 0.0 0.0 0 0 ? S< Jul21 0:00 [md]
root 45 0.0 0.0 0 0 ? S< Jul21 0:00 [devfreq_wq]
root 46 0.0 0.0 0 0 ? S Jul21 18:51 [kworker/0:1]
root 47 0.0 0.0 0 0 ? S Jul21 0:00 [kworker/1:1]
root 48 0.0 0.0 0 0 ? S Jul21 1:14 [kworker/2:1]
root 50 0.0 0.0 0 0 ? S Jul21 0:01 [khungtaskd]
root 51 0.4 0.0 0 0 ? S Jul21 95:51 [kswapd0]
root 52 0.0 0.0 0 0 ? SN Jul21 0:00 [ksmd]
root 53 0.0 0.0 0 0 ? SN Jul21 0:28 [khugepaged]
root 54 0.0 0.0 0 0 ? S Jul21 0:00 [fsnotify_mark]
root 55 0.0 0.0 0 0 ? S Jul21 0:00 [ecryptfs-kthrea]
root 56 0.0 0.0 0 0 ? S< Jul21 0:00 [crypto]
root 68 0.0 0.0 0 0 ? S< Jul21 0:00 [kthrotld]
root 70 0.0 0.0 0 0 ? S Jul21 0:00 [scsi_eh_0]
root 71 0.0 0.0 0 0 ? S Jul21 0:00 [scsi_eh_1]
root 92 0.0 0.0 0 0 ? S< Jul21 0:00 [deferwq]
root 93 0.0 0.0 0 0 ? S< Jul21 0:00 [charger_manager]
root 94 0.0 0.0 0 0 ? S Jul21 1:05 [kworker/1:2]
root 95 0.0 0.0 0 0 ? S Jul21 1:08 [kworker/3:2]
root 149 0.0 0.0 0 0 ? S< Jul21 0:00 [kpsmoused]
root 155 0.0 0.0 0 0 ? S Jul21 3:39 [jbd2/sda1-8]
root 156 0.0 0.0 0 0 ? S< Jul21 0:00 [ext4-rsv-conver]
root 316 0.0 0.0 0 0 ? S Jul21 1:28 [jbd2/sda3-8]
root 317 0.0 0.0 0 0 ? S< Jul21 0:00 [ext4-rsv-conver]
root 336 0.0 0.0 19476 512 ? S Jul21 0:07 upstart-udev-bridge --daemon
root 342 0.0 0.0 51228 468 ? Ss Jul21 0:00 /lib/systemd/systemd-udevd --daemon
root 565 0.0 0.0 0 0 ? S< Jul21 0:00 [kmemstick]
root 745 0.0 0.0 15364 252 ? S Jul21 0:06 upstart-socket-bridge --daemon
root 770 0.0 0.0 0 0 ? S< Jul21 0:00 [cfg80211]
root 818 0.0 0.0 0 0 ? S< Jul21 0:00 [hd-audio0]
root 850 0.0 0.0 23420 80 ? Ss Jul21 0:11 rpcbind
root 853 0.0 0.0 0 0 ? S Jul21 0:00 [kworker/2:2]
statd 872 0.0 0.0 21544 8 ? Ss Jul21 0:00 rpc.statd -L
root 953 0.0 0.0 0 0 ? S< Jul21 0:00 [rpciod]
root 1097 0.0 0.0 15276 364 ? S Jul21 0:06 upstart-file-bridge --daemon
message+ 1152 0.0 0.0 39508 1448 ? Ss Jul21 1:40 dbus-daemon --system --fork
root 1157 0.0 0.0 23480 0 ? Ss Jul21 0:00 rpc.idmapd
root 1186 0.0 0.0 43736 984 ? Ss Jul21 1:13 /lib/systemd/systemd-logind
syslog 1212 0.0 0.0 256228 688 ? Ssl Jul21 1:41 rsyslogd
root 1714 0.0 0.0 0 0 ? S Jul21 0:00 [kauditd]
root 1791 0.0 0.0 279832 1312 ? Sl Jul21 1:16 /usr/lib/policykit-1/polkitd --no-debug
root 4880 0.0 0.0 14540 4 tty4 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty4
root 4883 0.0 0.0 14540 4 tty5 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty5
root 4890 0.0 0.0 14540 4 tty2 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty2
root 4891 0.0 0.0 14540 4 tty3 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty3
root 4894 0.0 0.0 14540 4 tty6 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty6
root 4919 0.0 0.0 4368 4 ? Ss Jul21 0:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket
root 4921 0.0 0.0 61364 364 ? Ss Jul21 0:00 /usr/sbin/sshd -D
root 4947 0.0 0.0 23656 168 ? Ss Jul21 0:14 cron
root 5077 0.0 0.0 25324 648 ? Ss Jul21 0:34 /usr/sbin/hostapd -B -P /var/run/hostapd.pid /etc/hostapd/hostapd.conf
root 5192 0.0 0.0 0 0 ? S Jul21 0:00 [lockd]
root 5224 0.0 0.0 24048 4 ? Ss Jul21 0:00 /usr/sbin/rpc.mountd --manage-gids
root 6160 0.0 0.0 14540 4 tty1 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty1
root 6681 0.0 0.0 10232 0 ? Ss 11:07 0:00 dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases eth0
root 9452 0.0 0.0 0 0 ? S 11:28 0:00 [kworker/u8:1]
root 9943 0.0 0.0 0 0 ? S 11:42 0:00 [kworker/u8:0]
daemon 11290 0.0 0.0 19140 164 ? Ss 11:59 0:00 atd
root 11301 0.2 0.1 26772 3436 ? Ss 12:00 0:01 /usr/sbin/openvpn --writepid /var/run/openvpn.zanview.com.pid --status /var/run/openvpn.zanview.com.status 10 --cd /etc/openvpn --config /etc/openvpn/zanvie
root 11335 0.0 0.0 0 0 ? S 12:01 0:00 [kworker/0:2]
root 11336 0.0 0.2 107692 4248 ? Ss 12:01 0:00 sshd: deployer [priv]
deployer 11384 0.0 0.0 107692 1948 ? S 12:01 0:00 sshd: deployer@pts/0
deployer 11385 0.0 0.1 19972 2392 pts/0 Ss+ 12:01 0:00 -bash
root 11502 0.0 0.2 107692 4252 ? Ss 12:01 0:00 sshd: deployer [priv]
deployer 11552 0.0 0.0 107692 1952 ? S 12:01 0:00 sshd: deployer@pts/2
deployer 11553 0.0 0.1 19972 2388 pts/2 Ss 12:01 0:00 -bash
root 11841 0.0 0.2 107692 4248 ? Ss 12:02 0:00 sshd: deployer [priv]
deployer 11889 0.0 0.1 108008 2280 ? S 12:02 0:00 sshd: deployer@pts/3
deployer 11890 0.0 0.1 19972 2388 pts/3 Ss 12:02 0:00 -bash
root 12007 0.0 0.1 67796 2136 pts/3 S 12:02 0:00 sudo su -
root 12008 0.0 0.0 67376 2016 pts/3 S 12:02 0:00 su -
root 12009 0.0 0.1 18308 2228 pts/3 S+ 12:02 0:00 -su
root 12112 0.0 0.1 67796 2136 pts/2 S 12:08 0:00 sudo su -
root 12113 0.0 0.0 67376 2012 pts/2 S 12:08 0:00 su -
root 12114 0.0 0.1 18308 2192 pts/2 S 12:08 0:00 -su
root 12180 0.0 0.0 15568 1160 pts/2 R+ 12:09 0:00 ps aux
root 25529 0.0 0.0 0 0 ? S< Jul28 0:09 [kworker/u9:1]
root 28305 0.0 0.0 0 0 ? S< Aug05 0:00 [kworker/u9:2]
```
PS Mem Output
=============
I also tried the ps\_mem.py from <https://github.com/pixelb/ps_mem>
```
root@XanBox:~/ps_mem# python ps_mem.py
Private + Shared = RAM used Program
144.0 KiB + 9.5 KiB = 153.5 KiB acpid
172.0 KiB + 29.5 KiB = 201.5 KiB atd
248.0 KiB + 35.0 KiB = 283.0 KiB cron
272.0 KiB + 84.0 KiB = 356.0 KiB upstart-file-bridge
276.0 KiB + 84.5 KiB = 360.5 KiB upstart-socket-bridge
280.0 KiB + 102.5 KiB = 382.5 KiB upstart-udev-bridge
332.0 KiB + 54.5 KiB = 386.5 KiB rpc.idmapd
368.0 KiB + 91.5 KiB = 459.5 KiB rpcbind
388.0 KiB + 251.5 KiB = 639.5 KiB systemd-logind
668.0 KiB + 43.5 KiB = 711.5 KiB hostapd
576.0 KiB + 157.5 KiB = 733.5 KiB systemd-udevd
676.0 KiB + 65.5 KiB = 741.5 KiB rpc.mountd
604.0 KiB + 163.0 KiB = 767.0 KiB rpc.statd
908.0 KiB + 62.5 KiB = 970.5 KiB dbus-daemon [updated]
932.0 KiB + 117.0 KiB = 1.0 MiB getty [updated] (6)
1.0 MiB + 69.5 KiB = 1.1 MiB openvpn
1.0 MiB + 137.0 KiB = 1.2 MiB polkitd
1.5 MiB + 202.0 KiB = 1.7 MiB htop
1.4 MiB + 306.5 KiB = 1.7 MiB whoopsie
1.4 MiB + 279.0 KiB = 1.7 MiB su (3)
1.5 MiB + 268.5 KiB = 1.8 MiB sudo (3)
2.2 MiB + 11.5 KiB = 2.3 MiB dhclient
3.9 MiB + 741.0 KiB = 4.6 MiB bash (6)
5.3 MiB + 254.5 KiB = 5.5 MiB init
2.7 MiB + 3.3 MiB = 6.1 MiB sshd (7)
18.1 MiB + 56.5 KiB = 18.2 MiB rsyslogd
---------------------------------
53.7 MiB
=================================
```
Slabtop Output
==============
I also tried slabtop:
```
root@XanBox:~# slabtop -sc
Active / Total Objects (% used) : 131306 / 137558 (95.5%)
Active / Total Slabs (% used) : 3888 / 3888 (100.0%)
Active / Total Caches (% used) : 63 / 105 (60.0%)
Active / Total Size (% used) : 27419.31K / 29580.53K (92.7%)
Minimum / Average / Maximum Object : 0.01K / 0.21K / 8.00K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
8288 7975 96% 0.57K 296 28 4736K inode_cache
14259 12858 90% 0.19K 679 21 2716K dentry
2384 1943 81% 0.96K 149 16 2384K ext4_inode_cache
20916 20494 97% 0.11K 581 36 2324K sysfs_dir_cache
624 554 88% 2.00K 39 16 1248K kmalloc-2048
195 176 90% 5.98K 39 5 1248K task_struct
6447 6387 99% 0.19K 307 21 1228K kmalloc-192
2128 1207 56% 0.55K 76 28 1216K radix_tree_node
768 761 99% 1.00K 48 16 768K kmalloc-1024
176 155 88% 4.00K 22 8 704K kmalloc-4096
1100 1100 100% 0.63K 44 25 704K proc_inode_cache
1008 1008 100% 0.66K 42 24 672K shmem_inode_cache
2640 2262 85% 0.25K 165 16 660K kmalloc-256
300 300 100% 2.06K 20 15 640K sighand_cache
5967 5967 100% 0.10K 153 39 612K buffer_head
1152 1053 91% 0.50K 72 16 576K kmalloc-512
3810 3810 100% 0.13K 127 30 508K ext4_allocation_context
60 60 100% 8.00K 15 4 480K kmalloc-8192
225 225 100% 2.06K 15 15 480K idr_layer_cache
7616 7324 96% 0.06K 119 64 476K kmalloc-64
700 700 100% 0.62K 28 25 448K sock_inode_cache
252 252 100% 1.75K 14 18 448K TCP
8925 8544 95% 0.05K 105 85 420K shared_policy_node
3072 2351 76% 0.12K 96 32 384K kmalloc-128
360 360 100% 1.06K 12 30 384K signal_cache
432 337 78% 0.88K 24 18 384K mm_struct
```
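For what it's worth, the kernel memory that *is* accounted for can be totalled from the `/proc/meminfo` fields quoted earlier; a leak made through bare page allocations would bypass the slab allocator and not show up in any of these fields, which fits the "invisible" pattern. A sketch (kB values assumed from the meminfo output above):

```python
def accounted_kernel_kb(meminfo_kb):
    """Sum the kernel-side allocations that /proc/meminfo reports."""
    return sum(meminfo_kb[k] for k in ("Slab", "KernelStack", "PageTables"))

sample = {"Slab": 33472, "KernelStack": 1064, "PageTables": 3992}
print(accounted_kernel_kb(sample))  # 38528 kB, i.e. ~37.6MB -- nowhere near 1.7GB
```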
Other
=====
I also tried scanning for a rootkit with rkhunter - it found nothing. And I tried syncing and dropping the caches with:
```
sync; sync; sync; echo 3 > /proc/sys/vm/drop_caches
```
It also made no difference.
I also tried to force swap or disable swap with:
```
sudo sysctl -w vm.swappiness=100
sudo swapoff /dev/sda2
```
I also tried using htop and sorting by memory and it is not showing where the memory is going either. The kernel version is Linux 3.13.0-40-generic #69-Ubuntu SMP.
Dmesg output: <http://pastie.org/9558255>
smem output: <http://pastie.org/9558290>
Conclusion
==========
What is going on? - Where is all the memory going? - How do I find out? | 2014/08/06 | [
"https://superuser.com/questions/793192",
"https://superuser.com",
"https://superuser.com/users/50300/"
] | My conclusion is that this is a memory leak somewhere in the Linux kernel, which is why none of the userspace tools are able to show where the memory is being leaked. Maybe it is related to this question: <https://serverfault.com/questions/670423/linux-memory-usage-higher-than-sum-of-processes>
I upgraded the kernel version from 3.13 to 3.19 and it seems the memory leak has stopped! - I will report back if I see a leak again.
It would still be useful to have some easy/easier way to see how much memory is used for different parts of the Linux kernel. It is still a mystery what was causing the leak in 3.13. | You are not quite right – yes your `free -m` command is showing free 220MB but it is also showing that 1771MB is used as buffers.
Buffers and Cached are memory used by the kernel to optimize access to slow-access data, usually disks.
So you should consider all memory marked as buffers as free memory because kernel can take it back whenever it is required.
See: <https://serverfault.com/questions/23433/in-linux-what-is-the-difference-between-buffers-and-cache-reported-by-the-f> |
root 30 0.0 0.0 0 0 ? S Jul21 0:23 [migration/3]
root 31 0.0 0.0 0 0 ? S Jul21 1:03 [ksoftirqd/3]
root 32 0.0 0.0 0 0 ? S Jul21 0:00 [kworker/3:0]
root 33 0.0 0.0 0 0 ? S< Jul21 0:00 [kworker/3:0H]
root 34 0.0 0.0 0 0 ? S< Jul21 0:00 [khelper]
root 35 0.0 0.0 0 0 ? S Jul21 0:00 [kdevtmpfs]
root 36 0.0 0.0 0 0 ? S< Jul21 0:00 [netns]
root 37 0.0 0.0 0 0 ? S< Jul21 0:00 [writeback]
root 38 0.0 0.0 0 0 ? S< Jul21 0:00 [kintegrityd]
root 39 0.0 0.0 0 0 ? S< Jul21 0:00 [bioset]
root 41 0.0 0.0 0 0 ? S< Jul21 0:00 [kblockd]
root 42 0.0 0.0 0 0 ? S< Jul21 0:00 [ata_sff]
root 43 0.0 0.0 0 0 ? S Jul21 0:00 [khubd]
root 44 0.0 0.0 0 0 ? S< Jul21 0:00 [md]
root 45 0.0 0.0 0 0 ? S< Jul21 0:00 [devfreq_wq]
root 46 0.0 0.0 0 0 ? S Jul21 18:51 [kworker/0:1]
root 47 0.0 0.0 0 0 ? S Jul21 0:00 [kworker/1:1]
root 48 0.0 0.0 0 0 ? S Jul21 1:14 [kworker/2:1]
root 50 0.0 0.0 0 0 ? S Jul21 0:01 [khungtaskd]
root 51 0.4 0.0 0 0 ? S Jul21 95:51 [kswapd0]
root 52 0.0 0.0 0 0 ? SN Jul21 0:00 [ksmd]
root 53 0.0 0.0 0 0 ? SN Jul21 0:28 [khugepaged]
root 54 0.0 0.0 0 0 ? S Jul21 0:00 [fsnotify_mark]
root 55 0.0 0.0 0 0 ? S Jul21 0:00 [ecryptfs-kthrea]
root 56 0.0 0.0 0 0 ? S< Jul21 0:00 [crypto]
root 68 0.0 0.0 0 0 ? S< Jul21 0:00 [kthrotld]
root 70 0.0 0.0 0 0 ? S Jul21 0:00 [scsi_eh_0]
root 71 0.0 0.0 0 0 ? S Jul21 0:00 [scsi_eh_1]
root 92 0.0 0.0 0 0 ? S< Jul21 0:00 [deferwq]
root 93 0.0 0.0 0 0 ? S< Jul21 0:00 [charger_manager]
root 94 0.0 0.0 0 0 ? S Jul21 1:05 [kworker/1:2]
root 95 0.0 0.0 0 0 ? S Jul21 1:08 [kworker/3:2]
root 149 0.0 0.0 0 0 ? S< Jul21 0:00 [kpsmoused]
root 155 0.0 0.0 0 0 ? S Jul21 3:39 [jbd2/sda1-8]
root 156 0.0 0.0 0 0 ? S< Jul21 0:00 [ext4-rsv-conver]
root 316 0.0 0.0 0 0 ? S Jul21 1:28 [jbd2/sda3-8]
root 317 0.0 0.0 0 0 ? S< Jul21 0:00 [ext4-rsv-conver]
root 336 0.0 0.0 19476 512 ? S Jul21 0:07 upstart-udev-bridge --daemon
root 342 0.0 0.0 51228 468 ? Ss Jul21 0:00 /lib/systemd/systemd-udevd --daemon
root 565 0.0 0.0 0 0 ? S< Jul21 0:00 [kmemstick]
root 745 0.0 0.0 15364 252 ? S Jul21 0:06 upstart-socket-bridge --daemon
root 770 0.0 0.0 0 0 ? S< Jul21 0:00 [cfg80211]
root 818 0.0 0.0 0 0 ? S< Jul21 0:00 [hd-audio0]
root 850 0.0 0.0 23420 80 ? Ss Jul21 0:11 rpcbind
root 853 0.0 0.0 0 0 ? S Jul21 0:00 [kworker/2:2]
statd 872 0.0 0.0 21544 8 ? Ss Jul21 0:00 rpc.statd -L
root 953 0.0 0.0 0 0 ? S< Jul21 0:00 [rpciod]
root 1097 0.0 0.0 15276 364 ? S Jul21 0:06 upstart-file-bridge --daemon
message+ 1152 0.0 0.0 39508 1448 ? Ss Jul21 1:40 dbus-daemon --system --fork
root 1157 0.0 0.0 23480 0 ? Ss Jul21 0:00 rpc.idmapd
root 1186 0.0 0.0 43736 984 ? Ss Jul21 1:13 /lib/systemd/systemd-logind
syslog 1212 0.0 0.0 256228 688 ? Ssl Jul21 1:41 rsyslogd
root 1714 0.0 0.0 0 0 ? S Jul21 0:00 [kauditd]
root 1791 0.0 0.0 279832 1312 ? Sl Jul21 1:16 /usr/lib/policykit-1/polkitd --no-debug
root 4880 0.0 0.0 14540 4 tty4 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty4
root 4883 0.0 0.0 14540 4 tty5 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty5
root 4890 0.0 0.0 14540 4 tty2 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty2
root 4891 0.0 0.0 14540 4 tty3 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty3
root 4894 0.0 0.0 14540 4 tty6 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty6
root 4919 0.0 0.0 4368 4 ? Ss Jul21 0:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket
root 4921 0.0 0.0 61364 364 ? Ss Jul21 0:00 /usr/sbin/sshd -D
root 4947 0.0 0.0 23656 168 ? Ss Jul21 0:14 cron
root 5077 0.0 0.0 25324 648 ? Ss Jul21 0:34 /usr/sbin/hostapd -B -P /var/run/hostapd.pid /etc/hostapd/hostapd.conf
root 5192 0.0 0.0 0 0 ? S Jul21 0:00 [lockd]
root 5224 0.0 0.0 24048 4 ? Ss Jul21 0:00 /usr/sbin/rpc.mountd --manage-gids
root 6160 0.0 0.0 14540 4 tty1 Ss+ Jul21 0:00 /sbin/getty -8 38400 tty1
root 6681 0.0 0.0 10232 0 ? Ss 11:07 0:00 dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases eth0
root 9452 0.0 0.0 0 0 ? S 11:28 0:00 [kworker/u8:1]
root 9943 0.0 0.0 0 0 ? S 11:42 0:00 [kworker/u8:0]
daemon 11290 0.0 0.0 19140 164 ? Ss 11:59 0:00 atd
root 11301 0.2 0.1 26772 3436 ? Ss 12:00 0:01 /usr/sbin/openvpn --writepid /var/run/openvpn.zanview.com.pid --status /var/run/openvpn.zanview.com.status 10 --cd /etc/openvpn --config /etc/openvpn/zanvie
root 11335 0.0 0.0 0 0 ? S 12:01 0:00 [kworker/0:2]
root 11336 0.0 0.2 107692 4248 ? Ss 12:01 0:00 sshd: deployer [priv]
deployer 11384 0.0 0.0 107692 1948 ? S 12:01 0:00 sshd: deployer@pts/0
deployer 11385 0.0 0.1 19972 2392 pts/0 Ss+ 12:01 0:00 -bash
root 11502 0.0 0.2 107692 4252 ? Ss 12:01 0:00 sshd: deployer [priv]
deployer 11552 0.0 0.0 107692 1952 ? S 12:01 0:00 sshd: deployer@pts/2
deployer 11553 0.0 0.1 19972 2388 pts/2 Ss 12:01 0:00 -bash
root 11841 0.0 0.2 107692 4248 ? Ss 12:02 0:00 sshd: deployer [priv]
deployer 11889 0.0 0.1 108008 2280 ? S 12:02 0:00 sshd: deployer@pts/3
deployer 11890 0.0 0.1 19972 2388 pts/3 Ss 12:02 0:00 -bash
root 12007 0.0 0.1 67796 2136 pts/3 S 12:02 0:00 sudo su -
root 12008 0.0 0.0 67376 2016 pts/3 S 12:02 0:00 su -
root 12009 0.0 0.1 18308 2228 pts/3 S+ 12:02 0:00 -su
root 12112 0.0 0.1 67796 2136 pts/2 S 12:08 0:00 sudo su -
root 12113 0.0 0.0 67376 2012 pts/2 S 12:08 0:00 su -
root 12114 0.0 0.1 18308 2192 pts/2 S 12:08 0:00 -su
root 12180 0.0 0.0 15568 1160 pts/2 R+ 12:09 0:00 ps aux
root 25529 0.0 0.0 0 0 ? S< Jul28 0:09 [kworker/u9:1]
root 28305 0.0 0.0 0 0 ? S< Aug05 0:00 [kworker/u9:2]
```
PS Mem Output
=============
I also tried the ps\_mem.py script from <https://github.com/pixelb/ps_mem>:
```
root@XanBox:~/ps_mem# python ps_mem.py
Private + Shared = RAM used Program
144.0 KiB + 9.5 KiB = 153.5 KiB acpid
172.0 KiB + 29.5 KiB = 201.5 KiB atd
248.0 KiB + 35.0 KiB = 283.0 KiB cron
272.0 KiB + 84.0 KiB = 356.0 KiB upstart-file-bridge
276.0 KiB + 84.5 KiB = 360.5 KiB upstart-socket-bridge
280.0 KiB + 102.5 KiB = 382.5 KiB upstart-udev-bridge
332.0 KiB + 54.5 KiB = 386.5 KiB rpc.idmapd
368.0 KiB + 91.5 KiB = 459.5 KiB rpcbind
388.0 KiB + 251.5 KiB = 639.5 KiB systemd-logind
668.0 KiB + 43.5 KiB = 711.5 KiB hostapd
576.0 KiB + 157.5 KiB = 733.5 KiB systemd-udevd
676.0 KiB + 65.5 KiB = 741.5 KiB rpc.mountd
604.0 KiB + 163.0 KiB = 767.0 KiB rpc.statd
908.0 KiB + 62.5 KiB = 970.5 KiB dbus-daemon [updated]
932.0 KiB + 117.0 KiB = 1.0 MiB getty [updated] (6)
1.0 MiB + 69.5 KiB = 1.1 MiB openvpn
1.0 MiB + 137.0 KiB = 1.2 MiB polkitd
1.5 MiB + 202.0 KiB = 1.7 MiB htop
1.4 MiB + 306.5 KiB = 1.7 MiB whoopsie
1.4 MiB + 279.0 KiB = 1.7 MiB su (3)
1.5 MiB + 268.5 KiB = 1.8 MiB sudo (3)
2.2 MiB + 11.5 KiB = 2.3 MiB dhclient
3.9 MiB + 741.0 KiB = 4.6 MiB bash (6)
5.3 MiB + 254.5 KiB = 5.5 MiB init
2.7 MiB + 3.3 MiB = 6.1 MiB sshd (7)
18.1 MiB + 56.5 KiB = 18.2 MiB rsyslogd
---------------------------------
53.7 MiB
=================================
```
Slabtop Output
==============
I also tried slabtop:
```
root@XanBox:~# slabtop -sc
Active / Total Objects (% used) : 131306 / 137558 (95.5%)
Active / Total Slabs (% used) : 3888 / 3888 (100.0%)
Active / Total Caches (% used) : 63 / 105 (60.0%)
Active / Total Size (% used) : 27419.31K / 29580.53K (92.7%)
Minimum / Average / Maximum Object : 0.01K / 0.21K / 8.00K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
8288 7975 96% 0.57K 296 28 4736K inode_cache
14259 12858 90% 0.19K 679 21 2716K dentry
2384 1943 81% 0.96K 149 16 2384K ext4_inode_cache
20916 20494 97% 0.11K 581 36 2324K sysfs_dir_cache
624 554 88% 2.00K 39 16 1248K kmalloc-2048
195 176 90% 5.98K 39 5 1248K task_struct
6447 6387 99% 0.19K 307 21 1228K kmalloc-192
2128 1207 56% 0.55K 76 28 1216K radix_tree_node
768 761 99% 1.00K 48 16 768K kmalloc-1024
176 155 88% 4.00K 22 8 704K kmalloc-4096
1100 1100 100% 0.63K 44 25 704K proc_inode_cache
1008 1008 100% 0.66K 42 24 672K shmem_inode_cache
2640 2262 85% 0.25K 165 16 660K kmalloc-256
300 300 100% 2.06K 20 15 640K sighand_cache
5967 5967 100% 0.10K 153 39 612K buffer_head
1152 1053 91% 0.50K 72 16 576K kmalloc-512
3810 3810 100% 0.13K 127 30 508K ext4_allocation_context
60 60 100% 8.00K 15 4 480K kmalloc-8192
225 225 100% 2.06K 15 15 480K idr_layer_cache
7616 7324 96% 0.06K 119 64 476K kmalloc-64
700 700 100% 0.62K 28 25 448K sock_inode_cache
252 252 100% 1.75K 14 18 448K TCP
8925 8544 95% 0.05K 105 85 420K shared_policy_node
3072 2351 76% 0.12K 96 32 384K kmalloc-128
360 360 100% 1.06K 12 30 384K signal_cache
432 337 78% 0.88K 24 18 384K mm_struct
```
Other
=====
I also tried scanning for a rootkit with rkhunter - it found nothing. I also tried to sync and drop the caches with:
```
sync; sync; sync; echo 3 > /proc/sys/vm/drop_caches
```
That made no difference either.
I also tried to force swap or disable swap with:
```
sudo sysctl -w vm.swappiness=100
sudo swapoff /dev/sda2
```
I also tried using htop sorted by memory, and it does not show where the memory is going either. The kernel version is Linux 3.13.0-40-generic #69-Ubuntu SMP.
Dmesg output: <http://pastie.org/9558255>
smem output: <http://pastie.org/9558290>
Conclusion
==========
What is going on? - Where is all the memory going? - How do I find out? | 2014/08/06 | [
"https://superuser.com/questions/793192",
"https://superuser.com",
"https://superuser.com/users/50300/"
] | My conclusion is that it is a memory leak somewhere in the Linux kernel; this is why none of the userspace tools are able to show where the memory is being leaked. Maybe it is related to this question: <https://serverfault.com/questions/670423/linux-memory-usage-higher-than-sum-of-processes>
I upgraded the kernel from 3.13 to 3.19, and it seems the memory leak has stopped! I will report back if I see a leak again.
It would still be useful to have an easier way to see how much memory is used by different parts of the Linux kernel. It is still a mystery what was causing the leak in 3.13. | Story
=====
I can reproduce your issue using [ZFS on Linux](http://zfsonlinux.org/).
Here is a server called `node51` with `20GB` of RAM. I marked `16GiB` of RAM to be allocatable to the [ZFS adaptive replacement cache (ARC)](http://open-zfs.org/wiki/Performance_tuning#Adaptive_Replacement_Cache):
```
root@node51 [~]# echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max
root@node51 [~]# grep c_max /proc/spl/kstat/zfs/arcstats
c_max 4 17179869184
```
Then, I read a `45GiB` file using [Pipe Viewer](http://www.ivarch.com/programs/pv.shtml) in my ZFS pool `zeltik` to fill up the ARC:
```
root@node51 [~]# pv /zeltik/backup-backups/2014.04.11.squashfs > /dev/zero
45GB 0:01:20 [ 575MB/s] [==================================>] 100%
```
Now look at the free memory:
```
root@node51 [~]# free -m
total used free shared buffers cached
Mem: 20013 19810 203 1 51 69
-/+ buffers/cache: 19688 324
Swap: 7557 0 7556
```
Look!
* `51MiB` in buffers
* `69MiB` in cache
* `120MiB` in both
* `19688MiB` of RAM in use, including buffers and cache
* `19568MiB` of RAM in use, excluding buffers and cache
The Python script that you referenced reports that applications are only using a small amount of RAM:
```
root@node51 [~]# python ps_mem.py
Private + Shared = RAM used Program
148.0 KiB + 54.0 KiB = 202.0 KiB acpid
176.0 KiB + 47.0 KiB = 223.0 KiB swapspace
184.0 KiB + 51.0 KiB = 235.0 KiB atd
220.0 KiB + 57.0 KiB = 277.0 KiB rpc.idmapd
304.0 KiB + 62.0 KiB = 366.0 KiB irqbalance
312.0 KiB + 64.0 KiB = 376.0 KiB sftp-server
308.0 KiB + 89.0 KiB = 397.0 KiB rpcbind
300.0 KiB + 104.5 KiB = 404.5 KiB cron
368.0 KiB + 99.0 KiB = 467.0 KiB upstart-socket-bridge
560.0 KiB + 180.0 KiB = 740.0 KiB systemd-logind
724.0 KiB + 93.0 KiB = 817.0 KiB dbus-daemon
720.0 KiB + 136.0 KiB = 856.0 KiB systemd-udevd
912.0 KiB + 118.5 KiB = 1.0 MiB upstart-udev-bridge
920.0 KiB + 180.0 KiB = 1.1 MiB rpc.statd (2)
1.0 MiB + 129.5 KiB = 1.1 MiB screen
1.1 MiB + 84.5 KiB = 1.2 MiB upstart-file-bridge
960.0 KiB + 452.0 KiB = 1.4 MiB getty (6)
1.6 MiB + 143.0 KiB = 1.7 MiB init
5.1 MiB + 1.5 MiB = 6.5 MiB bash (3)
5.7 MiB + 5.2 MiB = 10.9 MiB sshd (8)
11.7 MiB + 322.0 KiB = 12.0 MiB glusterd
27.3 MiB + 99.0 KiB = 27.4 MiB rsyslogd
67.4 MiB + 453.0 KiB = 67.8 MiB glusterfsd (2)
---------------------------------
137.4 MiB
=================================
```
**`19568MiB - 137.4MiB ≈ 19431MiB` of unaccounted RAM**
Explanation
===========
The `120MiB` of buffers and cache used that you saw in the story above account for the kernel's efficient behavior of caching data sent to or received from an external device.
>
> The first row, labeled *Mem*, displays physical memory utilization,
> including the amount of memory allocated to buffers and caches. A
> buffer, also called *buffer memory*, is usually defined as a portion of
> memory that is set aside as a temporary holding place for data that is
> being sent to or received from an external device, such as a HDD,
> keyboard, printer or network.
>
>
> The second line of data, which begins with *-/+ buffers/cache*, shows
> the amount of physical memory currently devoted to system *buffer
> cache*. This is particularly meaningful with regard to application
> programs, as all data accessed from files on the system that are
> performed through the use of *read()* and *write()* *system calls* pass
> through this cache. This cache can greatly speed up access to data by
> reducing or eliminating the need to read from or write to the HDD or
> other disk.
>
>
>
Source: <http://www.linfo.org/free.html>
Now how do we account for the missing `19431MiB`?
In the `free -m` output above, the `19688MiB` "*used*" in "*-/+ buffers/cache*" comes from this formula:
```
(kb_main_used) - (buffers_plus_cached) =
(kb_main_total - kb_main_free) - (kb_main_buffers + kb_main_cached)
kb_main_total: MemTotal from /proc/meminfo
kb_main_free: MemFree from /proc/meminfo
kb_main_buffers: Buffers from /proc/meminfo
kb_main_cached: Cached from /proc/meminfo
```
Source: [procps/free.c](http://procps.cvs.sourceforge.net/viewvc/procps/procps/free.c?revision=1.2&view=markup) and [procps/proc/sysinfo.c](http://procps.cvs.sourceforge.net/viewvc/procps/procps/proc/sysinfo.c?revision=1.41&view=markup)
(If you do the numbers based on my `free -m` output, you'll notice that `2MiB` aren't accounted for, but that's because of rounding errors introduced by this code: `#define S(X) ( ((unsigned long long)(X) << 10) >> shift)`)
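The formula above can be sketched in Python (a paraphrase of the procps logic, not the actual C code; the sample numbers are the MiB values from the `free -m` output above, so the result differs from free's own printed line by the per-field rounding just noted):

```python
def buffers_cache_adjusted(total, free, buffers, cached):
    """Compute free's '-/+ buffers/cache' line from the four meminfo fields.

    All arguments must share one unit (KiB or MiB); the formula is unit-agnostic.
    """
    used = total - free                        # kb_main_used
    really_used = used - (buffers + cached)    # used, excluding buffers/cache
    really_free = free + buffers + cached      # free, including buffers/cache
    return really_used, really_free

# MiB values from the `free -m` output above (each field already rounded,
# hence the small discrepancy from free's printed 19688/324 line).
print(buffers_cache_adjusted(20013, 203, 51, 69))  # (19690, 323)
```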
The numbers don't add up in `/proc/meminfo`, either (I didn't record `/proc/meminfo` when I ran `free -m`, but we can see from your question that `/proc/meminfo` doesn't show where the missing RAM is), so we can conclude from the above that `/proc/meminfo` doesn't tell the whole story.
In my testing conditions, I know as a control that ZFS on Linux is responsible for the high RAM usage. I told its ARC that it could use up to `16GiB` of the server's RAM.
ZFS on Linux isn't a process. It's a kernel module.
From what I've found so far, the RAM usage of a kernel module wouldn't show up using process information tools because the module isn't a process.
Troubleshooting
===============
Unfortunately, I don't know enough about Linux to offer you a way to build a list of how much RAM non-process components (like the kernel and its modules) are using.
At this point, we can speculate, guess, and check.
You provided a `dmesg` output. Well-designed kernel modules would log some of their details to `dmesg`.
After looking through `dmesg`, one item stood out to me: `FS-Cache`
`FS-Cache` is part of the `cachefiles` kernel module and relates to the package `cachefilesd` on Debian and Red Hat Enterprise Linux.
Perhaps some time ago, you configured `FS-Cache` on a RAM disk to reduce the impact of network I/O as your server analyzes the video data.
Try disabling any suspicious kernel modules that could be eating up RAM. They can probably be disabled with [`blacklist`](https://wiki.debian.org/KernelModuleBlacklisting) in `/etc/modprobe.d/`, followed by a `sudo update-initramfs -u` (commands and locations may vary by Linux distribution).
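For example, a minimal blacklist file might look like this (the filename and the `cachefiles` module name are assumptions; check `lsmod` to see what is actually loaded on your system):

```
# /etc/modprobe.d/blacklist-cachefiles.conf  (hypothetical filename)
blacklist cachefiles
```

After writing the file, regenerate the initramfs with `sudo update-initramfs -u` and reboot.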
Conclusion
==========
A memory leak is eating up `8MB/hr` of your RAM and won't release the RAM, seemingly no matter what you do. I was not able to determine the source of your memory leak based on the information that you provided, nor was I able to offer a way to find that memory leak.
Someone who is more experienced with Linux than I will need to provide input on how we can determine where the "other" RAM usage is going.
I have started a bounty on this question to see if we can get a better answer than "speculate, guess, and check". |
2,148,783 | I want to know how to solve the following: given the set of polynomials
$$
p\_1 = 1 + x,
$$
$$
p\_2 = 1 + 2x + x^2,
$$
$$
p\_3 = 1 + 3x + 3x^2 + x^3,
$$
determine whether the polynomial $p = 2017 + x^2 + 26x^3$ lies in the span of the set
$S = \{p\_1,p\_2,p\_3\}$
Thank you | 2017/02/17 | [
"https://math.stackexchange.com/questions/2148783",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/417321/"
] | **Proof I**
>
> $$3^n + 1 \implies (2x + 1)+1 \implies 2x+2$$
>
>
>
Case: $n = 0$
$$ (2\cdot0 + 2) = 2 $$
Case: $n = n$
$$(2n + 2) = 2(n+1) $$
Case: $n = n+1$
$$(2(n+1) + 2) = 2n+4 = 2(n+2)$$
$$\Box$$
---
**Proof II**
>
> $$(3^n + 1)\text{ mod }2 \implies (3^n + 2 -1) \text{ mod }2 \implies (3^n -1)\text{ mod }2 $$
>
>
>
Case: $n = 0$
$$3^0 \equiv 1 (\text{ mod }2)\implies 3^0 -1 \equiv 0 (\text{ mod }2)$$
Case: $n = n$
$$3^n \equiv 1 (\text{ mod }2)\implies 3^n -1 \equiv 0 (\text{ mod }2)$$
Case: $n = n+1$
$$3^{n+1} \equiv 1 (\text{ mod }2)\implies 3^{n+1} -1 \equiv 0 (\text{ mod }2)$$
$$\Box$$
Case of the [Congruence Power Rule](https://math.stackexchange.com/a/879262/242) | An easier answer is to note that:
$3^n - 1 \equiv 1^n - 1 \equiv 0 \pmod{2}$. Thus, we conclude that $2 \mid (3^n - 1)$. Hence, $3^n - 1$ is always even. |
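As a quick numeric sanity check of the parity claims above (an illustration, not part of the original answers):

```python
# Verify that 3**n + 1 and 3**n - 1 are both even for many values of n.
for n in range(50):
    assert (3**n + 1) % 2 == 0, f"3^{n} + 1 is not even"
    assert (3**n - 1) % 2 == 0, f"3^{n} - 1 is not even"
print("3^n + 1 and 3^n - 1 are even for all n checked")
```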
2,148,783 | I want to know how to solve the following: given the set of polynomials
$$
p\_1 = 1 + x,
$$
$$
p\_2 = 1 + 2x + x^2,
$$
$$
p\_3 = 1 + 3x + 3x^2 + x^3,
$$
determine whether the polynomial $p = 2017 + x^2 + 26x^3$ lies in the span of the set
$S = \{p\_1,p\_2,p\_3\}$
Thank you | 2017/02/17 | [
"https://math.stackexchange.com/questions/2148783",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/417321/"
] | **Direct Proof:**
$3^n$ has no factor of $2$, so it is odd. $3^n+1$ is one greater than an odd number.
---
**Inductive Proof:**
$3^0+1=2$ is even.
Suppose $3^n+1$ is even, then
$$
3^{n+1}+1=3\left(3^n+1\right)-2
$$
is an even number minus an even number, hence even. | **Proof I**
>
> $$3^n + 1 \implies (2x + 1)+1 \implies 2x+2$$
>
>
>
Case: $n = 0$
$$ (2\cdot0 + 2) = 2 $$
Case: $n = n$
$$(2n + 2) = 2(n+1) $$
Case: $n = n+1$
$$(2(n+1) + 2) = 2n+4 = 2(n+2)$$
$$\Box$$
---
**Proof II**
>
> $$(3^n + 1)\text{ mod }2 \implies (3^n + 2 -1) \text{ mod }2 \implies (3^n -1)\text{ mod }2 $$
>
>
>
Case: $n = 0$
$$3^0 \equiv 1 (\text{ mod }2)\implies 3^0 -1 \equiv 0 (\text{ mod }2)$$
Case: $n = n$
$$3^n \equiv 1 (\text{ mod }2)\implies 3^n -1 \equiv 0 (\text{ mod }2)$$
Case: $n = n+1$
$$3^{n+1} \equiv 1 (\text{ mod }2)\implies 3^{n+1} -1 \equiv 0 (\text{ mod }2)$$
$$\Box$$
Case of the [Congruence Power Rule](https://math.stackexchange.com/a/879262/242) |
2,148,783 | I want to know how to solve the following: given the set of polynomials
$$
p\_1 = 1 + x,
$$
$$
p\_2 = 1 + 2x + x^2,
$$
$$
p\_3 = 1 + 3x + 3x^2 + x^3,
$$
determine whether the polynomial $p = 2017 + x^2 + 26x^3$ lies in the span of the set
$S = \{p\_1,p\_2,p\_3\}$
Thank you | 2017/02/17 | [
"https://math.stackexchange.com/questions/2148783",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/417321/"
] | **Direct Proof:**
$3^n$ has no factor of $2$, so it is odd. $3^n+1$ is one greater than an odd number.
---
**Inductive Proof:**
$3^0+1=2$ is even.
Suppose $3^n+1$ is even, then
$$
3^{n+1}+1=3\left(3^n+1\right)-2
$$
is an even number minus an even number, hence even. | An easier answer is to note that:
$3^n - 1 \equiv 1^n - 1 \equiv 0 \pmod{2}$. Thus, we conclude that $2 \mid (3^n - 1)$. Hence, $3^n - 1$ is always even. |
5,073,402 | I have a script that works in several steps to e-mail students at a school who are tardy. The school essentially penalizes students who have 3 and 5 tardies. However, alongside that, there's a total tardy count. For example, a student can have 16 tardies, but their "penalizing" count will be `16 % 5 === 1`.
This is how it works:
**A cron job runs at 3:00 each day checking the following:**
If the amount of times the student (`tardy % 5 == 3`), that means they have 3 tardies. The script then updates a column called `tardyemail` for that particular student to equal 1.
If the student's tardy count satisfies `tardy % 5 == 3`, that means they have 3 tardies. The script then updates a column called `tardyemail` for that particular student to equal 1.
If the tardy count satisfies `tardy % 5 == 0`, that means they have 5 tardies. The script then updates the `tardyemail` column to equal 2.
```
if($row['times_tardy'] % 5 == 0) {
echo $row['fname']." - 5<br />";
$sql = "UPDATE student SET tardyemail = '2' WHERE rfid = '" . $row['StudentID'] . "'";
mysql_query($sql) or die (mysql_error());
}
if($row['times_tardy'] % 5 == 3) {
echo $row['fname']." - 3<br />";
$sql = "UPDATE student SET tardyemail = '1' WHERE rfid = '" . $row['StudentID'] . "'";
mysql_query($sql) or die (mysql_error());
}
```
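The classification logic above can be sketched as a small helper (a hypothetical Python paraphrase of the PHP cron job, not part of the original script):

```python
def tardy_flag(times_tardy):
    """Return the tardyemail flag the cron job would set: 2 at a multiple of
    5 tardies, 1 at a remainder of 3, otherwise 0 (no email)."""
    if times_tardy <= 0:
        return 0             # no tardies yet: nothing to flag
    if times_tardy % 5 == 0:
        return 2             # 5, 10, 15, ... -> the "5 tardies" template
    if times_tardy % 5 == 3:
        return 1             # 3, 8, 13, ...  -> the "3 tardies" template
    return 0

print(tardy_flag(3), tardy_flag(5), tardy_flag(16))  # 1 2 0
```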
**A separate cron script runs at 3:30 performing the following:**
Select students whose tardyemail column is equal to 1, and if it's equal to 1, send the template e-mail for 3 tardies out to those students. Then, update the tardyemail column to 0.
Select students whose tardyemail column is equal to 2, and if it's equal to 2, send the template e-mail for 5 tardies out to those students. Then, update the tardyemail column to 0.
Relevant code:
```
$sql = "SELECT * FROM student WHERE tardyemail = '1' AND grade_level > 10";
$result = mysql_query($sql) or die (mysql_error());
if(mysql_num_rows($result) > 0) {
while($row = mysql_fetch_array($result)) {
// send the email
}
}
// update tardyemail to equal 0
$sql = "SELECT * FROM student WHERE tardyemail = '2' AND grade_level > 10";
$result = mysql_query($sql) or die (mysql_error());
if(mysql_num_rows($result) > 0) {
while($row=mysql_fetch_array($result)) {
// send the mail
}
}
// update tardyemail to equal 0
```
The problem with this is the fact that, each day, students at 3 or 5 tardies are continuously e-mailed, because if they don't accrue any more tardies, they remain at 3 and 5. I need some help figuring out a way to avoid e-mailing them multiple times if they haven't accrued any more tardies. | 2011/02/22 | [
"https://Stackoverflow.com/questions/5073402",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/588055/"
] | Thank you, I tested it and it's true. Here is an overview of my problem: I have 132 images on the device (~300 kB per image), and my goal is to merge every 2 images into 1 large image (side by side, horizontally). This is what I do:
```
int index = 1;
for(int i = 1;i <= 132;i++)
{
if(i % 2 == 0 && i > 1)
{
NSString *file = [NSString stringWithFormat:@"%@img_%d.jpg",path2,index];
NSLog(@"index %d",index);
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
NSData *data;
NSString *filename1 = [NSString stringWithFormat:@"originimg_%d.jpg",i];
NSString *filename2 = [NSString stringWithFormat:@"originimg_%d.jpg",i + 1];
NSString *file1 = [[NSBundle mainBundle] pathForResource:filename1 ofType:nil];
NSString *file2 = [[NSBundle mainBundle] pathForResource:filename2 ofType:nil];
UIImage *image1 = [[UIImage alloc]initWithContentsOfFile:file1];
UIImage *image2 = [[UIImage alloc]initWithContentsOfFile:file2];
UIImage *image = [self combineImages:image1 toImage:image2];
data = UIImageJPEGRepresentation(image, 0.7);
[data writeToFile:file atomically:NO];
[image1 release];
image1 = nil;
[image2 release];
image2 = nil;
[pool drain];
pool = nil;
[file release];
file = nil;
index++;
}
}
```
and function to combine 2 images
```
-(UIImage *)combineImages:(UIImage *)image1 toImage:(UIImage *)image2
{
CGSize size;
size= CGSizeMake(768 * 2, 1024);
UIGraphicsBeginImageContext(size);
// Draw image1
[image1 drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
// Draw image2
[image2 drawInRect:CGRectMake(image1.size.width, 0, image2.size.width, image2.size.height)];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return resultingImage ;
}
```
* That is my approach, but when I run it in Instruments (Allocations) it takes 303.4 MB :(. Can you suggest a better way? |
You should release `image1` before sending `[pool drain]` because you allocated it. The `data` object is autoreleased, which means it gets released in `[pool drain]`. However, releasing the object does not magically set all the pointers to the object to nil, so `data` points to a deallocated object. Just for kicks, try the following instead of the last line:
```
NSLog(@"%@", data);
```
Your app should crash at this line because you can't send messages to deallocated objects. |
222,828 | Is the following sentence incorrect?
>
> Churchil was a great orator and a great politician of his time.
>
>
>
Some say that when the article refers to a single person, it must be used just once, as in:
>
> Churchil was a great orator and politician of his time.
>
>
>
But to me both the sentences sound correct.
The example is from [a study guide written in an Indian language for English language learners](http://successkhan.com/subject-verb-agreement/). The study guide misspells "Churchill" as "Churchil". | 2019/09/01 | [
"https://ell.stackexchange.com/questions/222828",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/-1/"
] | >
> 1. Churchill was a great orator and a great politician of his times.
> 2. Churchill was a great orator and politician of his times.
>
>
>
Both of the above sentences are not only correct but also mean exactly the same thing.
But the following sentences have different shades of meaning:
>
> 1. Tagore is a great poet, painter, singer, dramatist, novelist and patriot.
> 2. Tagore is a great poet, a painter, a singer, a dramatist, a novelist and a patriot.
>
>
>
Both the sentences are grammatically correct.
The first sentence emphatically says that Tagore is great in all those aspects.
The second sentence may mean that Tagore is great as a poet but merely a painter, a singer, a dramatist, a novelist and a patriot. The greatness may not extend to his other qualities.
So if we want to say that Tagore is great in all aspects, sentence 1 is preferable.
I would like to give another example which shows how the omission of the article brings a change in meaning:
>
> 1. A black and a white cow are grazing (two cows having different colours).
> 2. A black and white cow is grazing (a single cow having both colours).
>
>
>
I will provide the link which explains the topic
<https://www.englishforums.com/English/AdjectivesByThemselves/gqckb/post.htm>
My answer is based on the books I have read, comments on this site, and my research on the internet. | A sentence can have multiple phrases, each of which refers to a different aspect of a single entity. Each phrase can have an appropriate article (or lack thereof). For example:
>
> A man, a plan, a canal -- Panama! -- A famous palindrome about U.S. President Teddy Roosevelt and the construction of the Panama Canal.
>
>
> [Any commissioned officer, cadet, or midshipman who is convicted of conduct unbecoming an officer and a gentleman shall be punished as a court-martial may direct.](https://www.law.cornell.edu/uscode/text/10/933) -- From the United States' version of the Uniform Code of Military Justice.
>
>
> [A great man, a humble servant, and a shepherd to millions has passed on. Billy Graham was a consequential leader.](https://billygraham.org/story/us-presidents-honor-their-pastor-and-friend-mr-graham/) -- From former U.S. President George W. Bush's formal statement after the Reverend Billy Graham died.
>
>
> |
222,828 | Is the following sentence incorrect?
>
> Churchil was a great orator and a great politician of his time.
>
>
>
Some say that when the article refers to a single person, it must be used just once, as in:
>
> Churchil was a great orator and politician of his time.
>
>
>
But to me both the sentences sound correct.
The example is from [a study guide written in an Indian language for English language learners](http://successkhan.com/subject-verb-agreement/). The study guide misspells "Churchill" as "Churchil". | 2019/09/01 | [
"https://ell.stackexchange.com/questions/222828",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/-1/"
] | >
> 1. Churchill was a great orator and a great politician of his times.
> 2. Churchill was a great orator and politician of his times.
>
>
>
Both of the above sentences are not only correct but also mean exactly the same thing.
But the following sentences have different shades of meaning:
>
> 1. Tagore is a great poet, painter, singer, dramatist, novelist and patriot.
> 2. Tagore is a great poet, a painter, a singer, a dramatist, a novelist and a patriot.
>
>
>
Both the sentences are grammatically correct.
The first sentence emphatically says that Tagore is great in all those aspects.
The second sentence may mean that Tagore is great as a poet but merely a painter, a singer, a dramatist, a novelist and a patriot. The greatness may not extend to his other qualities.
So if we want to say that Tagore is great in all aspects, sentence 1 is preferable.
I would like to give another example which shows how the omission of the article brings a change in meaning:
>
> 1. A black and a white cow are grazing (two cows having different colours).
> 2. A black and white cow is grazing (a single cow having both colours).
>
>
>
I will provide the link which explains the topic
<https://www.englishforums.com/English/AdjectivesByThemselves/gqckb/post.htm>
My answer is based on the books I have read, comments on this site, and my research on the internet. | >
> Churchill was a great orator and a great politician of his time.
>
>
>
It *is* correct. As has already been mentioned, it is just a style choice whether you want to make a sentence shorter or not. However, I would argue that *most* people prefer more concise, succinct language.
The sentence could be shortened, but only because it is using the same adjective "great" to describe his achievements:
>
> Churchill was a great orator and politician.
>
>
>
This would be understood that he was both "great" at being an orator and at being a politician.
Obviously, if you wanted to ascribe *different* adjectives then there is no way to shorten it. You would have to write:
>
> Churchill was a great orator and an average politician.
>
>
> |
222,828 | Is the following sentence incorrect?
>
> Churchil was a great orator and a great politician of his time.
>
>
>
Some say that when the article refers to a single person it must be used just once,
as in:
>
> Churchil was a great orator and politician of his time.
>
>
>
But to me both the sentences sound correct.
The example is from [a study guide written in an Indian language for English language learners](http://successkhan.com/subject-verb-agreement/). The study guide misspells "Churchill" as "Churchil". | 2019/09/01 | [
"https://ell.stackexchange.com/questions/222828",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/-1/"
] | >
> 1. Churchill was a great orator and a great politician of his times.
> 2. Churchill was a great orator and politician of his times.
>
>
>
Both of the above sentences are not only correct but also mean exactly the same thing.
But the following sentences have different shades of meaning:
>
> 1. Tagore is a great poet, painter, singer, dramatist, novelist and patriot.
> 2. Tagore is a great poet, a painter, a singer, a dramatist, a novelist and a patriot.
>
>
>
Both the sentences are grammatically correct.
The first sentence emphatically says that Tagore is great in all those aspects.
The second sentence may mean that Tagore is great as a poet but just a painter, singer, dramatist and novelist. The greatness of Tagore may not extend to his other qualities.
So if we want to say that Tagore is great in all aspects, sentence 1 is preferable.
I would like to give another example which shows how the omission of the article brings a change in meaning:
>
> 1. A black and a white cow are grazing (two cows having different colours).
> 2. A black and white cow is grazing (a single cow having both colours).
>
>
>
I will provide the link which explains the topic
<https://www.englishforums.com/English/AdjectivesByThemselves/gqckb/post.htm>
My answer is based on the books I have read and comments on the site and my research on the internet. | Style difference is a matter of context:
In a formal speech:
>
> Churchill was a great orator and a great politician.
>
>
>
In other contexts, it depends on how formal or emphatic you want to be. Separating out the terms using two articles emphasizes each individually.
I just don't think it is more complicated than that.
However, in this sentence:
>
> Tagore is a great poet, painter, singer, dramatist, novelist and patriot.
>
>
>
I would not repeat the article, ever, because the list of accomplishments speaks for itself. |
18,168,730 | Hello everyone I have a problem with my custom navigation bar.
I needed to create a custom navigation bar to be used in several view controllers, so I created it as a category on UIViewController and used the following code to create the customisation I needed.
```
- (void)setCustomLabel:(NSString *)labelText
{
UILabel *navigationLabel = [[UILabel alloc]initWithFrame:CGRectMake(60,10,40,40)];
[navigationLabel setBackgroundColor:[UIColor clearColor]];
navigationLabel.font = [UIFont fontWithName:@"Humanist 521 BT-Bold" size:15.0];
navigationLabel.font = [UIFont boldSystemFontOfSize:18.0];
navigationLabel.textColor = [UIColor whiteColor];
navigationLabel.text = labelText;
navigationLabel.shadowColor = [UIColor colorWithRed:241.0/255.0 green:241.0/255.0 blue:241.0/255.0 alpha:1.0];
navigationLabel.shadowOffset = CGSizeMake(0.0, -1.0);
[navigationLabel sizeToFit];
[self.navigationController.navigationBar addSubview:navigationLabel];
[navigationLabel release];
}
```
On the first view there are 2 buttons, Sign In and Register. When I click on the Sign In button it takes me to the Sign In view, and when I click on the Register button it takes me to the Register view.
I created 2 ViewControllers and set the navigation bar label in both views as Register and Sign In using the code:
```
[self setCustomLabel:@"REGISTER"];
```
and
```
[self setCustomLabel:@"SIGN IN"];
```
The views would have a title displayed as 
and

and it does look this way when I first run the application and click on either the Register or Sign In button. But if I click on one of the 2 buttons, navigate to the Register or Sign In view, then go back and click on the second button, the navigation bar changes to

Please help me out; I have been at this for a very long time. I set the navigation bar in viewDidAppear and I have also tried setting it to nil
```
[self setCustomLabel:nil];
```
in viewWillDisappear and in viewDidDisappear. I am new to iPhone development, please help me out. | 2013/08/11 | [
"https://Stackoverflow.com/questions/18168730",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2354428/"
] | This is because you are **adding** the label to the navigation bar. Since it is the same navigation bar no matter how many views you go to, it just keeps adding new labels to the bar and leaves them there.
The way that I see it you have two options to fix this:
1. You can create a singular label one time and have it always on the navigation bar and just set its text in the method so that it always has the correct text and set it to hidden when you do not want it to be visible.
2. You can do the same thing you are doing, but add a `tag` to the label and before creating the new label, you can iterate through the `NavigationBar`'s subviews and remove the old label by checking the tags. | You're not removing the label from the navigationbar when you're addign a new one.
Perhaps you should try calling `[navigationLabel removeFromSuperview]` on it when you wish to set a new one. (This means you probably have to store it in an @property)
example:
header file:
```
@property (assign, nonatomic) UILabel *navigationLabel;
```
Implementation file:
```
@synthesize navigationLabel
- (void)setCustomLabel:(NSString *)labelText {
if (self.navigationLabel) [self.navigationLabel removeFromSuperview];
self.navigationLabel = [[UILabel alloc]initWithFrame:CGRectMake(60,10,40,40)];
[self.navigationLabel setBackgroundColor:[UIColor clearColor]];
self.navigationLabel.font = [UIFont fontWithName:@"Humanist 521 BT-Bold" size:15.0];
self.navigationLabel.font = [UIFont boldSystemFontOfSize:18.0];
self.navigationLabel.textColor = [UIColor whiteColor];
self.navigationLabel.text = labelText;
self.navigationLabel.shadowColor = [UIColor colorWithRed:241.0/255.0 green:241.0/255.0 blue:241.0/255.0 alpha:1.0];
self.navigationLabel.shadowOffset = CGSizeMake(0.0, -1.0);
[self.navigationLabel sizeToFit];
[self.navigationController.navigationBar addSubview:self.navigationLabel];
[self.navigationLabel release];
}
``` |
86,172 | Context: 95 Acura Glove compartment handle, latch and key cylinder.
The dark ring pins the arm to the keyed-cylinder. The goal is to remove the cylinder to reveal the 4 digit "key cut code".
1. What is the name of the beveled dark annular retaining ring?
2. What is the method to remove the ring without damaging it so it may be reused?
3. Bonus round: What is a different name for the open retaining clip photographed at 2 O'clock
It would be useful to have two different names to differentiate the two retaining rings.
The hope is that experience can help with the second question.
Click on the photo see it full high resolution:
[](https://i.stack.imgur.com/tYdC5.jpg) | 2021/11/28 | [
"https://mechanics.stackexchange.com/questions/86172",
"https://mechanics.stackexchange.com",
"https://mechanics.stackexchange.com/users/18905/"
] | Chuck a cat in and leave overnight.
There is the old joke about the Land Rover engineers sent to BMW to see how to improve quality. They inspected the production line, looking at all the dimensional checks and tests carried out. Once they got to the end of the line they saw an operative open the door of the BMW coming off the line and throw a cat in.
They asked what that was about and were told that the car is checked the next day, and if the cat is unconscious then the seals are satisfactory. They then headed back to the UK and rushed to the production line in their factory to start putting cats in the Land Rovers coming off the line.
Next day they went to check to see how the seals were performing and the cats had escaped :) :) | Not sure if this applies to your make / model.
<https://topclassactions.com/lawsuit-settlements/consumer-products/auto-news/1019074-class-action-toyota-soy-coated-wiring-attracts-vehicle-damaging-rats-can-proceed/> |
86,172 | Context: 95 Acura Glove compartment handle, latch and key cylinder.
The dark ring pins the arm to the keyed-cylinder. The goal is to remove the cylinder to reveal the 4 digit "key cut code".
1. What is the name of the beveled dark annular retaining ring?
2. What is the method to remove the ring without damaging it so it may be reused?
3. Bonus round: What is a different name for the open retaining clip photographed at 2 O'clock
It would be useful to have two different names to differentiate the two retaining rings.
The hope is that experience can help with the second question.
Click on the photo see it full high resolution:
[](https://i.stack.imgur.com/tYdC5.jpg) | 2021/11/28 | [
"https://mechanics.stackexchange.com/questions/86172",
"https://mechanics.stackexchange.com",
"https://mechanics.stackexchange.com/users/18905/"
] | I have the same problem - but only under the bonnet - of a Pug 307. Set a trap, secured with a tiewrap, and to date am catching a mouse a night (for around a fortnight). No, not the same one... Hope to run out of mice by next week.
So, traps anyway, or use a device which emits a high pitched noise to deter them. Search in all nooks and crannies for any nests, and for now, remove any nesting materials. Only last week, I looked in the small compartment in another car for the jack, and another mouse had used a toilet roll in there for a nest. Very cosy. | Not sure if this applies to your make / model.
<https://topclassactions.com/lawsuit-settlements/consumer-products/auto-news/1019074-class-action-toyota-soy-coated-wiring-attracts-vehicle-damaging-rats-can-proceed/> |
86,172 | Context: 95 Acura Glove compartment handle, latch and key cylinder.
The dark ring pins the arm to the keyed-cylinder. The goal is to remove the cylinder to reveal the 4 digit "key cut code".
1. What is the name of the beveled dark annular retaining ring?
2. What is the method to remove the ring without damaging it so it may be reused?
3. Bonus round: What is a different name for the open retaining clip photographed at 2 O'clock
It would be useful to have two different names to differentiate the two retaining rings.
The hope is that experience can help with the second question.
Click on the photo see it full high resolution:
[](https://i.stack.imgur.com/tYdC5.jpg) | 2021/11/28 | [
"https://mechanics.stackexchange.com/questions/86172",
"https://mechanics.stackexchange.com",
"https://mechanics.stackexchange.com/users/18905/"
] | Use "sticky" traps baited with peanut butter. If you don't want to kill the mouse, vegetable oil will free it from the sticky. I would do it sooner rather than later because he will chew electric wire insulation disabling something. | Not sure if this applies to your make / model.
<https://topclassactions.com/lawsuit-settlements/consumer-products/auto-news/1019074-class-action-toyota-soy-coated-wiring-attracts-vehicle-damaging-rats-can-proceed/> |
35,030,120 | I am missing a ZlibEx unit in my code. Can I used the ZlibEx library from [Mike Lischke's GraphicEx library](https://github.com/mike-lischke/GraphicEx/tree/master/3rd%20party/DelphiZlib)? The [Base2 Technologies site](http://www.base2ti.com/) mentions the library is only for "*Delphi 5, 6, 7, 8, 2005, 2006, 2007, 2009, 2010, xe, xe2, and xe3*".
Can I use it for Delphi 10 Seattle? | 2016/01/27 | [
"https://Stackoverflow.com/questions/35030120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4418929/"
] | I believe the ZlibEx unit was created to bring a newer version of the [ZLIB](http://zlib.net/) library into Delphi, since at the time the version that was shipping with the VCL was an older version and was missing some needed functionality.
Delphi 10 Seattle now has been updated to use the latest version of ZLIB, so you might be able to just use RTL ZLIB instead of ZLibEx. If I remember correctly, there were a few extra wrapper routines in ZlibEx that may not have a counterpart, but those should be easy to migrate over into your code. | >
> Can I use it for Delphi 10 Seattle?
>
>
>
Yes.
The code compiles in Delphi 10 Seattle and I see no reason why it will not work. The text that lists the supported compiler versions is simply out of date and has not been updated since the release of XE3. |
9,919,946 | I am new to WPF and I am working on my first application. I have some issues related to this:
1.) Control alignment: My controls seem OK when laid out on the page, but when I run my application the controls slightly change their position.
2.) Resolution issue: When I try to run the application on a machine with a different resolution, some controls become invisible.
3.) Bind combobox: When I try to bind a combobox with static or dynamic combobox items, I am not able to get a first item on page load; e.g. for a city combobox I want to show "Select City" on page load.
Thanks in advance | 2012/03/29 | [
"https://Stackoverflow.com/questions/9919946",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1230768/"
] | 1. WPF has its own alignment system that differs from those in WinForms and HTML. Be sure to study the issue before doing any markup - trust me, you will just lose time.
2. WPF is resolution independent - it's one of the most essential of its features. The problem should be related to the 1st one.
3. Could you provide additional info so that I can figure out what exactly you are trying to accomplish?
There are lots of resources on WPF. I would recommend visiting [Wpf Tutorial](http://www.wpftutorial.net/). And for more serious reading [Pro WPF in C# 2010 (by Matthew MacDonald)](http://www.apress.com/9781430272052) is great. | **1 & 2)** As EvAlex says, WPF has its own alignment system using various `Panel` types. These grow and shrink content to take advantage of the available space and resolution.
You appear to be dragging controls onto the form in DevStudio, which is adding all those `Margin="323,182,0,0"` properties to your markup. This is effectively hardcoding an absolute position for your controls, which is generally a bad idea.
**3)** You cannot set text in a WPF combo unless it is in the list of items or you have set `IsEditable="True"`.
Read some of the tutorials that EvAlex posted. You need to get a good grasp of the basic ideas before jumping in. WPF does have a fairly steep learning curve. |
65,764,908 | My title is a bit messy but hopefully the information below is specific enough.
I have a script that scrapes the name and price of items from an online store and stores them in a pandas dataframe with 2 columns, Name and Price. The script runs at regular intervals and exports the data to a CSV.
Now I want to combine the data to analyze the trends of different product prices over time. The issue I have is that the items scraped on any given day are not necessarily the same as on other days, and the order of the items also differs.
How would I be able to store the price data in a dataframe where each row represents a specific product?
EDIT:
My inputs would be a few tables like this where each table is from a specific date and items might differ and the order might differ as well
| Item | Price |
| --- | --- |
| Car | 100 |
| Bike | 200 |
| ... | ... |
Output that i desire:
| Item | yesterday | today | tomorrow | ... |
| --- | --- | --- | --- | --- |
| Car | 100 | 200 | 150 | NA |
| House | 2000 | 2000 | 2000 | ... |
| Bike | NA | 10 | 10 | ... |
| ... | ... | ... | ... | ... | | 2021/01/17 | [
"https://Stackoverflow.com/questions/65764908",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14851943/"
] | You can go the CSS variable way. This [codepen](https://codepen.io/kmurphs/pen/bGwzEXy) demonstrates.
Basically, in the `React` file:
```
<div style={{"--img": "url('https://images.unsplash.com/photo-1610907083431-d36d8947c8e2')"}}>text</div>
```
And, in CSS:
```
background-image: linear-gradient(to bottom, rgba(245, 246, 252, 0.52), rgba(117, 19, 93, 0.73)), var(--img);
```
If the gradient must also be dynamic, a similar approach should work still. | The first answer works well.
I am just adding another option you would like to use.
You can have the img url as **PROPS** just to make your code more Dynamic and robust.
Here is my use case:
```
function SndHeader({ bgImage }) {
return (
<div className="sndHeader" style={{ "--img": `url(${bgImage}),
linear-gradient(#e66465, #9198e5)` }}>
<h1>Your Title</h1>
</div>
)
}
```
My CSS looks very minimal and helps you centre everything within the div:
```
.sndHeader {
display: block;
border-radius: 0px 0px 57px 57px;
background-repeat: no-repeat;
background-attachment: fixed;
background-position: center top;
text-align: center;
background-image: linear-gradient(to bottom, rgba(245, 246, 252,
0.52), rgba(117, 19, 93, 0.73)),
var(--img);
background-size: cover;
}
``` |
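The dataframe question in this row (combining daily scrapes into one item-per-row table) can be sketched with `pandas.concat` plus `pivot`. This is only an illustration: the snapshot data and date labels below are made up, not taken from the asker's store.

```python
import pandas as pd

# Hypothetical daily snapshots, one dataframe per scrape run.
snapshots = {
    "2021-01-16": pd.DataFrame({"Item": ["Car", "House"], "Price": [100, 2000]}),
    "2021-01-17": pd.DataFrame({"Item": ["Bike", "Car", "House"], "Price": [10, 200, 2000]}),
}

# Tag each snapshot with its date, stack them into long form,
# then pivot to one row per item and one column per date.
long_form = pd.concat(df.assign(Date=date) for date, df in snapshots.items())
wide = long_form.pivot(index="Item", columns="Date", values="Price")

# Items missing on a given day automatically become NaN.
print(wide)
```

Because the pivot matches rows by item name rather than position, neither the set of items nor their order needs to agree between daily CSVs.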
74,013 | As per the GDPR, we need to pseudonymize the name and email address. How can we pseudonymize a name and email address? For example, is the below a correct way to pseudonymize?
>
> Steve Anderson
> Stxxx Axxdxrxn
>
>
> test@testdomain.com
> txxx@xxxxxxxxxx.com
>
>
>
Is the above definition of pseudonymization correct? Or can we reduce or increase the number of masked characters, like:
>
> Steve Anderson
> Sxxx xxxrxn
>
>
>
I want to implement this in my code but don't know how to implement pseudonymized names and emails properly. | 2021/10/27 | [
"https://law.stackexchange.com/questions/74013",
"https://law.stackexchange.com",
"https://law.stackexchange.com/users/41447/"
] | The GDPR does not prescribe how information should be pseudonymized and does not even define the term properly. Yet, pseudonymization is suggested as an safety measure in various places, so that pseudonymization should be implemented wherever appropriate.
The most useful guidance the GDPR gives is by contrasting pseudonymization with anonymization, where it says that pseudonymized data is still personal data because the data subjects can be identified using additional information (see [Recital 26 GDPR](https://gdpr-info.eu/recitals/no-26/)). In contrast, re-identification is not reasonably likely for truly anonymized data.
A common technique for pseudonymization is to replace identifying information with a pseudonym, for example replacing a name with a random numeric ID. However, this can only be considered pseudonymous if the mapping between the ID and the real value is not available to whoever uses this data.
Replacing individual fields in a data set might not be sufficient for pseudonymization because apparently non-sensitive fields could still enable indirect identification. This requires careful analysis taking into account the entire context of the data, so it isn't possible to say whether merely removing names + email addresses achieves pseudonymization or anonymization.
What you are doing is redacting *parts* of sensitive data. I think this is a pretty weak pseudonymization method since it still leaks partial information about the true value, and leaks information about the length of the redacted value. It is possible to argue that this is OK (e.g. if you can show that you have multiple similar redactions so that you achieve a level of k-anonymity). But by default, such partial redactions are likely to be unsafe. Completely removing the sensitive data is much safer.
The “Article 29 Working Party”, a pre-GDPR EU body, has published an [opinion on anonymization techniques in 2014 (PDF)](https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2014/wp216_en.pdf). It does not account for the GDPR's specific phrasing, but provides an overview of pseudonymization and anonymization techniques and puts them into context of European data protection law. It considers how such guarantees guard against attacks such as singling out data subjects, linking multiple records of the same individual, and making inferences about the data subject.
In this guidance, pseudonymization techniques that are suggested include encrypting or hashing the sensitive data. However, the guidance warns that such techniques do not provide strong protection against singling out or linking records of the data subject.
A bit earlier, the [UK ICO had published guidance on personal data, including on the matter of anonymization and pseudonymization (PDF)](https://ico.org.uk/media/for-organisations/documents/1061/anonymisation-code.pdf) with some good examples. For determining whether a data set has been successfully anonymized (or pseudonymized), they suggest a motivated intruder test: could someone without specialist knowledge or skills re-identify the data subject? In your example, such a person should not be able to infer that a record about `Stxxx Axxdxrxn` is about a `Steve Anderson`. In my opinion, your pseudonymization approach would fail the motivated intruder test unless you have multiple records that could all be about Steve Anderson (compare k-anonymity). | First of all, the GDPR doe not require you to pseudonymize anything. It requires that "appropriate" security measures be used. It also required that when Personal Data (which I shall call PI) is processed (which includes storing PI) that there be a lawful basis, as described in GDPR [Article 6](https://gdpr-info.eu/art-6-gdpr/). The person's consent is one of the six possible lawful bases.
Second, while using pseudonymous data may be a good security practice, and is recommended by the GDPR, such data is still PI (and often PII, Personally Identifiable Information) and still requires a lawful basis for any processing. PI also requires that the Data Subject (DS) be notified when it is collected ([Article 13](https://gdpr-info.eu/art-13-gdpr/) and [Article 14](https://gdpr-info.eu/art-14-gdpr/), and that the DS has the "right to Know" ([Article 15](https://gdpr-info.eu/art-15-gdpr/)), "right to modify" ([Article 16](https://gdpr-info.eu/art-16-gdpr/)), and "right of erasure" ([Article 17](https://gdpr-info.eu/art-17-gdpr/)), and proper security must still be used on pseudonymized PI.
To convert PI into something that is not PI, where these rights and requirements do not apply, the data must be so modified that it is not reasonably possible, given current technology, to re-associate the modified data with the person that they represent, either directly or with the assistance of other data held by others than the Data Controller (DC). It must also not be possible to "single out" the DS. That is, if you suspect that the DS is a specific person X, it must not be possible to confirm this using the modified data. A hash, for instance, is not good enough, because if you suspect a particular person, you can hash that person's info and compare, and if you are correct there will be a match. If you can eliminate many of the possible suspects, leaving a much smaller pool, that is also singling-out, and means that the data has not been successfully anonymized. Data so modified is said to be anonymized, not just pseudonymous. Data that has been successfully anonymized is not subject to the security or notification requirements of the GDPR.
The GDPR does not specify any particular methods that may be used to anonymize PI. Certainly anything that leaves a recognizable name in the modified data is not good enough. Even recognizable initials would mean that the modified data has not been anonymized. Because that would allow someone the drastically limit which of a group of suspected people the PI belongs to. That is "singling out".
It should be assumed that an attacker knows any and all algorithms used to anonymize data, and if the attacker can re-identify or single-out the DS with this knowledge, plus other potentially available information, then the data has not been successfully anonymized.
To judge if an algorithm successfully anonymizes data, one would need the details of the algorithm, and an idea of other information which might be available to use in re-identifying the data. To anonymize data is not a simple or trivial task, and any DC who depends on it must be prepared to demonstrate that there is no reasonable way to re-identify or single out the DS from the anonymized data,
Nor does the GDPR ever require anonymization. If the DC abides by the requirements in handling PI and PII, there is no need for it at all. |
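On the implementation side of this question (a sketch only, not legal advice): rather than masking characters, a common pseudonymization technique is a keyed hash (HMAC), where the secret key is stored separately from the pseudonymized data. The key value and token length below are illustrative assumptions.

```python
import hashlib
import hmac

# Illustrative secret key; in practice it must be stored separately
# from the pseudonymized data (e.g. in a key management service).
SECRET_KEY = b"example-key-kept-outside-the-dataset"

def pseudonymize(value: str) -> str:
    """Replace an identifier (name, email) with a stable keyed-hash token."""
    normalized = value.strip().lower().encode("utf-8")
    return hmac.new(SECRET_KEY, normalized, hashlib.sha256).hexdigest()[:16]

# The same input always maps to the same token, so records stay linkable
# across the dataset without exposing the underlying value.
print(pseudonymize("Steve Anderson"))
print(pseudonymize("test@testdomain.com"))
```

Without the key, a token cannot be reversed or confirmed against a guessed name (unlike a plain hash); with the key, re-identification is possible, which is exactly why pseudonymized data remains personal data under the GDPR.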
38,162,803 | I have created a Data Access Object (DAO) class, an entity class, and a table script, but I am getting an error that the result cannot be mapped to the entity.
I am using the Hibernate framework and the connection to the database is made properly, but the error still occurs. Please check the code below and help in any way you can; all the files are provided below.
Table Script
```
DROP TABLE rmc_user;
CREATE TABLE rmc_user(
user_id VARCHAR(50) NOT NULL,
user_name VARCHAR(20) NOT NULL,
user_email VARCHAR(50) NOT NULL,
user_password VARCHAR(20),
CONSTRAINT rmc_user_user_id_pk PRIMARY KEY (user_id),
CONSTRAINT rmc_user_user_email_un UNIQUE (user_email)
);
INSERT INTO rmc_user VALUES ('101','yashik','yas@gmail.com','gulati123');
SELECT * FROM rmc_user;
```
DAO Class
```
package rmc.dao;
import java.util.List;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.query.Query;
import rmc.bean.User;
import rmc.entity.UserEntity;
import rmc.resources.HibernateUtility;
public class LoginDAOImpl implements LoginDAO {
@SuppressWarnings("deprecation")
public User getUserDetails(String userName, String password) {
SessionFactory sessionFactory = HibernateUtility.createSessionFactory();
Session session = null;
User u1 = null;
session = sessionFactory.openSession();
session.beginTransaction();
System.out.println("begin trx");
Query q1 = session
.createNativeQuery("select * from rmc_user where user_name=?");
System.out.println("begin trx");
q1.setParameter(0, userName);
System.out.println("begin trx");
@SuppressWarnings("unchecked")
List<UserEntity> l1 = q1.list();
System.out.println("begin trx");
System.out.println("size is"+l1.size());
if (l1.size() == 0) {
System.out.println("no Such user Exist");
} else if (!(l1.get(0).getPassword().equals(password))) {
System.out.println("Invalid Password");
}
System.out.println("begin trx");
u1 = new User();
u1.setEmail(l1.get(0).getEmail());
u1.setPassword(l1.get(0).getPassword());
u1.setUserId(l1.get(0).getUserId());
u1.setUserName(l1.get(0).getUserName());
session.getTransaction().commit();
if (session != null) {
session.close();
}
return u1;
}
}
```
Entity Class
```
package rmc.entity;

import javax.persistence.*;

@Entity
@Table(name = "rmc_user")
public class UserEntity {
    @Id
    @Column(name="user_id")
    private String userId;
    @Column(name="user_name")
    private String userName;
    @Column(name="user_email")
    private String email;
    @Column(name="user_password")
    private String password;
    //getter and setter
}
```
Error Message
```
Exception in thread "main" java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to rmc.entity.UserEntity
at rmc.dao.LoginDAOImpl.getUserDetails(LoginDAOImpl.java:32)
at rmc.test.UserInterface.main(UserInterface.java:9)
```
UPDATED
```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration SYSTEM
"http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
<session-factory>
<property name="hibernate.dialect">
org.hibernate.dialect.MySQLDialect
</property>
<property name="hibernate.connection.driver_class">
com.mysql.jdbc.Driver
</property>
<!-- Assume test is the database name -->
<property name="hibernate.connection.url">
jdbc:mysql://localhost:3306/rmc
</property>
<property name="hibernate.connection.username">
******
</property>
<property name="hibernate.connection.password">
******
</property>
<!-- List of XML mapping files -->
<mapping class="rmc.entity.UserEntity"/>
</session-factory>
``` | 2016/07/02 | [
"https://Stackoverflow.com/questions/38162803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6542086/"
] | If your result row is compatible with `UserEntity` the following modification might solve your problem:
```
Query q1 = session
.createNativeQuery("select * from rmc_user where user_name=?", UserEntity.class);
``` | The problem probably lies here
```
Query q1 = session.createNativeQuery("select * from rmc_user where user_name=?");
```
When you execute this query, because it's an sql query, the return list of `q1.list()` will have this format : Object[] { row1col1, row1col2,row1,col3,row2col1,...} that is it spreads the columns , it doesn't map row to entity(UserEntity).
It doesn't map because that's not HQL nor JPQL, that's native SQL and SQL doesn't know about your entities.
You should instead do this :
```
Query q1=session.createQuery("select * from UserEntity where user_name=?");
```
This is HQL and with this Hibernate will map every row to entity so the return list of `q1.list()` will now have format : UserEntity[]{entity1,entity2,...}.
I hope that solves your issue. |
30,740,306 | I'm designing an Android app that locates places. These places are in a database on a server, but I only have the name and location of each place, so I need to locate it in my app and put a *marker* there. Is it possible to get coordinates with only the address, or do I need to redo my database, adding fields for latitude and longitude? | 2015/06/09 | [
"https://Stackoverflow.com/questions/30740306",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4987172/"
You can use the [Geocoder](http://developer.android.com/reference/android/location/Geocoder.html) class to do a look-up of the addresses you have, and then populate the map with Markers using the `LatLng` objects that are returned.
Note that the `Geocoder` class will not be able to geocode every address, but it will be successful for most of them if they are in the correct format.
Taking code from [this question](https://stackoverflow.com/questions/30521933/how-to-get-latitude-longitude-from-more-than-500-addresses) as a guide, I just got this simple example working.
I created a custom class that stores location name, location address, and a `LatLng` object to store the lat/lon.
For this simple example, I just used three addresses.
Here is the full class code:
```
import android.location.Address;
import android.location.Geocoder;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;
import android.widget.Toast;
import com.google.android.gms.maps.GoogleMap;
import com.google.android.gms.maps.SupportMapFragment;
import com.google.android.gms.maps.model.BitmapDescriptorFactory;
import com.google.android.gms.maps.model.LatLng;
import com.google.android.gms.maps.model.MarkerOptions;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
public class MapsActivity extends AppCompatActivity {
private GoogleMap mMap; // Might be null if Google Play services APK is not available.
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_maps);
setUpMapIfNeeded();
}
@Override
protected void onResume() {
super.onResume();
setUpMapIfNeeded();
}
private void setUpMapIfNeeded() {
// Do a null check to confirm that we have not already instantiated the map.
if (mMap == null) {
// Try to obtain the map from the SupportMapFragment.
mMap = ((SupportMapFragment) getSupportFragmentManager().findFragmentById(R.id.map))
.getMap();
// Check if we were successful in obtaining the map.
if (mMap != null) {
setUpMap();
}
}
}
private void setUpMap() {
mMap.setMyLocationEnabled(true);
List<CustomLocation> custLocs = new ArrayList<CustomLocation>();
//Testing with three addresses
custLocs.add(new CustomLocation("location 1", "100 market street san francisco ca"));
custLocs.add(new CustomLocation("location 2", "200 market street san francisco ca"));
custLocs.add(new CustomLocation("location 3", "300 market street san francisco ca"));
//set the location for each item in the list
for (CustomLocation custLoc : custLocs){
custLoc.setLocation(getSingleLocationFromAddress(custLoc.address));
}
//draw the Marker for each item in the list
for (CustomLocation custLoc : custLocs){
mMap.addMarker(new MarkerOptions().position(custLoc.latLng)
.title(custLoc.name).icon(BitmapDescriptorFactory
.defaultMarker(BitmapDescriptorFactory.HUE_MAGENTA)));
}
}
//method to do a lookup on the address
public LatLng getSingleLocationFromAddress(String strAddress)
{
Geocoder coder = new Geocoder(this, Locale.getDefault());
List<Address> address = null;
Address location = null;
LatLng temp = null;
String strAddresNew = strAddress.replace(",", " ");
try
{
address = coder.getFromLocationName(strAddresNew, 1);
if (!address.isEmpty())
{
location = address.get(0);
location.getLatitude();
location.getLongitude();
temp = new LatLng(location.getLatitude(), location.getLongitude());
Log.d("Latlng : ", temp + "");
}
} catch (IOException e)
{
Toast.makeText(this, e.toString(), Toast.LENGTH_LONG).show();
e.printStackTrace();
} catch (Exception e)
{
e.printStackTrace();
}
return temp;
}
//class to hold the name and address and location
public static class CustomLocation{
public String name;
public String address;
public LatLng latLng;
public CustomLocation(String n, String a){
name = n;
address = a;
}
public void setLocation(LatLng ll){
latLng = ll;
}
}
}
```
Result:
 | You can make a request to google maps API to get possible addresses by a address string. Check this [link](https://developers.google.com/maps/documentation/geocoding/#geocoding). |
30,740,306 | I'm designing an Android app that locates places. These places are in a database on a server, but I only have the name and location of each place, so I need to locate it in my app and put a *marker* there. Is it possible to get coordinates from just an address, or do I need to redo my database by adding latitude and longitude fields? | 2015/06/09 | [
"https://Stackoverflow.com/questions/30740306",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4987172/"
] | You can use the [Geocoder](http://developer.android.com/reference/android/location/Geocoder.html) class to do a look-up of the addresses you have, and then populate the map with Markers using the `LatLng` objects that are returned.
Note that the `Geocoder` class will not be able to geocode every address, but it will be successful for most of them if they are in the correct format.
Taking code from [this question](https://stackoverflow.com/questions/30521933/how-to-get-latitude-longitude-from-more-than-500-addresses) as a guide, I just got this simple example working.
I created a custom class that stores location name, location address, and a `LatLng` object to store the lat/lon.
For this simple example, I just used three addresses.
Here is the full class code:
```
import android.location.Address;
import android.location.Geocoder;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;
import android.widget.Toast;
import com.google.android.gms.maps.GoogleMap;
import com.google.android.gms.maps.SupportMapFragment;
import com.google.android.gms.maps.model.BitmapDescriptorFactory;
import com.google.android.gms.maps.model.LatLng;
import com.google.android.gms.maps.model.MarkerOptions;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
public class MapsActivity extends AppCompatActivity {
private GoogleMap mMap; // Might be null if Google Play services APK is not available.
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_maps);
setUpMapIfNeeded();
}
@Override
protected void onResume() {
super.onResume();
setUpMapIfNeeded();
}
private void setUpMapIfNeeded() {
// Do a null check to confirm that we have not already instantiated the map.
if (mMap == null) {
// Try to obtain the map from the SupportMapFragment.
mMap = ((SupportMapFragment) getSupportFragmentManager().findFragmentById(R.id.map))
.getMap();
// Check if we were successful in obtaining the map.
if (mMap != null) {
setUpMap();
}
}
}
private void setUpMap() {
mMap.setMyLocationEnabled(true);
List<CustomLocation> custLocs = new ArrayList<CustomLocation>();
//Testing with three addresses
custLocs.add(new CustomLocation("location 1", "100 market street san francisco ca"));
custLocs.add(new CustomLocation("location 2", "200 market street san francisco ca"));
custLocs.add(new CustomLocation("location 3", "300 market street san francisco ca"));
//set the location for each item in the list
for (CustomLocation custLoc : custLocs){
custLoc.setLocation(getSingleLocationFromAddress(custLoc.address));
}
//draw the Marker for each item in the list
for (CustomLocation custLoc : custLocs){
mMap.addMarker(new MarkerOptions().position(custLoc.latLng)
.title(custLoc.name).icon(BitmapDescriptorFactory
.defaultMarker(BitmapDescriptorFactory.HUE_MAGENTA)));
}
}
//method to do a lookup on the address
public LatLng getSingleLocationFromAddress(String strAddress)
{
Geocoder coder = new Geocoder(this, Locale.getDefault());
List<Address> address = null;
Address location = null;
LatLng temp = null;
String strAddresNew = strAddress.replace(",", " ");
try
{
address = coder.getFromLocationName(strAddresNew, 1);
if (!address.isEmpty())
{
location = address.get(0);
location.getLatitude();
location.getLongitude();
temp = new LatLng(location.getLatitude(), location.getLongitude());
Log.d("Latlng : ", temp + "");
}
} catch (IOException e)
{
Toast.makeText(this, e.toString(), Toast.LENGTH_LONG).show();
e.printStackTrace();
} catch (Exception e)
{
e.printStackTrace();
}
return temp;
}
//class to hold the name and address and location
public static class CustomLocation{
public String name;
public String address;
public LatLng latLng;
public CustomLocation(String n, String a){
name = n;
address = a;
}
public void setLocation(LatLng ll){
latLng = ll;
}
}
}
```
Result:
] | You can use the `Geocoder` class, which will return the latitude and longitude for a given address. Note, however, that the address has to follow a recognized format.
Check out the documentation below for more information:
<https://developer.android.com/reference/android/location/Geocoder> |
49,014,610 | I have this simple Rust function:
```
#[no_mangle]
pub fn compute(operator: &str, n1: i32, n2: i32) -> i32 {
match operator {
"SUM" => n1 + n2,
"DIFF" => n1 - n2,
"MULT" => n1 * n2,
"DIV" => n1 / n2,
_ => 0
}
}
```
I am compiling this to WebAssembly successfully, but I can't manage to pass the `operator` parameter from JS to Rust.
The JS line which calls the Rust function looks like this:
```
instance.exports.compute(operator, n1, n2);
```
`operator` is a JS `String` and `n1`, `n2` are JS `Number`s.
`n1` and `n2` are passed properly and can be read inside the compiled function so I guess the problem is how I pass the string around. I imagine it is passed as a pointer from JS to WebAssembly but can't find evidence or material about how this works.
I am not using Emscripten and would like to keep it standalone (compilation target `wasm32-unknown-unknown`), but I see they wrap their compiled functions in `Module.cwrap`, maybe that could help? | 2018/02/27 | [
"https://Stackoverflow.com/questions/49014610",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2432221/"
] | Easiest and most idiomatic solution
===================================
Most people should use [wasm-bindgen](https://github.com/rustwasm/wasm-bindgen), which makes this whole process *much* simpler!
Low-level manual implementation
===============================
To transfer string data between JavaScript and Rust, you need to decide
1. The encoding of the text: UTF-8 (Rust native) or UTF-16 (JS native).
2. Who will own the memory buffer: the JS (caller) or Rust (callee).
3. How to represent the string's data and length: NUL-terminated (C-style) or distinct length (Rust-style).
4. How to communicate the data and length, if they are separate.
Common setup
------------
Building the crate as a C-style dynamic library (`cdylib`) helps keep the resulting WASM binary small.
**Cargo.toml**
```
[package]
name = "quick-maths"
version = "0.1.0"
authors = ["An Devloper <an.devloper@example.com>"]
[lib]
crate-type = ["cdylib"]
```
**.cargo/config**
```
[target.wasm32-unknown-unknown]
rustflags = [
"-C", "link-args=--import-memory",
]
```
**package.json**
```
{
"name": "quick-maths",
"version": "0.1.0",
"main": "index.js",
"author": "An Devloper <an.devloper@example.com>",
"license": "MIT",
"scripts": {
"example": "node ./index.js"
},
"dependencies": {
"fs-extra": "^8.0.1",
"text-encoding": "^0.7.0"
}
}
```
I'm using NodeJS 12.1.0.
**Execution**
```none
$ rustup component add rust-std --target wasm32-unknown-unknown
$ cargo build --release --target wasm32-unknown-unknown
```
Solution 1
----------
I decided:
1. To convert JS strings to UTF-8, which means that the [`TextEncoder`](https://developer.mozilla.org/en-US/docs/Web/API/TextEncoder) JS API is the best fit.
2. The caller should own the memory buffer.
3. To have the length be a separate value.
4. Another struct and allocation should be made to hold the pointer and length.
**lib/src.rs**
```rust
// A struct with a known memory layout that we can pass string information in
#[repr(C)]
pub struct JsInteropString {
data: *const u8,
len: usize,
}
// Our FFI shim function
#[no_mangle]
pub unsafe extern "C" fn compute(s: *const JsInteropString, n1: i32, n2: i32) -> i32 {
// Check for NULL (see corresponding comment in JS)
let s = match s.as_ref() {
Some(s) => s,
None => return -1,
};
// Convert the pointer and length to a `&[u8]`.
let data = std::slice::from_raw_parts(s.data, s.len);
// Convert the `&[u8]` to a `&str`
match std::str::from_utf8(data) {
Ok(s) => real_code::compute(s, n1, n2),
Err(_) => -2,
}
}
// I advocate that you keep your interesting code in a different
// crate for easy development and testing. Have a separate crate
// with the FFI shims.
mod real_code {
pub fn compute(operator: &str, n1: i32, n2: i32) -> i32 {
match operator {
"SUM" => n1 + n2,
"DIFF" => n1 - n2,
"MULT" => n1 * n2,
"DIV" => n1 / n2,
_ => 0,
}
}
}
```
**index.js**
```js
const fs = require('fs-extra');
const { TextEncoder } = require('text-encoding');
// Allocate some memory.
const memory = new WebAssembly.Memory({ initial: 20, maximum: 100 });
// Connect these memory regions to the imported module
const importObject = {
env: { memory }
};
// Create an object that handles converting our strings for us
const memoryManager = (memory) => {
var base = 0;
// NULL is conventionally at address 0, so we "use up" the first 4
// bytes of address space to make our lives a bit simpler.
base += 4;
return {
encodeString: (jsString) => {
// Convert the JS String to UTF-8 data
const encoder = new TextEncoder();
const encodedString = encoder.encode(jsString);
// Organize memory with space for the JsInteropString at the
// beginning, followed by the UTF-8 string bytes.
const asU32 = new Uint32Array(memory.buffer, base, 2);
const asBytes = new Uint8Array(memory.buffer, asU32.byteOffset + asU32.byteLength, encodedString.length);
// Copy the UTF-8 into the WASM memory.
asBytes.set(encodedString);
// Assign the data pointer and length values.
asU32[0] = asBytes.byteOffset;
asU32[1] = asBytes.length;
// Update our memory allocator base address for the next call
const originalBase = base;
base += asBytes.byteOffset + asBytes.byteLength;
return originalBase;
}
};
};
const myMemory = memoryManager(memory);
fs.readFile('./target/wasm32-unknown-unknown/release/quick_maths.wasm')
.then(bytes => WebAssembly.instantiate(bytes, importObject))
.then(({ instance }) => {
const argString = "MULT";
const argN1 = 42;
const argN2 = 100;
const s = myMemory.encodeString(argString);
const result = instance.exports.compute(s, argN1, argN2);
console.log(result);
});
```
**Execution**
```none
$ yarn run example
4200
```
Solution 2
----------
I decided:
1. To convert JS strings to UTF-8, which means that the [`TextEncoder`](https://developer.mozilla.org/en-US/docs/Web/API/TextEncoder) JS API is the best fit.
2. The module should own the memory buffer.
3. To have the length be a separate value.
4. To use a `Box<String>` as the underlying data structure. This allows the allocation to be further used by Rust code.
**src/lib.rs**
```rust
// Very important to use `transparent` to prevent ABI issues
#[repr(transparent)]
pub struct JsInteropString(*mut String);
impl JsInteropString {
// Unsafe because we create a string and say it's full of valid
// UTF-8 data, but it isn't!
unsafe fn with_capacity(cap: usize) -> Self {
let mut d = Vec::with_capacity(cap);
d.set_len(cap);
let s = Box::new(String::from_utf8_unchecked(d));
JsInteropString(Box::into_raw(s))
}
unsafe fn as_string(&self) -> &String {
&*self.0
}
unsafe fn as_mut_string(&mut self) -> &mut String {
&mut *self.0
}
unsafe fn into_boxed_string(self) -> Box<String> {
Box::from_raw(self.0)
}
unsafe fn as_mut_ptr(&mut self) -> *mut u8 {
self.as_mut_string().as_mut_vec().as_mut_ptr()
}
}
#[no_mangle]
pub unsafe extern "C" fn stringPrepare(cap: usize) -> JsInteropString {
JsInteropString::with_capacity(cap)
}
#[no_mangle]
pub unsafe extern "C" fn stringData(mut s: JsInteropString) -> *mut u8 {
s.as_mut_ptr()
}
#[no_mangle]
pub unsafe extern "C" fn stringLen(s: JsInteropString) -> usize {
s.as_string().len()
}
#[no_mangle]
pub unsafe extern "C" fn compute(s: JsInteropString, n1: i32, n2: i32) -> i32 {
let s = s.into_boxed_string();
real_code::compute(&s, n1, n2)
}
mod real_code {
pub fn compute(operator: &str, n1: i32, n2: i32) -> i32 {
match operator {
"SUM" => n1 + n2,
"DIFF" => n1 - n2,
"MULT" => n1 * n2,
"DIV" => n1 / n2,
_ => 0,
}
}
}
```
**index.js**
```js
const fs = require('fs-extra');
const { TextEncoder } = require('text-encoding');
class QuickMaths {
constructor(instance) {
this.instance = instance;
}
difference(n1, n2) {
const { compute } = this.instance.exports;
const op = this.copyJsStringToRust("DIFF");
return compute(op, n1, n2);
}
copyJsStringToRust(jsString) {
const { memory, stringPrepare, stringData, stringLen } = this.instance.exports;
const encoder = new TextEncoder();
const encodedString = encoder.encode(jsString);
// Ask Rust code to allocate a string inside of the module's memory
const rustString = stringPrepare(encodedString.length);
// Get a JS view of the string data
const rustStringData = stringData(rustString);
const asBytes = new Uint8Array(memory.buffer, rustStringData, encodedString.length);
// Copy the UTF-8 into the WASM memory.
asBytes.set(encodedString);
return rustString;
}
}
async function main() {
const bytes = await fs.readFile('./target/wasm32-unknown-unknown/release/quick_maths.wasm');
const { instance } = await WebAssembly.instantiate(bytes);
const maffs = new QuickMaths(instance);
console.log(maffs.difference(100, 201));
}
main();
```
**Execution**
```none
$ yarn run example
-101
```
---
Note that this process can be used for other types. You "just" have to decide how to represent the data as a set of bytes that both sides agree on, then send it across.
See also:
* [Using the WebAssembly JavaScript API](https://developer.mozilla.org/en-US/docs/WebAssembly/Using_the_JavaScript_API)
* [`TextEncoder` API](https://developer.mozilla.org/en-US/docs/Web/API/TextEncoder)
* [`Uint8Array`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Uint8Array) / [`Uint32Array`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Uint32Array) / [`TypedArray`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/TypedArray)
* [`WebAssembly.Memory`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WebAssembly/Memory)
* [Hello, Rust! — Import memory buffer](https://www.hellorust.com/demos/import-memory/index.html)
* [How to return a string (or similar) from Rust in WebAssembly?](https://stackoverflow.com/q/47529643/155423) | As pointed out by Shepmaster, only numbers can be passed to WebAssembly, so we need to convert the string into a `Uint16Array`.
To do so we can use this `str2ab` function found [here](https://developers.google.com/web/updates/2012/06/How-to-convert-ArrayBuffer-to-and-from-String):
```
function str2ab(str) {
var buf = new ArrayBuffer(str.length*2); // 2 bytes for each char
var bufView = new Uint16Array(buf);
for (var i=0, strLen=str.length; i < strLen; i++) {
bufView[i] = str.charCodeAt(i);
}
return buf;
}
```
This now works:
```
instance.exports.compute(
str2ab(operator),
n1, n2
);
```
Because we're passing a reference to an array of unsigned integers. |
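As a quick sanity check, the conversion can be run on its own in plain Node.js, outside any WASM context; the byte length is twice the character count, and each array element is the character's UTF-16 code unit (the helper is reproduced here only so the snippet is self-contained):

```javascript
// The str2ab helper from above, reproduced so this snippet runs on its own.
function str2ab(str) {
  var buf = new ArrayBuffer(str.length * 2); // 2 bytes for each char
  var bufView = new Uint16Array(buf);
  for (var i = 0, strLen = str.length; i < strLen; i++) {
    bufView[i] = str.charCodeAt(i);
  }
  return buf;
}

const buf = str2ab("MULT");
console.log(buf.byteLength);                   // 8 (4 chars × 2 bytes)
console.log(Array.from(new Uint16Array(buf))); // [77, 85, 76, 84] — "M", "U", "L", "T"
```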
49,014,610 | I have this simple Rust function:
```
#[no_mangle]
pub fn compute(operator: &str, n1: i32, n2: i32) -> i32 {
match operator {
"SUM" => n1 + n2,
"DIFF" => n1 - n2,
"MULT" => n1 * n2,
"DIV" => n1 / n2,
_ => 0
}
}
```
I am compiling this to WebAssembly successfully, but I can't manage to pass the `operator` parameter from JS to Rust.
The JS line which calls the Rust function looks like this:
```
instance.exports.compute(operator, n1, n2);
```
`operator` is a JS `String` and `n1`, `n2` are JS `Number`s.
`n1` and `n2` are passed properly and can be read inside the compiled function so I guess the problem is how I pass the string around. I imagine it is passed as a pointer from JS to WebAssembly but can't find evidence or material about how this works.
I am not using Emscripten and would like to keep it standalone (compilation target `wasm32-unknown-unknown`), but I see they wrap their compiled functions in `Module.cwrap`, maybe that could help? | 2018/02/27 | [
"https://Stackoverflow.com/questions/49014610",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2432221/"
] | A WebAssembly program has its own memory space, and this space is often managed by the WebAssembly program itself, with the help of an allocator library, such as [wee\_alloc](http://fitzgeraldnick.com/2018/02/09/wee-alloc.html).
The JavaScript can see and modify that memory space, but it has no way of knowing how the allocator library structures are organized. So if we simply write to the WASM memory from the JavaScript then we'll likely overwrite something important and mess things up. Therefore the WebAssembly program itself must allocate the memory region first, pass it to JavaScript, and then the JavaScript can fill that region with the data.
In the following example we do just that: allocate a buffer in the WASM memory space, copy the UTF-8 bytes there, pass the buffer location to a Rust function, then free the buffer.
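Before the full listing, here is a plain-JS sketch of that allocate → copy → call → free sequence. The `mod` object below is a toy stand-in for the WASM instance's exports (a bump allocator and an `is_foobar` written in JS, all names illustrative), not the real module, so the snippet runs in Node.js on its own:

```javascript
// Toy stand-in for the WASM module: in the real program, `mod` would be
// `results.instance.exports` and `memory` the module's linear memory.
const memory = new Uint8Array(1024);   // pretend linear memory
let next = 4;                          // skip address 0, conventionally NULL
const mod = {
  alloc: (len) => { const ptr = next; next += len; return ptr; }, // bump allocator
  dealloc: (_ptr, _len) => {},                                    // no-op in this toy
  is_foobar: (ptr, len) => {
    const s = Buffer.from(memory.subarray(ptr, ptr + len)).toString('utf8');
    return s === 'foobar' ? 1 : 0;
  },
};

function withRustString(str, cb) {
  const utf8 = Buffer.from(str, 'utf8'); // new TextEncoder().encode(str) in a browser
  const ptr = mod.alloc(utf8.length);    // 1. the module allocates the buffer
  memory.set(utf8, ptr);                 // 2. JS copies the UTF-8 bytes into it
  const ret = cb(ptr, utf8.length);      // 3. call with pointer + length
  mod.dealloc(ptr, utf8.length);         // 4. free the buffer again
  return ret;
}

console.log(withRustString('foobar', (p, l) => mod.is_foobar(p, l))); // 1
console.log(withRustString('woot', (p, l) => mod.is_foobar(p, l)));   // 0
```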
Rust:
```rust
#![feature(allocator_api)]
use std::heap::{Alloc, Heap, Layout};
#[no_mangle]
pub fn alloc(len: i32) -> *mut u8 {
let mut heap = Heap;
let layout = Layout::from_size_align(len as usize, 1).expect("!from_size_align");
unsafe { heap.alloc(layout).expect("!alloc") }
}
#[no_mangle]
pub fn dealloc(ptr: *mut u8, len: i32) {
let mut heap = Heap;
let layout = Layout::from_size_align(len as usize, 1).expect("!from_size_align");
unsafe { heap.dealloc(ptr, layout) }
}
#[no_mangle]
pub fn is_foobar(buf: *const u8, len: i32) -> i32 {
let js = unsafe { std::slice::from_raw_parts(buf, len as usize) };
let js = unsafe { std::str::from_utf8_unchecked(js) };
if js == "foobar" {
1
} else {
0
}
}
```
TypeScript:
```js
// cf. https://github.com/Microsoft/TypeScript/issues/18099
declare class TextEncoder {constructor (label?: string); encode (input?: string): Uint8Array}
declare class TextDecoder {constructor (utfLabel?: string); decode (input?: ArrayBufferView): string}
// https://github.com/DefinitelyTyped/DefinitelyTyped/blob/master/types/webassembly-js-api/index.d.ts
declare namespace WebAssembly {
class Instance {readonly exports: any}
interface ResultObject {instance: Instance}
function instantiateStreaming (file: Promise<Response>, options?: any): Promise<ResultObject>}
var main: {
memory: {readonly buffer: ArrayBuffer}
alloc (size: number): number
dealloc (ptr: number, len: number): void
is_foobar (buf: number, len: number): number}
function withRustString (str: string, cb: (ptr: number, len: number) => any): any {
// Convert the JavaScript string to an array of UTF-8 bytes.
const utf8 = (new TextEncoder()).encode (str)
// Reserve a WASM memory buffer for the UTF-8 array.
const rsBuf = main.alloc (utf8.length)
// Copy the UTF-8 array into the WASM memory.
new Uint8Array (main.memory.buffer, rsBuf, utf8.length) .set (utf8)
// Pass the WASM memory location and size into the callback.
const ret = cb (rsBuf, utf8.length)
// Free the WASM memory buffer.
main.dealloc (rsBuf, utf8.length)
return ret}
WebAssembly.instantiateStreaming (fetch ('main.wasm')) .then (results => {
main = results.instance.exports
// Prints "foobar is_foobar? 1".
console.log ('foobar is_foobar? ' +
withRustString ("foobar", function (buf, len) {return main.is_foobar (buf, len)}))
// Prints "woot is_foobar? 0".
console.log ('woot is_foobar? ' +
withRustString ("woot", function (buf, len) {return main.is_foobar (buf, len)}))})
```
P.S. [The `Module._malloc` in Emscripten](https://kripken.github.io/emscripten-site/docs/porting/connecting_cpp_and_javascript/Interacting-with-code.html#access-memory-from-javascript) might be semantically equivalent to the `alloc` function we implemented above. Under the "wasm32-unknown-emscripten" target [you can use the `Module._malloc` with Rust](https://bojandjurdjevic.com/2018/WASM-vs-JS-Realtime-pitch-detection/). | As pointed out by Shepmaster, only numbers can be passed to WebAssembly, so we need to convert the string into a `Uint16Array`.
To do so we can use this `str2ab` function found [here](https://developers.google.com/web/updates/2012/06/How-to-convert-ArrayBuffer-to-and-from-String):
```
function str2ab(str) {
var buf = new ArrayBuffer(str.length*2); // 2 bytes for each char
var bufView = new Uint16Array(buf);
for (var i=0, strLen=str.length; i < strLen; i++) {
bufView[i] = str.charCodeAt(i);
}
return buf;
}
```
This now works:
```
instance.exports.compute(
str2ab(operator),
n1, n2
);
```
Because we're passing a reference to an array of unsigned integers. |
49,014,610 | I have this simple Rust function:
```
#[no_mangle]
pub fn compute(operator: &str, n1: i32, n2: i32) -> i32 {
match operator {
"SUM" => n1 + n2,
"DIFF" => n1 - n2,
"MULT" => n1 * n2,
"DIV" => n1 / n2,
_ => 0
}
}
```
I am compiling this to WebAssembly successfully, but I can't manage to pass the `operator` parameter from JS to Rust.
The JS line which calls the Rust function looks like this:
```
instance.exports.compute(operator, n1, n2);
```
`operator` is a JS `String` and `n1`, `n2` are JS `Number`s.
`n1` and `n2` are passed properly and can be read inside the compiled function so I guess the problem is how I pass the string around. I imagine it is passed as a pointer from JS to WebAssembly but can't find evidence or material about how this works.
I am not using Emscripten and would like to keep it standalone (compilation target `wasm32-unknown-unknown`), but I see they wrap their compiled functions in `Module.cwrap`, maybe that could help? | 2018/02/27 | [
"https://Stackoverflow.com/questions/49014610",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2432221/"
] | Easiest and most idiomatic solution
===================================
Most people should use [wasm-bindgen](https://github.com/rustwasm/wasm-bindgen), which makes this whole process *much* simpler!
Low-level manual implementation
===============================
To transfer string data between JavaScript and Rust, you need to decide
1. The encoding of the text: UTF-8 (Rust native) or UTF-16 (JS native).
2. Who will own the memory buffer: the JS (caller) or Rust (callee).
3. How to represent the string's data and length: NUL-terminated (C-style) or distinct length (Rust-style).
4. How to communicate the data and length, if they are separate.
Common setup
------------
Building the crate as a C-style dynamic library (`cdylib`) helps keep the resulting WASM binary small.
**Cargo.toml**
```
[package]
name = "quick-maths"
version = "0.1.0"
authors = ["An Devloper <an.devloper@example.com>"]
[lib]
crate-type = ["cdylib"]
```
**.cargo/config**
```
[target.wasm32-unknown-unknown]
rustflags = [
"-C", "link-args=--import-memory",
]
```
**package.json**
```
{
"name": "quick-maths",
"version": "0.1.0",
"main": "index.js",
"author": "An Devloper <an.devloper@example.com>",
"license": "MIT",
"scripts": {
"example": "node ./index.js"
},
"dependencies": {
"fs-extra": "^8.0.1",
"text-encoding": "^0.7.0"
}
}
```
I'm using NodeJS 12.1.0.
**Execution**
```none
$ rustup component add rust-std --target wasm32-unknown-unknown
$ cargo build --release --target wasm32-unknown-unknown
```
Solution 1
----------
I decided:
1. To convert JS strings to UTF-8, which means that the [`TextEncoder`](https://developer.mozilla.org/en-US/docs/Web/API/TextEncoder) JS API is the best fit.
2. The caller should own the memory buffer.
3. To have the length be a separate value.
4. Another struct and allocation should be made to hold the pointer and length.
**lib/src.rs**
```rust
// A struct with a known memory layout that we can pass string information in
#[repr(C)]
pub struct JsInteropString {
data: *const u8,
len: usize,
}
// Our FFI shim function
#[no_mangle]
pub unsafe extern "C" fn compute(s: *const JsInteropString, n1: i32, n2: i32) -> i32 {
// Check for NULL (see corresponding comment in JS)
let s = match s.as_ref() {
Some(s) => s,
None => return -1,
};
// Convert the pointer and length to a `&[u8]`.
let data = std::slice::from_raw_parts(s.data, s.len);
// Convert the `&[u8]` to a `&str`
match std::str::from_utf8(data) {
Ok(s) => real_code::compute(s, n1, n2),
Err(_) => -2,
}
}
// I advocate that you keep your interesting code in a different
// crate for easy development and testing. Have a separate crate
// with the FFI shims.
mod real_code {
pub fn compute(operator: &str, n1: i32, n2: i32) -> i32 {
match operator {
"SUM" => n1 + n2,
"DIFF" => n1 - n2,
"MULT" => n1 * n2,
"DIV" => n1 / n2,
_ => 0,
}
}
}
```
**index.js**
```js
const fs = require('fs-extra');
const { TextEncoder } = require('text-encoding');
// Allocate some memory.
const memory = new WebAssembly.Memory({ initial: 20, maximum: 100 });
// Connect these memory regions to the imported module
const importObject = {
env: { memory }
};
// Create an object that handles converting our strings for us
const memoryManager = (memory) => {
var base = 0;
// NULL is conventionally at address 0, so we "use up" the first 4
// bytes of address space to make our lives a bit simpler.
base += 4;
return {
encodeString: (jsString) => {
// Convert the JS String to UTF-8 data
const encoder = new TextEncoder();
const encodedString = encoder.encode(jsString);
// Organize memory with space for the JsInteropString at the
// beginning, followed by the UTF-8 string bytes.
const asU32 = new Uint32Array(memory.buffer, base, 2);
const asBytes = new Uint8Array(memory.buffer, asU32.byteOffset + asU32.byteLength, encodedString.length);
// Copy the UTF-8 into the WASM memory.
asBytes.set(encodedString);
// Assign the data pointer and length values.
asU32[0] = asBytes.byteOffset;
asU32[1] = asBytes.length;
// Update our memory allocator base address for the next call
const originalBase = base;
base += asBytes.byteOffset + asBytes.byteLength;
return originalBase;
}
};
};
const myMemory = memoryManager(memory);
fs.readFile('./target/wasm32-unknown-unknown/release/quick_maths.wasm')
.then(bytes => WebAssembly.instantiate(bytes, importObject))
.then(({ instance }) => {
const argString = "MULT";
const argN1 = 42;
const argN2 = 100;
const s = myMemory.encodeString(argString);
const result = instance.exports.compute(s, argN1, argN2);
console.log(result);
});
```
**Execution**
```none
$ yarn run example
4200
```
Solution 2
----------
I decided:
1. To convert JS strings to UTF-8, which means that the [`TextEncoder`](https://developer.mozilla.org/en-US/docs/Web/API/TextEncoder) JS API is the best fit.
2. The module should own the memory buffer.
3. To have the length be a separate value.
4. To use a `Box<String>` as the underlying data structure. This allows the allocation to be further used by Rust code.
**src/lib.rs**
```rust
// Very important to use `transparent` to prevent ABI issues
#[repr(transparent)]
pub struct JsInteropString(*mut String);
impl JsInteropString {
// Unsafe because we create a string and say it's full of valid
// UTF-8 data, but it isn't!
unsafe fn with_capacity(cap: usize) -> Self {
let mut d = Vec::with_capacity(cap);
d.set_len(cap);
let s = Box::new(String::from_utf8_unchecked(d));
JsInteropString(Box::into_raw(s))
}
unsafe fn as_string(&self) -> &String {
&*self.0
}
unsafe fn as_mut_string(&mut self) -> &mut String {
&mut *self.0
}
unsafe fn into_boxed_string(self) -> Box<String> {
Box::from_raw(self.0)
}
unsafe fn as_mut_ptr(&mut self) -> *mut u8 {
self.as_mut_string().as_mut_vec().as_mut_ptr()
}
}
#[no_mangle]
pub unsafe extern "C" fn stringPrepare(cap: usize) -> JsInteropString {
JsInteropString::with_capacity(cap)
}
#[no_mangle]
pub unsafe extern "C" fn stringData(mut s: JsInteropString) -> *mut u8 {
s.as_mut_ptr()
}
#[no_mangle]
pub unsafe extern "C" fn stringLen(s: JsInteropString) -> usize {
s.as_string().len()
}
#[no_mangle]
pub unsafe extern "C" fn compute(s: JsInteropString, n1: i32, n2: i32) -> i32 {
let s = s.into_boxed_string();
real_code::compute(&s, n1, n2)
}
mod real_code {
pub fn compute(operator: &str, n1: i32, n2: i32) -> i32 {
match operator {
"SUM" => n1 + n2,
"DIFF" => n1 - n2,
"MULT" => n1 * n2,
"DIV" => n1 / n2,
_ => 0,
}
}
}
```
**index.js**
```js
const fs = require('fs-extra');
const { TextEncoder } = require('text-encoding');
class QuickMaths {
constructor(instance) {
this.instance = instance;
}
difference(n1, n2) {
const { compute } = this.instance.exports;
const op = this.copyJsStringToRust("DIFF");
return compute(op, n1, n2);
}
copyJsStringToRust(jsString) {
const { memory, stringPrepare, stringData, stringLen } = this.instance.exports;
const encoder = new TextEncoder();
const encodedString = encoder.encode(jsString);
// Ask Rust code to allocate a string inside of the module's memory
const rustString = stringPrepare(encodedString.length);
// Get a JS view of the string data
const rustStringData = stringData(rustString);
const asBytes = new Uint8Array(memory.buffer, rustStringData, encodedString.length);
// Copy the UTF-8 into the WASM memory.
asBytes.set(encodedString);
return rustString;
}
}
async function main() {
const bytes = await fs.readFile('./target/wasm32-unknown-unknown/release/quick_maths.wasm');
const { instance } = await WebAssembly.instantiate(bytes);
const maffs = new QuickMaths(instance);
console.log(maffs.difference(100, 201));
}
main();
```
**Execution**
```none
$ yarn run example
-101
```
---
Note that this process can be used for other types. You "just" have to decide how to represent the data as a set of bytes that both sides agree on, then send it across.
See also:
* [Using the WebAssembly JavaScript API](https://developer.mozilla.org/en-US/docs/WebAssembly/Using_the_JavaScript_API)
* [`TextEncoder` API](https://developer.mozilla.org/en-US/docs/Web/API/TextEncoder)
* [`Uint8Array`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Uint8Array) / [`Uint32Array`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Uint32Array) / [`TypedArray`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/TypedArray)
* [`WebAssembly.Memory`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WebAssembly/Memory)
* [Hello, Rust! — Import memory buffer](https://www.hellorust.com/demos/import-memory/index.html)
* [How to return a string (or similar) from Rust in WebAssembly?](https://stackoverflow.com/q/47529643/155423) | A WebAssembly program has its own memory space, and this space is often managed by the WebAssembly program itself with the help of an allocator library such as [wee\_alloc](http://fitzgeraldnick.com/2018/02/09/wee-alloc.html).
JavaScript can see and modify that memory space, but it has no way of knowing how the allocator library's structures are organized. So if we simply write to the WASM memory from JavaScript, we will likely overwrite something important and break things. Therefore the WebAssembly program itself must allocate the memory region first and pass it to JavaScript; only then can JavaScript fill that region with the data.
In the following example we do just that: allocate a buffer in the WASM memory space, copy the UTF-8 bytes there, pass the buffer location to a Rust function, then free the buffer.
Rust:
```rust
#![feature(allocator_api)]
use std::heap::{Alloc, Heap, Layout};
#[no_mangle]
pub fn alloc(len: i32) -> *mut u8 {
let mut heap = Heap;
let layout = Layout::from_size_align(len as usize, 1).expect("!from_size_align");
unsafe { heap.alloc(layout).expect("!alloc") }
}
#[no_mangle]
pub fn dealloc(ptr: *mut u8, len: i32) {
let mut heap = Heap;
let layout = Layout::from_size_align(len as usize, 1).expect("!from_size_align");
unsafe { heap.dealloc(ptr, layout) }
}
#[no_mangle]
pub fn is_foobar(buf: *const u8, len: i32) -> i32 {
let js = unsafe { std::slice::from_raw_parts(buf, len as usize) };
let js = unsafe { std::str::from_utf8_unchecked(js) };
if js == "foobar" {
1
} else {
0
}
}
```
TypeScript:
```js
// cf. https://github.com/Microsoft/TypeScript/issues/18099
declare class TextEncoder {constructor (label?: string); encode (input?: string): Uint8Array}
declare class TextDecoder {constructor (utfLabel?: string); decode (input?: ArrayBufferView): string}
// https://github.com/DefinitelyTyped/DefinitelyTyped/blob/master/types/webassembly-js-api/index.d.ts
declare namespace WebAssembly {
class Instance {readonly exports: any}
interface ResultObject {instance: Instance}
function instantiateStreaming (file: Promise<Response>, options?: any): Promise<ResultObject>}
var main: {
memory: {readonly buffer: ArrayBuffer}
alloc (size: number): number
dealloc (ptr: number, len: number): void
is_foobar (buf: number, len: number): number}
function withRustString (str: string, cb: (ptr: number, len: number) => any): any {
// Convert the JavaScript string to an array of UTF-8 bytes.
const utf8 = (new TextEncoder()).encode (str)
// Reserve a WASM memory buffer for the UTF-8 array.
const rsBuf = main.alloc (utf8.length)
// Copy the UTF-8 array into the WASM memory.
new Uint8Array (main.memory.buffer, rsBuf, utf8.length) .set (utf8)
// Pass the WASM memory location and size into the callback.
const ret = cb (rsBuf, utf8.length)
// Free the WASM memory buffer.
main.dealloc (rsBuf, utf8.length)
return ret}
WebAssembly.instantiateStreaming (fetch ('main.wasm')) .then (results => {
main = results.instance.exports
// Prints "foobar is_foobar? 1".
console.log ('foobar is_foobar? ' +
withRustString ("foobar", function (buf, len) {return main.is_foobar (buf, len)}))
// Prints "woot is_foobar? 0".
console.log ('woot is_foobar? ' +
withRustString ("woot", function (buf, len) {return main.is_foobar (buf, len)}))})
```
P.S. [The `Module._malloc` in Emscripten](https://kripken.github.io/emscripten-site/docs/porting/connecting_cpp_and_javascript/Interacting-with-code.html#access-memory-from-javascript) might be semantically equivalent to the `alloc` function we implemented above. Under the "wasm32-unknown-emscripten" target [you can use the `Module._malloc` with Rust](https://bojandjurdjevic.com/2018/WASM-vs-JS-Realtime-pitch-detection/). |
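One possible hardening of the `withRustString` pattern above (a sketch, not part of the original answer): free the buffer in a `finally` block so it is released even if the callback throws. `alloc`/`dealloc` below are toy stand-ins for the WASM exports:

```js
// Allocate, hand the pointer to the callback, and always free afterwards.
function withBuffer(alloc, dealloc, len, cb) {
  const ptr = alloc(len);
  try {
    return cb(ptr, len);
  } finally {
    dealloc(ptr, len);
  }
}

// Toy allocator to demonstrate the call order without a WASM module.
const freed = [];
const out = withBuffer(
  (n) => 1000,                 // pretend alloc: always returns "address" 1000
  (p, n) => freed.push(p),     // pretend dealloc: record what was freed
  6,
  (p, n) => p + n              // use the buffer: here just arithmetic
);
console.log(out);   // 1006
console.log(freed); // [ 1000 ]
```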
56,137,343 | Angular 5 project working on Chrome, Internet Explorer, Firefox, etc.,
but not working on the Safari browser.
I am getting the error:
>
> Can't find variable: DragEvent
>
>
>
[](https://i.stack.imgur.com/gz7tS.png) | 2019/05/14 | [
"https://Stackoverflow.com/questions/56137343",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10590264/"
] | I have had the same issue; just search for `DragEvent` and change each occurrence to `any`. | `DragEvent` is not supported in Safari. In your code, change `DragEvent` to `any`. |
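A hedged alternative to retyping everything as `any` is to feature-detect the constructor first, so a bare reference can never throw on browsers that lack it (a generic sketch, not Angular-specific):

```js
// `typeof` on an undeclared global identifier is safe;
// a bare reference to a missing DragEvent is what throws.
function isDragEvent(e) {
  return typeof DragEvent !== 'undefined' && e instanceof DragEvent;
}

// In an environment without DragEvent (e.g. Node, older Safari)
// this simply returns false instead of throwing.
console.log(isDragEvent({})); // false here
```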
65,687,852 | I'm building a prototype that fetches a Spotify user's playlist data in React. The data fetching is done inside a useEffect hook and sets the state of the playlists variable. Consequently, I want to render the name of each playlist. However, it seems that the data is only fetched after rendering, which causes an issue: because the state is not set before rendering, the playlists variable is undefined. How can I solve this problem while continuing to use React hooks? My code is below.
```
import './App.css'
import queryString from 'query-string'
import React, { useState, useEffect } from 'react'
const App = () => {
const [userData, setUserData] = useState({})
const [accesstoken, setAccesstoken] = useState('')
const [playlists, setPlaylists] = useState({})
useEffect(() => {
let parsed = queryString.parse(window.location.search)
let accesstoken = parsed.access_token
if (!accesstoken) {
return
}
setAccesstoken(accesstoken)
fetch('https://api.spotify.com/v1/me', {
headers: {'Authorization': 'Bearer ' + accesstoken}
}).then(response => response.json())
.then(data => setUserData(data))
fetch(`https://api.spotify.com/v1/me/playlists`, {
headers: {'Authorization': 'Bearer ' + accesstoken}
}).then(response => response.json())
.then(data => setPlaylists(data))
}, [])
return(
<div>
{accesstoken ? (
<div>
<h1>Welcome, {userData.display_name}</h1>
<h2>Playlists</h2>
<div>
{playlists.items.map(playlist =>
<p>
{playlist.name}
</p>
)}
</div>
</div>
) : (
<button onClick={() => window.location = 'http://localhost:8888/login'}>Login</button>
)}
</div>
);
}
export default App;
``` | 2021/01/12 | [
"https://Stackoverflow.com/questions/65687852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11383969/"
] | Change the code to
```
<?php
// myFunction will return a status
if (myFunction() === true) {
    $status = "success";
} else {
    $status = "failure";
}
echo $status;
```
or shorten it to
```
echo myFunction() ? "success" : "failure";
```
To wait for an answer, you can execute the request asynchronously and get the result in the `.done()` callback. Note that `fail`/`done` are not `$.ajax` options; chain them on the promise that `$.ajax` returns:
```
$.ajax({
    url: $(this).attr('href'),
    type: 'POST'
}).fail(function () {
    // do something
}).done(function (m) {
    // do something else
});
``` | Your PHP needs to return the value. If you want to keep the dataType JSON (suggested), you just need to json\_encode your output.
So the PHP becomes:
```
<?php
$type=$_POST['type'];
//myFunction that will return a status
if(myFunction() === true){
$status = "success";
}else{
$status = "failure";
}
echo json_encode(['status' => $status]);
?>
```
Then you need to tell Ajax what to do with the answer received using `.done()`
So your Ajax will become:
```
$.ajax({
url: '{$modulelink}&action=delete',
type: "post", //request type,
dataType: 'json',
data: { type: 'test'}
}).done(function(data){
console.log(data.status);
});
```
Now you can do what you want with `status`, but only inside the `.done()` function. The rest of your JS will be executed without waiting for Ajax to return a value, since it is asynchronous. So add all the logic that depends on this response here, such as DOM manipulation.
Obviously you can have more data returned by PHP in the JSON and access it by key, as done for `status`.
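For illustration, this is the shape of the exchange on the JS side once the PHP echoes JSON; the extra `id` key is hypothetical:

```js
// What `.done(function (data) { ... })` receives after jQuery parses the
// response body. Simulated here with JSON.parse on a literal string.
const body = '{"status":"success","id":42}';
const data = JSON.parse(body);
console.log(data.status); // success
console.log(data.id);     // 42
```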
29,014,393 | I'm trying to make a simple voting function for our site. Something in it does not work.
```
if(isset($_POST['vote']))
{
if($_POST['rate'] == "rate_1")
{
$rate = 'rate_1';
$rate_opload = ++$rest['rate_1'];
}
if else($_POST['rate'] == "rate_2")
{
$rate = 'rate_2';
$rate_opload = ++$rest['rate_2'];
}
if else($_POST['rate'] == "rate_3")
{
$rate = 'rate_3';
$rate_opload = ++$rest['rate_3'];
}
if else($_POST['rate'] == "rate_4")
{
$rate = 'rate_4';
$rate_opload = ++$rest['rate_4'];
}
if else($_POST['rate'] == "rate_5")
{
$rate = 'rate_5';
$rate_opload = ++$rest['rate_5'];
}
$sql = "INSERT INTO kaffe ('{$rate}') VALUES ('{$rate_opload}')";
mysql_query($sql);
}
```
There are five different columns, as you can see, because we needed an average in another part.
I don't know if it's necessary, but here is the option form:
```
<form method=\"post\" id=\"vote\">
<select name=\"rate\">
<option value=\"rate_1\" >★</option>
<option value=\"rate_2\" >★★</option>
<option value=\"rate_3\" >★★★</option>
<option value=\"rate_4\">★★★★</option>
<option value=\"rate_5\">★★★★★</option>
</select>
<button type=\"submit\" form=\"vote\" value=\"vote\" class=\"fsSubmitButton\">Rösta</button>
``` | 2015/03/12 | [
"https://Stackoverflow.com/questions/29014393",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4655160/"
] | First you forgot to put name="vote" into your html form
```
<form method=\"post\" id=\"vote\">
<select name=\"rate\">
<option value=\"rate_1\" >★</option>
<option value=\"rate_2\" >★★</option>
<option value=\"rate_3\" >★★★</option>
<option value=\"rate_4\">★★★★</option>
<option value=\"rate_5\">★★★★★</option>
</select>
<button type=\"submit\" name="vote" form=\"vote\" value=\"vote\" class=\"fsSubmitButton\">Rösta</button>
```
Second, it's `else if`, not `if else`. Your code can also be much more compact. Nowadays the `mysql_*` functions are [deprecated](http://php.net/manual/en/migration55.deprecated.php); use mysqli:
```
$link = mysqli_connect("localhost", "my_user", "my_password", "world"); // Connect with mysqli, since the mysql_* functions are deprecated
if (isset($_POST['vote']))
{
    if ($_POST['rate'] == 'rate_1' || $_POST['rate'] == 'rate_2' || $_POST['rate'] == 'rate_3' || $_POST['rate'] == 'rate_4' || $_POST['rate'] == 'rate_5') {
        $rate = $_POST['rate'];
        $rate_opload = ++$rest[$rate];
        $sql = mysqli_prepare($link, "INSERT INTO kaffe (`{$rate}`) VALUES (?)"); // backticks, not quotes, around a column name
        mysqli_stmt_bind_param($sql, "i", $rate_opload); // bind_param needs the type string: "i" for an integer
        mysqli_stmt_execute($sql);
    }
}
}
``` | Your
```
if(isset($_POST['vote']))
```
is never reached; you have to add name="vote" to your submit button.
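The whitelist idea from the first answer — only letting known values reach the SQL string — can be sketched language-agnostically (here in JavaScript, with illustrative names):

```js
// Accept only values from a fixed list; anything else is rejected,
// so user input can never inject an arbitrary column name.
const allowedRates = ['rate_1', 'rate_2', 'rate_3', 'rate_4', 'rate_5'];

function validateRate(input) {
  return allowedRates.includes(input) ? input : null;
}

console.log(validateRate('rate_3'));  // rate_3
console.log(validateRate('rate_99')); // null
```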
11,509 | In mathematics, one frequently makes suppositions. Sometimes, when stating a theorem, you may read, for example: « *Supposons que f atteint un max ou un min en un point a de I.* » Or again: « *Supposons que f atteigne un max ou un min en un point a de I.* » (The two versions differ only in the mood of the verb: indicative *atteint* vs. subjunctive *atteigne*.)
Searching the Web for the right verb form to use, no definitive choice seems to have been made. One source will say that the indicative can be used, while another will claim that the subjunctive must absolutely be used.
I have read that when you suppose something impossible, you must use the subjunctive. So I would think that in the *statement* of a theorem the indicative is acceptable, and in a proof as well, unless the proof is by contradiction?
**Question:** Is there an unambiguous standard to follow? | 2014/07/30 | [
"https://french.stackexchange.com/questions/11509",
"https://french.stackexchange.com",
"https://french.stackexchange.com/users/2845/"
] | Being a mathematician, neither one shocks me, and I use both somewhat at random every day.
However, you are quite right about the use of the subjunctive in mathematics: it is used almost all the time in proofs by contradiction, no doubt because a proof by contradiction rests solely on its hypothesis, so the choice of the subjunctive is there to underline that absurdity. But using the indicative is by no means forbidden! It is just less common.
Once again, neither of the two is shocking. The choice of the logical words is EXTREMELY important in a theorem or a proof, because the entire correctness and coherence of the statement rests on it, but the choice of subjunctive or indicative in no way affects the correctness of a theorem. | Grammatically, you must use the subjunctive in this sentence construction with 'que'. I think the use of the indicative is a misuse of language. Personally, I find that 'Supposons que i est grand' sounds wrong.
11,509 | In mathematics, one frequently makes suppositions. Sometimes, when stating a theorem, you may read, for example: « *Supposons que f atteint un max ou un min en un point a de I.* » Or again: « *Supposons que f atteigne un max ou un min en un point a de I.* » (The two versions differ only in the mood of the verb: indicative *atteint* vs. subjunctive *atteigne*.)
Searching the Web for the right verb form to use, no definitive choice seems to have been made. One source will say that the indicative can be used, while another will claim that the subjunctive must absolutely be used.
I have read that when you suppose something impossible, you must use the subjunctive. So I would think that in the *statement* of a theorem the indicative is acceptable, and in a proof as well, unless the proof is by contradiction?
**Question:** Is there an unambiguous standard to follow? | 2014/07/30 | [
"https://french.stackexchange.com/questions/11509",
"https://french.stackexchange.com",
"https://french.stackexchange.com/users/2845/"
] | Being a mathematician, neither one shocks me, and I use both somewhat at random every day.
However, you are quite right about the use of the subjunctive in mathematics: it is used almost all the time in proofs by contradiction, no doubt because a proof by contradiction rests solely on its hypothesis, so the choice of the subjunctive is there to underline that absurdity. But using the indicative is by no means forbidden! It is just less common.
Once again, neither of the two is shocking. The choice of the logical words is EXTREMELY important in a theorem or a proof, because the entire correctness and coherence of the statement rests on it, but the choice of subjunctive or indicative in no way affects the correctness of a theorem. | When the value of an object is not known and it depends on another object, the subjunctive is recommended:
>
> Supposons que *a* soit supérieur à *b*, alors nous pouvons démontrer ....
>
>
>
... here we have made a hypothesis about *a*, which is not known; the existence of *a* is subordinate to that of *b*.
This comes from the etymology of the word *subjonctif*: from the Latin *subjunctivus*, "attached under..., subordinate."
When we assign a value to this particular object, the present indicative is used instead:
>
> Maintenant supposons que *a* est nul, alors la valeur de *b* ...
>
>
>
... here *a* is known.
The subjunctive is used less and less in spoken French, and mathematical grammar is abstract, which can lead to hearing the indicative in place of the subjunctive and to these nuances being erased, without the question losing its meaning. |
11,509 | In mathematics, one frequently makes suppositions. Sometimes, when stating a theorem, you may read, for example: « *Supposons que f atteint un max ou un min en un point a de I.* » Or again: « *Supposons que f atteigne un max ou un min en un point a de I.* » (The two versions differ only in the mood of the verb: indicative *atteint* vs. subjunctive *atteigne*.)
Searching the Web for the right verb form to use, no definitive choice seems to have been made. One source will say that the indicative can be used, while another will claim that the subjunctive must absolutely be used.
I have read that when you suppose something impossible, you must use the subjunctive. So I would think that in the *statement* of a theorem the indicative is acceptable, and in a proof as well, unless the proof is by contradiction?
**Question:** Is there an unambiguous standard to follow? | 2014/07/30 | [
"https://french.stackexchange.com/questions/11509",
"https://french.stackexchange.com",
"https://french.stackexchange.com/users/2845/"
] | When the value of an object is not known and it depends on another object, the subjunctive is recommended:
>
> Supposons que *a* soit supérieur à *b*, alors nous pouvons démontrer ....
>
>
>
... here we have made a hypothesis about *a*, which is not known; the existence of *a* is subordinate to that of *b*.
This comes from the etymology of the word *subjonctif*: from the Latin *subjunctivus*, "attached under..., subordinate."
When we assign a value to this particular object, the present indicative is used instead:
>
> Maintenant supposons que *a* est nul, alors la valeur de *b* ...
>
>
>
... here *a* is known.
The subjunctive is used less and less in spoken French, and mathematical grammar is abstract, which can lead to hearing the indicative in place of the subjunctive and to these nuances being erased, without the question losing its meaning. | Grammatically, you must use the subjunctive in this sentence construction with 'que'. I think the use of the indicative is a misuse of language. Personally, I find that 'Supposons que i est grand' sounds wrong.